Yes, there is some progress. I pushed some code to a clone of python-on-a-chip:
http://code.google.com/r/markdewing-epi ... e/checkout
It is enough to compile for Epiphany and run the PyMite interpreter on the simulator.
Steps to run:
1. Download/checkout the code
2. Be sure you are running Python 2.6 (the build system will not work with 2.7). You may need to compile and install it, preferably somewhere out of the way so it doesn't interfere with the system Python, and then put that location on your path.
3. Make sure the E-SDK is on your path
4. Build with 'make PLATFORM=epiphany all'
5. Run an interactive session with 'make PLATFORM=epiphany ipm'
The version built by default uses more than 32K. It is possible to build a version that fits in under 32K, but that requires removing nearly all the features defined in src/platform/epiphany/pmfeatures.py (including, unfortunately, floating-point support). For the sake of getting something working, I may target 64K for interpreter + code + data (leaving one core out of every two idle).
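For anyone curious what that trimming looks like: pmfeatures.py is just a set of boolean flags that the build system reads. A minimal sketch is below; the flag names follow PyMite's usual convention, but check the actual file in the checkout for the authoritative list, since I haven't verified every name here.

```python
# Sketch of a stripped-down pmfeatures.py for a small build.
# Flag names are assumed from PyMite's convention; verify against
# src/platform/epiphany/pmfeatures.py in your own checkout.
HAVE_PRINT = False        # drop print support
HAVE_GC = True            # keep the garbage collector
HAVE_FLOAT = False        # drop floating point to save space
HAVE_DEL = False
HAVE_IMPORTS = False
HAVE_DEFAULTARGS = False
HAVE_CLASSES = False
HAVE_ASSERT = False
```

Each flag that is switched off removes the corresponding bytecode support from the interpreter, which is where the size savings come from.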
Tentative next steps are to work on the programming model.
- First, I'll try implementing a very minimal subset of numpy arrays (nano numpy?).
- If that works, I'll try extending those arrays to coarrays. Coarrays seem like a natural fit for the Epiphany memory model.
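To make the first bullet concrete, here is roughly the scale of "nano numpy" I have in mind: flat storage plus a shape, with elementwise operations and nothing else. All names here are hypothetical, not from any existing package.

```python
# Minimal sketch of a "nano numpy" array: a flat list of values plus
# a shape tuple, supporting elementwise add and scalar multiply.
# Written to run on Python 2.6 as well as Python 3.
class NanoArray(object):
    def __init__(self, shape, data=None):
        self.shape = tuple(shape)
        n = 1
        for d in self.shape:
            n *= d
        self.data = list(data) if data is not None else [0.0] * n
        assert len(self.data) == n

    def __add__(self, other):
        # elementwise add; shapes must match exactly (no broadcasting)
        assert self.shape == other.shape
        return NanoArray(self.shape,
                         [a + b for a, b in zip(self.data, other.data)])

    def scale(self, c):
        # multiply every element by a scalar
        return NanoArray(self.shape, [c * a for a in self.data])
```

Usage would look like `a = NanoArray((2, 2), [1.0, 2.0, 3.0, 4.0])`, then `(a + a).data` gives the elementwise sum. Keeping the feature set this small is the point: the less of numpy's semantics the interpreter has to carry, the more plausible it is on a 32K core.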
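And for the second bullet, the coarray idea in miniature: each core (image, in Fortran coarray terms) holds its own local array, and remote elements are addressed with an explicit core index. The toy below simulates the images as plain lists; the names are hypothetical, and a real Epiphany version would map each image to a core's local SRAM and turn remote get/put into mesh reads and writes.

```python
# Toy sketch of a coarray: one local array per core, with remote
# access via an explicit core index, Fortran-coarray style.
# Purely illustrative -- here the "cores" are just lists in one
# process, so get/put are ordinary list accesses.
class CoArray(object):
    def __init__(self, num_cores, local_size):
        # one local image per core, zero-initialized
        self.images = [[0.0] * local_size for _ in range(num_cores)]

    def get(self, core, i):
        # remote read: element i of the image on core `core`
        return self.images[core][i]

    def put(self, core, i, value):
        # remote write: element i of the image on core `core`
        self.images[core][i] = value
```

The appeal for Epiphany is that the codimension makes the core-to-core data movement explicit in the source, which matches hardware where remote loads and stores have very different costs from local ones.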
Ultimately, it would be nice to compile the Python kernels to object code; this would remove the space and performance overhead of the interpreter on each core. Given that the cores will probably be running a very restricted subset of Python, this should be relatively easy, particularly with LLVM (assuming someone is making an LLVM backend?).