What happened to the 1000 core in 2014?

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

Re: What happened to the 1000 core in 2014?

Postby dobkeratops » Wed Sep 06, 2017 6:15 am

sebraa wrote:There are a couple of reasons, and power efficiency doesn't matter outside of data centers.

Individuals have electricity bills too.

sebraa wrote:GPUs are cheap and available, advanced tools exist (with support by big companies), developer knowledge is wide-spread, algorithms and libraries have been developed and tuned, and they have maximum possible memory bandwidth to the system (newest PCIe standards). None of this applies to tiled manycore systems.

Right, but libraries like TensorFlow have a programming model that maps more naturally onto this kind of hardware. I'd also note that the ability to get code generated from a template library onto the e-cores eventually appeared (let me find the link, something by jar); that would provide another route to writing code transferable between CPU, GPU and tiled manycore.

sebraa wrote:The people buying newest-generation GPUs for high-performance work and/or gaming are not the same people who could design a PCIe accelerator, even when you ignore large-scale building, selling and distribution of physical devices (RoHS and two dozen different laws to follow in different countries, etc).

So collaborate.

sebraa wrote:Gamers can be expected to own a CPU and a GPU, but they can't be expected to own an accelerator, no matter how awesome it could be. Nobody cared about PhysX either, until CUDA-enabled GPUs started to support it.

I should also say that image convolutions would be useful for some graphical effects; I've seen papers on doing enhanced material systems using this. Games might start using more AI too.

sebraa wrote:You're barking up the wrong tree.

I'm pointing out potential.


