What happened to the 1000 core in 2014?

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

What happened to the 1000 core in 2014?

Postby KNERDY » Thu Jul 20, 2017 9:27 pm

And the 16,000-core chip which is supposed to happen in 2022? There has been ZERO activity on this in four years.
KNERDY
 
Posts: 13
Joined: Thu Jul 20, 2017 9:23 pm

Re: What happened to the 1000 core in 2014?

Postby sebraa » Sat Jul 22, 2017 8:34 am

The 1000-core chip has been developed (see the datasheet).
Given that DARPA is behind further developments, bigger chips may appear, too.

However, it won't be available to the general public anymore.
sebraa
 
Posts: 495
Joined: Mon Jul 21, 2014 7:54 pm

Re: What happened to the 1000 core in 2014?

Postby KNERDY » Sat Jul 29, 2017 4:43 am

That is really sad. No wonder the board has basically died.

However, doesn't this violate the stated project goals from the Kickstarter?
Making parallel computing easy to use has been described as "a problem as hard as any that computer science has faced". With such a big challenge ahead, we need to make sure that every programmer has access to cheap and open parallel hardware and development tools. Inspired by great hardware communities like Raspberry Pi and Arduino, we see a critical need for a truly open, high-performance computing platform that will close the knowledge gap in parallel programing. The goal of the Parallella project is to democratize access to parallel computing. If we can pull this off, who knows what kind of breakthrough applications could arise? Maybe some of them will even change the world in some small but positive way.


Thanks for the info.

Re: What happened to the 1000 core in 2014?

Postby aolofsson » Sat Jul 29, 2017 12:35 pm

Attachments: John_Bauer_1915.jpg
aolofsson
 
Posts: 1005
Joined: Tue Dec 11, 2012 6:59 pm
Location: Lexington, Massachusetts,USA

Re: What happened to the 1000 core in 2014?

Postby KNERDY » Sat Jul 29, 2017 8:21 pm

Not sure what the link to the old and original board is for.

Re: What happened to the 1000 core in 2014?

Postby jar » Sun Jul 30, 2017 2:47 am

@KNERDY,

I believe it was in reply to your criticism of the statement: "we need to make sure that every programmer has access to cheap and open parallel hardware and development tools". His reply was that the inexpensive Parallella board ($99-$149), which is an open design and includes a dual-core ARM processor, a 16-core Epiphany processor, a Xilinx FPGA, an open GNU toolchain, and many open source software libraries and projects, should satisfy the stated goals.

There shouldn't be any issues with the stated goals, and the project didn't violate them. You may argue about whether or not the goals were successfully achieved. Parallel computing is hard and it's not going away.
jar
 
Posts: 295
Joined: Mon Dec 17, 2012 3:27 am

Re: What happened to the 1000 core in 2014?

Postby KNERDY » Wed Aug 02, 2017 7:21 pm

The board is now extremely outdated, and not inexpensive anymore. I do believe the design dates back to 2012.

Re: What happened to the 1000 core in 2014?

Postby dobkeratops » Fri Aug 04, 2017 8:23 pm

KNERDY wrote:The board is now extremely outdated, and not inexpensive anymore. I do believe the design dates back to 2012.


It seems from the other post that the E5 won't be available to the public.. a chicken-and-egg situation. It would seem these things are still being developed by governments.. China's own supercomputer chip is similar.

We still have the games-derived chips in the mainstream.. which aren't quite focussed on the same use case IMO, they just happen to do it a lot better than CPUs. (I keep talking about the historical parallel with Cell; I think that would have done AI too if they'd continued, but that also died out.) The E5 would be able to keep entire vision nets on-chip, whilst a GPU has to stream them through its smaller L2 cache.
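The on-chip-versus-streaming point can be made with rough arithmetic. A minimal sketch, with all figures as assumptions for illustration: the Epiphany-V paper describes 1024 cores with 64 KB of local SRAM each (64 MB aggregate), while a 2017-era consumer GPU carried only a few MB of L2 cache, and a compact vision net (SqueezeNet-sized, ~1.2M fp32 parameters) is taken as the workload:

```python
# Back-of-the-envelope: can a small vision net's weights stay resident on-chip?
# All numbers are illustrative assumptions, not measured specs.

E5_CORES = 1024
E5_SRAM_PER_CORE = 64 * 1024                 # 64 KB local SRAM per core
E5_TOTAL_SRAM = E5_CORES * E5_SRAM_PER_CORE  # 64 MB aggregate on-chip

GPU_L2_BYTES = 2 * 1024 * 1024               # ~2 MB L2, GTX 1080 class

# A compact vision net: roughly SqueezeNet-sized, ~1.2M fp32 parameters.
params = 1_200_000
weight_bytes = params * 4                    # 4 bytes per fp32 weight

def fits(budget_bytes):
    """True if the whole weight set stays resident in the given memory."""
    return weight_bytes <= budget_bytes

print(f"weights: {weight_bytes / 2**20:.1f} MiB")
print("fits in E5 on-chip SRAM:", fits(E5_TOTAL_SRAM))   # resident
print("fits in GPU L2:", fits(GPU_L2_BYTES))             # must stream
```

Under these assumptions the whole network sits in the Epiphany's aggregate SRAM, while the GPU has to repeatedly stream the weights in from device memory through its much smaller L2.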

We get complex software done in open source, but it takes advantage of mainstream hardware that appears incrementally (GPUs grew out of GL accelerators with limited but cross-hardware APIs allowing developers to use devices from multiple vendors, then they gradually generalised with shaders.. and of course CPUs are all much of a muchness).

It seems the radical shift Parallella needed was slightly beyond community co-operation.

The software tools did eventually appear (I liked jar's posts about getting templated code running on them)... but not soon enough to get widespread momentum going. It seems big government/corporate funding can still coordinate bigger teams on bigger gambles, perhaps.

It would be nice to identify exactly what went wrong here, and still see if there are ways to fix it.. what can we do better.


I talked about the Epiphany in many forums trying to get deep-learning people interested.. they were all focussed on GPUs. Given the number of people out there who buy GPUs for that (millions of programmers; there must be tens of thousands worldwide experimenting informally), it seems disappointing that the world couldn't get enough $$$ together at the right place and right time to get the E5 built as a PCIe card or in a Parallella-2 form factor. I do also believe the E5 would have been great for game physics and AI (and some parts of rendering.. animation skinning), but gamers are content to have that happening on either their CPU or GPU.
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

Re: What happened to the 1000 core in 2014?

Postby KNERDY » Wed Aug 23, 2017 4:13 am

I can tell you what went wrong: the 1024-core board was never delivered. Who in 2014 would not have wanted a 1024-core system which could potentially have been priced lower than a graphics card, and probably used less power?

I may still get a board or two in the near future.

Re: What happened to the 1000 core in 2014?

Postby sebraa » Thu Aug 24, 2017 12:31 pm

dobkeratops wrote:I talked about the epiphany in many forums trying to get deep-learning people interested.. they were all focussed on GPUs.
There are a couple of reasons, and power efficiency doesn't matter outside of data centers. GPUs are cheap and available, advanced tools exist (with support from big companies), developer knowledge is widespread, algorithms and libraries have been developed and tuned, and they have the maximum possible memory bandwidth to the system (newest PCIe standards). None of this applies to tiled manycore systems.

dobkeratops wrote:Given the number of people out there that buy GPUs for that (millions of programmers, there must be tens of thousands worldwide experimenting informally ), it seems disappointing that the world couldn't get enough $$$ together at the right place and right time to get the E5 built as a PCI card or in a parallela2 form factor.
The people buying newest-generation GPUs for high-performance work and/or gaming are not the same people who could design a PCIe accelerator, even when you ignore large-scale building, selling, and distribution of physical devices (RoHS and two dozen different laws to follow in different countries, etc.).

dobkeratops wrote:I do also believe E5 would have been great for game-physics and AI (and some parts of rendering.. animation skinning) but gamers are content to have that happening either on their CPU or GPU.
Gamers can be expected to own a CPU and a GPU, but they can't be expected to own an accelerator, no matter how awesome it could be. Nobody cared about PhysX either, until CUDA-enabled GPUs started to support it.

You're barking up the wrong tree.
