A 4-way cluster backplane

Sub forum for Parallella daughter cards and accessories

Moderator: Folknology

Re: A 4-way cluster backplane

Postby timpart » Fri Jul 05, 2013 11:55 am

ticso wrote:At most you can interconnect 16x 16-core Parallellas or 4x 64-core.
That is because only one mesh direction of the 64x64 Epiphany address matrix is available.


I agree with you that there is no easy way to do this. I can think of just one way around it offhand (apart from abandoning the Parallella and making your own board from Epiphany chips).

The FPGA is to the East of the Epiphany. It could be reprogrammed to make the FPGA PEC behave like an e-link. The major problem with this is that signals in the Epiphany that want to go West to lower-numbered chips would get lost. But it might be possible to work around this by having the FPGA use address aliasing. If the desired external address is lower than the current board (in the East-West direction), set the top E-W bit on it so that it appears to be higher (doing the same to the return address as needed). Then all off-chip addresses could be made to go East, and the interconnect between the boards would have to undo the address aliasing and route to the correct board.

Horribly complex but it would let you put boards in half of the East West direction slots giving half the theoretical matrix.
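The aliasing trick above can be sketched in a few lines (a minimal model, assuming the usual Epiphany address layout of a 6-bit mesh row in bits 31:26 and a 6-bit column in bits 25:20; the helper names are hypothetical):

```python
# Sketch of the East-West address-aliasing idea (hypothetical helpers).
# Epiphany 32-bit addresses carry the core ID in the top 12 bits:
# bits 31:26 = mesh row, bits 25:20 = mesh column (East-West).
COL_SHIFT = 20
COL_MASK = 0x3F
COL_TOP = 0x20  # top East-West bit of the column field

def alias_out(addr, my_col):
    """If the destination column is West of (lower than) this board's
    column, set the top E-W bit so the on-chip router sends it East."""
    col = (addr >> COL_SHIFT) & COL_MASK
    if col < my_col:
        return addr | (COL_TOP << COL_SHIFT)
    return addr

def unalias_in(addr):
    """The board-to-board interconnect strips the alias bit to recover
    the real column before routing to the correct board."""
    return addr & ~(COL_TOP << COL_SHIFT)

west = alias_out(0x00400100, my_col=8)  # column 4 < 8, so aliased East
print(hex(west))                        # 0x2400100
print(hex(unalias_in(west)))            # 0x400100
```

Addresses East of the board (column >= my_col) pass through unchanged, which is why this only recovers half of the East-West slots.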

Tim

Re: A 4-way cluster backplane

Postby ticso » Fri Jul 05, 2013 2:39 pm

To correct myself: it is 8x 64-core.
The FPGA way sounds difficult, if possible at all, and I doubt it can be very fast, since AFAIK there are not that many I/Os and no SerDes.
Even with just a north/south interconnect, the Zynq <-> eCore communication is tricky: a Zynq can write to any eCore, but can only read from the directly connected ones, and an eCore can only write to its locally connected Zynq.
In my project I don't need any Parallella interconnect at all, but I want to support it with my board layout - just in case.
And I want some raw NAND flash chips for fast storage - I will decide later, depending on PL space and the I/O required, whether to connect them to the Zynq or to the eLink via an extra FPGA.
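The corrected chip count follows directly from the mesh geometry; a quick check (assuming the 16-core E16 is a 4x4 core array and the 64-core E64 is 8x8, within the 64x64 core address matrix):

```python
# How many chips fit along the single usable mesh direction of the
# 64x64 Epiphany core address matrix.
MESH_DIM = 64  # cores per side of the global address matrix

def chips_per_direction(cores_per_side):
    """Chips that fit end-to-end along one mesh axis."""
    return MESH_DIM // cores_per_side

print(chips_per_direction(4))  # 16-core chips (4x4 cores each): 16
print(chips_per_direction(8))  # 64-core chips (8x8 cores each): 8
```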

Re: A 4-way cluster backplane

Postby Sundance » Wed Jul 10, 2013 11:01 am

Dear Fellow Parallella Fans,

First, full disclosure: Sundance Multiprocessor Technology Ltd. is a commercial company. We design, build (yes.. - really!) and test a range of products in-house. We have done so for a long time and had a great time during the last generation of parallel processing chips, the Inmos Transputer. We are hoping to have an equally great time with Adapteva's Epiphany, as maybe the time has finally come for embedded multiprocessing and multi-DSP accelerators. We plan to become an Epiphany board-level design company once Epiphany is commercially available and suitable for industrial applications. We already posted our intentions here - viewtopic.php?f=10&t=67

We are an 'early' supporter of the Kickstarter effort by Adapteva, and our development platform is currently available for anybody who wants to "play" - viewtopic.php?f=9&t=306&p=2313&hilit=Network+access+to+Parallella+prototype#p1759

The initial step is to get some good tools developed and applications written and tested. The Parallella is the route. What everybody who supported the Kickstarter effort has to remember is that the USD99.00 you invested just about covers the cost of the Zynq Z7020 FPGA + memory that Adapteva are fitting to the Parallella, so it is a massive investment for Adapteva in terms of money and time. It's a brilliant way to give LOTS of people a parallel DSP in their hands.

So… #9600 (Andrew) contacted me (Flemming) and asked if we (Sundance) could help with making a carrier board for those who want MORE than one Parallella, and I said: "Sure" - hence this thread.

Although the Parallella is a kind of high-performance computer in your hand or on your desk, it will still need a PSU, a box and cables to interface to the "real world", so my thinking is to treat the Parallella as a component and start from scratch.

My initial thoughts were:
1. It needs a way to get PEC_FPGA and PEC_POWER out to 0.1" headers and power, and to get it off the kitchen table.
2. It needs an optional add-in interface to PCI Express to give storage, scalability and eLink routing/switching, etc.
a. That will need an FPGA, so a way to connect PEC_FPGA and also get access to 0.1" headers, etc.
My gut feeling was 4x Parallellas, i.e. 64x DSP cores - but that means a board with 16x of the PEC (Samtec) connectors, and they are USD6.34 each, so the first question is:

What is best:
a. Up to two (1x or 2x) Parallellas on a PC plug-in board @ price abc (what that will be depends on the number of boards and the size of the "routing" FPGA)

Product Name = P002

b. Up to four (1x to 4x) Parallellas on a PC plug-in board @ price 2x abc (that will need an even bigger "routing" FPGA)

Product Name = P004


Another problem in the "life of Parallella" is that we have 'lost' one eLink, and another eLink is hard-wired to the Z7020 FPGA, so that leaves 2x (two) external eLinks. For those who do not have experience with high-speed design (and Epiphany is THAT!), the eLink is a very fast parallel bus. It is actually 48x wires, and NOT realistic to route over 0.1" headers and cables. Multi-coax cables are available, but very expensive.

How to connect multiple carriers then?? The answer is… "Don't use eLink; use Ethernet and/or PCI Express".

Another consideration in this effort (cost) is the size of the PCB. The simple rule is that PCB manufacturers charge by size, and assembly/test gets more expensive when using components on both sides. P002 would only use the top side, whereas P004 would need Parallellas on both sides (and will use two PCI Express slots!), unless we want to break the physical size specification of PCI Express (not a good idea..!).

Another question that was raised was: "Can the hobbyist assemble the P002/P004 to save money?" - and I am sure 'some' could, but the pitch of the PEC (Samtec) connector is impossible to hand-solder, so not really an option, in my view.

Then… - "How many will want to buy such a ‘Carrier’??"

Firstly, Sundance needs to cover its costs for this; although we want to be helpful, we are not a charity, so the cost is the cost! It will cost to build, test and ship, etc. What is the cost then?? Well, I simply have no idea at this stage, other than that an 8+ layer PCB (required) of PCI Express size is likely to be USD75.00 in quantities of 100x PCBs (less if more!) - and then everything becomes a bit fuzzy!

The second question is therefore:

"What is the sweet spot (ahh… - Britain got a Wimbledon winner!) for either a P002 or a P004, where P004 is likely to be twice the price, as a larger FPGA is required?"

I am sure everything is as clear as mud now, but the idea of "open source" is also "open R&D" - and the above is such an adventure, driven by the Community rather than a Company.


All the best - my work email is open for private comments as well - Flemming.C@Sundance.com.
I will also attend the Embecosm event on 21st July in the UK - http://www.embecosm.com/2013/06/26/prep ... tchley-uk/ - and am happy to talk about P00x.


Flemming

P.S. The first photo is our 1991 PC carrier for 4x Transputer modules (TRAMs), and the other photo is a module (TRAM) with 2x T805 Transputers. It was the HIGHEST possible tech of its day and loved all around the world.
Attachments
Quad TRAM Carrier.jpg - The PC interface was almost 1 Megabyte/sec - wow..
Dual Transputer TRAM Module.jpg - 2x 20MHz CPUs - and 4kB of internal memory
Flemming Christensen
Mobile: +44 7 850 911 417;
Email: Flemming.C@Sundance.com
Skype: Flemming_Sundance
Company Home Page: http://www.sundance.com

Re: A 4-way cluster backplane

Postby 9600 » Thu Jul 11, 2013 10:39 am

Thanks for sharing your ideas for this board, Flemming!

As well as the dual vs. quad Parallella design decision, I'd also like us to consider the options of:

P00xA: Budget board

Some of the FPGA pins routed to 0.1" breakout, with the others either hard-wired between Parallella boards or routed via a smaller FPGA or similar. This means you don't need to buy another module to add breakout, and you get at least some high-speed FPGA interconnect, even if it is not entirely flexible.

P00xB: Premium board

As Flemming described, with the additional high-speed connectors that carry PEC_FPGA. This enables a module to be added with an FPGA, for routing signals between each Zynq PL and to implement a PCI-E interface, etc.

Just to be clear, the plan is that these boards would be open source hardware too.

Thoughts?

Cheers,

Andrew
Andrew Back (a.k.a. 9600 / carrierdetect)

Re: A 4-way cluster backplane

Postby Gravis » Thu Jul 11, 2013 8:56 pm

9600 wrote:Thoughts?

i would love to have a PCIe dev board with an Epiphany IV chip that won't break the bank, and perhaps other PCIe boards with multiple chips (4/8/16). though i do have a decent reflow oven, so schematics and Epiphany chips work for me. :P

P002 - A 2-way Backplane for Parallalla

Postby Sundance » Thu Jul 18, 2013 5:46 pm

Hi,

Me - and only one other - thought that PCI Express would be useful on a carrier, so let's forget about that idea for now and go back to something much simpler and lower cost, as originally asked for... - and make it a 2-way, to reduce the cost of the PCB and the number of connectors, while still scaling to ANY number (given enough desk space).

The attached drawing is a carrier for 2x Parallellas, and the connectivity is hopefully clear, but allow me to add:

1. PEC_North <> PEC_South via tracks. These will run at full eLink speed.

2. The PEC_Link_North connector can mate with the PEC_Link_South connector on a SECOND P002, populated with 2x Parallellas - etc.

3. The PEC_Link_South connector can mate with the PEC_Link_North connector on a THIRD P002, populated with 2x Parallellas - etc.

4. The PEC_Link connector is a 60-way connector by Samtec - LSHM - http://www.samtec.com/technical-specifi ... aster=LSHM - and designed for higher speeds than eLink. They click and lock. Lovely!

5. All signals from PEC_FPGA go to a 0.1" spaced "breadboard" area, which COULD be populated with a 0.1" header, but this is left for the user to add, as they might want to SOLDER wires, might want to make tea, etc.! - or might never want to use it.

6. All signals from PEC_POWER go to a 0.1" spaced "breadboard" area, likewise left for the user to populate.

7. The ATX-PSU connector connects a 5V source to power BOTH Parallellas, via the mounting-hole power inputs. It would ALSO be possible to use the 5V barrel jack with a sufficient PSU brick on one of the inputs, but not for more than 1x P002, so multiple P002s would require a common 5V supply.

8. It will be an 8-layer, impedance-matched PCB (150mm x 105mm) to make the eLink work @ full speed.

We have to do a batch of 100x boards to make this a $99.99 adventure, and Sundance are quite happy to bankroll the development cost and MOQ investment, IF there is any interest. Any purchase order would be via PayPal or debit/credit card on a secure web site, with no money taken until the P002 has shipped. I did consider a "Kickstarter" for this, but it is much simpler to just order on demand.

The price would be closer to $75.00 for a batch of 250, and then it will not really get a lot cheaper unless we were into the 10000's! :mrgreen:

Andrew (#9600) said it might be possible to use this support forum as a way to poll interest, and I will leave that with him.

I am surely open to better ideas as well, although I can't think of anything more to remove, except the second Parallella, to make it a one-way carrier. That IS an option... :!:

Do not be shy... - speak-up, please ;)

... - oops... - PDF not allowed. :o - will try to save a JPG
Attachments
Simple P002.jpg - A 2-way carrier board for Parallella

Re: A 4-way cluster backplane

Postby LamsonNguyen » Fri Jul 19, 2013 5:57 am

I am interested and I'm sure those who pledged for 2+ boards would be interested as well.

Re: P002 - A 2-way Backplane for Parallalla

Postby Gravis » Fri Jul 19, 2013 7:39 pm

Sundance wrote:Me - and only one other - thought that PCI Express would be useful on a carrier, so let's forget about that idea for now and go back to something much simpler and lower cost

i just meant an Epiphany chip on a minimal PCIe board, no Parallella involved. hmm... i wonder if you could manage to interface the Epiphany chip directly to PCIe. i have a free 16x PCIe slot, so i don't care how many lanes it needs. :)

Re: P002 - A 2-way Backplane for Parallalla

Postby ysapir » Sat Jul 20, 2013 1:28 pm

Gravis wrote:
Sundance wrote:Me - and only one other - thought that PCI Express would be useful on a carrier, so let's forget about that idea for now and go back to something much simpler and lower cost

i just meant an Epiphany chip on a minimal PCIe board, no Parallella involved. hmm... i wonder if you could manage to interface the Epiphany chip directly to PCIe. i have a free 16x PCIe slot, so i don't care how many lanes it needs. :)


http://www.bittware.com/products-servic ... afms5-pcie

Re: A 4-way cluster backplane

Postby Sundance » Sat Jul 20, 2013 4:57 pm

#LamsonNguyen: Glad you liked it. Thanks

#Gravis: You were NOT "the other one", as you clearly stated that you have your own facilities to build a board with PCI Express and a 64-core Epiphany. You are going to find it interesting to connect directly to the 16-lane PCI Express bus of your PC (ahh.. - it could work with an old-style 64-bit PCI bus, as found in PCs a long time ago). ;) - but a good idea, as Epiphany is effectively a "co-processor", as explained here - viewtopic.php?f=23&t=429&p=2661#p2661 - and hence why my initial idea was to add another FPGA, etc.
That will be for another day :!:

#ysapir: The Bittware offering is truly impressive and an inspiration. Thanks for sharing.

Anybody else got comments about the idea of a simple carrier for 2x Parallellas?
Should it be 1x carrier that scales, like the current idea?


- or is the silent majority just happy?
