Re: Multi-core Neural Net Engine
Posted:
Fri Oct 06, 2017 8:24 am
by claudio4parallella
Hi, all very interesting.
I'm keeping my feet on the ground:
- Epiphany IV and V do not exist (at present, for public use).
- Movidius is doing something with 6 cores, though its hardware appears dedicated to accelerating NNs and convolutions, or simply to fast memory usage...
- we do not like Cuda, here with Parallella
- we do not have OpenCL (coprthr1.6, old image only)
- we have e-lib only
That's the context within which I'd like to play with the 16 cores (or the 32-core cluster I have) for easy exercises in a classroom.
Re: Multi-core Neural Net Engine
Posted:
Fri Oct 06, 2017 1:18 pm
by dobkeratops
I actually think trying to dive in with a complete working general-purpose neural net engine is probably too much; I would guess that trying to adapt an existing framework will just cause confusion and fill the code with red herrings, sending you down blind alleys.
Yes, it can "run C"... but it can't re-use the overall approaches seen in existing CPU source code (push with DMA, vs. pull from random sources via pointers, etc.). I saw this with CELL: starting with code from other platforms was a disaster.
It's probably better to start just with convolutions, then figure out how to do those efficiently (i.e. how to tile across the cores... has anyone done this yet?), then extend it to *multi-feature* convolutions. Just do image processing examples.
(The function I see in PAL is just single-channel... you might be able to adapt it with interleaving, or it might be better to code multi-channel specifically.) Then use that as a primitive for running forward inference for convolutional nets... just try to use a net already trained on a GPU.
(As I mentioned, it *is* then possible to express the back-propagation calculation for conv-nets using convolution operations.)
(are there implementations of edge-detector algorithms for the e-cores?)
Vision nets take a long time to train (hours, days... weeks...). Can you imagine how much hell you're in for if you start out on a device running 10x slower (hours/days/weeks become days/weeks/months)? Far better to train the net on your PC GPU, or get one ready-trained.
I suppose you could look at a scaled-down example, e.g. digit recognition, with the assumption that you could increase the layers to deal with more elaborate vision examples later.
Re: Multi-core Neural Net Engine
Posted:
Sun Oct 22, 2017 1:22 am
by dobkeratops
This paper might be interesting (I'm guessing it will have been posted elsewhere, but I remembered this thread):
http://ieeexplore.ieee.org/abstract/document/7726118/