
graphics assist/fpga.. feasible?

PostPosted: Sat Jun 06, 2015 9:18 pm
by dobkeratops
Is the FPGA theoretically capable of implementing hardware that could assist the Epiphany chip in running graphics (shaders, etc.) - especially texture sampling (access to standard compressed formats), z-buffering (GPUs have tricks for compression), and maybe more?

I'm guessing a full-on FPGA GPU would be out of the question, and slow anyway.

I'd expect the Epiphany to be pretty good at vertex processing, but that might waste its full capabilities.

Re: graphics assist/fpga.. feasible?

PostPosted: Sun Jun 07, 2015 7:58 am
by piotr5
I don't think a full on-FPGA GPU is out of the question, and it wouldn't be slow, because FPGA designs are completely parallel. So only the size is a limitation - and of course an FPGA GPU would run much hotter than a real GPU and waste much more energy. AFAIK someone already implemented a standard framebuffer on the FPGA, for the self-contained SDR project shown in the Tokyo talk. At least to me, a GPU is a major reason for buying the embedded Parallella model; if anything, only this model will have the size required for full 3D acceleration...

Re: graphics assist/fpga.. feasible?

PostPosted: Sun Jun 07, 2015 9:56 am
by dobkeratops
A full GPU in FPGA... that's rather interesting to hear.

Would there be virtue in a hybrid approach - one where you'd leverage the Parallella ASIC more (executing shader code) and rely on the FPGA for the details of memory addressing?

I suppose FPGA GPUs could implement specific common shaders directly, rather than having to run shader software, if you knew which shaders you wanted to run most often.

Does the FPGA have hardware multipliers & RAM ?

Re: graphics assist/fpga.. feasible?

PostPosted: Sun Jun 07, 2015 11:27 am
by piotr5
I can't say much about what the FPGA is capable of, but just do the math: a resolution of 1280x1024 is about 1.3 megapixels; at 32-bit colour that's roughly 5 MB per frame. The Epiphany can only transfer about 1 GB/s, so around 200 frames per second - and that's if the link carried nothing but framebuffer writes. So while you'd be able to use the Epiphany for directly writing the frames to be displayed, I'm afraid textures consume even more bandwidth in total, since the GPU scales them down. I suppose it is possible to write a game running on the Parallella with lots of detail and physics and such, but that won't suffice to port an existing game over to the Epiphany. As you said, some stuff like lighting and shading needs to be implemented in the FPGA. So if you plan to release a game for the Parallella, you'd need to distribute it on SD-compatible memory which boots into the game and uploads a custom FPGA image - you'd need to master programming of all these things. Of course you can turn valuable FPGA space into memory, but why not use the 1 GB chip provided?
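For reference, a back-of-envelope sketch of the framebuffer arithmetic (pure Python; assumes a ~1 GiB/s off-chip link and 32-bit colour, as discussed above - illustrative figures, not measurements):

```python
# Back-of-envelope framebuffer bandwidth estimate for a ~1 GiB/s
# off-chip link, using the resolution discussed in this thread.

WIDTH, HEIGHT = 1280, 1024
BYTES_PER_PIXEL = 4                  # 32-bit RGBA colour

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
link_bytes_per_sec = 1 * 1024**3     # ~1 GiB/s

frames_per_sec = link_bytes_per_sec / frame_bytes

print(f"frame size: {frame_bytes / 2**20:.1f} MiB")        # → 5.0 MiB
print(f"max rate (framebuffer writes only): {frames_per_sec:.0f} fps")
```

Note this is the absolute ceiling for framebuffer traffic alone; texture reads, geometry, and everything else share the same link.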

Re: graphics assist/fpga.. feasible?

PostPosted: Sun Jun 07, 2015 3:00 pm
by dobkeratops
Heh, I don't think it would be wise to consider the Parallella for gamedev - we've seen CELL & Ageia PhysX come & go in the same niche.

However, it would be nice to see how far it could get as a jack-of-all-trades: would it be able to get passable performance in various areas, even outside the niches in which it excels? Get the most use out of the devices that have been sold, and give people more incentive to get them..

Re: graphics assist/fpga.. feasible?

PostPosted: Sun Jun 07, 2015 7:23 pm
by 9600
Antmicro implemented a graphics IP core for driving an LCD.

Regards,

Andrew

Re: graphics assist/fpga.. feasible?

PostPosted: Mon Jun 08, 2015 10:31 am
by dobkeratops
piotr5 wrote: The Epiphany can only transfer about 1 GB/s, around 200 frames per second. So while you'd be able to use the Epiphany for directly writing the frames to be displayed, I'm afraid textures consume even more bandwidth in total, since the GPU scales them down.


(a) Textures are usually read in 'DXT'/'S3TC' compressed, mipmapped form (typically 4 bpp) - the GPU can write out more than it reads in, expanding colour data and enriching it with shading calculations. (Admittedly, I have suggested using the FPGA to assist with texture decompression, so that idea would probably have to go - but perhaps the FPGA could decompress into palettized 256-colour images that are easier for a CPU to use.) I suppose you might be referring to the potential for 'scaling down' if you use the highest-quality sampling modes. Texture resolution is one of the most easily tweakable quality settings.
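To put a number on the '4 bpp' point: a DXT1/S3TC block packs a 4x4 pixel tile into 8 bytes (two RGB565 endpoints plus sixteen 2-bit indices), so the bandwidth saving versus raw 32-bit texels works out as follows (illustrative arithmetic, assuming a 1024x1024 texture):

```python
# Bandwidth comparison: raw 32-bit texels vs DXT1/S3TC-compressed blocks.
# DXT1 stores each 4x4 pixel block in 8 bytes, i.e. 4 bits per pixel.

TEX_W, TEX_H = 1024, 1024

raw_bytes  = TEX_W * TEX_H * 4                 # RGBA8888: 4 bytes/pixel
dxt1_bytes = (TEX_W // 4) * (TEX_H // 4) * 8   # 8 bytes per 4x4 block

print(f"raw:   {raw_bytes  / 2**20:.1f} MiB")  # → 4.0 MiB
print(f"DXT1:  {dxt1_bytes / 2**20:.1f} MiB")  # → 0.5 MiB
print(f"ratio: {raw_bytes // dxt1_bytes}:1")   # → 8:1
```

An 8:1 reduction on texture reads is exactly why keeping textures compressed until the last possible moment matters on a bandwidth-starved link.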

(b) It might be possible to render on-chip tiles - 'tile-based deferred rendering' as per PowerVR (not to be confused with deferred lighting/shading) - and compress these before writing out, e.g. 16-bit colour with dithering. Perhaps a custom video mode could be used with some on-the-fly decompression (how about something a little like DXT compression, but on 8x1 chunks of scanlines?).
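The '16-bit colour with dithering' write-out could look something like this - a minimal sketch (pure Python for illustration, not Epiphany code) that quantises RGB888 to RGB565 with a 4x4 ordered (Bayer) dither:

```python
# Sketch: quantise RGB888 pixels to packed RGB565 words, using a 4x4
# ordered (Bayer) dither matrix so the rounding error is spread
# spatially instead of producing visible banding.

BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def to_rgb565(r, g, b, x, y):
    """Quantise one RGB888 pixel at screen position (x, y) to RGB565."""
    t = BAYER4[y % 4][x % 4]
    # Bias each channel by a position-dependent fraction of its
    # quantisation step (8 for the 5-bit channels, 4 for the 6-bit one).
    r = min(255, r + (t * 8) // 16)
    g = min(255, g + (t * 4) // 16)
    b = min(255, b + (t * 8) // 16)
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# A near-mid grey lands on different 565 words at adjacent positions:
print(hex(to_rgb565(130, 130, 130, 0, 0)))   # → 0x8410
print(hex(to_rgb565(130, 130, 130, 1, 0)))   # → 0x8430
```

Halving the framebuffer to 2 bytes/pixel doubles the frame rate ceiling on the same link, at the cost of the dither pattern.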

But overall, 1 GB/s does seem limiting. I wonder how far the 64-core version could get.

Re: graphics assist/fpga.. feasible?

PostPosted: Tue Jun 09, 2015 9:22 am
by tnt
I'm not so convinced by the 3D capabilities - this might end up being pretty big in the FPGA.

However, simple blitting and colourspace-conversion/scaling overlays would go a long way toward making the desktop experience much more usable. Currently, when playing a video, the scaling and colourspace conversion are a very significant part of the CPU usage, and these could easily be handled in hardware.
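The colourspace conversion in question is essentially this per-pixel arithmetic - a sketch of one full-range BT.601 YCbCr-to-RGB pixel, in the fixed-point integer form a blitter or FPGA datapath would use (Python for illustration):

```python
# One full-range BT.601 YCbCr -> RGB888 pixel, using x256 fixed-point
# versions of the standard coefficients (1.402, 0.344, 0.714, 1.772).

def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range BT.601 YCbCr pixel to an RGB888 tuple."""
    d, e = cb - 128, cr - 128          # centre the chroma channels
    r = (256 * y            + 359 * e) >> 8
    g = (256 * y -  88 * d  - 183 * e) >> 8
    b = (256 * y + 454 * d           ) >> 8
    clamp = lambda v: max(0, min(255, v))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))   # mid-grey → (128, 128, 128)
```

Three multiplies and a couple of adds per pixel - trivial for dedicated hardware, but a real burden when the ARM has to do it for every pixel of every frame.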

Re: graphics assist/fpga.. feasible?

PostPosted: Mon Aug 31, 2015 2:48 pm
by over9000
Just came across this article about a new open-source proof-of-concept GPU implemented on an FPGA. Could it be implemented on the Parallella, perhaps?

http://www.theregister.co.uk/2015/08/31 ... ource_gpu/

Re: graphics assist/fpga.. feasible?

PostPosted: Tue Sep 01, 2015 1:49 am
by aolofsson
It doesn't do graphics yet. Until it does, not sure I see the point.