Using FPGAs for Artificial Intelligence
How well do standard FPGAs serve AI purposes, and how different will dedicated FPGA-based devices be from them? These questions matter because artificial intelligence (AI) and machine learning (ML) are progressing faster than silicon can be designed.
FPGAs are promising, but they also have significant problems that must be overcome. The graphics processing unit (GPU) made machine learning practical: it provided significantly more compute power, and a faster connection to memory, than the CPU.
Mike Fitton says, “The applicability of GPUs has been a success story in the development of ML. This is explained by a number of factors, including high floating-point performance and ease of development with a robust, high-level tools ecosystem.”
FPGAs are inherently parallel and hardware-programmable, and they excel at specialized workloads that need massive parallelism in compute operations. They also have small amounts of distributed memory incorporated into the fabric, bringing memory closer to the processing.
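To make that concrete, here is a minimal sketch, in the style of high-level synthesis (HLS) C++, of how a dot product maps onto an FPGA. The function name `dot8` and the pragma references in the comments are illustrative assumptions, not code from the article; the point is that local buffers become distributed on-fabric memory and an unrolled loop becomes parallel multiply-accumulate hardware.

```cpp
#include <array>

constexpr int N = 8;

// Hypothetical HLS-style kernel sketch. On an FPGA, the std::array
// arguments would map to distributed RAM or registers in the fabric,
// placing operands right next to the multipliers (a synthesis tool
// might be told this with something like ARRAY_PARTITION pragmas).
int dot8(const std::array<int, N>& a, const std::array<int, N>& b) {
    int acc = 0;
    // A synthesis tool can fully unroll this loop (e.g. an UNROLL
    // pragma), instantiating N multiply-accumulate units that all
    // operate in the same clock cycle instead of sequentially.
    for (int i = 0; i < N; ++i) {
        acc += a[i] * b[i];
    }
    return acc;
}
```

In software this loop runs N iterations; after synthesis the same source can become N parallel datapaths, which is the "massive parallelism" the text describes.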
Joe Mallett, senior product marketing manager at Synopsys, says, “FPGAs started to gain traction in that space because they can do more processing than the GPU.” The FPGA is now considered to be the low-power solution.
FPGAs are also scalable, and their most important aspect is flexibility. Mallett adds, “If you have an algorithm that is bigger than what the GPU can handle or fits into the FPGA, you can daisy-chain FPGAs together, as well.”
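The daisy-chaining idea can be sketched in a few lines. The helper below is a hypothetical software model, not vendor code: each call to `std::accumulate` stands in for a kernel on one FPGA, and the partial result streaming from device 0 into device 1 models the chained link between boards.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical sketch of partitioning one oversized reduction across
// two chained accelerators ("device 0" and "device 1"). Each
// accumulate call models a kernel running on one FPGA.
long long chained_sum(const std::vector<int>& data) {
    const std::size_t mid = data.size() / 2;
    // Device 0 reduces the first half of the data...
    long long partial =
        std::accumulate(data.begin(), data.begin() + mid, 0LL);
    // ...then streams its partial result downstream to device 1,
    // which folds in the second half and produces the final answer.
    return std::accumulate(data.begin() + mid, data.end(), partial);
}
```

The same pattern extends to longer chains: each device consumes its neighbor's partial result, so an algorithm too large for one FPGA is spread across several.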
Geoff Tate, CEO of Flex Logix, says, “A year from now, nobody will be designing in GPUs or FPGAs to do any high-volume inference applications.”
Finally, where higher performance or power efficiency is paramount, designers can often turn to a dedicated NPU. Even then, however, needs vary widely, which is why a range of devices is required for different markets.