We previously learned about the benefits of Hardware Acceleration with FPGAs, as well as key concepts about FPGA acceleration and what it offers to the companies and teams that implement it. In this article, we will learn about the applications that typically benefit from this technology.

First, we will compare distributed and heterogeneous computing, highlighting the place FPGAs have earned in data centers. Then, we will present the most widespread applications of FPGA acceleration, including Machine Learning, Image and Video Processing, and Databases, among others.

Distributed vs. Heterogeneous Computing
In the last 10 years, we have witnessed exponential growth in data generation, thanks in part to the rise in popularity of electronic devices such as cell phones, Internet of Things (IoT) devices, wearables (smartwatches), and many more.

At the same time, users are consuming higher-quality content. A clear example is television and streaming services, which have gradually increased the quality of their content, translating into greater demand for data.

This growth in data generation and consumption brought with it new, computationally demanding applications, capable of both taking advantage of that data and helping to process it.
The problem then becomes the execution time required for that processing, which directly affects the user experience and can make a solution impractical. This raises the question: how can we reduce execution times to make the proposed solutions more viable?

One of the proposed solutions is Distributed Computing, in which more than one computer is interconnected in a network to distribute the workload. Under this model, the maximum theoretical speedup equals the number of machines added to the data processing. Although viable, this solution has the drawback that the time spent distributing and transmitting data over the network must also be taken into account.

For example, if we want to reduce the data processing time to one third, three computers is the theoretical minimum, but once network overhead is factored in we might need to configure up to four, which would skyrocket the costs of energy consumption and the physical space occupied.
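A minimal sketch in Python makes this trade-off concrete; the compute time, per-node network overhead, and the simple linear overhead model are all assumptions chosen for illustration, not measurements. With these numbers, three nodes fall short of a 3x speedup and a fourth is needed:

```python
# Back-of-the-envelope estimate of distributed speedup once network
# transfer is taken into account. All figures are hypothetical, and the
# linear per-node overhead is a deliberately simple model.

def distributed_speedup(t_compute: float, t_net_per_node: float, nodes: int) -> float:
    """Compute time shrinks with the node count, but each node added
    also adds time for distributing and collecting its share of data."""
    t_parallel = t_compute / nodes + t_net_per_node * nodes
    return t_compute / t_parallel

T_COMPUTE = 90.0  # seconds to process the dataset on one machine (assumed)
T_NETWORK = 1.0   # seconds of network overhead added per node (assumed)

for n in (2, 3, 4, 5):
    print(f"{n} nodes -> {distributed_speedup(T_COMPUTE, T_NETWORK, n):.2f}x effective speedup")
```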

Another alternative is Heterogeneous Computing. In addition to using processors (CPUs) for general-purpose tasks, it seeks to improve the performance of the same computer by adding specialized processing capabilities for particular tasks.
It is at this point that general-purpose graphics cards (GPGPUs) or programmable logic cards (FPGAs) come into play. One of the main differences is that the former have a fixed architecture, while the latter are fully adaptable to any workload and also consume less power (thanks, among other things, to the possibility of generating exactly the hardware that will be used).

Unlike Distributed Computing, in Heterogeneous Computing the speedup depends on the type of application and the architecture that is developed. For example, a database workload may see a lower acceleration rate than a Machine Learning inference workload (which can be accelerated by hundreds of times); another example is the acceleration of financial algorithms, where the rate can reach the thousands.
Additionally, instead of adding computers, boards are simply added to PCIe slots, saving resources, physical space, and energy, resulting in a lower Total Cost of Ownership (TCO).

FPGA-based accelerator cards have become an excellent complement for data centers and are available both on-premises (on your own servers) and in the cloud through services from Amazon, Azure, and Nimbix, among others.

Applications that benefit from hardware acceleration with FPGAs
In principle, any application that involves complex algorithms over large volumes of data, where the processing time is long enough to amortize access to the card, is a candidate for acceleration. In addition, the process must lend itself to parallelization. (A back-of-the-envelope sketch of the amortization trade-off appears at the end of this section.) Among the typical FPGA solutions that fit these characteristics, we find:


One of the most disruptive techniques of recent years has been Machine Learning (ML). Hardware acceleration can bring it many benefits, thanks to the high level of parallelism and the huge number of matrix operations involved. The gains show up both in the model's training phase (cutting that time from days to hours or minutes) and in the inference phase, enabling real-time applications such as fraud detection, real-time video recognition, and voice recognition.
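To see why, consider a dense layer: it is a single matrix operation whose outputs are all independent. The sketch below (arbitrary shapes and data, NumPy used purely for illustration) writes the same computation as the independent multiply-accumulate chains an FPGA can lay out side by side in hardware:

```python
import numpy as np

# A dense (fully connected) layer is a matrix-vector product: every output
# neuron is an independent dot product, so all of them can run in parallel.
# Shapes and values here are arbitrary, for illustration only.

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 1024))  # 256 outputs, 1024 inputs
x = rng.standard_normal(1024)

# On a CPU this runs as one (mostly sequential) optimized routine...
y = weights @ x

# ...but logically it is 256 independent multiply-accumulate chains,
# each of which an FPGA can implement as its own hardware pipeline:
y_parallel = np.array([np.dot(row, x) for row in weights])

assert np.allclose(y, y_parallel)
```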


Image and Video Processing is one of the areas that benefits most from acceleration, making it possible to work in real time on tasks such as video transcoding, live streaming, and image processing. It is used in applications such as medical diagnostics, facial recognition, autonomous vehicles, smart stores, and augmented reality.
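A sketch of why these tasks parallelize so well: in a 3x3 convolution, a staple of image processing, each output pixel depends only on a small neighborhood, so in principle all pixels can be computed at once; the filter and image below are arbitrary examples:

```python
import numpy as np

# 3x3 convolution: each output pixel depends only on its 3x3 neighborhood,
# so every pixel can be computed independently -- ideal for hardware
# pipelines that process one (or many) pixels per clock cycle.

def convolve3x3(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # every iteration of this double loop
        for j in range(w - 2):      # is independent of all the others
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])  # simple edge-detection filter
image = np.random.default_rng(1).random((64, 64))
edges = convolve3x3(image, edge_kernel)
print(edges.shape)  # (62, 62)
```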


Databases and Analytics receive increasingly complex workloads due to advances in ML, forcing the data center to evolve. Hardware acceleration provides solutions for both computing (for example, accelerators that, without touching code, speed up PostgreSQL by 5-50x or Apache Spark by up to 30x) and storage (via smart SSDs with FPGAs).


The large amount of data to be processed requires faster and more efficient storage systems. By moving information processing (compression, encryption, indexing) as close as possible to where the data resides, bottlenecks are reduced, freeing up the processor and reducing system power requirements.
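As a rough illustration of this point, consider a query that scans a large table but returns only the rows that match: filtering inside the storage device shrinks the data crossing the bus by the selectivity factor. All figures below are hypothetical:

```python
# Why near-data processing helps: a scan that returns only the matching 1%
# of rows moves 100x less data over the host link when the filter runs
# inside the storage device. Figures are illustrative assumptions.

SCAN_BYTES = 100e9          # data the query must scan (hypothetical)
SELECTIVITY = 0.01          # fraction of rows that actually match
BUS_GBYTES_PER_S = 8.0      # host link bandwidth (hypothetical)

t_host_side = SCAN_BYTES / (BUS_GBYTES_PER_S * 1e9)                # ship everything, filter on CPU
t_near_data = SCAN_BYTES * SELECTIVITY / (BUS_GBYTES_PER_S * 1e9)  # filter in the smart SSD

print(f"filter on host:  {t_host_side:.1f} s of bus transfer")
print(f"filter in drive: {t_near_data:.2f} s of bus transfer")
```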


Something similar happens with Network Acceleration, where information processing (compression, encryption, filtering, packet inspection, switching, and virtual routing) moves to where the data enters or leaves the system.


High-Performance Computing (HPC) is the practice of aggregating computing power to deliver much higher performance than a conventional PC can, in order to solve major problems in science and engineering. It spans everything from Human Genome sequencing to climate modeling.


In the case of Financial Technology, time is key to reducing risk, making informed business decisions, and providing differentiated financial services. Processes such as modeling, trading, valuation, and risk management, among others, can be accelerated.


With hardware acceleration, Tools and Services can be offered that process information in real time, helping to automate designs and shortening development times.
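Finally, here is the back-of-the-envelope sketch promised above for the "long enough to amortize access to the card" criterion: offloading pays off only when the kernel speedup outweighs the time spent moving data to and from the accelerator. The PCIe bandwidth and speedup figures are assumptions for illustration only:

```python
# Rough test of whether offloading a job to an accelerator card pays off:
# the kernel speedup must outweigh the time spent moving data over PCIe.
# Bandwidth and speedup figures are illustrative assumptions.

def offload_wins(data_bytes: float, t_cpu: float,
                 kernel_speedup: float, pcie_gbytes_per_s: float = 12.0) -> bool:
    """Compare CPU-only time against transfer time plus accelerated kernel time."""
    t_transfer = 2 * data_bytes / (pcie_gbytes_per_s * 1e9)  # data in, results out
    t_offload = t_transfer + t_cpu / kernel_speedup
    return t_offload < t_cpu

# A long-running, parallel job amortizes the transfer easily:
print(offload_wins(data_bytes=4e9, t_cpu=60.0, kernel_speedup=50))  # True
# A short job does not -- the transfer dominates:
print(offload_wins(data_bytes=4e9, t_cpu=0.5, kernel_speedup=50))   # False
```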

Summary
To briefly compare the two models: in Distributed Computing, more than one computer is interconnected in a network and the workload is distributed among all of them. This model, used for example by Apache Spark, is highly scalable, but its power consumption and physical footprint grow in proportion to the number of machines.

In Heterogeneous Computing, by contrast, the performance of a single computer is improved by adding hardware (for example, graphics cards such as GPGPUs, or FPGAs) that contributes specialized processing capabilities. The resulting acceleration rates depend on the type of application, ranging from roughly 1-10x (for example, with Databases) up to hundreds or even thousands of times (as with Machine Learning or financial algorithms).

Through profiling and an analysis that validates the feasibility of parallelizing the different processes and solutions, we can determine whether FPGA Hardware Acceleration is the ideal solution for your company, especially if it works with complex algorithms and large volumes of data.

In this way, your business can improve the user experience, offering faster and smoother service thanks to reduced process execution times; in addition, the lower TCO of its solutions makes budget control easier.