What is HPC?
Digitalfinca SL's business is to provide HPC cloud-based computing services and resources for clients worldwide.
High-performance cloud computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. In addition, even a single GPU-CPU framework provides advantages that multiple CPUs on their own do not, because each chip is specialized for its task.
Essentially, a GPGPU pipeline is a form of parallel processing between one or more GPUs and CPUs that analyzes data as if it were image or other graphical data. While GPUs operate at lower clock frequencies, they typically have many times more cores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU can. Migrating data into graphical form and then using the GPU to scan and analyze it can create a large speedup.
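To make the data-parallel idea concrete, here is a minimal sketch in Python. It uses NumPy's vectorized array operations as a stand-in for the GPU programming model (the function names and the brightening example are illustrative, not part of any real GPU library): instead of looping over elements one at a time as a serial CPU program would, a single operation is expressed over the whole array, which is exactly the shape of work a GPU kernel applies to every pixel simultaneously.

```python
import numpy as np

def brighten_loop(pixels, amount):
    # Serial, CPU-style processing: touch one element at a time.
    out = pixels.copy()
    for i in range(out.size):
        out.flat[i] = min(out.flat[i] + amount, 255)
    return out

def brighten_parallel(pixels, amount):
    # Data-parallel formulation: one operation over the entire array.
    # On a GPU, a kernel would apply this same expression to every
    # pixel at once across thousands of cores.
    return np.minimum(pixels + amount, 255)

# A small grayscale "image" of 64x64 pixels, all black.
image = np.zeros((64, 64), dtype=np.int64)

# Both formulations produce the same result; only the execution
# model differs.
assert np.array_equal(brighten_loop(image, 10), brighten_parallel(image, 10))
```

On real GPU hardware the same pattern would be written with a framework such as CUDA or a drop-in array library, but the key step is the one shown: recasting a per-element loop as a single whole-array operation.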
GPGPU pipelines were developed at the beginning of the 21st century for graphics processing (e.g., for better shaders). These pipelines were found to fit scientific computing needs well, and have since been developed in this direction.
Why use GPUs?
Data scientists in both industry and academia have been using GPUs for machine learning to perform image classification, video analytics, speech recognition and natural language processing. In particular, machine and deep learning are the computing areas of the future: they form the basis of artificial intelligence.
Although machine learning has been around for decades, two relatively recent trends have sparked widespread use of machine learning: the availability of massive amounts of training data, and powerful and efficient parallel computing provided by GPU computing.
GPUs are used to train these deep neural networks on far larger training sets, in an order of magnitude less time, using far less datacenter infrastructure. GPUs are also being used to run these trained machine learning models for classification and prediction in the cloud, supporting far more data volume and throughput with less power and infrastructure.
Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top-tier research institutions in data science and machine learning. With thousands of computational cores and 10-100x the application throughput of CPUs alone, GPUs have become the processor of choice for data scientists working with big data.
Why use Digitalfinca HPC Cloud Servers?
The Digitalfinca Cloud Servers now offer both virtual and dedicated machines with GPUs that can deliver dozens of teraFLOPS. Deep learning, physical simulation, and molecular modeling now take hours instead of days.
Our Cloud GPUs can accelerate clients' workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high-performance computing use cases.