Learning Process Optimization
Training neural networks typically takes a long time and requires the most advanced hardware. It also consumes a huge amount of energy.
Our engineers develop new approaches to machine learning algorithms, tailored to the most innovative, massively parallel computing platforms equipped with accelerators. Optimization is performed at a very low level of the algorithm, significantly improving its performance, reducing execution time and, most importantly, slashing energy consumption.
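As a toy illustration of what a low-level algorithmic optimization can look like (this is a generic sketch, not our actual production code), consider loop fusion: merging two separate passes over the data into one reduces memory traffic, which on modern hardware often dominates both runtime and energy cost.

```python
def stats_two_pass(xs):
    # Baseline: two full traversals of the data
    # (one for the sum, one for the sum of squares).
    s = sum(xs)
    sq = sum(x * x for x in xs)
    return s, sq

def stats_fused(xs):
    # Fused variant: a single traversal computes both results,
    # halving the number of times the data crosses the memory bus.
    s = 0.0
    sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    return s, sq
```

Both functions return identical results; only the memory-access pattern differs. The same principle, applied to accelerator kernels instead of Python loops, is where most of the performance and energy savings come from.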
In parallel, our team has been working on optimization methods based on efficient hardware management. The benefit of such an approach is that optimizations can be introduced without any modifications to the machine learning software. It builds on techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and Concurrency Throttling (CT). You can read more on our pages about high-performance computing (HPC).
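A minimal sketch of the Concurrency Throttling idea (illustrative only, not our tooling): cap the number of worker threads below the machine's core count, trading some throughput for lower power draw, without touching the workload's own code. DVFS works at an even lower level; on Linux, for example, CPU frequency governors are exposed under `/sys/devices/system/cpu/cpu*/cpufreq/`.

```python
import concurrent.futures
import os

def busy_work(n):
    # Placeholder CPU-bound task standing in for a training kernel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_throttled(tasks, max_workers):
    # Concurrency Throttling: the workload is unchanged; we only
    # limit how many threads may run it at once.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(busy_work, tasks))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    throttled = max(1, cores // 2)  # e.g. use half the available cores
    results = run_throttled([100_000] * 8, throttled)
    print(f"completed {len(results)} tasks with {throttled} workers")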