- we help run AI on small devices
- we help plan and build AI features for products
- we help build AI systems for distributed architectures via federated learning
- we build complete intelligent devices: hardware & software
We know the leading technologies and the hardware and software components in this space. We can advise on selecting optimal configurations, integrate the right baseline components, and help build the final product on top of them.
We know hardware solutions from Intel, NVIDIA, Qualcomm, Broadcom, Videantis, Lenovo, Basler, and many more. We are experienced in computer vision, machine learning, and deep learning, and have already delivered a variety of artificial intelligence solutions.
We enable on-device AI (AI on edge) through:
- Algorithm optimization and fine-tuning to take maximum advantage of the given processing units, accelerators, and overall hardware architecture.
- Optimal use of scarce resources by selecting appropriate data types.
- Implementation and development of energy-efficient techniques to reduce power consumption during real-time data processing.
- Development and deployment of pre-trained models.
- Bringing a near-real-time AI experience to small, embedded devices.
- Implementation of the right deployment strategy: on-device, in the cloud, or in a data center. Read our guideline here.
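To make the data-type point concrete: symmetric int8 quantization maps float32 weights onto 8-bit integers with a single per-tensor scale factor, cutting memory and bandwidth roughly 4x while keeping the reconstruction error within one quantization step. A minimal sketch in NumPy (the function names are our own, for illustration, and not taken from any specific framework):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: a tiny weight tensor round-tripped through int8.
w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

The reconstruction error of each weight is bounded by half a quantization step (`s / 2`), which is why low-precision inference usually costs little accuracy while fitting into the small memories of embedded accelerators.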
Making the Internet of SMARTER Things a reality.
Key benefits of deploying AI workloads on edge (IoT) devices:
- Enable scalability
(Decentralizing AI services makes it easier to expand IoT ecosystems)
- Enable a near-real-time AI experience
(By using modern low-power, high-performance, small-form-factor accelerators)
- Eliminate round-trip latencies
(Deploying AI directly on the device enables on-the-spot decisions)
- Eliminate issues caused by intermittent connectivity
(No need to send data from the device to external AI services and wait for results)
- Reduce bandwidth costs
(AI-enabled devices pre-process the data and send results, rather than raw data, to external services)
- Keep data local to the device
(With AI on the device, data can be sent to external storage selectively)
To explore the full potential of Intelligent Devices, we have also built a federated learning solution in our research lab. Find out more about it here. In short, it is an AI setting that leverages decentralized IoT configurations. Federated learning aggregates AI models trained on independent devices, and it becomes helpful when:
- we do not want to rely on connectivity and prefer to process data where it is created, or very close to it
- we want to significantly reduce the latency between events (i.e. signals detected in the data) and the related actions (i.e. the device's response).
It works in a loop:
- device(s): run machine learning on raw data to produce local models
- server: collects the local models, aggregates them, and sends an update back to the device(s)
- device(s): receive the update and include it in the next round of training, producing new local models.
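The loop above can be sketched in a few lines. This is an illustrative simulation, not our production system: three simulated devices fit a shared linear model on local data with gradient descent, and the server averages the local models weighted by dataset size (the federated averaging, or FedAvg, scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(global_model, x, y, lr=0.1, steps=50):
    """Device step: gradient descent on local data, starting from the global model."""
    w = global_model.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

def aggregate(local_models, sizes):
    """Server step: average local models, weighted by local dataset size (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_models, sizes))

# Three simulated devices observe noisy data from the same underlying model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    x = rng.normal(size=(40, 2))
    y = x @ true_w + rng.normal(scale=0.05, size=40)
    devices.append((x, y))

# The federated loop: local training, aggregation, redistribution.
global_w = np.zeros(2)
for _ in range(5):
    local_models = [local_train(global_w, x, y) for x, y in devices]
    global_w = aggregate(local_models, [len(y) for _, y in devices])
```

Note that only model parameters travel between device and server; the raw data never leaves the device, which is exactly the property that makes this setting attractive for connectivity- and privacy-constrained IoT deployments.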