Anari AI raises investment to optimize domain-specific workloads in the cloud



Anari AI has raised an investment round led by deep-tech fund Entrada Ventures and joined by the funds Tensor Ventures, Earlybird, Acequia Capital, TS Ventures, and Sukna Ventures, as well as individual investors Young Sohn, partner at Walden Catalyst and former CEO of Samsung; Joe Costello, former CEO of Cadence; and Sasha Ostojic, former VP of Software at Nvidia and partner at Playground Global.

“Anari AI offers a unique solution for the AI compute landscape. While CPUs, GPUs, and application-specific accelerators exist in the cloud, utilization of FPGAs for specific applications has been out of reach for many AI/ML application developers. The Anari AI team has built an easy-to-use software stack that makes customization of FPGAs possible in an environment familiar to AI/ML application developers. We are excited to join Anari AI on their journey as they continue to build out their impressive team and product,” said Alex Fang, managing partner at Entrada Ventures.

A new approach to chip design is possible thanks to Anari’s AIDA, a cloud-based AI platform that enables the design, integration, and composition of compute and software components. The brain of AIDA is Graphella, a graph-based neural net that analyzes, optimizes, and deploys the best combination of components based on the customer’s performance, cost, and scaling needs. Individual compute components (gears) are defined and designed in Python using Anari’s open-source framework, PyGears.
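
For illustration, here is a minimal sketch of what a compute component can look like in PyGears. It relies only on PyGears’ documented @gear decorator, the Uint type from pygears.typing, the dreg register stage from pygears.lib, and the | composition operator; the gear itself is a hypothetical example, not a component from Anari’s actual stack.

# A hypothetical "gear": a 16-bit stream passed through two register stages.
from pygears import gear
from pygears.lib import dreg
from pygears.typing import Uint


@gear
def two_stage_pipe(din: Uint[16]):
    # Gears compose with the | operator, much like Unix pipes,
    # so a hierarchical component is simply a chain of simpler gears.
    return din | dreg | dreg

Because components written this way remain ordinary Python, the flow stays accessible to ML and software engineers without a hardware-design background.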

Explaining why they invested in Anari AI, Martin Drdúl, co-founder of Tensor Ventures, says: “Currently, the path to a chip is very long and expensive, and the chip is often obsolete by the time it is launched. Anari AI has built a cloud platform that allows very quick design of a tailor-made chip: using the cloud and AI, it explores different architectures, tests them, and brings the design to life in a matter of weeks at a fraction of the cost.”

Anari AI intends to use the funding to improve the optimization and acceleration of domain-specific workloads in the cloud, using cutting-edge technology developed by its team of 20+ Ph.D. researchers, machine learning experts, and software engineers.

“Today, there are two options for accelerating domain-specific workloads. The first is to invest significant and complex software expertise to exploit mostly general-purpose hardware and get acceptable performance, throughput, and accuracy; the second is to invest significant hardware expertise and 2–3 years to design and build a custom hardware accelerator (chip) for a domain-specific workload. With Anari’s technology stack, by contrast, a custom compute cluster (System-on-Cloud) can be designed, verified, and deployed in a matter of weeks instead of years, at a fraction of the price,” said Jovan Stojanovic, co-founder and CEO at Anari AI.

Anari AI was founded in 2020 with the goal of providing an optimized approach to chip technology and cloud computing.



Published November 30, 2022
