New Investment Opportunity: NeuReality
Enabling simple and efficient AI deployment
- Proprietary AI-acceleration systems allow clients to scale AI usage, significantly cutting costs, lowering energy consumption and shrinking data center footprint
- Huge, growing market of cloud/data centers, near edge, and edge device chipsets projected to reach $75B by 2025
- Forecasting initial revenue of $2.5M from prototype in 2022
- Partnerships with IBM, Xilinx, a leading OEM and a large semiconductor corporation
- $20M Series A led by Cardumen Capital and Varana Capital
Dear Josb,
Artificial intelligence technology is disrupting almost every industry from agriculture to retail to transportation, but deployment is held back by hardware costs and software complexity.
OurCrowd is reinvesting in NeuReality, an early-stage Israeli startup aiming to disrupt the current approach to deploying AI with a new system architecture that can reduce the cost and energy consumption of AI systems by an order of magnitude. The details in this email are based on information received from, and verified solely by, the company.
The Problem
AI deployment, known as the inference stage, has unique data-processing requirements. The industry has invested heavily in developing new and better AI chips including deep learning accelerators (DLAs), but they operate in architectures that are heavily dependent on CPU and networking chips.
A system board with $4,000 worth of AI chips needs $12,000 worth of computation/networking chips to support them, and the CPU is often the bottleneck, limiting the output of the AI-specialized chips.
The Solution
NeuReality's technology is based on a completely new AI-centric system architecture, optimized for AI processing and replacing the traditional CPU-centric support system.
NeuReality plans to reduce this $12,000 price tag to $1,400 while providing better support to the AI chips, yielding improved performance. In addition to the hardware cost savings, the overall system will consume far less energy and occupy a smaller data center footprint, generating further savings.
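As a back-of-the-envelope check, here is a minimal sketch of what those quoted figures imply for a single system board, using only the dollar amounts stated in this email (the $4,000 AI-chip figure, the $12,000 support cost today, and NeuReality's $1,400 target):

```python
# Cost comparison using the figures quoted in this email.
# All dollar amounts are the email's own illustrative numbers, not measured data.
ai_chips = 4_000             # AI accelerator chips per board
support_today = 12_000       # CPU/networking support chips, current architecture
support_neureality = 1_400   # NeuReality's targeted support-chip cost

total_today = ai_chips + support_today        # $16,000 per board today
total_new = ai_chips + support_neureality     # $5,400 per board targeted
board_savings = 1 - total_new / total_today

print(f"Support chips drop to {support_neureality / support_today:.0%} of today's cost")
print(f"Whole-board cost falls by about {board_savings:.0%}")
```

On these numbers, the support-chip cost falls by roughly an order of magnitude, consistent with the 10X efficiency claim, while the total board cost falls by about two-thirds (since the AI chips themselves are unchanged).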
NeuReality also addresses the software complexity barrier with a software development kit that will make it much easier for data scientists and DevOps engineers to deploy AI.
The Market
The market for AI chipsets is large and growing. Omdia estimates that by 2025 the total addressable market will reach $75B, with applications in cloud/data centers, near edge, and edge applications.
Traction
NeuReality has partnership agreements with IBM Cloud, Xilinx, and others, including a leading OEM and a large semiconductor corporation. The company forecasts initial sales of $2.5M in 2022, based on its prototype released in 2021, growing to $30M in 2024.
The Round
This $20M Series A round is led by Cardumen Capital and Varana Capital. Funds from this round will be used for R&D and staffing, including all the design and preparation activities for its NR1 product.
Meet the CEO
We're hosting a webinar/conference call on Monday, March 7th, at 7PM Israel / 12 Noon New York / 9AM San Francisco for investors to meet CEO Moshe Tanach and learn more about NeuReality.
Can't make the webinar? Register and we will send you a recording of the call.
The NeuReality Solution
In July 2021, NeuReality signed a partnership with IBM Cloud in which the two companies collaborate to develop the NR1 as the inference solution in IBM Cloud.
The NR1 is an AI-centric network addressable processing unit (NAPU). The NR1 hardware connects directly to AI chips (DLAs), effectively replacing the need for a CPU and the overhead costs associated with CPU-centric servers. The NR1 system-on-a-chip is intended to become the heart of AI-centric servers and is expected to offer a 10X improvement in cost and energy efficiency compared with today's CPU-centric architectures. The NR1 software development kit is optimized for higher efficiency and a simple user experience for data scientists and DevOps engineers.
In early 2021, NeuReality released its first prototype, the NR1-P, an inference platform connected to Xilinx's DLA based on field-programmable gate array (FPGA) technology. The company reports 3X better performance for the NR1-P compared with CPU-centric systems and is demonstrating the technology to potential customers and partners.