World’s first analog AI chip

Austin-based Mythic has launched the Mythic Analog Matrix Processor (Mythic AMP), a single-chip analog computing device. The M1076 AMP uses the Mythic Analog Compute Engine (ACE) to deliver GPU-class compute resources at up to one-tenth of the power consumption.

Drawing just 3 watts, the M1076 can perform up to 25 trillion operations per second (25 TOPS). The new product line spans a single chip, an M.2 card for small-footprint applications, and a PCIe card carrying up to 16 chips. Edge devices can now run complex AI applications at higher resolutions and frame rates, yielding better inference results, because computation happens in the same place the data is stored.
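As a rough sanity check on the figures quoted above, the per-chip power efficiency works out to about 8.3 TOPS per watt. The assumption below that a 16-chip PCIe card scales throughput and power linearly is an illustration, not a vendor specification:

```python
# Back-of-the-envelope efficiency from the figures quoted above.
tops = 25.0   # peak throughput per chip, trillions of ops/second
watts = 3.0   # power draw per chip

tops_per_watt = tops / watts
print(f"{tops_per_watt:.1f} TOPS/W")  # ~8.3 TOPS/W

# Assuming a 16-chip card scales linearly (same per-chip efficiency):
print(f"16 chips: {16 * tops:.0f} TOPS at {16 * watts:.0f} W")
```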

Last month, Mythic raised $70 million in Series C funding, in a round led by BlackRock with Hewlett Packard Enterprise (HPE) as co-lead.

Why Mythic AMP?

In a traditional computer, the processor and memory are separate: programs and data live in DRAM, and data is shuttled to the CPU as needed. Over the years, processor speeds have increased dramatically, while advances in memory have mostly focused on density – storing more data in less space – rather than on transfer rates, resulting in latency.

Simply put, a processor of any speed has to sit idle while retrieving data from memory, so performance is bounded by the transfer rate – the von Neumann bottleneck. By merging compute and memory into a single device, analog AI eliminates this bottleneck, resulting in dramatic performance gains. With no data in transit, tasks also complete in a fraction of the time and with far less energy.
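A toy calculation makes the bottleneck concrete. All of the numbers below except the 25 TOPS figure are illustrative assumptions (a 50 MB model, a 25 GB/s memory bus, weights re-fetched each inference), not measurements of any real device:

```python
# Hypothetical numbers (assumptions, not measurements) showing why moving
# weights dominates when compute and memory are separate devices.
weights_bytes = 50e6   # assumed 50 MB of model weights
dram_bw = 25e9         # assumed 25 GB/s memory bandwidth
throughput = 25e12     # 25 TOPS, the figure quoted for the M1076

fetch_time = weights_bytes / dram_bw  # time to stream the weights once
ops = 2 * weights_bytes               # ~1 multiply + 1 add per weight
compute_time = ops / throughput

print(f"fetch:   {fetch_time * 1e6:.0f} us")    # ~2000 us
print(f"compute: {compute_time * 1e6:.0f} us")  # ~4 us
```

Under these assumptions, fetching the weights takes roughly 500 times longer than the arithmetic itself, which is exactly the gap that compute-in-memory removes.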

Each Mythic ACE comes with a digital subsystem that includes a 32-bit RISC-V nano-processor, 64KB of SRAM, a SIMD vector engine, and a high-speed network-on-chip (NoC) router. The analog matrix processor delivers power-efficient AI inference at up to 25 TOPS. “Cutting-edge devices can now deploy powerful AI models without the challenges of high power consumption, thermal management and form factor constraints,” according to the company.

AI at the edge

Mythic’s main focus is AI deployment at the edge, though the company also provides server-class computing for data centers. Enterprises can use edge AI to deploy ML models that run locally on edge devices. However, edge AI faces some challenges:

  • Low power: Device wattage and the associated heat increase as new functions and capabilities are added. Some devices are powered over Ethernet (PoE) with tight power budgets. Devices should deliver high performance even at 0.5–2 W, draw as close to zero power as possible when idle, and switch between these modes quickly and easily.
  • Small size: AI algorithms running at the data source have minimal latency and no loss of precision from video compression, so there is no need for large PCIe cards, big heat sinks, or fans. Ideally, the whole system fits on a single M.2 A+E card (22mm x 30mm). Even with larger PCIe cards, the size of the accelerator and its cooling solution determines how much AI processing can be packed in.
  • Cost-effectiveness: Delivering high-performance computing at an affordable price gives customers the freedom to scale with demand.

To date, the company has raised $165.2 million to make powerful AI easy and cost-effective to deploy for smart homes, smart cities, augmented and virtual reality, drones, video surveillance and even manufacturing.



Kumar Gandharv

Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is embarking on a journey as a technical journalist at AIM. A keen observer of national and IR news. He loves going to the gym. Contact: [email protected]

