News Coverage

2025 Outlook with Veerbhan Kheterpal of Quadric

(February 18, 2025) SemiWiki - 2024 was a year of tremendous momentum building for Quadric. We introduced and began customer deliveries of the 2nd generation of our Chimera GPNPU processor, which now scales to over 800 TOPs. We also dramatically expanded the size of our model zoo (available in our online DevStudio at our quadric.io website) from only 20 models at the start of the year to over 200 by the close of 2024, thanks to the rapid maturation of our compiler stack.

Vision Language Models Come Rushing In

(February 17, 2025) Semiconductor Engineering - The rapid emergence of Vision Language Models (VLMs) in the automotive/ADAS sector is one of those under-the-public-radar changes shaking up a different industry...Vision Language Models are multimodal AI systems built by combining a large language model (LLM) with a vision encoder, giving the LLM the ability to “see.”
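The composition described above can be summarized in a minimal sketch: a vision encoder turns an image into patch embeddings, a small projector maps them into the LLM's token-embedding space, and the combined sequence feeds the language model. All shapes, names, and weights below are toy assumptions for illustration, not any particular VLM or Quadric implementation.

```python
# Illustrative sketch of how a Vision Language Model is typically assembled.
# All dimensions and random weights here are assumptions, not real model values.
import numpy as np

EMB_VISION = 256   # assumed vision-encoder output width
EMB_LLM = 512      # assumed LLM hidden width

rng = np.random.default_rng(0)

def vision_encoder(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Stand-in ViT-style encoder: split the image into patches and embed each one."""
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    w_embed = rng.standard_normal((patches.shape[1], EMB_VISION)) * 0.02
    return patches @ w_embed            # (num_patches, EMB_VISION)

def project_to_llm(vision_tokens: np.ndarray) -> np.ndarray:
    """The adapter that lets the LLM 'see': map vision embeddings into LLM space."""
    w_proj = rng.standard_normal((EMB_VISION, EMB_LLM)) * 0.02
    return vision_tokens @ w_proj       # (num_patches, EMB_LLM)

def build_llm_input(image: np.ndarray, text_token_embeddings: np.ndarray) -> np.ndarray:
    """Prepend projected image tokens to the text prompt tokens."""
    image_tokens = project_to_llm(vision_encoder(image))
    return np.concatenate([image_tokens, text_token_embeddings], axis=0)

# Toy usage: a 224x224 RGB frame plus an 8-token text prompt.
frame = rng.random((224, 224, 3))
prompt = rng.standard_normal((8, EMB_LLM))
print(build_llm_input(frame, prompt).shape)  # (196 + 8, 512): image tokens then text tokens
```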

MACs Are Not Enough: Why “Offload” Fails

(January 16, 2025) Semiconductor Engineering - True, the idea of a partitioned (programmable core) + (MAC tensor engine) worked quite well in 2021 to run classic CNNs from the ResNet era (circa 2015-2017). But we have argued that modern CNNs and all new transformers are composed of much more varied ML network operators – and far more complex, non-MAC functions – than the simple eight (8) operator ResNet-50 backbone of long ago.

Get Ready for a Shakeout in Edge NPUs

(November 20, 2024) SemiWiki - Survival of the fittest is likely to play out even faster here than it did around a much earlier proliferation of CPU platforms. We still need competition between a few options, but the current Cambrian explosion of edge NPUs must come to an end fairly quickly, one way or another.

To (B)atch Or Not To (B)atch?

(November 18, 2024) Semiconductor Engineering - If you are comparing alternatives for an NPU selection, give special attention to clearly identifying how the NPU/GPNPU will be used in your target system, and make sure the NPU/GPNPU vendor is reporting the benchmarks in a way that matches your system needs. Many NPU vendors will only report their batched results, because there can be substantial inferences-per-second differences between Batch-1 and Batch-N calculations.
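A toy back-of-the-envelope sketch shows why the two numbers are not interchangeable: batching amortizes fixed per-run overhead across N inferences, so reported inferences-per-second rise, but the latency to any single result also rises. The overhead and compute times below are assumed values, not vendor benchmarks.

```python
# Toy model (assumed numbers): Batch-1 vs Batch-N throughput and latency.
def throughput_and_latency(fixed_overhead_ms: float,
                           per_image_compute_ms: float,
                           batch: int) -> tuple[float, float]:
    """Return (inferences per second, milliseconds until the first result)."""
    batch_time_ms = fixed_overhead_ms + per_image_compute_ms * batch
    ips = 1000.0 * batch / batch_time_ms
    return ips, batch_time_ms

for n in (1, 4, 16):
    ips, latency = throughput_and_latency(fixed_overhead_ms=5.0,
                                          per_image_compute_ms=2.0, batch=n)
    print(f"Batch-{n}: {ips:6.1f} inferences/sec, {latency:5.1f} ms to first result")

# Batch-1:  142.9 inferences/sec,  7.0 ms to first result
# Batch-16: 432.4 inferences/sec, 37.0 ms to first result
# A camera pipeline that must act on every frame as it arrives effectively
# sees the Batch-1 number, however impressive the batched figure looks.
```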

Denso partners with Quadric to develop NPU

(November 6, 2024) Electronics Weekly - The agreement means Denso will acquire the IP core license for Quadric’s Chimera general purpose NPU (GPNPU). Both companies will co-develop IP for an in-vehicle semiconductor for use in intelligent vehicle systems (ADAS and AD/self-driving) and inter-vehicle and cloud communications.

In Memory, At Memory, Near Memory: What Would Goldilocks Choose?

(October 17, 2024) Semiconductor Engineering - Just as the children’s Goldilocks fable always presented a “just right” alternative, the At-Memory compute architecture is the solution that is Just Right for edge and device SoCs... The best alternative for SoC designers is a solution that both takes advantage of small local SRAMs – preferably distributed in large numbers among an array of compute elements – and intelligently schedules data movement between those SRAMs and the off-chip storage of DDR memory in a way that minimizes system power consumption and data access latency.
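The benefit of that scheduling can be seen with a very simple traffic count: if a tile of activations is fetched into local SRAM once and fully reused there, DDR only sees one read plus one write-back, instead of a re-read on every reuse. The tensor size and reuse factor below are assumed illustration values, and the model deliberately ignores weights and partial sums.

```python
# Toy model (assumed numbers): DDR traffic with and without an at-memory, tiled schedule.
def ddr_traffic_mb(tensor_mb: float, reuse: int, tiled: bool) -> float:
    """Megabytes moved over DDR: untiled re-reads the tensor on every reuse,
    a tiled schedule reads it into local SRAM once and reuses it there."""
    reads = tensor_mb if tiled else tensor_mb * reuse
    return reads + tensor_mb          # plus one write-back of the results

print(ddr_traffic_mb(4.0, reuse=8, tiled=False))  # 36.0 MB over DDR
print(ddr_traffic_mb(4.0, reuse=8, tiled=True))   #  8.0 MB over DDR
```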

Mass Customization For AI Inference

(October 17, 2024) Semiconductor Engineering - "It is simply infeasible to build hard-wired logic to accelerate such a wide variety of networks comprised of hundreds of different variants of AI graph operators. SoC architects as a result are searching for more fully programmable solutions, and most internal teams are looking to outside third-party IP vendors that can provide the more robust compiler toolsets needed to rapidly compile new networks, rather than the previous labor-intensive method of hand-porting ML graphs."

Can You Rely Upon Your NPU Vendor To Be Your Customers’ Data Science Team?

(September 12, 2024) Semiconductor Engineering - The biggest mistake a chip design team can make in evaluating AI acceleration options for a new SoC is to rely entirely upon spreadsheets of performance numbers from the NPU vendor without going through the exercise of porting one or more new machine learning networks themselves using the vendor toolsets.

A New Class of Accelerator Debuts

(July 22, 2024) SemiWiki - Steve Roddy (VP Marketing for Quadric) tells me that in a virtual benchmark against a mainstream competitor, Quadric’s QC-Ultra IP delivered 2X more inferences/second/TOPs for a lower off-chip DDR bandwidth and at less than half the cycles/second of the competing solution. Quadric is now offering three platforms for the mainstream NPU market segment: QC Nano at 1-7 TOPs, QC Perform at 4-28 TOPs, and QC Ultra at 16-128 TOPs. That high end is already good enough to meet AI PC needs. Automotive users want more, especially for SAE-3 to SAE-5 applications. For this segment Quadric is targeting its QC-Multicore solution at up to 864 TOPs.
