News Coverage 

Silicon 100: Startups Worth Watching in 2025

(July 2025) EE Times - For the second year in a row, Quadric ranked in the top 100 silicon startups. "Founded in 2016, Quadric.io is a semiconductor IP company specializing in optimized machine-learning processors for on-device AI inference. The Chimera general-purpose neural processing unit integrates neural network processing with classical DSP and control algorithms in a unified hardware/software architecture. In a multicore configuration, Chimera scales to hundreds of TOPS. The company has raised a total of approximately $50 million in equity funding."

Quadric and Denso Team Up to Progress Automotive AI Semiconductors for 2025

(July 7, 2025) Market Research Future - Quadric's collaboration with DENSO marks a pivotal shift in DENSO's approach to developing automotive semiconductor technology. With Quadric's focus on high-performance edge processing, DENSO can create semiconductor solutions that meet the rising computational needs of modern automobiles. This aligns with the ever-increasing industry shift toward automating vehicles with AI for improved safety, efficiency, and user experience.

RISC-V’s Increasing Influence

(June 12, 2025) Semiconductor Engineering - RISC-V is simply another control CPU in the same vein as the Arm, x86, MIPS, Xtensa, and ARC processors. The latter two also provide the designer with instruction-set customization ability similar to, and superior to, that of RISC-V. As such, RISC-V offers nothing of a technical nature that is leaps and bounds better than its predecessors.

The Best DRAMs For Artificial Intelligence

(June 12, 2025) Semiconductor Engineering - We’ve not seen any planned uses of HBM outside of the data center — not even in the high-end automotive market. Car companies building high-end SAE Level 4 advanced driver-assistance systems (ADAS) want silicon solutions that are air-cooled and cost less than four figures. They cannot accommodate 1,000-watt modules that cost $10,000 or more.

Legacy IP Providers Struggle to Solve the NPU Dilemma

(June 11, 2025) SemiWiki - The Chimera GPNPU from Quadric runs all AI/ML graph structures. The revolutionary Chimera GPNPU processor integrates fully programmable 32-bit ALUs with systolic-array-style matrix engines in a fine-grained architecture. It packs up to 1,024 ALUs in a single core, with only one instruction fetch and one AXI data port. That’s over 32,000 bits of parallel, fully programmable performance.
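A quick arithmetic check of that figure, as a sketch only: the ALU count and width below are taken from the quote above, and nothing else is assumed about the hardware.

    #include <cstdio>

    int main() {
        const int alus_per_core = 1024;  // maximum ALUs in one Chimera core, per the quote
        const int bits_per_alu  = 32;    // fully programmable 32-bit ALUs, per the quote
        std::printf("parallel datapath width: %d bits\n", alus_per_core * bits_per_alu);
        return 0;  // prints 32768, i.e. the "over 32,000 bits" cited above
    }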

Connecting AI Accelerators

(June 4, 2025) Semiconductor Engineering - We’re speaking with auto OEMs who are talking about putting three petaOPS of compute in a car for real L5. That used to be a supercomputer 5 or 10 years ago. They would light up a petaOP supercomputer. By 2035 you may be driving that thing. So it’s all about scale, system, and specialization. That means there are lots of different interconnect technologies that are going to come into play.

Future-proofing AI Models

(May 21, 2025) Semiconductor Engineering - Clearly, in data centers and automotive, along with some other segments, people are trying to do 20 TOPS in the base configuration and modularly add on 100. How do I make that happen? If you have multiple chips and spread the execution of a model over two or three chips that are modularly added on, they are all going to memory. You’ve got to synchronize those memory accesses, and that means the systems become quite complex.
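A minimal sketch of the synchronization problem described above, in hypothetical C++ rather than any vendor's runtime: one inference pass is split across two chips, and the second chip must wait until the first chip's activations are fully written before reading them. All names and the placeholder compute are invented for illustration.

    #include <cstddef>
    #include <future>
    #include <vector>

    using Tensor = std::vector<float>;

    // Stand-in for executing one partition of the graph on a given chip.
    Tensor run_partition_on_chip(int chip_id, const Tensor& input) {
        Tensor out(input.size());
        for (std::size_t i = 0; i < input.size(); ++i)
            out[i] = input[i] * static_cast<float>(chip_id + 1);  // placeholder compute
        return out;
    }

    Tensor run_split_inference(const Tensor& input) {
        // Chip 0 runs the first half of the model asynchronously.
        auto stage0 = std::async(std::launch::async, run_partition_on_chip, 0, input);

        // Synchronization point: the intermediate activations must be completely
        // written before chip 1 is allowed to read them. On real hardware this
        // would be a memory fence or doorbell rather than a future.
        Tensor intermediate = stage0.get();

        // Chip 1 runs the remaining layers on the synchronized buffer.
        return run_partition_on_chip(1, intermediate);
    }

    int main() {
        Tensor result = run_split_inference(Tensor(16, 1.0f));
        return result.empty() ? 1 : 0;
    }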

AI Accelerators Moving Out From Data Centers

(May 15, 2025) Semiconductor Engineering - People are looking at flexibility and scalability. There are a heck of a lot of systems and applications where companies want to build base-model silicon with the bare minimum amount of AI they need, and some ability to scale up, whether it be a second chip or a chiplet. We clearly see that in automotive, where you’ve got the $100,000 car, the $50,000 car, and the entry-level car, and people want to invest in a single platform and have some scalability. But you see it also in things like AI PCs and security camera-type applications.

Recent AI Advances Underline Need to Futureproof Automotive AI

(April 28, 2025) SemiWiki - The takeaway is that a model as advanced as BEVDepth, supported by a key function written in CUDA, was easily mapped over to the Quadric platform and ran twice as fast as the same function running on an Nvidia chip, at substantially lower power. Faster, of course, because Chimera is designed for IoT inferencing rather than heavy-duty training. Much lower power for the same reason. And programming is easily managed by an OEM or Tier 1 C++ programmer.
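The article does not reproduce the BEVDepth kernel or Quadric's SDK, so the following is only an illustrative sketch of the general pattern being described: a simple CUDA-style elementwise kernel re-expressed as portable C++ that an OEM or Tier 1 programmer could compile for an embedded target. The function and parameter names are invented.

    #include <cstddef>

    // CUDA original (conceptually):
    //   __global__ void scale_add(const float* a, const float* b, float* out, float k, int n) {
    //       int i = blockIdx.x * blockDim.x + threadIdx.x;
    //       if (i < n) out[i] = k * a[i] + b[i];
    //   }

    // Portable C++ equivalent: the per-thread body becomes a loop over elements,
    // which a vectorizing compiler (or a wide parallel ALU array) can spread out.
    void scale_add(const float* a, const float* b, float* out, float k, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = k * a[i] + b[i];
    }

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
        scale_add(a, b, out, 2.0f, 4);
        return 0;
    }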

AI Drives Re-Engineering Of Nearly Everything In Chips

(April 24, 2025) Semiconductor Engineering - For a lot of the architectures for deploying inference models, people have chosen very inflexible, fixed-function AI accelerators, and that’s the trap. If you look at the set of models today and try to build something that accelerates those and makes them low-power and efficient, and then the state-of-the-art model changes in two years, you could be in trouble. You could wind up with a chip that you spent a lot of money developing, and it can’t run the latest thing, and now you’re dead in the water.

