About Quadric

Democratizing AI Inference Chips. Flexible AI acceleration for any model, fully programmable by you.

AI is shifting to the device edge, and fixed-function accelerators can’t keep pace. Quadric’s unified, programmable AI processor IP unlocks massive addressable markets—from automotive and industrial to consumer electronics and emerging AI PCs.

Investors: Our Company, Our Markets, Our Impact

Quadric, Inc. is a semiconductor IP licensing company. We deliver the blueprints for efficient, flexible AI processors to a wide range of customers designing chips for varied applications. Our licensees include both semiconductor companies that sell to broad user bases and systems companies designing chips for their own internal use. We focus on solutions for AI inference in devices and on the edge – the deployment of AI models in real-world situations. We have design wins spanning the industry’s widest performance range, from 1 Trillion Operations Per Second (TOPS) to hundreds of TOPS, in markets as varied as automotive ADAS systems, AI PCs, industrial sensors, robotics, office automation equipment and more.

The fully programmable nature of our Chimera processor is what distinguishes Quadric’s General Purpose Neural Processing Units (GPNPUs) from competing NPU accelerator solutions. Unlike NPUs that are programmed in idiosyncratic hardware command streams or hardware-centric assembly code, Chimera processors are programmed using common AI graph formats, C++, and Python, languages known to and used by millions of programmers worldwide; hence the tagline “democratizing AI chips”. Quadric addresses a broader Served Available Market (SAM) than competing IP licensors, giving us a greater likelihood of becoming a widely adopted market leader.
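As a rough illustration of what "common AI graph formats" means in practice, the sketch below exports an ordinary PyTorch model to ONNX, a widely used portable graph format. The model, file names, and toolchain steps here are illustrative assumptions, not a description of Quadric's actual SDK; the point is simply that a graph-programmable core can consume a standard artifact like this rather than a vendor-specific command stream.

```python
# Illustrative only: export a small model to ONNX, a common AI graph format.
# Nothing here is Quadric-specific; names and shapes are assumptions.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.head = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = self.act(self.conv(x))
        return self.head(x.flatten(1))

model = SmallNet().eval()
dummy = torch.randn(1, 3, 32, 32)

# The exported graph (small_net.onnx) is the kind of portable artifact a
# graph-programmable processor toolchain can take as its input.
torch.onnx.export(model, dummy, "small_net.onnx", opset_version=17)
```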

Executive Decision Makers

Future-Proofing AI Investment: Design Once, Update Forever
Smarter ROI in AI: Design for Today, Programmable for Tomorrow

The two key determinants of the return on investment in an AI-enabled chip design are (1) how many AI models can run at high performance on the device, including models not yet invented; and (2) whether your team and your downstream users can easily port new models, or will remain forever dependent on the NPU vendor’s team to respond to market changes. Quadric Chimera processors ensure that design projects undertaken today will have a longer lifespan and serve more use cases than designs built around fixed-function AI accelerator blocks, which must be repeatedly redesigned to handle newer models and offer no upgrade path for products already shipping. No one can predict which new AI model will take the world by storm in three years, but with a Chimera processor powering your project’s AI capabilities you know your chip will be ready for every change that arrives.

Technologists & SoC Architects

Reimagining AI Compute: One Architecture for Every Layer
Solving the Graph-Partitioning Problem Once and for All

Quadric is the only licensor of AI acceleration NPU solutions that (1) did not start with an existing legacy CPU, DSP or GPU and add a bolt-on matrix accelerator, and (2) provides a single processor pipeline that runs all layers of AI models – scalar, vector and matrix. The result is the Chimera AI processor core, which does not suffer from the graph-partitioning problem that handicaps all other NPU options on the market.
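To make the graph-partitioning problem concrete, the sketch below mimics how a system built around a fixed-function NPU must split a model by operator support and pay a data handoff at every partition boundary, while a single unified scalar/vector/matrix pipeline runs every layer in place. The operator names, the "supported" set, and the cost model are made-up illustrations, not drawn from Quadric's tooling or any vendor's compiler.

```python
# Illustrative only: why splitting a graph between an NPU and a fallback
# processor costs extra data movement. Operators and the supported set
# are hypothetical, not taken from any real toolchain.

MODEL_OPS = ["conv", "relu", "custom_norm", "conv", "argmax_decode", "softmax"]
NPU_SUPPORTED = {"conv", "relu", "softmax"}  # fixed-function NPU covers only these

def partitioned_schedule(ops):
    """Assign each op to the NPU if supported, else to a CPU/DSP fallback,
    counting the tensor handoffs at every partition boundary."""
    schedule, handoffs, prev = [], 0, None
    for op in ops:
        target = "NPU" if op in NPU_SUPPORTED else "CPU/DSP fallback"
        if prev is not None and target != prev:
            handoffs += 1  # intermediate tensor crosses a memory boundary
        schedule.append((op, target))
        prev = target
    return schedule, handoffs

def unified_schedule(ops):
    """A single scalar/vector/matrix pipeline runs every layer in place."""
    return [(op, "unified core") for op in ops], 0

part, cost = partitioned_schedule(MODEL_OPS)
print("partitioned:", part, "handoffs:", cost)   # 4 boundary crossings
print("unified:", unified_schedule(MODEL_OPS))   # 0 boundary crossings
```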

Industry’s first GPNPU

Artificial Intelligence (AI) enhances the functionality of devices used in many applications – autonomous vehicles, industrial robots, remote controls, game consoles, smartphones, and more. Machine Learning (ML) is a subset of the broader category of AI. By using ML models, which are trained by sifting through enormous amounts of historical data to discover patterns, devices can perform amazing tasks without being explicitly programmed.

ML models are created using known, labeled datasets (the training phase) and are subsequently used to make predictions when presented with new, unknown data in a live deployment scenario (inference). Because enormous computing resources are required for both training and inference, a dedicated AI/ML compute resource is now vital in most new systems to handle AI workloads.
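As a minimal illustration of that training-versus-inference split, the sketch below fits a small classifier on labeled data and then uses it to predict on samples it has never seen. The library and toy dataset are chosen purely for illustration and are not implied by the source.

```python
# Illustrative only: the same model object is first trained on labeled data,
# then used for inference on new, unseen inputs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)         # training phase: learn patterns from labeled data

predictions = clf.predict(X_new)  # inference phase: predict on new data
print(predictions[:10])
```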

For most high-volume consumer products, chips are designed with tight cost, power, and size limitations. This is the market Quadric serves with innovative semiconductor intellectual property (IP) building blocks. Our AI-optimized Chimera processors allow companies to rapidly build leading-edge SoCs and more easily write application code for those chips.
After proving the innovative Chimera architecture in 2021 by producing a test chip, Quadric introduced its first licensable IP product – the industry’s first GPNPU (General Purpose Neural Processing Unit) – in November 2022 and began product deliveries in Q2 2023.

In 2024, Quadric introduced its third-generation Chimera AI processors with more feature options as well as automotive safety-grade variants. With more than 1,000 hardware configurations, there is a Chimera processor suited to every high-volume inference application.