Future-Proof AI
Inference IP

Scales from 1 to 864 TOPs
Serves All Markets - Includes a Safety-Enhanced Version for Automotive Designs
Runs Classic ML Models, LLMs, Transformers - Every Operator, Every Network!

SIMPLIFY YOUR SOC DESIGN

And Speed Up Porting of New ML Models

Quadric has the leading processor architecture optimized for on-device artificial intelligence computing. Only the Quadric Chimera GPNPU delivers high ML inference performance and also runs complex C++ code without forcing the developer to artificially partition code between two or three different kinds of processors.

Quadric’s Chimera GPNPU is a licensable processor that scales from 1 to 864 TOPs. Chimera GPNPUs run all ML networks - including classical backbones, vision transformers, and large language models.


Design Your SoC Faster with the Chimera GPNPU

One architecture for ML inference plus pre- and post-processing simplifies SoC hardware design and radically speeds up ML model porting and software application programming.

THREE REASONS TO CHOOSE THE CHIMERA GPNPU

1. Handles matrix and vector operations and scalar (control) code in one execution pipeline. No need to artificially partition application code (C++ code, ML graph code) between different kinds of processors.
2. Runs every kind of model - from classic backbones, to vision transformers, to Large Language Models (LLMs).
3. Scales up to 864 TOPs. A solution for every application segment - including ASIL-ready cores for automotive applications.
Find out more about the Chimera GPNPU

Quadric Developer Studio

Quadric’s DevStudio provides easy simulation of AI software and visualization of SoC design choices.
Learn more about Developer Studio

Quadric Insights

Evaluating AI/ML Processors – Why Batch Size Matters

If you are comparing alternatives for an NPU selection, give special attention to clearly identifying how the NPU/GPNPU will be used in your target system, and make sure the NPU/GPNPU vendor is reporting the benchmarks […]

Read More
In-At-Near?  The NPU Style Debate – Fairy Tale Version

There are a couple of dozen NPU options on the market today.  Each with competing and conflicting claims about efficiency, programmability and flexibility.  One of the starkest differences among the choices is the seemingly simple […]

Read More
Can You Rely Upon your NPU Vendor to be Your Customers’ Data Science Team?

The biggest mistake a chip design team can make in evaluating AI acceleration options for a new SoC is to rely entirely upon spreadsheets of performance numbers from the NPU vendor without going through the […]

Read More
Explore more Quadric blogs

© Copyright 2024 Quadric. All Rights Reserved. Privacy Policy
