Machine Learning Inference IP 

Quadric's Chimera General Purpose Neural Processing Unit (GPNPU)

Faster Porting - For ANY Machine Learning Model
Faster Time to Market
Runs Classic ML Models, Runs LLMs, Runs Transformers - Runs Everything!
Porting Time in Weeks

Simplify Your SoC Design

And Speed Up Porting of New ML Models

Quadric's Chimera is the leading processor architecture optimized for on-device artificial intelligence computing. Only the Quadric Chimera GPNPU delivers high ML inference performance while also running complex C++ code, without forcing developers to artificially partition an application between two or three different kinds of processors.

Quadric’s Chimera GPNPU is a licensable processor that scales from 1 to 16 TOPS. Chimera GPNPUs run all types of ML networks, including classical backbones, vision transformers, and large language models.

Design your SoC faster with the Chimera GPNPU

Traditional Design

Quadric GPNPU Design

One architecture for ML inference plus pre- and post-processing simplifies SoC hardware design and software programming.

Three Reasons to Choose the Chimera GPNPU

1
Handles matrix and vector operations and scalar (control) code in one execution pipeline. No need to artificially partition application code (C++ code, ML graph code) between different kinds of processors.
2
Runs every kind of model, from classic backbones to vision transformers to Large Language Models (LLMs).
3
Scales from 1 to 16 TOPS in a single core; multicore designs scale to more than 100 TOPS.
Find out more about the Chimera GPNPU

Quadric Developer Studio

Quadric’s DevStudio provides easy simulation of AI software and visualization of SoC design choices.
Learn more about Developer Studio

Quadric Insights

KANs Upend the AI/ML Scene, and We’re Ready

What’s the biggest challenge for AI/ML? Power consumption. How are we going to meet it? In late April 2024, a novel AI research paper was published by researchers from MIT and Caltech proposing a fundamentally […]

Read More
Fallback Fails. Massive Failure!

Not just a little slowdown. A massive failure! Conventional AI/ML inference silicon designs employ a dedicated, hardwired matrix engine – typically called an “NPU” – paired with a legacy programmable processor – either a […]

Read More
New ML Networks Far Outperform Old Standbys

The ResNet family of machine learning algorithms, introduced to the AI world in 2015, pushed AI forward in new ways. However, today’s leading edge classifier networks – such as the Vision Transformer (ViT) family – […]

Read More
Explore more Quadric blogs

© Copyright 2024 Quadric. All Rights Reserved. Privacy Policy
