Flexibility of a Processor + Efficiency of an NPU Accelerator 

Scales from 1 to 864 TOPS
Serves All Markets - Includes Safety Enhanced Version for Automotive Designs
Runs Classic CNN Models, LLMs, Transformers - Every Operator, Every Network!

Best Edge AI Processor IP

Simplify Your SoC Design

And Speed Up Porting of New AI Models

Quadric offers the leading processor architecture optimized for on-device artificial intelligence computing. Only the Quadric Chimera GPNPU delivers high AI inference performance while also running complex C++ code, without forcing developers to artificially partition code across two or three different kinds of processors.

Quadric’s Chimera GPNPU is a licensable processor that scales from 1 to 864 TOPS. Chimera GPNPUs run all AI models - including classical backbones, vision transformers, and state-of-the-art large language models.

Design Your SoC Faster with the Chimera GPNPU

The Other Guy's NPU

Messy, Heterogeneous, Only Partially Flexible,
Nightmare to Program & Tune

Quadric GPNPU

Single Core, 100% C++ Programmable
Single Unified Binary for Complete Inference Workload
One architecture for AI inference plus pre- and post-processing simplifies SoC hardware design and radically speeds up AI model porting and software application programming.
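To make the "single unified binary" idea concrete, here is a minimal sketch in plain C++ of a complete inference workload - pre-processing, a matrix-compute stage, and post-processing - living in one translation unit for one core. The function names and the toy one-layer "model" are illustrative assumptions for this sketch, not the Quadric SDK API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the whole workload compiles into one binary
// for one core - no partitioning between a CPU, DSP, and NPU.

// Scalar/vector pre-processing: normalize 8-bit pixels to [0, 1].
std::vector<float> preprocess(const std::vector<unsigned char>& pixels) {
    std::vector<float> out(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i)
        out[i] = pixels[i] / 255.0f;
    return out;
}

// Matrix stage standing in for the AI graph: one dense layer, y = W * x.
std::vector<float> dense(const std::vector<std::vector<float>>& w,
                         const std::vector<float>& x) {
    std::vector<float> y(w.size(), 0.0f);
    for (std::size_t r = 0; r < w.size(); ++r)
        for (std::size_t c = 0; c < x.size(); ++c)
            y[r] += w[r][c] * x[c];
    return y;
}

// Scalar post-processing: pick the winning class from the logits.
std::size_t argmax(const std::vector<float>& v) {
    return static_cast<std::size_t>(
        std::max_element(v.begin(), v.end()) - v.begin());
}
```

Because all three stages are ordinary C++ in one program, the developer tunes and debugs a single code base instead of coordinating separate toolchains for control code, DSP kernels, and an NPU graph.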

Three Reasons to Choose the Chimera GPNPU

1. Handles matrix and vector operations and scalar (control) code in one execution pipeline. No need to artificially partition application code (C++ code, AI graph code) between different kinds of processors.
2. Runs EVERY kind of model - from classic backbones, to vision transformers, to Large Language Models (LLMs) - and all the models that have not been invented yet!
3. Up to 864 TOPS. Suitable for every application segment, including ASIL-ready cores for automotive applications.

© Copyright 2025 Quadric. All Rights Reserved.
