News Coverage

ConvNext Runs 28X Faster Than Fallback

(July 22, 2024) Semiconductor Engineering - Perhaps more impressive than our ability to support ConvNext – at a frame rate of 28 FPS – is the startling increase in the sheer number of working models that compile and run on the Chimera processor as our toolchain rapidly matures.

GPNPU has multi-core cluster options for 100+ TOPS

(July 16, 2024) Electronics Weekly - A component supplier … building a 3nm chiplet could deliver over 400TOPS of fully C++ programmable ML + DSP compute for software defined vehicle platforms for a die cost of well under $10.

KAN tackles AI power challenge

(July 8, 2024) EE News Europe - Quadric says its Chimera general purpose NPU is able to support KANs as well as the matrix-multiplication hardware needed to efficiently run conventional neural networks with a massively parallel array of general-purpose, C++ programmable ALUs capable of running any and all machine learning models. Quadric’s Chimera QB16 processor, for instance, pairs 8192 MACs with a whopping 1024 full 32-bit fixed-point ALUs, giving 32,768 bits of parallelism to run KAN networks.
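The distinction the article draws matters because a KAN layer replaces a single matrix multiply with many independent scalar evaluations of learnable one-dimensional functions, which favors general-purpose ALUs over fixed MAC arrays. A minimal, illustrative sketch of that compute pattern (the original KAN paper parameterizes edges with B-splines; this uses simple Gaussian bumps, and names like `kan_layer` and `phi` are hypothetical):

```python
import numpy as np

def phi(x, weights, centers, width=0.5):
    """Learnable 1-D function on a single edge: a weighted sum of bumps.

    Illustrative Gaussian radial basis; the KAN paper uses B-splines.
    """
    return np.sum(weights * np.exp(-((x - centers) / width) ** 2))

def kan_layer(x, edge_weights, centers):
    """One KAN layer: each output is a plain sum of per-edge univariate
    functions. Note there is no matrix multiply -- just many independent
    scalar evaluations, which is why general-purpose ALUs (rather than
    MAC arrays) carry the load.

    x: (n_in,) input vector
    edge_weights: (n_out, n_in, n_basis) per-edge basis coefficients
    """
    n_out, n_in, _ = edge_weights.shape
    y = np.zeros(n_out)
    for j in range(n_out):
        for i in range(n_in):
            y[j] += phi(x[i], edge_weights[j, i], centers)
    return y

rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 8)       # shared basis-function centers
w = rng.normal(size=(4, 3, 8)) * 0.1      # 3 inputs -> 4 outputs
out = kan_layer(rng.normal(size=3), w, centers)
print(out.shape)  # (4,)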

KANs Explode!

(June 13, 2024) Semiconductor Engineering - In late April 2024, a novel AI research paper was published by researchers from MIT and Caltech proposing a fundamentally new approach to machine learning networks – the Kolmogorov-Arnold Network, or KAN. In the six weeks since its publication, the AI research field has been ablaze with excitement and speculation that KANs might be a breakthrough that dramatically alters the trajectory of AI models for the better – dramatically smaller model sizes delivering similar accuracy at orders of magnitude lower power consumption, both in training and inference.

The Fallacy of Operator Fallback and the Future of Machine Learning Accelerators

(May 30, 2024) SemiWiki - Managing the interplay between NPU, DSP, and CPU requires complex data transfers and synchronization, leading to increased system complexity and power consumption. Developers must contend with different programming environments and extensive porting efforts, making debugging across multiple cores even more challenging and reducing productivity.

Will Domain-Specific ICs Become Ubiquitous?

(May 16, 2023) Semiconductor Engineering - Even low-cost SoCs for mobile phones today have CPUs for running Android, complex GPUs to paint the display screen, audio DSPs for offloading audio playback in a low-power mode, video DSPs paired with NPUs in the camera subsystem to improve image capture (stabilization, filters, enhancement), baseband DSPs — often with attached NPUs — for high speed communications channel processing in the Wi-Fi and 5G subsystems, sensor hub fusion DSPs, and even power-management processors that maximize battery life.

Fallback Fails Spectacularly

(May 16, 2024) Semiconductor Engineering - Our analysis of ConvNext on an NPU+DSP architecture suggests a throughput of less than 1 inference per second. Note that these numbers for the fallback solution assume perfect 100% utilization of all the available ALUs in an extremely wide 1024-bit VLIW DSP. Reality would undoubtedly be below the speed-of-light 100% mark, and the FPS would suffer even more. In short, fallback is unusable.
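The back-of-envelope scaling behind that claim is simple: achievable throughput is the ideal-utilization figure multiplied by the fraction of VLIW ALU slots doing useful work. The <1 FPS ceiling at 100% utilization comes from the article; the lower utilization values below are illustrative assumptions, as is the helper name `fallback_fps`:

```python
def fallback_fps(best_case_fps, utilization):
    """Achievable FPS when only a fraction of the wide-VLIW ALU slots
    perform useful work each cycle."""
    return best_case_fps * utilization

# Article's speed-of-light estimate: under 1 inference/sec at 100% utilization.
best_case = 1.0

# Illustrative utilization levels (assumed, not from the article):
for u in (1.0, 0.5, 0.25):
    print(f"utilization {u:.0%}: {fallback_fps(best_case, u):.2f} FPS")
```

Even a generous 50% utilization halves an already unusable frame rate, which is the article's point.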

Dealing With AI/ML Uncertainty

(April 25, 2024) Semiconductor Engineering - When it comes to the question of how to test and correct the model, the first thing most companies need to do is establish the realistic goal of what kind of error rate is acceptable, what severity of error needs to be eliminated completely, and then guardband the model using those criteria.

Embrace the New!

(March 14, 2024) Semiconductor Engineering - Perhaps those who are not Embracing The New are limited because they chose accelerators with limited support of new ML operators. If a team four years ago implemented a fixed-function accelerator in an SoC that cannot add new ML operators, then many newer networks – such as Transformers – cannot run on those fixed-function chips today. A new silicon respin – which takes 24 to 36 months – is needed.

Thanks for the Memories!

(February 15, 2024) Semiconductor Engineering - “I want to maximize the MAC count in my AI/ML accelerator block because the TOPs rating is what sells, but I need to cut back on memory to save cost,” said no successful chip designer, ever.


© Copyright 2024  Quadric    All Rights Reserved     Privacy Policy
