The “Microprocessor Wars” are often compared to the Muscle Car Wars of the 1960s and 70s. This new battlefield will pull in nearly every company that depends on data processing, but winning is less about raw horsepower and more about efficiency, sensing latency, and results. The stakes are far higher, with priorities driven by “end-to-end” processing performance. Nvidia capitalized on superior graphics processing to deliver “end-to-end” processing advancements that have carried the company to a $400B market capitalization. The Arm-licensed microprocessor family recently sold for more than $40B, while the Ryzen processor, with its outstanding graphics processing, is credited with lifting AMD’s capitalization well above $100B. While Intel still sits on the microprocessor throne, Apple’s M1 announcement and other custom microprocessor developments are driving a serious analysis of what “end-to-end” processing means to future valuations. The world’s largest companies are placing massive bets on machine learning, with optimizations that start with better data to speed processing, improve AI efficiency, and deliver better, more meaningful experiences as the end game. End-to-end performance now has much less to do with microprocessor muscle. So, who and what determines the winners and losers in this new “end-to-end” microprocessor battle?

Contrary to marketing materials, this microprocessor battle is NOT simply about 2-3X machine learning speed benchmarks or 70% AI efficiency comparisons. Traditional benchmarks haven’t always worked in this particular comparison game because “end-to-end” performance does not assume starting with the same data set. In many cases, particularly when the data is derived from analog sensing, the new “end-to-end” comparison must include the fidelity of the data, since no amount of AI processing can replace having 100-1000X better data at the start. As the old saying goes, “garbage in, garbage out.”

Performance and AI training efficiency are now measured end to end… from the source of information (input) at the edge to the final output or end experience that is desired. In real-time systems, where the latency or cost of moving data to the cloud is prohibitive, the analog and digital filtering, reduction, and data compression at the edge become even more critical. Such applications, whether for self-driving cars, fast-response robots, or touch user experiences, are highly dependent on the fidelity of information and on processing that information with minimum latency. In such systems, any analysis of end-to-end processing for the sake of real performance comparisons must start upstream at the very source of information capture.
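To make that edge-side filtering and reduction concrete, here is a minimal sketch in Python: raw sensor samples are smoothed with a short moving average, and a reading is forwarded upstream only when it changes meaningfully. The window size, threshold, and sample values are illustrative assumptions on my part, not figures from any particular controller or product.

```python
from collections import deque

class EdgeReducer:
    """Toy edge-side filter/reducer: smooth raw samples with a short
    moving average, then forward a reading upstream only when it moves
    by more than a threshold. Window size and threshold are assumed,
    illustrative values, not taken from any real controller."""

    def __init__(self, window=4, report_threshold=0.05):
        self.window = deque(maxlen=window)
        self.report_threshold = report_threshold
        self.last_reported = None

    def process(self, raw_sample: float):
        """Return a smoothed sample worth transmitting, or None to drop it."""
        self.window.append(raw_sample)
        smoothed = sum(self.window) / len(self.window)
        if (self.last_reported is None
                or abs(smoothed - self.last_reported) > self.report_threshold):
            self.last_reported = smoothed
            return smoothed  # worth sending upstream
        return None  # suppressed: nothing meaningful changed

# Example: a noisy but mostly flat signal produces few upstream reports;
# the flat stretches are suppressed and reports cluster around the step.
reducer = EdgeReducer()
samples = [1.00, 1.01, 0.99, 1.00, 1.30, 1.31, 1.29, 1.30]
reports = [s for s in (reducer.process(x) for x in samples) if s is not None]
print(reports)
```

The design choice the sketch illustrates is the one the paragraph argues for: the cheapest byte to move to the cloud is the one the edge never sends.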

Microprocessors and AI processing perform only as well, or as poorly, as the fidelity of the input data. Better, cleaner data yields better, faster processing results. Real-time examples, such as a touch on a touch screen, a power management sensor, a communications transfer, or data from any of a thousand different analog sensors, can benefit greatly from the method of initial edge data collection and processing. End-to-end quality, efficiency, and latency depend heavily on edge computing and analog data collection, which ultimately decide the winner in the overall system. The big motors of the digital world are vulnerable when the new end-to-end processing challenge is solved in the analog domain. As Sun Tzu is often quoted, “Their strength shall become their weakness.”

Touch-enabled tabletop displays are a stark example of where end-to-end system performance matters far more than the microprocessor and machine learning algorithms, because of the analog-to-digital conversion (ADC) bottleneck. Tabletop displays were recognized as a huge opportunity back when Microsoft started selling the world on its initial “Surface” vision nearly 15 years ago with the launch of the Surface Table (read Table… not Tablet). During 2006/07, more than 20 companies received funding to deliver touch-based gaming tables, desktops, and work surfaces capable of sensing touch, pens, and objects like drinks on the surface of the screen. Restaurant tables and classroom desks were being redesigned to provide collaborative game experiences and learning via group interactions. Families and friends could join around a table to play board games and multi-player hockey or soccer games. However, none of these products had the analog sensing and processing performance required to succeed. Even the simplest hockey game proved too difficult: the systems could not accurately track a single fast-moving hand on the screen, much less handle the complexity of tracking multiple objects and multiple hands.

Dedicated touch controllers could not sense and process simple X and Y coordinate data fast enough on a large tabletop display to keep up with a single fast-moving touch. Ironically, the display could refresh at high speeds and the graphics processing could generate fast-moving images, but the touch microprocessor lacked the ability to capture and process analog touch data at comparable rates. To further complicate the problem, analog noise from the display interfered with the touch system, and noise from the touch signals interfered with the display image. Traditional voltage-mode analog-to-digital conversion (ADC) could not keep up with fast-moving, multi-player touches and high-speed object identification, and because the human eye easily detects lag, the shortfall was plainly visible. Performance degraded further in the presence of liquid spills or other environmental noise.
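The lag problem is easy to quantify with back-of-the-envelope arithmetic. The sketch below uses assumed, not measured, numbers: to a first approximation, ignoring other pipeline delays, the reported touch point trails a moving finger by roughly finger speed divided by touch-report rate.

```python
# Back-of-the-envelope illustration (assumed numbers, not measured data):
# the on-screen distance a reported touch point falls behind a moving
# finger is roughly finger_speed / report_rate, ignoring other delays.

def lag_mm(finger_speed_m_per_s: float, report_rate_hz: float) -> float:
    """Spatial lag in millimeters for one touch-report interval."""
    return finger_speed_m_per_s / report_rate_hz * 1000.0

# A fast swipe across a large table can easily exceed 1 m/s.
for rate in (60, 120, 1000):
    print(f"{rate:>5} Hz report rate -> {lag_mm(1.0, rate):.1f} mm behind the finger")

# Output:
#    60 Hz report rate -> 16.7 mm behind the finger
#   120 Hz report rate -> 8.3 mm behind the finger
#  1000 Hz report rate -> 1.0 mm behind the finger
```

At conventional scan rates, the reported touch trails a fast swipe by more than a centimeter, a gap the eye has no trouble seeing on a large tabletop screen.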

Clean, real-time, high-fidelity data is the key, and the move from voltage-mode ADCs to current-mode ADCs solves many of the end-to-end processing challenges by eliminating the latency, reducing the threshold voltages, and improving the signal-to-noise ratio by 100-1000X. For tabletop displays, this capability opens up entirely new possibilities for high-speed gaming, object recognition, and sophisticated noise reduction by starting with better, immediate sensing data. The same edge processing capability applies to many other demanding applications where superior data collection can greatly enhance the effectiveness and responsiveness of machine learning and AI, whether it is a game requiring twitch-reflex response times, advanced biosensing, or a robot with ultra-sensitive, high-speed movements.
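For readers who think in decibels, the quoted 100-1000X signal-to-noise improvement converts as follows. The article does not say whether the factor refers to an amplitude ratio or a power ratio, so this short sketch shows both interpretations; that ambiguity is my assumption, not the author’s claim.

```python
import math

# Convert the article's 100-1000X SNR improvement claim to decibels.
# Both the power-ratio and amplitude-ratio readings are shown, since
# the article does not specify which is meant.

def db_from_power_ratio(ratio: float) -> float:
    return 10.0 * math.log10(ratio)

def db_from_amplitude_ratio(ratio: float) -> float:
    return 20.0 * math.log10(ratio)

for factor in (100, 1000):
    print(f"{factor}X improvement: "
          f"{db_from_power_ratio(factor):.0f} dB (power) or "
          f"{db_from_amplitude_ratio(factor):.0f} dB (amplitude)")

# Output:
# 100X improvement: 20 dB (power) or 40 dB (amplitude)
# 1000X improvement: 30 dB (power) or 60 dB (amplitude)
```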

Apple recognized that incremental claims of microprocessor performance improvement were not addressing the battle for end-to-end performance, and that the battle is shifting to the edge: starting with smaller datasets of clean, less noisy data to deliver huge gains in efficiency and processing results. This new microprocessor battle for end-to-end superiority has nothing to do with CPU compute “muscle”; it is being fought and decided in the analog world, using ultra-low voltages with current-mode ADCs that deliver instantaneous, high-fidelity data. Come to think of it, the muscle car wars did shift from a focus on muscle to a focus on end-to-end sensing, handling, and efficiency, finally to be disrupted by a new type of motor. Their strength became their weakness.

Author

Gary Baum

Sr. VP Emerging Technologies
