For the past decade, the Society for Information Display (SID) has championed the idea of genuine two-way human interaction with displays rather than pure viewing consumption. While we have watched incremental improvements over the years, we have yet to see the big breakthrough. Interactive touch display design has been hard to implement, with many design, tuning, and manufacturing constraints – until now. Receiving the Component of the Year award for our SDC100 device represents a milestone for the display industry: the shift from consumption-only devices to the exchange of information beyond just touch.

While we are a touch controller technology company, this is predominantly a digital transformation driven by human interaction, and it is happening at the very edge of analog and digital. Our SDC100 chip provides much higher performance and precision than ever before. We always knew that if we could get higher-fidelity data from the different sensors around a system, we could deliver a much richer interactive experience with the display. With larger data sets, whether higher-fidelity data or simply data delivered faster, we can make quicker, better decisions for the human interface. This high-fidelity, agile data is enabled by a software-defined sensing (SDS) architecture. SigmaSense can report touches at 300 Hz, while the rest of the industry is still at 30 Hz, 60 Hz or, at most, 120 Hz. Our customers can also monitor and update their software remotely, providing new features and a better user experience to their customers, even outdoors in the rain or snow.
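The practical meaning of those report rates is the worst-case gap between consecutive touch reports. A minimal back-of-the-envelope sketch, using only the rates cited above:

```python
# Worst-case interval between touch reports at the report rates
# cited in the text. At 300 Hz a new report arrives roughly every
# 3.3 ms, versus about 16.7 ms at 60 Hz -- the distance a fast
# finger swipe travels in that gap is what users feel as lag.
for rate_hz in (30, 60, 120, 300):
    interval_ms = 1000 / rate_hz
    print(f"{rate_hz:>3} Hz -> {interval_ms:.2f} ms between reports")
```

At 300 Hz the controller has five times as many samples per second as a 60 Hz system to track a moving finger, which is what makes the extra headroom for filtering and prediction possible.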

For instance, LG-MRI is shipping the SDC100 in a 75-inch digital sign with 11 millimeters of bulletproof glass. The SDC100 can detect touches through that glass: through gloves and rain, across temperature swings from heat to cold, in direct sunlight, and amid electrical noise from passing trains, along with whatever other mechanical and electrical interference the environment throws at it. Working through all these design challenges while relaxing design constraints to operate in the most rugged, complex environments is finally a reality.

The SDC100 illustrates the difference between traditional projected-capacitance (PCAP) sensing and the SigmaSense approach to detecting touch. Software-defined sensing starts with the meaning and definition of every single channel in a touch sensor. It does not matter whether a channel is a TX or an RX; all channels within the controller are physically identical. There’s no difference between transmitting and receiving: channels communicate concurrently in both directions and can all be configured in software.

The data resolution is software configured and defined, down to the precision of an individual channel within a given function or across multiple functions. Which filters do you apply? Software controlled. What post-processing do you use? Software controlled. Data rates, presence, hover, and touch run simultaneously, each mode individually software controlled. Software-defined sensing redefines flexibility for hardware designers, integrators, and manufacturers while eliminating previous design constraints such as matched-length traces and sensor impedance uniformity.
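The idea of per-channel, software-defined configuration can be sketched as data rather than fixed analog hardware. This is a hypothetical illustration, not the SigmaSense API; the struct fields and mode names are assumptions chosen to mirror the capabilities described above:

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not a real SigmaSense interface): with
# software-defined sensing, each physically identical channel
# carries its role, resolution, filtering, and active modes as
# plain configuration data.
@dataclass
class ChannelConfig:
    channel: int
    role: str = "touch"                # e.g. "touch", "button", "scroll"
    resolution_bits: int = 12          # per-channel data precision
    filters: list = field(default_factory=list)
    modes: tuple = ("touch",)          # e.g. ("presence", "hover", "touch")

# Reassigning a channel is a data change, not a hardware respin:
cfg = ChannelConfig(channel=7, role="button",
                    modes=("presence", "touch"),
                    filters=["lowpass"])
print(cfg.channel, cfg.role, cfg.modes)
```

The point of the sketch is that nothing about the channel's function is baked into silicon routing: role, precision, and filtering are all fields that can be rewritten at runtime or in a remote update.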

Our systems ultimately transform a hardware environment into something flexible and meaningful in the form of human user interface features and functions. The flexibility is the “wow” of this technology: we have moved from a rigid approach, with traditionally locked analog hardware functionality, to software controls that empower end-system integrators to build the rich, advanced features they want.

Your engineers want to assign specific channels to touch, or perhaps to buttons or scroll pads? Great, we can do it. And we can often share many of the channels, providing a sensor-fusion capability not previously possible. The design is no longer the analog, voltage-threshold model of design once, lock, and never change. It is now an updatable, upgradable system environment, adapting to the space it occupies.

Software-defined sensing ushers in a new generation of flexibility and design options, but what are the real benefits? Engineers now have all this flexibility, with everything software controlled so changes can be made on the fly. Tuning becomes simple: let the system tune itself dynamically. The system can adapt to its environment, adjusting all of these settings in software to quickly deliver a range of performance options and features that never existed before.

But what is the benefit to the end user, and how will this change what they see? The sensing approach uses a current-mode analog-to-digital converter rather than the traditional voltage-mode system, giving us somewhere around 100 to 1,000 times better signal-to-noise ratio in a given time at a given voltage. And that enables better, higher-fidelity data.

With SigmaSense SigmaDrive technology, you will be able to touch a display through the rain and to hover while wearing gloves at a bus station in freezing weather. You will go to a restaurant and order without touching the point-of-sale terminal. Your workspace, learning space, and conference experiences will be highly interactive and collaborative. Signal-to-noise is king, delivering better touch interactions, broader applications, and the ability to work around the complex design constraints of traditional PCAP systems.

We transform a very analog design process into a giant digital data set, with all kinds of opportunities to apply AI processing, improve the output and the user experience of the system, anticipate what a user will do, even predict what a user expects, and then deliver that “wow” experience.

Whenever an industry moves from the hardwired analog world to the digital world, big transformative things change. It does not matter the industry, whether you are talking about medical devices or displays, microprocessors or architectures; anytime you can move functions into the digital world, you get more control, flexibility, and options for how to program and use those systems.

Displays have been mostly consumption devices to this point. We mostly watch them – until now. Software-defined sensing is the transition point where we take advantage of big data to make these genuinely configurable systems interactive, engaging, and faster, with higher precision and far better user experiences.

Author

Gary Baum

Sr. VP Emerging Technologies
