Eyes, Brains and Action
The technology powering Cybernetyx’s intelligent screen revolution

How Cybernetyx is powering the next wave of intelligent devices, from Bangalore to 80 countries worldwide.

When Nishant Rajawat returned to India after completing research at the University of Toronto, home to one of the world’s leading AI labs, he carried a single, deceptively simple question: could computer vision understand human interaction with any surface accurately and in real time, enough to replace physical input devices entirely?

That question became EyeRIS, a vision AI engine that turns any flat surface into an interactive, intelligent one, using only a camera and the intelligence behind it. It did not enter an existing market. It created one.

Two decades on, Cybernetyx Technik has 15 million users, installations in 80 countries, and roughly $300 million in product sales, with its name absent from most of the boxes its technology powers. The company operates as the invisible intelligence inside displays manufactured by global giants including NEC/Sharp, Sony, Delta and ViewSonic. In India, the brand is more visible: over three lakh installations, a 22 per cent share of the installed base of interactive displays, and the highest-selling AI interactive flat panel in the country for three consecutive years.

The model was never intended to compete with display manufacturers. It was built to power them.

From interactive to autonomous

The company’s current narrative, however, has moved beyond interactive surfaces. Rajawat is emphatic that nothing has changed in the technology’s direction, only in what the technology can now do.

“EyeRIS was always a perception engine,” he explains. “Once you’ve built perception at that level of reliability and scale, the question becomes: what else can this engine perceive? Motion. Posture. Presence. Faces. Attention. Fatigue. And once it can perceive all of that, the question becomes: can the device act on what it perceives without waiting for a human instruction?”

That shift from interactive to autonomous underpins Cymbient, Cybernetyx’s universal operator agent platform. Where EyeRIS gives devices perception, Cymbient gives them reasoning, contextual intelligence and autonomy. One agent architecture, deployable on any device: a display, a camera, a speaker, an appliance. The hardware remains the same. Cymbient makes it an intelligent agent. Cymbient is now the company’s fastest-growing segment, signalling that the market for devices that don’t just respond but understand and act has arrived.

Cybernetyx introduced the industry’s first NVIDIA-powered interactive display, and the decision to embed that compute in the panel, rather than rely on standard Android processing, changed what the category could do. The intelligence and the display surface became the same device: running large vision AI models locally, understanding the room, processing everything on the surface in real time, with no data transmitted externally. For corporate environments that demand both AI capability and uncompromising on-device security, it was the architecture the market had been waiting for.

What intelligent devices actually do

The practical implications are easiest to grasp through concrete scenarios.

In a corporate meeting room, the friction of today’s experience, such as finding the remote, waiting for the display to boot, and navigating wireless sharing, is replaced by something altogether different. The display detects presence before any button is pressed. A gesture towards the surface confirms intent. The device wakes, connects to the meeting infrastructure, checks the calendar booking, and initiates the session, with video conferencing active, screen sharing ready, and recording enabled. The meeting ends; the system detects the room emptying, closes the session, logs occupancy data, and resets for the next booking. No human intervention at any point.
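The lifecycle described above can be sketched as a small state machine. The class, event names and actions here are purely illustrative, not Cybernetyx’s actual API; a minimal sketch of the presence-to-reset sequence might look like this:

```python
from enum import Enum, auto


class RoomState(Enum):
    IDLE = auto()            # display asleep, room empty
    AWAITING_INTENT = auto() # presence detected, waiting for a gesture
    IN_SESSION = auto()      # meeting running


class MeetingRoomAgent:
    """Hypothetical agent driving the display through the cycle:
    presence -> intent gesture -> session -> room empties -> reset."""

    def __init__(self):
        self.state = RoomState.IDLE
        self.log = []  # actions taken, in order

    def on_event(self, event):
        if self.state is RoomState.IDLE and event == "presence_detected":
            # Wake before any button is pressed.
            self.log.append("display_woken")
            self.state = RoomState.AWAITING_INTENT
        elif self.state is RoomState.AWAITING_INTENT and event == "intent_gesture":
            # Check the calendar booking, join conferencing, start recording.
            self.log += ["calendar_checked", "conference_joined", "recording_started"]
            self.state = RoomState.IN_SESSION
        elif self.state is RoomState.IN_SESSION and event == "room_empty":
            # Close the session, log occupancy, reset for the next booking.
            self.log += ["session_closed", "occupancy_logged", "room_reset"]
            self.state = RoomState.IDLE
        return self.state
```

Feeding the three events in order walks the agent through a full meeting and back to idle, with every action taken autonomously and recorded in `log`.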

In education, the distinction is more fundamental. Rajawat is direct about why previous generations of interactive whiteboards largely failed: they were interactive in name only. Touch replaced the mouse, but pedagogy did not change. The intelligent display alters the relationship between content and the learner, not merely between teacher and board.

A biology teacher explaining cell structure can manipulate a three-dimensional model rendered on-device in real time, rotating and annotating it as the lesson demands. Simultaneously, EyeRIS reads the room: attention direction, posture, and movement patterns that indicate confusion or disengagement. The teacher receives a live signal, not a post-lesson report, with the feedback loop operating in the moment, allowing immediate adjustment of approach. The ambition, as Rajawat frames it, is expertise at scale: content built by the strongest educators in any subject, delivered through a platform that adapts to the specific class in the room.

A different definition of physical AI

The broader industry conversation about Physical AI has been dominated by humanoid robotics. Cybernetyx’s position is deliberately distinct.

“Our definition: any device that can perceive its environment, reason about what is happening, and take meaningful action without a human trigger,” says Rajawat. “That device does not need legs or arms. It needs sensors, intelligence, and a platform to tie them together.”

The deployment surface this unlocks is, by any measure, larger than the humanoid robotics market: hundreds of millions of displays, cameras and speakers already installed across institutional environments worldwide. Making those devices intelligent through Cymbient is faster, cheaper and more immediately impactful than waiting for a new hardware form factor to scale.

The wellness display, for example, is a category Cybernetyx believes will become standard institutional infrastructure within a decade: screens that monitor posture, detect fatigue, and surface health signals passively and continuously, without the person wearing anything. The building understands the people in it.

Rajawat does not dismiss humanoid systems; he positions them as another device on the same platform. The perception, reasoning and contextual intelligence architecture that Cymbient provides to a classroom display today, he argues, is the same architecture a humanoid ultimately requires. The approach is to build the subsystems in the order that generates commercial returns at each stage. The humanoid is one form. The platform is the constant.

Building from Bangalore

What makes the Cybernetyx story notable is not only the technology but also the context in which it was built. The company holds multiple international patents, operates its own manufacturing, and has sustained OEM relationships with the world’s largest display manufacturers, all from India, and it did so for over two decades before the current wave of deep tech investment arrived.

Rajawat is clear-eyed about what India’s deep tech landscape has and has not yet resolved. Manufacturing in India is now a genuine strategic asset, not merely on cost grounds, but because global supply chains are being restructured and India has become a preferred alternative for hardware production. Cybernetyx built that capability before it became fashionable. The gap that remains is recognition: the distance between building world-class intellectual property and being acknowledged globally for doing so.

By 2030, Rajawat believes, every institutional space, including schools, hospitals, corporate floors, and government buildings, will carry a layer of ambient intelligence running through its devices. Not intrusive, not conspicuous, but present: the room knowing who is in it and responding accordingly. Vision AI applications that are niche today, such as posture analytics, occupancy intelligence, and attention mapping, will become default features of building management and workplace design, much as CCTV is standard infrastructure now.

“We are building the intelligence layer for the physical world,” he says. “The perception engine is EyeRIS. The platform is Cymbient. And we are building it in Bangalore.”

Cybernetyx’s bet was that making existing devices intelligent would prove the more consequential path. In 2025, it is beginning to look like the more obvious one, too.

Nishant Rajawat has been building intelligent device technology for nearly two decades, long before the industry had a name for it. The Cybernetyx story is not about catching a wave. It is about building the infrastructure before the industry reaches the same conclusion, and watching the world catch up.