It started, like many important reflections do, with a casual conversation. My kids and I were chatting about the way technology is changing everything, from how we speak to each other, to how we shop, work, and even how we see the world. I was telling them about my interactions with ChatGPT and how it now even helps me with HTML code, making my blogs so much more vibrant. My daughter, passionate about the field of Human-Computer Interaction with a deep focus on accessibility, was excitedly sharing the latest research she’s been reading. My son-in-law, ever the product innovator, added his own thoughts about technology that adapts, learns, and empowers. Somewhere in between their excitement and my curiosity, a question bubbled up:
Do we even need screens on mobiles anymore?
That single thought opened the floodgates. As someone who grew up watching bulky CRTs become sleek touchscreens—and who now sees five-year-olds swiping their way through apps as naturally as flipping pages—it felt like we were due for another radical shift. I began thinking about what comes after the smartphone.
And as I sat there, the ideas began swirling faster than I could keep up. What if our phones no longer needed screens at all? What if a small black box, no bigger than a matchbox, could project everything we needed to read onto a wall or our palm? What if we could wear a pair of glasses that not only gave us directions but also described the world in real-time to someone who couldn’t see? I was going bonkers.
It’s not science fiction. These tools are already around us, just not stitched together yet. A decade ago, I saw an infrared (IR) keyboard: nothing but a small device that projected a full keyboard, like a slide, onto a wall or a tabletop. All one had to do was connect a phone to it over Bluetooth, and the projected keyboard could be used to type messages or even emails. What a great help it would be for the elderly who find it difficult to peck with their fingers at the tiny keyboard on a smartphone. Today, micro projectors can beam high-resolution visuals onto any surface.
Smart glasses, once a prototype dogged by privacy concerns, have matured into lightweight, stylish wearables capable of interpreting your surroundings, translating text, and even displaying captions for the hearing-impaired. For a visually challenged person, these smart glasses, combined with AI, could narrate everything happening in front of them. “A red bus is pulling into the stop, and the signal has just turned green. A woman is walking toward you, holding a white umbrella.” That’s not magic. That’s AI with a high-definition camera and real-time processing.
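Under the hood, that narration is really a simple loop: capture a frame, ask a vision model what it sees, and turn the answer into a spoken sentence. Here is a minimal sketch of the sentence-building step only; `describe_frame` is a hypothetical stand-in for whatever captioning model the glasses would actually run:

```python
def describe_frame(frame):
    """Stub for a vision model. On a real device this would run an
    image-captioning or object-detection model on the camera frame.
    Hypothetical, for illustration only: returns (subject, action) pairs."""
    return [("a red bus", "is pulling into the stop"),
            ("the signal", "has just turned green")]

def compose_narration(detections):
    """Turn (subject, action) pairs into one spoken-style narration,
    ready to hand to a text-to-speech engine."""
    if not detections:
        return "Nothing notable in view."
    sentences = [f"{subject} {action}".capitalize()
                 for subject, action in detections]
    return ". ".join(sentences) + "."

print(compose_narration(describe_frame(frame=None)))
# → "A red bus is pulling into the stop. The signal has just turned green."
```

The real engineering lives in the stub, of course; the point of the sketch is that everything after the model is ordinary, inexpensive code.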
Put them together, and suddenly, the concept of a “phone” becomes entirely fluid. What we carry today will be less like a device, more like an invisible partner.
More importantly, it becomes inclusive.
And then comes the part that excites me most: this future doesn’t have to be expensive or exclusive. With open-source AI models, falling hardware costs, and global developer communities, these tools can be made accessible across economic boundaries. A screenless, intuitive, assistive device isn’t just a cool gadget. It’s a great leveler. It is a walking stick made of code for the blind. A hearing aid made of vision for the hearing impaired. A helping hand that doesn’t get tired for the elderly.
For someone hard of hearing, these glasses could display live transcriptions, making every conversation easier to follow without the need for anyone to raise their voice or repeat themselves. There are smart glasses today that can not only transcribe speech to text but also translate it in real time. Imagine speaking with a local during a visit to China, and the glasses becoming a head-up display, typing out a real-time translation from Chinese to English in green-coloured font. Yes, it already exists. For older people struggling with dexterity, voice-activated projectors could replace complicated apps. “Show me my calendar.” “Play the YouTube video on the wall.”
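On the display side, one small but real design question is how much text to show at once: a head-up display can only hold a few lines before it blocks the wearer’s view. A minimal sketch, assuming the speech recognition and translation are done upstream by the glasses’ own models, is a rolling caption buffer:

```python
from collections import deque

class CaptionDisplay:
    """Rolling caption buffer for a head-up display: keeps only the
    last few translated lines so text scrolls rather than piling up."""

    def __init__(self, max_lines=3):
        # deque with maxlen silently drops the oldest line when full
        self.lines = deque(maxlen=max_lines)

    def push(self, translated_text):
        """Add one translated line as it arrives from the translator."""
        self.lines.append(translated_text)

    def render(self):
        """Return the text the display should currently show."""
        return "\n".join(self.lines)
```

Each time the translation model emits a line, `push()` it; `render()` always returns only the most recent lines, so the captions scroll naturally instead of accumulating in front of the wearer’s eyes.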
But this isn’t just about innovation. It’s about intention. What are we creating and with what purpose?
The more I think about it, the more I realize that technology is only as human as the people who build it. AI doesn’t have ethics, we do. Code doesn’t understand empathy—but the coder can. If we race ahead chasing what’s possible without pausing to ask what’s right, we risk widening the very gaps we want to bridge.
So as we imagine this future—screenless, seamless, inclusive—it becomes even more important to talk about privacy, about consent, about trust. A device that knows where I am, what I see, what I say—should also know where to draw the line. It doesn’t. Only the makers know that line, and they must be held responsible. We must embed our values into the very architecture of the tools we build.
Technology can be an equalizer, but only if we design it to be so. And that begins with awareness. Empathy. Curiosity embedded with responsibility. And regularly asking ourselves: what if?
So no, I’m not going bonkers. I’m just a little wide-eyed. A little awestruck. About the world that is being built around me, around us. I am filled with hope, if we do it right. I am equally terrified, if we forget who we are and what we are here for.


