Before the digital era, people with visual impairments had few options and scarce employment prospects. Learning Braille and pursuing an education were viable paths, but opportunities beyond that were scant. For those without musical talent, the choices came down to selling pencils on street corners, surviving on meager disability stipends, or accepting substandard wages at establishments like the mop factory, where at least there was solace among peers in similar circumstances.
The enactment of the Americans with Disabilities Act in 1990 marked a significant milestone in promoting accessibility and job opportunities. However, it was the advent of home computers and the Internet that truly revolutionized the landscape.
By the late 1990s, developers began crafting innovative accessibility solutions such as screen readers, which markedly enhanced the independence of visually impaired users. These tools read aloud the contents of documents, emails, and websites while vocalizing each keystroke and providing a range of sound cues, like pings and whooshes, to aid in screen navigation. My screen reader enables me to do things that would otherwise be impossible, letting me handle routine online tasks like anyone else. It is not without flaws: certain websites remain inaccessible, and completing forms can be challenging, but I have learned to adapt. However much it may exasperate sighted listeners, the constant chatter of my computer has become a reassuring voice in my mind.
These advancements were welcome, but before the recent emergence of AI systems driven by advanced machine-learning models, accessibility tools had real constraints, some of them disconcerting.
For instance, the Be My Eyes app, launched in 2015 by developer Hans Jørgen Wiberg and the Danish Association of the Blind, connected users with sighted volunteers worldwide via video chat. The concept, visual descriptions delivered through a remote set of eyes, left me unsettled. To me, self-sufficiency means minimizing reliance on others whenever possible; I preferred the clinical precision of a machine to the subjective interpretations of well-meaning strangers. The prospect of seeking validation from an unfamiliar sighted individual held little appeal.
In 2017, Microsoft's Seeing AI promised comprehensive visual assistance for blind smartphone users, using AI to recognize objects, people, and documents and thereby sharpen their awareness of their surroundings. Despite the initial excitement, the application fell short in practice, proving less than user-friendly, especially for visually impaired people, and often yielding unreliable descriptions. Still, the concept remained promising.
Subsequently, a blind acquaintance introduced me to a new feature within the Be My Eyes app called Be My AI, powered by OpenAI's GPT-4. This iteration aimed to surpass the Seeing AI of 2017 by delivering strong performance without the need for human interaction. The results my friend shared, vivid descriptions of scenes like his kitchen and recording studio, captivated me. The computer's clinical, objective observations suited my preferences perfectly.
Beyond the realm of accessibility tools, I've harbored reservations about the digital age. For all its promised convenience, it has often delivered complexity, paranoia, and annoyance. Nonetheless, applications like Be My AI seemed to herald a genuinely new era of technological advancement.
Despite the prevalent doomsday narratives surrounding artificial intelligence, its most immediate threats seem to be the spread of misinformation during elections and the creation of fabricated images of celebrities. Even with those risks, there may be a silver lining to AI's perceived malevolence.
Upon embracing Be My AI, I accepted two crucial directives: do not rely on the app for navigation, and do not use it for medical diagnosis. Within those guidelines, the application proved straightforward to use: point the phone, tap a button, and within moments a detailed visual description unfolds, one that surpasses what most well-intentioned humans can offer.
Be My AI is not infallible; it occasionally misidentifies objects or misreads text. But it refines its accuracy as its underlying models accumulate data, and its ability to discern nuances and interpret moods sets it apart. Its knack for recognizing unexpected elements, like sock monkeys, underscores its potential.
For many visually impaired people, these AI tools provide newfound access to mundane details and a fresh perspective on their surroundings. It may not replicate normalcy, but it changes the terms, granting people the ability to observe their environment objectively, albeit through an intermediary.
As AI systems evolve to offer practical capabilities like Be My AI's, empowering people to handle daily tasks independently, aligning oneself with these advancements becomes increasingly enticing.