Four weeks ago in San Francisco’s Chinatown, a vacant autonomous taxi was swarmed by a crowd and set ablaze. The motive behind the attack on the Waymo self-driving vehicle, touted as “the future of transportation,” remains unclear: it may have stemmed from broader discontent with the perceived threat to jobs posed by the Californian tech elite, from frustration over accidents involving autonomous vehicles, or simply from the car’s inadvertent intrusion into a festive gathering celebrating the lunar new year.
The need to gauge public sentiment towards rapidly advancing, potentially dystopian technologies has become pressing, as the British parliament debates the integration of self-driving cars on UK roads and Jeremy Hunt allocates funds for law enforcement to respond to emergency calls with drones.
Although not akin to a scenario from RoboCop, the deployment of aerial cameras to surveil accident or crime scenes raises complex ethical questions. How would a crowd react to a drone hovering above during a protest? Does the presence of a human responder at a car crash offer essential reassurance, even if it may not be the most efficient use of police resources? These dilemmas mark the initial phase of what appears to be a profound transformation in the government’s interaction with artificial intelligence, with significant implications for individuals reliant on public services and for workers whose roles in the public sector could eventually be automated.
As Deputy Prime Minister Oliver Dowden hails AI as a “silver bullet” in the Conservative agenda to reduce state intervention and potentially enable tax cuts, the Labour party emphasizes the potential benefits for the National Health Service (NHS). While some AI tools now surpass human capabilities in interpreting cancer scans, the temptation to automate routine administrative tasks for cost savings is evident. The political discourse revolves around leveraging AI to enhance public services without escalating taxes, yet the reliance on tech magnates like Elon Musk poses risks alongside rewards. Surprisingly, amid a general election year, there is a notable absence of transparent public dialogue on this subject.
In her recent book “AI Needs You,” former Downing Street advisor turned tech executive Verity Harding advocates for public participation in shaping the future societal landscape impacted by AI advancements. Harding’s shift from advising Nick Clegg to spearheading an academic initiative at Cambridge University underscores the critical need for robust governance of AI to mitigate potential disruptions to employment, livelihoods, and communities.
Harding challenges the prevailing notion that technological progress is inevitable, emphasizing the necessity of proactive decision-making on the implementation and utilization of technology. Drawing parallels with historical instances like John F. Kennedy’s space exploration initiatives and Britain’s regulation of IVF technology, she underscores the significance of deliberate choices in steering AI’s trajectory towards socially beneficial outcomes.
Harding also addresses the misuse of AI in generating deceptive content such as “deepfake” videos, and the fight against disinformation, while advocating for redirecting AI capabilities towards pressing global challenges such as climate change. Acknowledging that AI reflects human biases embedded in its training data, she stresses the importance of guiding AI development to embody positive human values rather than perpetuate negative traits.
Harding’s call for visionary leadership from policymakers coincides with a period of caution towards challenging the dominance of the tech industry, perceived as the key driver of economic growth. The symbiotic relationship between tech corporations and political figures raises concerns about regulatory independence and the alignment of political decisions with industry interests.
In navigating this pivotal phase of technological evolution, Harding underscores the potential for collective agency in shaping AI’s trajectory responsibly. By recognizing the limitations of tech innovators and insisting that AI remain a tool rather than a master, she argues for an empowered approach to governance and policymaking. Ultimately, the book’s message is that human agency retains the power to steer the course of technological advancement.