
### Urgent Alert: 50 International Experts Urge a Shift From Technology-Driven to Human-Centered AI

According to a team of global experts, we need to stop developing new AI technology merely for the sake of innovation and instead design AI around genuine human needs.

Experts are advocating for a shift towards human-centered AI, emphasizing the importance of designing technology that enhances human life rather than requiring humans to conform to it. A recent book, featuring contributions from fifty experts spanning twelve countries and diverse disciplines, delves into practical strategies for implementing human-centered AI, addressing associated risks and presenting solutions across various scenarios.

The global experts highlight the necessity to halt the development of new AI technology solely for the sake of innovation, which often necessitates adjustments in practices, habits, and regulations to accommodate the technology. Instead, they propose the creation of AI tailored to meet human needs, aligning with the core tenets of human-centered AI design.

The book, titled Human-Centered AI, brings together fifty experts from twelve countries, including Canada, France, Italy, Japan, New Zealand, and the UK, and from disciplines spanning computer science, education, law, management, political science, and sociology. It explores AI applications in diverse contexts like agriculture, healthcare, criminal justice, and higher education, offering actionable steps toward a more ‘human-centered’ approach, such as regulatory sandbox strategies and frameworks for interdisciplinary collaboration.

#### Defining Human-Centered AI

Human-centered AI is positioned as a crucial paradigm in response to the pervasive integration of artificial intelligence into our daily lives. Some experts caution that technology firms cannot be relied on to develop and deploy AI in ways that genuinely enhance the human experience, and warn of long-term harm if the current course continues. Shannon Vallor, a leading authority on human-centered AI from the University of Edinburgh, explains that this approach means using technology to support human flourishing.

Vallor articulates, “Human-centered technology entails aligning the entire technological ecosystem with the well-being of individuals. It contrasts with technology designed to supplant, compete with, or devalue humans, emphasizing instead technology that supports, empowers, enriches, and fortifies human capabilities.”

She highlights generative AI as a prime example of technology that lacks a human-centered focus, noting that it was created primarily to showcase the capabilities of the organizations developing it rather than to address genuine human needs. The result is that people are compelled to adapt to the technology, rather than the technology being tailored to serve them.

#### Challenges Associated with AI

Contributors to Human-Centered AI underscore both their aspirations and apprehensions regarding AI’s current trajectory devoid of a human-centered approach. Malwina Anna Wójcik, hailing from the University of Bologna, Italy, and the University of Luxembourg, draws attention to the systemic biases ingrained in contemporary AI development processes. She emphasizes the exclusion of historically marginalized communities from meaningful participation in AI design, perpetuating existing power dynamics.

Wójcik advocates for diversity in research endeavors and interdisciplinary collaborations spanning computer science, ethics, law, and social sciences to address these biases effectively. She further suggests fostering international initiatives that engage in intercultural dialogues incorporating non-Western perspectives.

Matt Malone, an expert from Thompson Rivers University in Canada, delves into the privacy challenges posed by AI, noting the prevalent lack of understanding among individuals regarding data collection and utilization processes. He warns that consent and knowledge gaps result in continuous infringements on privacy boundaries, with far-reaching implications for individual autonomy and the delineation between human agency and technological intrusion.

Malone predicts a dynamic landscape for privacy regulation in response to the acceptance or rejection of AI-driven technologies, underscoring the pivotal role of privacy in shaping the evolving relationship between humans and technology.

#### Behavioral Impacts and Solutions

Apart from societal implications, contributors investigate the behavioral repercussions of current AI applications. Oshri Bar-Gil, affiliated with the Behavioral Science Research Institute in Israel, conducted a study on the transformative effects of using Google services on self-concept. He reveals how platforms create a digital ‘self’ based on user interactions, subsequently influencing human cognition and agency, potentially diminishing individual autonomy.

Alistair Knott from Victoria University of Wellington, New Zealand, Tapabrata Chakraborti from the Alan Turing Institute at University College London, UK, and Dino Pedreschi from the University of Pisa, Italy, scrutinize the pervasive utilization of AI in social media platforms. They caution against the inadvertent reinforcement of user biases by AI-powered recommender systems, potentially steering individuals towards extremist viewpoints.
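
To make that feedback-loop concern concrete, the toy Python sketch below (not drawn from the book) simulates an engagement-maximizing recommender under one loudly stated assumption: items slightly more extreme than a user's current taste earn slightly higher predicted engagement. Under that assumption alone, the simulated user's taste drifts steadily toward more extreme content over repeated rounds.

```python
# Minimal, purely illustrative sketch (not from the book) of a recommender
# feedback loop. ASSUMPTION: items slightly more "extreme" than the user's
# current taste earn slightly higher predicted engagement.
import random

random.seed(0)

NUM_ITEMS = 1000           # size of the toy content pool
ROUNDS = 50                # recommendation rounds to simulate
CANDIDATES_PER_ROUND = 20  # items scored per round
DRIFT = 0.1                # how strongly each click shifts the user's taste
NUDGE = 0.05               # assumed engagement bonus for slightly more extreme items

# Each item is reduced to a single "extremity" score in [0, 1].
items = [random.random() for _ in range(NUM_ITEMS)]

user_taste = 0.3  # the simulated user's starting taste on the same scale


def predicted_engagement(item: float, taste: float) -> float:
    """Naive engagement model: items closest to the user's taste, nudged
    slightly toward the extreme end, receive the highest score."""
    target = min(1.0, taste + NUDGE)
    return 1.0 - abs(item - target)


for _ in range(ROUNDS):
    candidates = random.sample(items, CANDIDATES_PER_ROUND)
    # Pure engagement maximization: no notion of content harm or diversity.
    chosen = max(candidates, key=lambda item: predicted_engagement(item, user_taste))
    # The click feeds back into the user's taste, which shapes the next round.
    user_taste += DRIFT * (chosen - user_taste)

print(f"taste drifted from 0.30 to {user_taste:.2f} over {ROUNDS} rounds")
```

Setting DRIFT or NUDGE to zero removes the effect entirely, which is the point of the sketch: the drift comes from the optimization objective and the feedback loop, not from any intent to push users toward extreme content.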

Proposed solutions include enhancing transparency in data management by companies operating recommender systems to enable comprehensive scrutiny of their impact on user behavior and attitudes towards potentially harmful content.

#### Envisioning Human-Centered AI Implementation

Pierre Larouche from the Université de Montréal, Canada, challenges the notion of treating AI as a distinct entity from existing legal frameworks, emphasizing the need to extend and apply current laws to regulate AI effectively. He cautions against framing AI regulation as an open-ended ethical debate, advocating for leveraging existing legal structures to govern AI advancements proactively.

Benjamin Prud’homme, the Vice-President of Policy, Society, and Global Affairs at Mila – Quebec Artificial Intelligence Institute, echoes Larouche’s sentiments, urging policymakers to move beyond the innovation-regulation dichotomy. He encourages policymakers to exhibit confidence in regulating AI responsibly, leveraging diverse perspectives and experiences to craft effective governance mechanisms.

Prud’homme underscores the imperative of taking AI governance seriously, acknowledging the iterative nature of policy development and the necessity of incorporating varied viewpoints, including those of marginalized communities and end-users, in shaping robust regulatory frameworks.

The book, “Human-Centered AI – A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users,” offers a comprehensive exploration of human-centered AI principles and practical recommendations for fostering a technology landscape that prioritizes human well-being and empowerment.
