The joint statement released on Friday by the European Union and the United States underscores their commitment to closer collaboration on artificial intelligence (AI). The agreement covers AI safety and governance but also extends to other tech-related issues, such as developing digital identity standards and pressing platforms to uphold human rights.
The sixth meeting of the EU-U.S. Trade and Technology Council (TTC) has paved the way for this collaboration, a forum aimed at mending transatlantic relations strained during the Trump administration.
However, with the looming possibility of Donald Trump’s return to the White House in the upcoming U.S. presidential election, the future of EU-U.S. cooperation on AI and other strategic tech domains remains uncertain.
Despite this political uncertainty, the current climate on both sides of the Atlantic favors closer alignment on tech matters. The joint statement serves as a message to both sets of voters, advocating for a collaborative approach over a divisive one in the upcoming elections.
In the realm of AI, the statement emphasizes a risk-based approach and the promotion of safe, secure, and trustworthy AI technologies. It also encourages advanced AI developers in both regions to align with the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems.
A significant outcome of the TTC meeting is the establishment of a “Dialogue” between the European AI Office and the U.S. AI Safety Institute to facilitate deeper collaboration and information sharing within their AI research ecosystems. This collaboration aims to advance the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management.
Furthermore, the agreement highlights the importance of agreeing on definitions for key AI terms to support ongoing AI standardization efforts.
Collaboration between the EU and the U.S. also extends to research that applies machine learning to areas such as healthcare, agriculture, and climate change mitigation, with a focus on sustainable development and extending the benefits of AI to developing nations.
The joint efforts aim to address global challenges through AI applications, with a specific focus on energy optimization, emergency response, urban reconstruction, and climate forecasting. The collaboration also intends to expand by involving more global partners in the AI for Development Donor Partnership.
In the context of platform governance, both sides emphasize the need for Big Tech companies to prioritize protecting information integrity, especially given the risks posed by AI-generated content such as ‘deepfakes’.
Looking ahead to 2024 as a significant year for democratic resilience due to elections around the world, the EU and U.S. express concerns about AI applications being misused to spread misinformation and facilitate interference. They call on platforms to improve researchers’ access to data so that societal risks can be studied effectively.
The statement also touches on ongoing collaboration in areas such as e-identity standards, clean energy, quantum technologies, and 6G development, reflecting a comprehensive effort to strengthen transatlantic cooperation in various tech domains.