
### Microsoft’s AI Social Wrangler: Unveiling the Architect of Digital Interactions

POLITICO’s weekly transatlantic tech newsletter uncovers the digital relationship between cri…

SAME DIGITAL BRIDGE, NEW YEAR. I'm Mark Scott, POLITICO's chief technology correspondent, wishing everyone a happy new year. Here's my first disclosure of 2024: I am an unwavering stickler for language. So this was, without doubt, the best Christmas card I received over the holidays. Readers, take note.

Enough with the holiday cheer. Let's get down to business:

— Natasha Crampton is tasked with incorporating Microsoft’s AI principles into the global artificial intelligence market.

— Over the past month, I have interviewed over 50 individuals regarding their access to social media data. The insights gleaned and the subsequent actions are detailed below.

This year brings the biggest wave of elections the world has ever seen. Jot that down in your notebook and stay vigilant.

THE WOMAN BEHIND MICROSOFT'S AI GOVERNANCE

NATASHA CRAMPTON, UNLIKE Sam Altman of OpenAI or Satya Nadella of Microsoft, may not be a household name. Nevertheless, the New Zealand-born lawyer, currently serving as Microsoft’s Chief Responsible AI Officer, is spearheading the integration of the company’s AI protocols into its offerings. She has emerged as a pivotal figure in shaping global AI regulations alongside Microsoft President Brad Smith. Recently, I had the opportunity to converse with her to gather her insights. (Interestingly, our discussion took place a week before Sam Altman’s rollercoaster journey at OpenAI, including Microsoft’s eleventh-hour involvement.)

During a phone conversation in early November, Crampton said, "What strikes me is the emergence of these critical elements." Those elements encompass domestic regulations, global governance, and industry standards, and their harmonious interaction is essential for effective AI regulation. She speaks from firsthand experience: she contributed to Microsoft's advocacy work that led to the White House's voluntary AI commitments and the subsequent AI executive order. She sits on the UN's AI advisory panel, which published its inaugural report last month, and took part in the AI Safety Summit convened by the United Kingdom.

A key aspect involves not reinventing the wheel. As emphasized by the Microsoft professional, existing regulations still hold relevance. It is imperative for authorities to provide guidance on the application of these laws to novel AI methodologies. This clarity is pivotal in setting regulatory expectations. Such a stance may not come as a surprise to U.S. residents. Recent White House executive orders empower governmental bodies to curb potential abuses of emerging technologies. It also serves as a reminder to the European Union that there is no necessity to devise new regulations from scratch when existing standards suffice.

Crampton navigated contentious waters when addressing one of the most debated AI governance issues: whether to prioritize immediate risks like data bias or long-term concerns such as AI-triggered catastrophes. She underscored the importance of addressing both types of threats simultaneously. This comprehensive approach entails a strong focus on testing and evaluation to mitigate immediate issues like AI-driven racial profiling and enduring risks such as AI systems surpassing human control. She elaborated, “Comprehending these risks involves rigorous testing and evaluation practices,” a less glamorous facet of a technology that has captured widespread attention.

Echoing concerns raised globally, the Microsoft executive highlighted worries about AI-powered bioweapons. She acknowledged that "several governments have expressed apprehensions on this front." While much of this information is publicly available through a cursory online search, there is a prevailing notion that advanced AI systems could swiftly outpace human capabilities in devising pandemic-scale bioweapons. According to Crampton, "certain governments are deeply considering the potential acceleration of specific risks by AI," such as the development of bioweapons.

Determining who should have access to cutting-edge technology remains a contentious issue within AI governance circles. While a faction of Silicon Valley titans, including Microsoft, Google, and OpenAI, advocate for proprietary frontier models, which represent the latest AI iterations, rivals like Meta champion an open-source approach that ensures widespread access to the technology. Crampton acknowledged that "there exists a spectrum of openness versus proprietary methodologies." Determining the optimal approach in each scenario is inherently context-dependent.

The crux of the matter lies in recognizing that not all AI systems are created equal. While advocating for maximum openness in less complex applications like chatbots or AI-powered smartphone apps, Crampton aligns with Microsoft's stance that stringent controls should govern the most advanced systems. This cautious approach underscores the necessity of comprehensively understanding the capabilities of new models before broad dissemination. She emphasized, "We have observed instances where we could not anticipate all their capabilities (of new models) beforehand. Prior to widespread deployment, it is imperative to grasp these capabilities and their implications."

DATA ACCESS ESSENTIALS (EVERYTHING YOU WANTED TO KNOW BUT WERE AFRAID TO ASK)

I JUGGLED TWO ROLES EXTENSIVELY LAST YEAR. First, my routine responsibilities as a general tech journalist at POLITICO. Second, my research, as a non-resident fellow at Brown University's Information Futures Lab, into how regulators, public health authorities, academia, tech firms, and civil society groups access data from social media platforms. The primary objective is to understand how this data is currently used for regulatory oversight, epidemic surveillance, and public accountability. The next phase involves leveraging these insights to inform forthcoming rules under the EU's Digital Services Act, which mandate that platforms like TikTok, Facebook, and X (formerly Twitter) open up their extensive data repositories to the public. These requirements are currently unparalleled outside of China.

As my fellowship draws to a close, it is imperative to evaluate the landscape. To elicit candid responses regarding data access strategies, I conducted over 50 interviews, all conducted anonymously. This comprehensive investigation represents a pioneering effort in grasping the profound impact of social media on broader society.

Given our limited understanding of these platforms’ internal mechanisms, there is a pressing need to deepen our insights into their operations. This entails enhancing transparency and advancing policy adjustments based on empirical evidence rather than mere intuition. Enhanced access to underlying data, encompassing everything from the functionality of so-called algorithmic sorting systems to the propagation of misinformation across platforms, is indispensable for this endeavor.

Discussions with stakeholders underscored widespread uncertainty regarding the appropriate course of action. Rapidly evolving social media regulations, including those enacted by the EU, UK, Australia, and imminently Canada, have prompted authorities to navigate uncharted waters in detecting potential harms. Authorities have encountered resistance from corporations regarding data access and the subsequent public disclosure, potentially necessitating enhanced regulatory powers. Those anticipating swift data-centric safeguards from new regulations may find their expectations challenged.

A nod to the American public health community is warranted. These professionals, integral to combating the COVID-19 pandemic by identifying outbreak signals on social media, faced significant hurdles. However, individuals within this sphere exhibited limited access to social media data, inadequate tools or expertise to leverage such data, and scant connections to those possessing such capabilities. Relying on a private Instagram group of physicians to glean insights into COVID-19 concerns is hardly a sustainable approach when preparing for future global health crises and the ensuing information warfare.

Researchers based in the United States currently enjoy privileged access to social media data. Nonetheless, these connections often stem from prestigious affiliations with longstanding ties to platforms, such as Stanford University and New York University. Research laboratories lacking such direct affiliations are hamstrung in their investigative pursuits. The political spectrum’s right flank has accused social networks, U.S. federal authorities, and these experts of colluding to silence conservative voices online, resulting in a marked decline in scientific data accessibility.

The veneer of collaboration surrounding data collection, storage, and access swiftly eroded within tech firms. A lack of comprehensive understanding regarding the cumulative data amassed by these companies was evident, exacerbated by the ad hoc creation of directories to store such information. Furthermore, individuals within trust and security teams noted that corporate attorneys, acting in the firm’s best interests, often intervened and impeded data sharing on legal grounds, even when inclined to share data with external entities.

The crux of my research underscores the imperative for more widely accessible tools that enable a broader cohort to access social media data for regulatory, academic, or accountability purposes while upholding ethical standards and prioritizing user privacy. Aligning with the EU’s regulatory framework (primarily due to its current enforcement), I advocate for establishing a “clearinghouse” for social media—a publicly accessible repository of such data.

However, further efforts are warranted to establish a standardized methodology for comparing data from platforms like Twitter and TikTok. The multifaceted threats posed by social media transcend individual platforms. Before a comprehensive understanding of these platforms can materialize, the ability to juxtapose diverse social networks through real-time quantitative datasets is indispensable. Should any of these insights pique your interest, please do not hesitate to reach out, enabling us to continue this dialogue.

THROUGH THE LENS OF PERSONALITIES

[Infographic]

GEARING UP FOR A DIGITAL DEMOCRACY SHOWDOWN

As over 2 billion individuals—spanning from the U.S. and the EU to India and Indonesia—participate in what constitutes the largest democratic exercise globally, I will be spearheading POLITICO’s digital election coverage this year. Delving into the digital realm of essentially local popularity contests, expect Digital Bridge to feature segments dedicated to what I term “Digital Democracy” throughout 2024. These elections will witness an intricate interplay of social media politicking, traditional campaign financing, and cutting-edge technologies like artificial intelligence, increasingly shaping electoral landscapes online.

However, a foundational understanding is essential before delving into specifics. This year, elections are slated in over 50 nations. Here are a few noteworthy ones to monitor, in my estimation. Taiwan is set to conduct its presidential and legislative elections on January 13, a pivotal moment testing the nation's relations with China. Indonesia follows with its presidential and legislative elections on February 14, while Russia, amid a less democratic backdrop, holds its presidential elections on March 17. Irish voters will weigh in on altering the nation's laws pertaining to gender and family through two referendums scheduled for March 8.

South Korean voters head to the polls on April 10, with India’s mega-general elections likely to unfold in April or May. Noteworthy mentions include South Africa and the United Kingdom, with both nations anticipated to hold national elections at some point this year, although specific dates remain unconfirmed. The EU gears up for its parliamentary elections on June 6–9, followed by the U.S. mega-election on November 5. Brace yourselves.

WONK OF THE WEEK

BRAZIL HOLDS THE G20 PRESIDENCY THIS YEAR. This grants the country and its G20 sherpa, Mauricio Lyrio, a significant role in shaping collaborative efforts among the world's largest economies in 2024, spanning anti-corruption measures to climate change initiatives.

Given President Luiz Inácio Lula da Silva's focus on combating hunger, poverty, and inequality, alongside sustainable development and reforms of international governance, digital issues are not at the top of Brasília's agenda. Expect those discussions to center on helping developing nations close the gap with their more advanced counterparts.

Here are key G20 dates to mark on your calendar: Digital economy dialogues are slated for January 31–February 1, April 18–20, June 11–13, and September 9–14. The primary G20 summit is expected to convene from November 18 to 19, with a dedicated research and innovation gathering scheduled for May 22 to 24.

QUOTABLE QUOTES

European Parliament senior advisor Laura Caroli, engaged in deliberations on the EU's AI Act, shared on LinkedIn: "AI may not lead to fatalities, but it exacts a toll on those directly impacted." She noted the mental strain many experienced during the final stages of the process, emphasizing the need to navigate these challenges with resilience (a sentiment echoed by all).

CURRENT READS

— The Digital Forensic Research Lab at the Atlantic Council delved into how social media algorithms and software architecture influenced ongoing conflicts in the Middle East. Further insights are available.

— Apple collaborated with Columbia University researchers to unveil Ferret, an open-source large language model. A comprehensive study outlining its capabilities is accessible below.

— The Dutch government’s revocation of an export license for ASML, the Netherlands-based manufacturer of cutting-edge semiconductor production equipment, has repercussions for some of the company’s Chinese clients. More details are provided here.

— Stanford University researchers outline the most significant AI advancements anticipated over the next 12 months. Dive into these insights.

— Australia released a series of mandatory industry codes of conduct governing how platforms handle illicit and “restricted” content. Explore further.

— Fiona Alexander, representing the Center for European Policy Analysis, critiques the U.S. government’s decision to halt its push for unrestricted data flows through the United States Trade Representative, signaling a retreat from its global digital leadership role.

Last modified: January 16, 2024