Michigan to Participate in State Initiative Regulating AI Political Ads amid Pending Federal Legislation

LANSING, Mich. (AP) — Michigan is set to join a growing number of states combating deceptive uses of artificial intelligence and manipulated media, while broader regulations are still being debated by Congress and the Federal Election Commission ahead of the 2024 elections.

Under forthcoming legislation expected to be signed by Gov. Gretchen Whitmer, political campaigns at both the state and federal level will be required to clearly disclose any use of artificial intelligence in political advertisements aired in Michigan. The bill would also prohibit the distribution of AI-generated deepfakes within 90 days of an election unless accompanied by a separate disclosure identifying the content as manipulated.

Deepfakes, which are fabricated media portraying individuals engaging in actions or making statements they have not, are produced using generative artificial intelligence—a form of AI capable of swiftly generating convincing images, videos, or audio recordings.

There is a growing apprehension that generative AI might be leveraged in the upcoming 2024 presidential campaign to deceive voters, impersonate candidates, and subvert elections on an unprecedented scale and pace.

Various candidates and committees in the electoral race are already exploring the use of this rapidly evolving technology, which has become more accessible, efficient, and cost-effective for the general public in recent years.

For instance, in April the Republican National Committee released an entirely AI-generated ad imagining a hypothetical future of the United States under a reelected President Joe Biden. The ad carried a small disclaimer noting its AI origin but featured realistic yet fabricated images of boarded-up businesses, armored military patrols in the streets, and waves of immigration sparking panic.

In a separate instance in July, Never Back Down, a super PAC supporting Republican Florida Gov. Ron DeSantis, used an AI voice-cloning tool to imitate the voice of former President Donald Trump, making it appear that he had narrated one of his social media posts aloud, even though he never spoke the statement.

Experts caution that these instances offer just a glimpse of the potential misuse of AI deepfakes by campaigns or external entities for more malevolent purposes.

Several states, including California, Minnesota, Texas, and Washington, have enacted laws regulating deepfakes in political advertising. Similar legislation has been proposed in Illinois, New Jersey, and New York, as reported by the nonprofit advocacy organization Public Citizen.

The Michigan legislation requires any individual, committee, or other entity that distributes an advertisement for a candidate to clearly state whether it was generated with artificial intelligence. In print ads, the disclosure must appear in the same font size as the majority of the text; in television ads, it must be displayed for at least four seconds in text of a size similar to the majority of any other text in the ad, according to an analysis by the state House Fiscal Agency.

Advertisements featuring deepfakes within the 90-day period preceding an election would necessitate a distinct disclaimer informing viewers that the content has been manipulated to depict speech or actions that did not occur. In the case of video content, the disclaimer must be prominently visible throughout the entirety of the video.

Violating the proposed laws would be a misdemeanor punishable by up to 93 days in prison, a fine of up to $1,000, or both for a first offense. The attorney general or the candidate harmed by the deceptive media could petition the appropriate circuit court for relief.

While federal lawmakers emphasize the necessity of regulating deepfakes in political advertising and have convened discussions on the matter, Congress has yet to pass any legislation.

A bipartisan Senate bill, co-sponsored by Democratic Senator Amy Klobuchar of Minnesota, Republican Senator Josh Hawley of Missouri, and others, aims to prohibit “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.

In early November, Michigan Secretary of State Jocelyn Benson joined a bipartisan discussion on AI and elections in Washington, D.C., urging senators to pass the federal Protect Elections from Deceptive AI Act introduced by Klobuchar and Hawley. Benson also encouraged senators to press for similar state-level legislation when they return to their home states.

Benson highlighted the limitations of federal law in regulating AI at the state and local levels, underscoring the need for federal funding to address the challenges posed by AI.

The Federal Election Commission took a preliminary step in August toward potentially regulating AI-generated deepfakes in political advertisements under existing rules prohibiting “fraudulent misrepresentation.” Despite soliciting public feedback on a petition submitted by Public Citizen, the commission has yet to issue a ruling.

In efforts to mitigate the dissemination of harmful deepfakes, social media companies have introduced guidelines. Meta, the parent company of Facebook and Instagram, announced that political ads on its platforms must disclose if they were created using AI. Similarly, Google unveiled a policy in September requiring political ads on YouTube and other Google platforms to be labeled if AI was used in their creation.
