Wisconsin legislators were poised to vote Thursday on measures to regulate artificial intelligence, joining numerous states grappling with how to manage the technology ahead of the November elections.
The Assembly’s agenda included a bipartisan proposal requiring political candidates and groups to disclose the use of AI in advertisements, with violations carrying a $1,000 fine.
Several organizations, including the League of Women Voters and the state’s newspaper and broadcaster associations, have registered in support of the proposal; no groups have registered against it.
A separate Republican-authored proposal awaiting Assembly approval would criminalize the creation and possession of AI-generated child sexual abuse images, punishable by up to 25 years in prison. Current law already covers such offenses in their traditional forms; the bill would extend it to AI-generated depictions of minors. No one has registered in opposition to the measure.
The Assembly was also set to take up a measure directing auditors to review the use of AI within state agencies. It would require agencies to report by 2026 on how AI could make their operations more efficient, and to develop plans by 2030 for streamlining those operations.
The bill sets no specific targets for reducing the state workforce and does not call for replacing workers with AI. Representative Nate Gustafson said the goal is to find efficiencies amid ongoing labor shortages, rejecting claims that the bills are aimed at supplanting humans with AI.
AI covers a broad range of technologies, from recommendation algorithms on platforms like Netflix to generative systems such as ChatGPT that can write text or produce images, audio, and video. The spread of generative AI tools in the commercial sector has stirred both excitement and concern over their capacity to deceive people and spread misinformation.
Over the past couple of years, states across the U.S. have moved to regulate AI: roughly 25 states, Puerto Rico, and the District of Columbia introduced AI-related bills last year alone.
Texas, North Dakota, West Virginia, and Puerto Rico have created advisory bodies to study and monitor AI systems used by their state agencies, while Louisiana has formed a security committee to examine AI’s impact on state operations, procurement, and policy.
The Federal Communications Commission, meanwhile, recently banned AI-generated voices in robocalls after calls that mimicked President Joe Biden’s voice were used to discourage participation in New Hampshire’s primary election.
Sophisticated generative AI tools, from voice cloning to image generators, have already appeared in elections in the U.S. and abroad. In the previous U.S. presidential campaign, several advertisements used AI-generated content, and some candidates experimented with AI chatbots to engage voters.
The Biden administration issued non-binding guidelines for AI in 2022 that largely set out aspirational goals, but federal legislation regulating AI in political campaigns has yet to pass Congress.