Bennett Borden, chief data scientist at the law firm DLA Piper, looks to the movies for inspiration when clients ask him about the abstract concept of artificial intelligence and what it could mean for their businesses. In his view, AI has far more in common with Iron Man than with the Terminator.
For those unfamiliar with science fiction: Iron Man is a highly advanced suit of armor that helps its inventor, the industrialist Tony Stark, safeguard the Marvel universe, while the first Terminator movie portrays an AI-powered assassin sent back in time to kill Sarah Connor. The two characters capture the range of popular reactions to AI, from fascination with the technology's capabilities to fear of its impact on humanity. Borden's point is that humans and AI can complement each other by playing to their respective strengths: AI excels at collecting, summarizing and analyzing data, while humans supply wisdom, inference, compassion and persuasion.
That raises the obvious question: Will AI replace professionals? Borden's answer: "No, but lawyers who embrace AI will outperform those who do not."
The anxiety is widespread. A study by the American Psychological Association found that roughly four in ten American workers worry that AI may eventually take over some or all of their job duties.
Those fears are already reaching the courts. Experts count more than 100 AI-related cases currently making their way through the legal system, spanning issues from intellectual property disputes to bias in algorithmic decision-making, and they expect the volume of AI disputes to grow in the coming years.
Several authors have filed copyright infringement lawsuits against OpenAI, and a coalition of visual artists has pending litigation against AI companies including Stability AI, Midjourney and DeviantArt. Legal experts predict that intellectual property disputes are only the tip of the iceberg, with future cases likely to involve data integrity, security and workplace applications. A joint statement from the U.S. Equal Employment Opportunity Commission and other federal agencies warns public and private organizations to use caution when relying on AI for employment decisions.
Part of the problem, Borden notes, is what these models learn from. AI models have been trained on vast datasets of human expression, essentially everything in language that has been digitized over the past six decades. Some of that content is appropriate, beneficial and nontoxic; the rest falls short of those standards.
Businesses, particularly large corporations, face an uncertain regulatory landscape for AI in the near term. Comprehensive federal AI regulation appears unlikely before the 2024 presidential election is decided, leaving states, federal agencies and regulators under pressure to fill the void. Across the Atlantic, the European Union's AI Act, a risk-based framework, has drawn criticism from major corporations such as Heineken and Airbus.
Tom Siebel, CEO and founder of the enterprise AI company C3 AI, argues that the EU framework is anything but easy to parse. "If you can comprehend a single word of it," he quips, "you're one step ahead of me and, in my opinion, the individuals who penned it."
In any case, enforcement of the EU's AI Act is not expected until at least 2025.
Jordan Jaffe, a partner at Wilson Sonsini Goodrich & Rosati in San Francisco, suggests the courts may weigh in well before legislatures make significant pronouncements on AI. Brad Newman, a litigation partner at Baker McKenzie in Palo Alto, California, describes the current landscape as "organized chaos." Newman advocates for rational, innovation-friendly federal AI legislation and is optimistic about Congress's progress in that direction.
Operating a nationwide business grows more complex as states and municipalities enact their own rules. If a company uses AI to analyze video interviews with job candidates, for instance, it must comply with an Illinois law, and similar regulations are in effect in New York City. California is a key jurisdiction to watch: Duane Pozza, a partner at the law firm Wiley Rein LLP, anticipates the state may introduce stringent regulations governing automated decision-making under its consumer protection laws.
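To make that patchwork concrete, here is a minimal Python sketch of how a company might gate an AI hiring tool on jurisdiction-specific obligations. The rule profiles, jurisdiction codes and field names below are hypothetical placeholders for illustration only; they are not statements of what the Illinois or New York City rules actually require.

```python
# Hypothetical sketch: gating an AI hiring tool on per-jurisdiction rules.
# The rule profiles below are illustrative placeholders, not legal advice.
from dataclasses import dataclass


@dataclass
class JurisdictionRule:
    name: str
    requires_candidate_notice: bool = False
    requires_annual_bias_audit: bool = False


# Invented rule set keyed by jurisdiction code (assumed, for illustration).
RULES = {
    "US-IL": JurisdictionRule("Illinois video-interview rule",
                              requires_candidate_notice=True),
    "US-NYC": JurisdictionRule("NYC automated hiring rule",
                               requires_candidate_notice=True,
                               requires_annual_bias_audit=True),
}


def compliance_gaps(jurisdiction: str, notice_given: bool,
                    audit_completed: bool) -> list[str]:
    """Return the unmet obligations for one deployment in one jurisdiction."""
    rule = RULES.get(jurisdiction)
    if rule is None:
        return [f"no rule profile on file for {jurisdiction}; needs legal review"]
    gaps = []
    if rule.requires_candidate_notice and not notice_given:
        gaps.append(f"{rule.name}: candidate notice missing")
    if rule.requires_annual_bias_audit and not audit_completed:
        gaps.append(f"{rule.name}: annual bias audit missing")
    return gaps


print(compliance_gaps("US-NYC", notice_given=True, audit_completed=False))
# -> ['NYC automated hiring rule: annual bias audit missing']
```

The design point is simply that jurisdiction-specific obligations become data rather than tribal knowledge, so a new state or city rule means adding a profile, not rewriting the deployment process.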
On October 30, the Biden administration issued a sweeping executive order aimed at making AI safer and more secure. The directive lays out principles covering workforce development, security, equity and U.S. competitiveness in AI innovation. "The White House has secured voluntary commitments from several industry leaders regarding the governance of AI," Pozza notes.
Danny Tobey, head of DLA Piper's AI practice, cautions against both overregulating and underregulating AI. He emphasizes that companies need to distinguish between generative AI and traditional AI technologies.
Generative AI offers businesses expansive capabilities and correspondingly broad risks, while traditional AI, such as predictive models, is more contained and targeted. In the absence of clear regulatory guidance, companies must make significant AI investments amid legal uncertainty.
Newman stresses that the C-suite must be accountable for ethical AI use within an organization. He recommends appointing a chief AI officer to oversee how AI is deployed, set clear policies, and give data scientists genuine guidance rather than just tools.
Newman also advises businesses to conduct regular assessments, both before and after deployment, to confirm that AI systems are fair, free of bias and compliant with privacy regulations. Transparent communication with employees and customers about how AI is being used is just as important.
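One quantitative check that often appears in such assessments is a comparison of selection rates across groups, sometimes summarized as an impact ratio. The Python sketch below is a minimal illustration under assumed inputs: the group labels and counts are invented, and the 0.8 cutoff is the familiar four-fifths rule of thumb, not a binding legal standard for any particular jurisdiction.

```python
# Minimal sketch of a selection-rate ("impact ratio") check for an AI
# screening tool. Groups, counts, and the 0.8 cutoff are illustrative only.
from collections import Counter


def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from the model's decisions.
    Returns each group's selection rate divided by the highest group's rate.
    Assumes at least one candidate in each group was selected."""
    selected = Counter(g for g, ok in outcomes if ok)
    totals = Counter(g for g, _ in outcomes)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


# Hypothetical audit data: the model screened 400 applicants in two groups.
data = ([("group_a", True)] * 60 + [("group_a", False)] * 140
        + [("group_b", True)] * 40 + [("group_b", False)] * 160)

for group, ratio in impact_ratios(data).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
# group_a selects at 30%, group_b at 20%, so group_b's ratio of 0.67
# falls below 0.8 and gets flagged for review.
```

Running the same check before and after deployment, as Newman suggests, turns "absence of bias" from an assertion into a number that can be tracked over time.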
Generative AI, used primarily to create new content, introduces challenges around intellectual property, trade secrets, security and quality control.
Jaffe underscores the need for robust guidelines on generative AI use. Employees and contractors may be eager to explore novel applications, he notes, but oversight is essential to mitigate risk and ensure compliance, and everyone involved in the AI lifecycle must understand the potential liabilities that come with deploying it.
Experts recommend establishing dedicated committees or other internal structures to evaluate new AI applications and enforce compliance, with leaders such as the chief security officer or chief information officer driving stringent AI governance. The engagement of both these committees and executive leadership is pivotal.
Some companies are trying to fold AI into their existing governance structures, even as they acknowledge that generative AI's transformative potential may demand distinct frameworks of its own.
The National Institute of Standards and Technology released a comprehensive AI risk management framework earlier this year. By weighing both the opportunities and the risks of deployment, businesses can make decisions calibrated to the risk level of each specific AI use case. Higher-risk applications typically include AI systems that influence decisions about customers and employees.
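As a sketch of how that kind of risk-based triage might look in practice, the following Python snippet tiers proposed AI use cases by a few crude risk signals. The dimensions, weights and tier cutoffs are invented for illustration and are not drawn from the NIST framework itself.

```python
# Illustrative risk tiering for proposed AI use cases. The signals and
# cutoffs are invented for this sketch, not taken from the NIST AI RMF.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    affects_people: bool       # influences decisions about customers/employees
    uses_sensitive_data: bool
    customer_facing: bool


def risk_tier(uc: UseCase) -> str:
    # Weight people-affecting uses most heavily, per the framework's emphasis.
    score = uc.affects_people * 2 + uc.uses_sensitive_data + uc.customer_facing
    if score >= 3:
        return "high: committee review before deployment"
    if score >= 1:
        return "medium: document controls and monitor"
    return "low: standard software review"


cases = [
    UseCase("resume screening", affects_people=True,
            uses_sensitive_data=True, customer_facing=False),
    UseCase("internal document search", affects_people=False,
            uses_sensitive_data=False, customer_facing=False),
]
for uc in cases:
    print(f"{uc.name}: {risk_tier(uc)}")
# resume screening lands in the high tier; internal search stays low.
```

Even a toy rubric like this forces the question the framework is really asking: does this system shape outcomes for people, and if so, who reviews it before it ships?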
Despite the understandable concerns among legal professionals regarding AI risks, many argue that businesses and regulators should not overlook the positive impact AI can have when ethically and responsibly applied.
Tobey emphasizes that lawyers should not be viewed as impediments to AI progress. He advocates a perspective that positions law as an enabler of business advancement.
Borden, his colleague at DLA Piper, agrees, likening the current AI landscape to the second industrial revolution, when companies that embraced transformative technologies flourished and rose to prominence.