Mergers and acquisitions involving targets with substantial software or intellectual property assets often require a specialized diligence process focused on open-source compliance, security, and intellectual property considerations.
To address the complexities stemming from the use of generative AI, acquirers should establish a dedicated diligence workstream for issues specific to the technology. This recommendation applies across industries and sectors, and it requires rethinking diligence strategies to cover the diverse potential applications of generative AI.
The widespread adoption of generative AI tools is driven by their ability to enhance existing products, expedite the development of new offerings, and optimize operational processes. However, alongside these benefits, the integration of generative AI introduces certain risks.
There is a risk that sensitive information or trade secrets used as inputs to generative AI tools may be inadvertently disclosed, destroying their confidentiality and, with it, their legal protection.
Generative AI models may be trained on third-party content, raising concerns regarding fair use versus proprietary rights and the potential inclusion of copyrighted material or proprietary data in the generated output.
Where generative AI is employed to generate software code, the resulting code may be subject to open-source licensing requirements, or it may contain security flaws or malicious code.
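As a purely illustrative sketch of what a first-pass review of AI-generated code might look like, the snippet below scans source files for common open-source license markers (the pattern list and function names are assumptions for this example, not a prescribed diligence tool). A real review would rely on dedicated license-scanning tooling and legal analysis, not pattern matching alone.

```python
import re
from pathlib import Path

# Hypothetical first-pass patterns; a production scan would use a
# purpose-built license scanner rather than this short list.
LICENSE_PATTERNS = [
    r"SPDX-License-Identifier:\s*\S+",
    r"GNU General Public License",
    r"Apache License,?\s*Version 2\.0",
    r"Mozilla Public License",
]

def flag_license_markers(root: str) -> dict[str, list[str]]:
    """Return a mapping of file path -> matched license markers
    for Python source files under `root`."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        matches = [p for p in LICENSE_PATTERNS if re.search(p, text)]
        if matches:
            hits[str(path)] = matches
    return hits
```

A scan like this only surfaces candidates for human and legal review; it cannot determine whether a license obligation actually attaches to the code.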
Furthermore, conventional forms of intellectual property protection such as patents and copyrights may not be readily applicable to content produced by machines.
To mitigate these risks, acquirers should conduct comprehensive due diligence on a target company’s utilization of generative AI, scrutinizing its policies, practices, and risk mitigation strategies in this domain.
Key considerations include whether formalized policies govern generative AI usage, whether risk mitigation measures are effective, whether compliance is monitored, and whether there are gaps in the target's approach to managing risks associated with generative AI.
Similar to open-source diligence, an assessment of the generative AI tools employed by the target should include an analysis of the risks posed to proprietary information, the presence of safeguards against harmful content generation, and the adherence of external parties to the target’s generative AI policies.
In instances where the target utilizes AI algorithms or large language models, additional scrutiny should be placed on data sources, ownership rights, data usage compliance, potential IP implications, and strategies to address bias in AI outputs.
Implementing bias mitigation strategies and ensuring human oversight of AI recommendations are essential steps in reducing legal risks and liabilities associated with biased AI outputs, particularly in sensitive areas like employment decisions.
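One common quantitative screen in employment contexts is the "four-fifths rule," which flags a potential disparate impact when a group's selection rate falls below 80% of the most-favored group's rate. The sketch below (an illustration, not legal advice; the function names and data shape are assumptions for this example) shows how such a screen might be computed from selection outcomes.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`outcomes` maps group -> (selected, total); returns each
    group's selection rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flag(outcomes: dict[str, tuple[int, int]],
                     threshold: float = 0.8) -> list[str]:
    """Return groups whose selection rate is below `threshold`
    times the highest group's rate (the four-fifths screen)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * top]

# Example: group_b's rate (0.30) is below 0.8 * 0.50 = 0.40,
# so it would be flagged for further review.
flags = four_fifths_flag({"group_a": (50, 100), "group_b": (30, 100)})
```

A flag from a screen like this is a trigger for human review of the underlying AI recommendations, consistent with the oversight practices described above, not a legal conclusion in itself.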
Continuous monitoring of the evolving legal and technical landscape concerning generative AI is crucial for all stakeholders, enabling a proactive approach to risk management and compliance with emerging standards.
As AI reshapes business practices and due diligence norms, acquirers must adjust their strategies to effectively navigate the complexities of AI integration, thereby safeguarding deal value and minimizing unforeseen risks.