
### Key Questions Businesses Should Ask When Buying AI Tools

Parker Poe’s Sarah Hutchins, Debbie Edney, and Robert Botkin emphasize the importance of updating r…

As businesses increasingly turn to artificial intelligence tools to enhance, supplement, or even supplant various functions, it is crucial that they put risk-management systems in place that reflect sound purchasing practices.

Failing to do so may lead companies to embrace what looks like an AI-powered universal solution, only to discover that it is a Pandora’s box of legal and regulatory risks.

When procuring AI tools, companies should consider the following questions and follow the World Economic Forum’s recommendations for responsible AI adoption.

#### Do I Understand the Data?

Despite the apparent complexity of AI tools, their efficacy hinges on the quality of the data they are trained on. Companies should seek assurances from their AI providers about how training data is collected, used, and disclosed. Vendors should be able to demonstrate compliance with relevant laws, including by obtaining the requisite consents when gathering customer data.

Before deploying an AI tool, businesses need to assess how it uses data and how it was trained. Vendors should also outline their governance measures, audits, and other safeguards for ensuring the tool is reliable, useful, and impartial, and for addressing potential bias, inaccuracy, or unfairness.
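As a rough illustration of the kind of audit a buyer might ask a vendor about or run on a sample of its own data, the sketch below compares a tool’s favorable-outcome rates across groups. The group labels and the 80 percent disparity threshold are illustrative assumptions, not a legal standard.

```python
# Minimal due-diligence sketch: compare an AI tool's favorable-outcome
# rates across groups and flag large disparities for follow-up review.
from collections import defaultdict

def outcome_rates_by_group(records):
    """records: iterable of (group, favorable_outcome) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical sample of tool decisions on the buyer's own test data.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]

rates = outcome_rates_by_group(sample)
worst, best = min(rates.values()), max(rates.values())
if best and worst / best < 0.8:  # illustrative threshold, not a legal test
    print("Potential disparity across groups:", rates)
```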

Companies must also consider the scenarios in which domestic data may or may not be uploaded alongside corporate data, and how the vendor will use that data for training purposes.

Lastly, businesses should be mindful of which users have access to corporate data within the tool, and should consider whether that data could be admissible in court if the need arises.

#### Have I Considered Regulatory Scrutiny?

Regulatory bodies such as the Department of Justice, the Federal Trade Commission, and other authorities are watching whether technology companies and their tools foster anti-competitive environments or disadvantage consumers.

Regulators are apprehensive about the potential harm AI tools could inflict on consumers due to the profound insights they offer. For instance, an AI application could be leveraged in marketing and pricing strategies to accurately predict an individual consumer’s purchasing power, enabling companies to maximize profits by offering products at premium prices.

Authorities are also focused on whether discrimination and inaccuracies in AI outcomes harm certain consumer segments, and on whether shared tools could lead to anti-competitive collaboration.

A longstanding tenet of competition law prohibits companies from colluding to set future prices while allowing them to discuss historical prices. As Principal Deputy Assistant Attorney General Doha Mekki has highlighted, AI applications that sift through extensive competitor price data could blur the line between past and present data, or between aggregated and disaggregated data: “Our concerns are further compounded when competitors utilize similar pricing algorithms.”
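As a rough illustration of that distinction, the sketch below limits a pricing model’s inputs to aggregated, sufficiently old competitor prices rather than current, competitor-by-competitor figures. The 90-day lag and the data layout are illustrative assumptions, not a legal safe harbor.

```python
# Illustrative sketch (not legal advice): feed a pricing model only a
# market-wide average of historical competitor prices, never current or
# per-competitor figures.
from datetime import date, timedelta
from statistics import mean

def aggregated_historical_average(observations, lag_days=90):
    """observations: list of dicts with 'competitor', 'observed_on' (date), 'price'."""
    cutoff = date.today() - timedelta(days=lag_days)
    old_enough = [o["price"] for o in observations if o["observed_on"] <= cutoff]
    # Return a single aggregated figure, or nothing if no data is old enough.
    return mean(old_enough) if old_enough else None
```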

Businesses can get ahead of these evolving regulatory risks by evaluating how an AI tool sources its data and whether it relies on dynamic, competitively sensitive information.

#### Have I Mitigated Security Risks?

According to World Economic Forum guidance, cyberattacks targeting AI vendors “have the potential to compromise the integrity of AI decisions and predictions.” Companies should be cautious about feeding personally identifiable information into an AI tool, since that data becomes susceptible to exploitation by malicious actors.

The integration of AI tools can also create vulnerabilities in corporate systems, acting as a gateway for unauthorized access. This risk is particularly pronounced for AI tools that traverse multiple systems to execute specific data retrieval requests.
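One way to narrow that gateway, sketched below under hypothetical system names, is to gate every retrieval request an AI tool makes against an explicit allow-list of sources it is permitted to query.

```python
# Minimal sketch: refuse any retrieval request from an AI tool that targets
# a system not on an explicit allow-list. System names are placeholders.
ALLOWED_SOURCES = {"public_product_catalog", "published_pricing"}

def fetch_for_ai_tool(source: str, query: str) -> str:
    """Gate a single retrieval request before handing it to the real data layer."""
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"AI tool may not access '{source}'")
    # ... hand the request off to the actual retrieval layer here ...
    return f"results for {query!r} from {source}"
```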

Understanding the vendor’s security measures, including its preemptive strategies for detecting intrusions and its response protocols for mitigating the impact of a breach, is paramount.

#### Have I Incorporated Best Practices Into the Contract?

Businesses should ensure that their contracts with AI vendors include clauses addressing common legal concerns such as data usage, data integrity and loss, intellectual property rights, security breaches, and other pertinent matters.

Specific security measures should safeguard certain types of data, for example by restricting access to or encrypting files containing personal information whose exposure would contravene data privacy regulations.
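As a minimal sketch of the encryption half of that recommendation, the example below encrypts a file of customer records before it is shared with or stored by a vendor. It uses the third-party `cryptography` package, the file name is hypothetical, and key management is deliberately left out of scope.

```python
# Minimal sketch: encrypt a file containing personal information at rest.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load the key from a managed secrets store
fernet = Fernet(key)

# "customer_records.csv" is a hypothetical file name for illustration.
with open("customer_records.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)
```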

Furthermore, in line with the World Economic Forum’s recommendations, master service agreements should include an attestation of compliance with prevailing AI regulations and principles, such as certification by the Responsible Artificial Intelligence Institute.

Companies are advised to formulate their own key performance indicators in line with those guidelines. Organizations should also consider whether training programs that educate employees on proper AI use are warranted.
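As a minimal sketch of what tracking such indicators might look like, the example below checks observed metrics against contractual targets. The metric names and thresholds are assumptions for illustration, not a prescribed list from the World Economic Forum or any regulator.

```python
# Illustrative KPI check for an AI vendor contract; names and thresholds
# are placeholders a company would replace with its own negotiated targets.
KPI_TARGETS = {
    "monthly_uptime_pct": 99.5,   # at least
    "answer_accuracy_pct": 95.0,  # at least
    "pii_incidents": 0,           # at most
}

def kpis_met(observed: dict) -> dict:
    """Return a pass/fail flag for each contractual KPI."""
    results = {}
    for name, target in KPI_TARGETS.items():
        value = observed[name]
        results[name] = value <= target if name == "pii_incidents" else value >= target
    return results

print(kpis_met({"monthly_uptime_pct": 99.7, "answer_accuracy_pct": 93.2, "pii_incidents": 0}))
```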

The landscape of business operations is evolving rapidly with the advent of AI. Companies must evaluate AI tools strategically and protect themselves in each contractual arrangement to harness AI’s potential while mitigating the associated risks.
