
**Collaborative Efforts of Banks to Offer AI Risk Mitigation Strategies**

A consortium of banks focused on cybersecurity has published a series of white papers aimed at navigating the risks and opportunities of artificial intelligence in the financial services sector.

This month, a consortium of financial institutions focused on cybersecurity published a series of white papers discussing the risks, threats, and appropriate applications of artificial intelligence within the financial services sector.

The six papers, released by the nonprofit Financial Services Information Sharing and Analysis Center (FS-ISAC), cover a wide range of topics. These include the cybersecurity threats associated with AI, ways in which banks can leverage AI to bolster their cyber defenses, and the key principles that banks should consider when creating AI-based tools and applications.

Referred to as a “framework” by FS-ISAC, these documents are designed to complement existing resources from other nonprofits and governmental entities that tackle similar issues related to managing risks and harnessing the potential of AI in a transparent and secure manner.

Key components of the framework draw on existing resources, including guidance from the National Institute of Standards and Technology (NIST), guidelines for secure AI development established by international security organizations, and a white paper on generative AI development principles from the Association for Computing Machinery.

Benjamin Dynkin, a senior director at Wells Fargo and chair of FS-ISAC’s AI Risk Working Group, which crafted the white papers, says the framework is a timely initiative given the mounting pressure on banks to integrate AI effectively.

These publications aim to provide timely guidance on the safe and responsible use of AI, offering actionable steps for the industry to address the escalating risks associated with AI technologies.

Hiranmayi Palanki, principal architect at American Express and vice chair of FS-ISAC’s AI Risk Working Group, highlights the pivotal role played by the six white papers in helping financial institutions combat threat actors who are increasingly leveraging generative AI for malicious purposes.

While acknowledging the abundance of existing documentation, FS-ISAC positions its white papers as among the first guidelines and standards tailored specifically to the financial services sector, though regulators such as the New York Department of Financial Services have issued sector-specific guidance as well.

The six papers identify four primary threats posed by AI that FS-ISAC recommends financial institutions prioritize: deepfakes, employee misuse of generative AI, novel hacking techniques, and improper information usage.

One of the white papers, “Combating Risks and Reducing Dangers Posed by AI,” delves into strategies for mitigating these priority threats and touches on issues such as malicious GPT variants used by cybercriminals and data poisoning.

The remaining five papers cover topics such as adversarial AI frameworks, integrating AI into cyber defenses, responsible AI principles, vendor evaluation for generative AI, and establishing acceptable use policies for external generative AI.

Furthermore, FS-ISAC released a report titled “Financial Services and AI: Embracing Opportunities, Managing Risks,” serving as a companion piece to the white papers.

Noteworthy organizations involved in shaping this framework include Wells Fargo, Goldman Sachs, FirstBank, Bank of Hope, NBT Bancorp, MUFG Bank, Ally Financial, as well as non-banking entities like Mastercard, American Express, and Aflac.
