The head of the U.K.’s financial regulator announced plans to explore how Big Tech companies’ access to extensive data might lead to improved financial products and more options for consumers.
The regulatory shift seeks to maximize artificial intelligence’s (AI’s) potential for innovation, competitive pricing and expanded options for consumers and businesses. The move underscores a global trend of examining and potentially harnessing tech companies’ power with new regulations.
“This announcement is interesting in that the U.K. seems to be taking a different approach to innovation than the EU,” Gal Ringel, co-founder and CEO at Mine, a global data privacy management firm, told PYMNTS. “The EU, having just passed the AI Act, regularly goes out of its way to regulate technology before it reaches the market. The U.K. taking the approach of working hand-in-hand with Big Tech to help harness data insights and build better products puts a lot more faith in businesses and the free market.”
He added, “One approach is not better than the other, as the EU prioritizes end-user safety and privacy and the traditional places like the U.S. have prioritized the end output, but seeing the U.K. start to diverge more from the EU is something to watch as the AI space heats up.”
Call for Action on Data
During a presentation at a Digital Regulation Cooperation Forum event, Nikhil Rathi, who leads the U.K.’s Financial Conduct Authority (FCA) and chairs the forum, laid out his main concerns about Big Tech firms. Rathi said that if the FCA’s analysis shows that tech firms’ data can benefit financial services, the regulator would encourage more data sharing between tech and financial companies.
“The dominance of a handful of firms and further entrenchment of powers will imperil competition and innovation,” Rathi said in the speech. “And, alongside promoting effective competition, the FCA has a primary objective to protect consumers from harm.”
The FCA also released a feedback statement regarding its request for input on data-sharing practices between Big Tech and financial services firms. While Big Tech companies have access to financial data through open banking, they are not obligated to reciprocate by sharing their data with the financial sector.
Ringel noted that the larger the dataset, the more insight it yields and the more reliable a baseline it provides for AI and other products.
“Those benefits, especially when combined with data collection and scraping practices that do not violate user privacy or safety, can drive innovation that leads to faster and more intuitive technologies on the consumer market,” he added.
Growing Calls for Regulation
The decision by U.K. regulators to revisit their approach to AI and data use in Big Tech reflects a broader global trend evident in several recent regulatory actions. For instance, the EU has passed a sweeping AI Act.
In the United States, there has been increased scrutiny under the Biden administration, which has advocated for more rigorous enforcement of antitrust laws, particularly concerning Big Tech’s data practices. The Chinese government has implemented strict data protection laws and has cracked down on the previously unregulated expansion of tech firms like Alibaba and Tencent.
As PYMNTS previously reported, the U.K. is adopting a distinctly “pro-innovation” stance on AI regulation, diverging from its EU counterparts, which have unanimously agreed on the final text of the EU’s AI Act. The AI Act adopts a risk-based framework for regulating AI applications. Once enacted, it will affect every AI company serving the EU market and any users of AI systems within the EU, though it does not extend to EU-based providers serving markets outside the bloc.
In contrast, the U.K. government favors an alternative regulatory framework that differentiates AI systems based on their capabilities and the outcomes they produce, rather than on risk categories alone. According to the U.K. government’s February response to its consultation on AI regulation, the plan is to implement sector-specific regulation guided by five core AI principles rather than enacting dedicated AI legislation. This approach aims to foster innovation by tailoring regulation to the particular needs and risks of each sector.
Benoît Koenig, co-founder of Veesion, which makes AI-powered gesture recognition software, told PYMNTS that the EU AI Act is necessary for building trust in AI technologies.
“For businesses operating within the EU, this will necessitate a greater focus on compliance, particularly for AI applications deemed high risk, which could include areas like surveillance and biometric identification,” he added. “This might increase operational costs and demand more rigorous testing and documentation processes.”
U.S. companies with EU operations or customers must adapt their AI strategies to comply with the forthcoming AI Act, Koenig said.
“It could also serve as a precursor to similar regulations in the U.S., prompting businesses to proactively adopt more stringent ethical standards for AI development and use,” he added.
“Overall, while the act presents certain challenges, it also offers an opportunity for businesses to lead in the ethical use of AI, fostering innovation that is not only technologically advanced but also socially responsible and trusted by the public.”