
Exploring the Impact of AI: Over a Third of Texas State Agencies Embrace Artificial Intelligence

Amid fears that AI could heighten bias or affect privacy, Texas is forming an advisory committee to monitor how state agencies use the technology.

The Texas Workforce Commission turned to artificial intelligence in March 2020 amid a surge of unemployment claims.

A report by Tom Abrahams of ABC13 discusses the impact of artificial intelligence on sectors worldwide, including its role in Texas.

The chatbot, affectionately named “Larry” in memory of the late Larry Temple, former head of the agency, was developed to help Texans navigate the process of applying for unemployment benefits.

Using AI language processing, Larry functioned like a next-generation FAQ section, answering user questions about unemployment cases by selecting the most appropriate response from a pool of prewritten answers drafted by human staff. Before the introduction of Larry 2.0 in March of the following year, the chatbot had handled more than 21 million inquiries.
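The article does not describe Larry’s internals, but a retrieval-style FAQ bot of the kind described above, one that matches a user’s question against a pool of staff-written answers, can be sketched roughly as follows. This is a minimal illustration assuming a simple TF-IDF similarity match; the example answers, the threshold, and the fallback message are hypothetical and are not the Workforce Commission’s actual system.

```python
# Illustrative sketch only: pick the prewritten answer most similar to a
# user's question. The answer pool and TF-IDF matching are assumptions for
# illustration, not the Texas Workforce Commission's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pool of responses written by human staff.
prewritten_answers = [
    "You can submit an unemployment benefits claim through your online account.",
    "Payment requests must be submitted on a regular schedule while your claim is active.",
    "If you disagree with a decision on your claim, you may file an appeal.",
]

vectorizer = TfidfVectorizer()
answer_vectors = vectorizer.fit_transform(prewritten_answers)

def best_answer(question: str, threshold: float = 0.1) -> str:
    """Return the prewritten answer most similar to the user's question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, answer_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        # No close match: hand the question off rather than guess.
        return "Sorry, I don't have an answer for that. Please contact an agent."
    return prewritten_answers[best]

print(best_answer("How do I apply for unemployment benefits?"))
```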

Larry serves as a prime example of how government agencies are integrating artificial intelligence into their operations. While advancements in state government systems have been notable in recent years, concerns have been raised regarding potential issues such as bias, data privacy, and technology governance. To address these concerns, the Legislature has committed to actively monitoring the state’s utilization of AI.

State Representative Giovanni Capriglione, R-Southlake, introduced a bill aimed at optimizing the state’s implementation of AI systems.

Capriglione believes that this initiative will bring about a significant transformation in how state affairs are conducted.

In June, Governor Greg Abbott signed House Bill 2060 into law, creating an advisory committee to evaluate how state agencies currently use AI and to consider whether the state needs an ethical code for the technology. The committee itself is not responsible for regulating the state’s AI practices.

Artificial intelligence encompasses a range of technologies that simulate and expand upon human logic through computer systems. AI systems like ChatGPT are categorized as generative AI, capable of producing unique responses based on user inputs. By analyzing large datasets, AI can automate tasks that were traditionally performed by humans. The focus of HB 2060 is on automated decision-making processes.

According to a 2022 report from the Texas Department of Information Resources, over a third of state agencies in Texas currently leverage artificial intelligence in their operations. The Workforce Commission offers an AI tool for job seekers, providing personalized job recommendations. AI is also utilized in language translation, speech-to-text tools for call centers, as well as enhancing security and fraud detection measures.

Automation of time-consuming tasks, such as tracking expenses and budgeting, is another area where AI is making an impact on productivity and efficiency. In 2020, the DIR established an AI Center for Excellence to support state agencies in adopting AI technologies. However, the level of AI implementation across state agencies is not extensively monitored, as participation in the DIR’s center is voluntary, and each agency typically manages its own technology team.

While there are currently no specific disclosure requirements regarding the types and uses of AI systems, Texas state agencies are mandated to ensure that their AI technologies comply with state safety regulations. Under HB 2060, each agency is required to report this information to the AI advisory committee by July 2024.

Capriglione emphasized the importance of fostering innovation in AI applications while acknowledging the challenges posed by data quality issues that can impede the system’s intended functionality.

As concerns about the ethical and practical implications of AI technology grow, there is a call for greater oversight. The establishment of an advisory committee of experts in constitutional law, ethics, law enforcement, and AI is seen as a crucial step toward overseeing the state’s use of the technology.

Research conducted by Samantha Shorey, an associate professor at the University of Texas at Austin, highlights the cultural impacts of AI, particularly its potential to exacerbate existing inequalities. The use of AI in decision-making processes, such as determining eligibility for social services, has raised ethical concerns, as demonstrated by a case in Pennsylvania where an AI system allegedly discriminated against disabled parents.

Suresh Venkatasubramanian, director of the Center for Technological Responsibility at Brown University, warns that AI systems can perpetuate biases present in historical data, leading to discriminatory outcomes based on factors like gender, religion, or culture.

Privacy concerns arise from the vast amount of data collected by AI systems, raising questions about transparency and control over how this data is utilized over time.

Jason Green-Lowe, executive director of the Center for AI Policy, advocates for stricter regulations to ensure AI technologies are subject to thorough evaluation and scrutiny to prevent potential harm or misuse.

By December 2024, the AI advisory committee is expected to present its findings and recommendations to the Legislature. Meanwhile, interest in implementing AI technologies at various levels of government is on the rise, with the artificial intelligence user group led by DIR attracting representatives from state agencies, educational institutions, and local governments.

The increasing interest in AI technologies underscores the need for proactive measures to address ethical, legal, and societal implications associated with their deployment.

Last modified: January 15, 2024