
### 2024 Preview: Artificial Intelligence Trends

With the 2024 Legislative Session rapidly approaching, MACo is highlighting key topics for discussion in the General Assembly. Artificial intelligence (AI) is poised to transform how we live, work, and communicate.

The National Artificial Intelligence Initiative Act of 2020 defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

The integration of AI stands to help state and local governments streamline processes, improve service delivery, bolster public safety and security, combat fraud, leverage data analytics, and ensure regulatory compliance. This rapidly evolving technology also has the potential to transform how governments gather public input and engage with residents.

However, the practical implementation and regulation of AI present challenges concerning privacy, security, surveillance, fairness, accessibility, and bias. Particularly in sectors like human resources, healthcare, criminal justice, and financial services, the risks are notably pronounced.

Advocates for regulatory frameworks express concerns about the swift advancement of AI tools and the widespread adoption of largely untested technology without due consideration for potential adverse effects. Conversely, opponents of regulation argue that self-regulation within the private sector is preferable, as external oversight could impede innovation and progress.

While many observers acknowledge the need to balance AI advances with core human values, opinions differ widely on how much regulation, if any, is warranted. Consequently, many open questions remain as policymakers seek to balance protecting the public, fostering innovation, and improving services through AI.

The Department of Legislative Services addresses this issue in its annual compilation of Issue Papers:

Policymakers are increasingly taking notice of generative AI systems and others as they evolve, seeking to better understand them, to regulate them so that users are protected from potential harms, and to encourage the development of safe applications.

#### Key Challenges

##### Data Protection

While data privacy has been a concern since the inception of the Internet, the complex algorithms underpinning AI have spurred discussion of government regulation to prevent the illicit or unethical use of personal data. The owners of these systems, however, have been reluctant to open their inner workings to outside scrutiny, which can make this aspect of AI difficult to regulate.

##### Discrimination and Bias

Because AI algorithms and neural networks are trained on data selected and labeled by humans, existing social and political disparities can become embedded in AI models and influence how they operate. Facial recognition software used in law enforcement and security settings is one example: the technology, employed by numerous law enforcement agencies nationwide, has proven less accurate at identifying young people, women, and minorities. Similarly, some AI programs designed to screen résumés for employment have displayed biases against older applicants, women, and minorities.

#### Government Initiatives

##### Federal Intervention

The National Artificial Intelligence Initiative Act of 2020 took effect on January 1, 2021. The Act aims to advance U.S. leadership in AI research and development, accelerating economic growth and strengthening national security, by fostering the development and deployment of trustworthy AI in the public and private sectors and preparing the workforce to integrate AI systems.

As part of this multi-agency initiative, the U.S. Department of Energy, in collaboration with the National Institute of Standards and Technology, developed the AI Risk Management Playbook to guide the responsible and trustworthy use and development of AI. While not legally binding, the playbook catalogs common AI risks and outlines actions that AI leaders, practitioners, and procurement teams can take to mitigate risks to data privacy and against discrimination.

In October 2023, President Biden issued an Executive Order, as reported in Conduit Street, to ensure that the United States leads in harnessing AI’s capabilities while managing its risks. In addition to establishing new standards for AI safety and security, the Executive Order advances equity and civil rights, protects consumers and workers, promotes innovation and competition, and strengthens American leadership abroad.

##### State-Level Efforts

During the 2022 legislative session, at least 17 states, including Colorado, Florida, Idaho, Maine, Maryland, Vermont, and Washington, introduced AI-related bills or resolutions. Maryland, for instance, established the Industry 4.0 Technology Grant Program within the Department of Commerce to help small and medium-sized manufacturing enterprises implement new Industry 4.0 technology or related infrastructure; AI is a core component of Industry 4.0.

In the 2023 legislative session, approximately 25 states, along with Puerto Rico and the District of Columbia, introduced AI-related bills, and 14 states and Puerto Rico adopted resolutions or enacted legislation.

#### Global Initiatives

While U.S. federal and state governments deliberate on AI oversight and development strategies, the European Union (EU) has taken a proactive stance by proposing a robust legal framework to tighten regulation of the development and use of artificial intelligence. The EU passed the first draft of the Artificial Intelligence Act in May 2023; the Act categorizes AI systems by risk level, ranging from minimal-risk to high-risk and prohibited uses, to ensure the ethical and human-centric development of AI.

#### Future Outlook for Maryland

At the 2023 MACo Winter Conference, Nishant Shah, Maryland’s Senior Advisor for Responsible AI, outlined the state’s approach to ensuring ethical and effective AI implementation. Shah discussed leadership strategies, risk mitigation through appropriate safeguards, “sandbox” approaches, and ongoing collaboration and resources for local governments.

Senator Katie Fry Hester, Senate Chair of the Joint Committee on Cybersecurity, Information Technology, and Biotechnology, addressed the 2023 MACo Summer Conference, delving into the opportunities and challenges presented by AI applications across policy domains such as health, education, environment and climate, employment, and housing. Senator Hester also highlighted several AI-related considerations for the General Assembly, including cataloging statewide AI applications, procurement practices, innovation, efficiency, and transparency.

Last modified: December 28, 2023