Gov. Glenn Youngkin wants rules in place quickly to govern the widespread use of artificial intelligence (AI) by state agencies. A key member of his administration said that while Youngkin is open to changes, he is intent on implementing the rules promptly because government agencies have been adopting the rapidly advancing technology for years.
On January 18, Youngkin issued an executive order outlining the framework for the application of AI within state agencies, along with directives concerning its integration into K-12 education and higher education institutions.
AI is not new to Virginia; state agencies have used the technology for purposes ranging from data processing and automated decision-making to customer service operations.
However, Youngkin’s directive comes as attention intensifies on the potential of AI, particularly generative AI, which draws on existing data to produce new content such as text and audio, raising concerns about its implications for services, education, and employment.
The new standards require human oversight of agency programs that use AI to ensure ethical application, address concerns about bias, establish an approval process for AI initiatives, and enforce data privacy protections.
According to Andrew Wheeler, Virginia’s director of regulatory management, Youngkin is determined to put these safeguards in place quickly to address concerns about AI, since state agencies already rely on AI programs and will continue to do so.
While Youngkin aims to promote the use of AI in state operations and education, Wheeler emphasized the importance of a cautious and protective approach to safeguard individuals’ data and prepare students for future careers.
The executive order requires AI technology to undergo a thorough review before agencies deploy it, to guard against bias or unintended consequences, ensure compliance with IT standards, and pass an approval evaluation.
A critical concern about AI is the potential for bias in its programming and data; AI systems have produced discriminatory outcomes in the past, such as hiring tools biased against women and mortgage decisions biased against Black applicants.
The new standards established by the Youngkin administration require AI applications in state agencies to produce positive outcomes, such as reduced wait times, to undergo security assessments, and to allow individuals to consent to the use of their data.
Additionally, the order requires agencies to make documentation available on the AI models they use, to disclose publicly when generative AI is used in decision-making, and to vet third-party AI developers.
Del. Michelle Maldonado praised Youngkin’s executive order as a positive first step, while noting ongoing legislative efforts, including her proposal to set operational standards for AI developers and users to prevent discrimination, ensure transparency, and require assessments.
In education, Youngkin’s directive emphasizes instilling moral and ethical principles, addressing issues such as cheating, and teaching students about the potential adverse effects of AI on individuals, relationships, and communities.
Furthermore, the order emphasizes leveraging AI to enhance student learning experiences without replacing educators in classrooms, reflecting the administration’s belief in AI as a supportive tool across various professions.
While recognizing the transformative impact of AI on the workforce, public relations professor Cayce Myers anticipates a gradual evolution rather than a sudden upheaval in job markets. Myers envisions AI streamlining tasks, allowing individuals to focus on more intricate assignments, akin to the evolution spurred by personal computers.
Despite the transformative potential of AI, the absence of concrete regulations and federal laws in this domain poses challenges. Myers highlights the need for cohesive regulatory frameworks, given the divergent viewpoints in Washington and the industry’s pivotal role in shaping AI governance.
Youngkin has proposed allocating $600,000 in the budget for pilot programs to test the new standards in practice. A task force will oversee implementation of the standards and assess the pilots, with the aim of refining the regulations based on real-world feedback and outcomes.