States Assessing and Mitigating Risks of Agencies Using Artificial Intelligence

February 23, 2024 | Maggie Davis, Beth Giambrone


Applications of artificial intelligence (AI) appear in front of us every day, from suggested text in messaging apps to directions in map applications. The basis of more advanced AI systems is machine learning, which allows a computer to perform certain tasks without explicit programming, such as finding patterns within large amounts of data to inform an anticipated outcome. Public health agencies have been using chatbots for several years to support screening, an example of “narrow AI”: technology that performs specific tasks within a limited domain.
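To make “narrow AI” concrete, the following is a minimal, purely illustrative Python sketch of a rule-based screening chatbot: it performs exactly one task (matching a message against a fixed set of symptom keywords) and nothing else. The keywords and guidance text are invented placeholders, not any agency’s actual screening logic.

# Minimal sketch of a narrow, single-purpose screening chatbot.
# All keywords and responses are illustrative placeholders.

SCREENING_RULES = {
    ("fever", "cough", "shortness of breath"): "Consider respiratory illness screening and testing.",
    ("rash", "itching", "hives"): "Consider an allergy or dermatology consultation.",
}

DEFAULT_RESPONSE = "No screening rule matched; please contact a health professional."


def screen(message: str) -> str:
    """Return guidance for the first rule whose keywords appear in the message."""
    text = message.lower()
    for keywords, guidance in SCREENING_RULES.items():
        if any(keyword in text for keyword in keywords):
            return guidance
    return DEFAULT_RESPONSE


print(screen("I have had a fever and a dry cough for three days"))
# -> Consider respiratory illness screening and testing.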

Beyond these narrow applications, AI’s ability to rapidly group and analyze data is already supporting public health efforts to scan death certificates to identify potential drug overdose deaths and to combat misinformation about COVID-19. This capability could expand further with the introduction of generative AI (genAI) technology, which encompasses AI systems capable of creating new, original content by learning the patterns and structure of data (e.g., large language models). An early study of genAI’s applicability to public health found that some genAI tools could plausibly assemble, summarize, and generate relevant text on public health concerns. However, the same study noted that some of the generated information was “purely invented” by the genAI tool, making the generated text invalid, a known problem with current genAI tools commonly referred to as “hallucinations.”
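For readers curious what this kind of pattern-finding looks like in practice, below is a small, purely illustrative Python sketch (using the open-source scikit-learn library) of a text classifier that flags death certificate narratives for possible drug overdose involvement. The example narratives and labels are invented for illustration and are far too few for a real model; this is not the method any agency uses.

# Illustrative sketch: learn word patterns that distinguish overdose-related
# certificate text (label 1) from other causes of death (label 0).
# The toy examples below stand in for large, carefully labeled datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "acute fentanyl and heroin toxicity",
    "combined drug intoxication involving opioids",
    "accidental overdose of prescription medication",
    "atherosclerotic cardiovascular disease",
    "blunt force injuries from motor vehicle collision",
    "complications of metastatic lung cancer",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Flag a new narrative for human review rather than deciding automatically.
print(model.predict(["acute mixed drug toxicity including fentanyl"]))  # e.g., [1]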

With rapidly evolving technological capabilities, policymakers at all levels of government are exploring whether and how AI tools should be used in service to the public. At the state level, policymakers are examining how governmental agencies currently use AI, how they should use it, and how they will ensure privacy protections.

Assessing Current Government Use of AI

Some state agencies have already integrated AI technology into their work. The Texas Workforce Commission, for example, uses a chatbot to help residents apply for unemployment benefits. However, the technology has not been fully or consistently incorporated into government operations. Maine Information Technology asked all state agencies not to use genAI products until the state could conduct a full risk assessment, but continued to allow employees to use previously approved chatbot technology.

Several states are determining which of their agencies currently use AI and how they use it, and developing recommendations for continued or future use. Maryland’s Governor issued Executive Order 01.01.2024.02, which establishes an AI Subcabinet of the Governor’s Executive Council to promote Maryland’s AI guiding principles and develop an action plan to phase them in, drawing on national guidance such as NIST’s AI Risk Management Framework. Additionally, the Arizona Judiciary issued Administrative Order 2024-33, creating a Steering Committee on Artificial Intelligence and the Courts to advise the state judicial system on ethical implementation and use of AI technologies in the courts.

During the 2023 legislative sessions, at least four states (California, Connecticut, North Dakota, and Texas) enacted laws directing government entities to assess their use of AI. California enacted AB 302, directing the state Department of Technology to conduct a comprehensive inventory of automated decision systems used to “replace human discretionary decisions” that have a significant effect on people’s lives, such as access to housing, employment, and healthcare. The inventory must catalog the categories of data and personal information these systems use, and the department must submit the report to the legislature by January 1, 2025.

Connecticut enacted SB 1103, requiring the Department of Administrative Services to conduct an annual inventory of all state agency systems using AI, recording each system’s name and description, whether the system is used to independently make or inform a conclusion, decision, or judgment, and whether an impact assessment was made prior to implementation. The law also required the state Office of Policy and Management to establish policies and procedures governing responsible agency AI use, which were published on February 1, 2024. To assist in the ongoing implementation of AI among state systems, Connecticut is establishing an AI Advisory Board to help state agencies use AI technology in a way that adheres to the state’s guiding policy principles: accuracy, privacy, transparency, and equity and fairness.
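As a purely illustrative sketch of what one entry in such an inventory might capture, the short Python example below models the fields SB 1103 describes (system name and description, whether the system independently makes or informs decisions, and whether an impact assessment was completed). The field names, example system, and agency are hypothetical, not taken from the statute or the published policies.

# Hypothetical record structure for an annual AI inventory entry.
from dataclasses import dataclass, asdict


@dataclass
class AIInventoryRecord:
    system_name: str
    description: str
    makes_or_informs_decisions: bool   # independently makes or informs a conclusion or decision
    impact_assessment_completed: bool  # assessment done prior to implementation
    agency: str


record = AIInventoryRecord(
    system_name="Benefits application chatbot",
    description="Guides residents through a benefits application process.",
    makes_or_informs_decisions=False,
    impact_assessment_completed=True,
    agency="Example Department of Labor",
)

print(asdict(record))  # Easy to serialize into an annual inventory report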

North Dakota enacted HB 1003, requesting a study of the effects of AI on state institutions, agencies, businesses, citizens, and youth. Texas enacted HB 2060, creating an Artificial Intelligence Advisory Council to study and monitor AI systems. The new law requires state agencies to report to the Council all AI systems currently in use, under consideration, or in the procurement process. These reports will detail the systems’ general and reasonably foreseeable capabilities and whether the systems feature independent decision-making capabilities. The Council will submit its findings and recommendations for future AI use to the legislature by December 1, 2024.

As of January 2024, at least seven states (Alaska, Florida, Indiana, New Jersey, Rhode Island, Virginia, and Washington) have introduced bills related to government use of AI. Alaska is considering SB 177, which would require an assessment of all AI systems used by state agencies. Indiana is considering SB 150, which would create an AI task force to assess current state agency use of AI and cybersecurity policies for public entities.

Establishing Safeguards in Agency Hiring

As state agencies adopt AI technologies, many policymakers are sensitive to the potential negative consequences of AI tools assisting in the employee recruitment process. Early research on the use of AI in recruitment suggests that the technology can increase hiring efficiency and improve recruitment quality; however, there remains a risk that algorithms used to assist employment decision making may further entrench societal biases rather than mitigate the impact of human bias.

Specifically, concerns have been raised that AI-supported recruitment technologies may discriminate based on race, gender, disability status, or other protected classes because the data used to train these systems reflect existing societal biases rather than an unbiased dataset, and the systems do not yet incorporate algorithms to adjust for such discrimination. However, if systems can be trained on data free from existing societal biases, these AI tools may help mitigate human bias in the hiring process.
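One common way practitioners screen a hiring tool for this kind of disparate impact is to compare selection rates across demographic groups; the EEOC’s longstanding “four-fifths” guideline treats a group whose selection rate falls below 80 percent of the highest group’s rate as a signal for further review. The short Python sketch below illustrates that arithmetic with invented numbers; it is a screening heuristic for auditing outcomes, not a legal test or a complete fairness analysis.

# Illustrative four-fifths (80%) rule check with invented applicant counts.
def adverse_impact_ratios(groups: dict) -> dict:
    """groups maps group name -> (selected, applicants); returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {name: selected / applicants for name, (selected, applicants) in groups.items()}
    top_rate = max(rates.values())
    return {name: rate / top_rate for name, rate in rates.items()}


ratios = adverse_impact_ratios({
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
})

for name, ratio in ratios.items():
    status = "flag for review" if ratio < 0.8 else "within guideline"
    print(f"{name}: ratio = {ratio:.2f} ({status})")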

Although accessible AI tools can help people with disabilities in the workplace, some AI systems used for hiring may actually do the opposite. The U.S. Equal Employment Opportunity Commission (EEOC), the federal agency that enforces federal employment anti-discrimination laws, began assessing the potential for AI systems to unlawfully discriminate in the hiring process in 2021. Of particular concern is the chance that an AI decision-making tool may intentionally or unintentionally “screen out” people with disabilities, which would violate protections under the Americans with Disabilities Act. To avoid this, EEOC recommends that employers take active steps to give potential applicants information on how AI technology evaluates applications, so that applicants who may need an accommodation (e.g., people with visual impairments who use a screen reader) can request one and have their application appropriately reviewed.

Beyond federal protections, several states are considering legislation to prohibit or reduce the risk of AI decision-making tools resulting in discrimination in employment decisions. As of January 2024, at least five state legislatures (Georgia, Oklahoma, New Jersey, New York, and Washington) are considering bills that would protect against AI discrimination in employment decisions. Oklahoma is considering the Ethical Artificial Intelligence Act (HB 3835), which would prohibit using any AI tools that cause “algorithmic discrimination,” in which an AI tool disfavors applicants based on race, color, national origin, citizenship or immigration status, familial status, religious belief, sexual orientation or expression, marital status, disability, or veteran status. The bill would also require AI developers to document how a tool is intended to be used for a consequential decision (e.g., hiring) and to provide a risk assessment of foreseeable algorithmic discrimination.

New Jersey is considering S 1588, which would require any person using an AI tool to support employment decisions to inform each candidate, within 30 days, that the tool is being used and how it assesses a candidate’s job qualifications or characteristics, and would create a penalty of up to $500 for each failure to notify a candidate of the use of AI in the recruitment process. New York is considering A 8129, which would establish an AI bill of rights for state residents, including a requirement that AI systems used in employment be tailored to that specific purpose and “incorporate human consideration” in sensitive decisions, such as buffering against potential discrimination in employment decisions.

ASTHO will continue to follow these important issues, providing relevant updates to our members.

Special thanks to Greg Papillon, director, public health innovation at ASTHO for his contributions to this Health Policy Update.
