
GenAI Risks & Considerations

While Generative AI (GenAI) has significant potential to enhance state government operations, it also presents certain risks and considerations that require human oversight, including the following.

  • In a recent McKinsey survey, 23% of respondents reported inaccurate GenAI results and 16% reported cybersecurity issues. 
  • GenAI outputs can be inaccurate or biased, which can compromise decision-making and perpetuate unfairness.
  • GenAI systems are vulnerable to cyberattacks that threaten the security and privacy of sensitive data; employee training is crucial for the proper use, identification and oversight of vendor AI features and large language models (LLMs).
  • GenAI can be misused to create misleading or harmful content, such as deepfakes, disrupting public order and eroding trust in government communications.
  • The rapid adoption of GenAI technology may outpace the development of appropriate regulatory and ethical frameworks, resulting in challenges in accountability and governance.
  • Standardized practices must be implemented for clearly disclosing the use of AI when engaging constituents and for explaining how AI is used and how it performs its tasks.

Addressing these risks requires robust data governance, transparent AI practices, employee training, ongoing monitoring and the implementation of comprehensive security and ethical guidelines. By closely observing early adopters' successes and failures, we can gain valuable insights into best practices and potential pitfalls. OIT will learn from these early adopters to refine our strategies, implement proven methodologies and avoid common mistakes.
 

GenAI Risk Index

All GenAI use cases must undergo a thorough risk assessment conducted by OIT based on the standards set by the National Institute of Standards and Technology (NIST). The risk assessment criteria also align with, but are not dependent on, the high-risk definition in SB24-205, Artificial Intelligence.

Prohibited 

  • Performing or facilitating any illegal or malicious activity.
  • Generating content that facilitates, promotes or incites violence, hatred, bullying, fraud, spam or harm of individuals or a group.
  • Tracking or monitoring individuals or groups without their consent.
  • Failing to disclose the use of GenAI in the creation of any deliverable that has not undergone human editing or review.
  • Entering non-public information into any Generative AI tool without prior approval.
  • Bypassing any existing laws, state policy and/or legislation.

High Risk 

  • Drafting official documents whose content will NOT be proofread and validated by a human before release.
  • Evaluation of individuals for any purpose.
  • Utilizing sensitive information such as criminal justice (CJIS), health (PHI, HIPAA, ePHI), Social Security numbers and personally identifiable information (PII); an illustrative screening sketch follows this list.
  • Developing code that will be used or promoted for use in production systems.
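
The sensitive-information criterion above lends itself to an automated pre-submission screen that flags obvious identifiers before a prompt ever reaches a GenAI tool. The sketch below is illustrative only, not an OIT-approved control: the function names and the two patterns shown (Social Security numbers and a crude card-number shape) are assumptions introduced for this example, and a production control would rely on an approved data loss prevention service with a far broader rule set.

    import re

    # Hypothetical, non-exhaustive patterns -- for illustration only; a real control
    # would use an approved data loss prevention service and a much broader rule set.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g., 123-45-6789
        "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude card-number shape
    }

    def flag_sensitive(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

    def safe_to_submit(prompt: str) -> bool:
        """Refuse to send a prompt to a GenAI tool if a sensitive pattern is detected."""
        findings = flag_sensitive(prompt)
        if findings:
            print(f"Blocked: possible sensitive data detected ({', '.join(findings)}).")
            return False
        return True

    # A prompt containing an SSN-shaped value would be blocked before submission.
    safe_to_submit("Summarize the eligibility file for applicant 123-45-6789.")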

Medium Risk 

  • Drafting internal documents utilizing only publicly available information.
  • Research activities utilizing only publicly available information.
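
Read together, the tiers above amount to an ordered check: prohibited conditions first, then high-risk criteria, then the medium-risk default for work that uses only publicly available information. The sketch below is a simplified illustration of that ordering, not OIT's assessment instrument; the attribute names on the UseCase record are assumptions introduced for this example, and the fallback branch simply defers to the OIT assessment described above.

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        # Hypothetical intake attributes; a real NIST-aligned assessment weighs far more factors.
        illegal_or_malicious: bool = False
        undisclosed_unreviewed_output: bool = False
        non_public_data_without_approval: bool = False
        evaluates_individuals: bool = False
        uses_sensitive_data: bool = False           # CJIS, PHI/ePHI, SSNs, PII
        produces_production_code: bool = False
        official_doc_without_human_review: bool = False
        public_data_only: bool = True

    def risk_tier(uc: UseCase) -> str:
        """Map a use case to the tiers in this index, checking the most severe tier first."""
        if (uc.illegal_or_malicious or uc.undisclosed_unreviewed_output
                or uc.non_public_data_without_approval):
            return "Prohibited"
        if (uc.evaluates_individuals or uc.uses_sensitive_data
                or uc.produces_production_code or uc.official_doc_without_human_review):
            return "High Risk"
        if uc.public_data_only:
            return "Medium Risk"
        return "Requires OIT review"

    # Example: research that uses only publicly available information.
    print(risk_tier(UseCase()))  # -> Medium Risk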