UK will create centralized AI usage rules with automated employee, financial audits
The University of Kentucky plans to use artificial intelligence to audit the institution’s financial health and employees’ compliance with its policies.
AI is expected to help identify risks within the university, such as misappropriation of money, fraud, policy violations and actions that don’t align with UK’s values, according to Martin Anibaba, UK’s deputy accountability officer and audit executive.
The university’s current AI use is “unmonitored or decentralized,” Anibaba told UK’s Board of Trustees’ audit and compliance committee last month. The committee reviews financial reports and accountability of UK’s internal systems, according to UK’s governing rules.
“AI is currently on a fast adoption pace throughout the university ...,” he said. “The goal here is very straightforward. We want to expand the depth and breadth of our work without a proportional increase in resources.”
UK launched the Commonwealth AI Transdisciplinary Strategy in November, a university-wide initiative to coordinate and expand the use of automated programs across academics and healthcare.
The university may incorporate AI programs including rubric and grading tools for faculty; tutoring and personalized learning for students; clinician transcription and patient scheduling in healthcare; job applicant screening; and university budget assessments.
There are different standards for AI use in instructional and clinical settings at UK, but there is no university-wide policy on other aspects like how it can be used for auditing or assessing actions by employees and departments.
The university’s internal auditors plan to develop overarching guidance for “safe, consistent AI use across the university” by June, Anibaba said. In the meantime, auditors are conducting shadow AI assessments, or experiments that test the university’s effectiveness and oversight of AI.
Through December, human auditors will continue their typical duties while a pilot AI program tries to produce the same results “to validate accuracy, efficiency and consistency before any broader deployment of AI,” Anibaba said.
The university will begin integrating AI in “day-to-day workflows” in January, Anibaba said. He believes it will help humans identify and respond to financial and employee risks.
But automated programs can also produce risks of their own.
Risks can include inaccurate or biased learning outcomes, lack of fairness, misclassification of patient groups and decisions that misalign with the university’s values, according to a slideshow presented at the board of trustees’ auditing committee meeting on April 24, 2025.
It was unclear whether these issues have occurred at UK, as the university’s implementation of AI is still underway.
Is there state oversight of AI?
There is limited state-mandated guidance on how AI can be used in Kentucky. Senate Bill 4, enacted on March 24, 2025, required the Commonwealth Office of Technology to create and implement policies for the use of AI in state institutions such as UK.
Jackson Hurst-Sanders, a Louisville-based business attorney, said that if AI use is challenged in Kentucky courts, judges might look to other states to see what they have done when proper human oversight isn’t put in place.
In Michigan, plaintiff Arshon Harper filed a complaint against Sirius XM Radio that alleged the audio entertainment company’s use of an AI hiring tool discriminated against Black job applicants. The case is still pending, so the courts haven’t set new precedent yet.
“We expect Kentucky courts to turn toward precedent from other courts for guidance, meaning the outcomes … could serve as bellwethers for what to expect when a Kentucky court takes up an AI employment matter,” Hurst-Sanders said.
It’s common for legal issues to arise when employers use AI to audit potential hires, he said. It was unclear if UK would use it for hiring, but it listed automated job candidate screening and benefits guidance as potential uses in 2025.
Hurst-Sanders recommended that institutions create a task force or committee to implement AI usage policies, which UK’s auditors are doing, according to Anibaba.
Hurst-Sanders and Anibaba emphasized that human judgment over AI results is essential to keep pace with evolving policies like those at UK.
“We want to also make sure that the controls are appropriately designed and upgraded, and then our audit approach keeps pace with that,” Anibaba said. “It evolves along with the technology.”