This session shares a real-world AI governance model implemented in higher education to enable responsible AI adoption while protecting institutional data and maintaining FERPA and GLBA compliance. The presentation walks through how enterprise Microsoft Copilot is offered as the baseline AI tool within the institutional tenant, while advanced needs are evaluated through a justification-based request process for ChatGPT Business licenses with enterprise data protections. Attendees will see how AI use is governed through acceptable use and technology acquisition policies, vendor classification, and targeted training, including webinars and risk-awareness guidance covering free AI tools, meeting recording and transcription, and overreliance on AI outputs.
Learning objectives:
- How to tier AI access so that tools and capabilities are matched to risk, data sensitivity, and institutional need.
- How to create AI vetting and exception processes that build in training and accountability for advanced or sensitive use cases.
- How to govern AI use through policy and procurement to reduce risk and maintain oversight without monitoring individual users.