Unlocking the Power of COACHE Data: How Ohio State Uses Generative AI to Listen to Faculty Voices

In the field of institutional research, the "qualitative bottleneck" has long been a barrier to progress. At large institutions, thousands of open-ended survey comments often sit for months while manual coding processes move at a snail's pace. By the time the data is analyzed, the window for meaningful intervention has frequently closed.
In a recent webinar, Jeannie Kim, interim director and interim principal investigator of the Collaborative on Academic Careers in Higher Education (COACHE) at the Harvard Graduate School of Education, and Michele Hansen, associate vice president for institutional research and planning at The Ohio State University, discussed a breakthrough. Using a secure, "human-in-the-loop" generative AI framework, Ohio State analyzed a dataset of complex faculty comments housed within the COACHE platform in just 48 hours – a task that traditionally takes several weeks of coding due to the complexity and length of the faculty comments.
Hansen noted that COACHE data was the ideal catalyst for this AI experimentation because the dataset arrived "clean," already redacted, anonymized, and highly organized. For presidents, provosts and faculty, this case study offers a blueprint for how institutions can leverage high-quality data through technology to honor every faculty voice at scale while maintaining strict data integrity.
Moving From Skepticism to Strategy
The transition at Ohio State was not about replacing human judgment with an "autopilot" system. Instead, it was about leveraging AI as a high-powered research assistant to break the manual bottleneck. The framework relies on critical operational guardrails to ensure results are accurate and trustworthy.
First, the university uses university-approved, secure versions of tools like Google Gemini and Enterprise Copilot. This ensures that the sensitive faculty data provided through COACHE is never used to train public models, keeping institutional information within a "walled garden." This secure environment allows researchers to process data without the risk of intellectual property or privacy leaks.
Second, the team employs a "verbatim" mandate. To prevent the "hallucinations" sometimes associated with AI, researchers use specific guardrail prompts. These instruct the AI to identify themes using only verbatim quotes and provide a table matching every theme back to the specific comments that informed it. This level of granularity allows researchers to "audit" the AI, ensuring the software is categorizing real faculty sentiment rather than paraphrasing.
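As a sketch of how such a "verbatim mandate" might work in practice, the snippet below pairs a guardrail prompt with a simple audit that checks whether every quote the model returns actually appears word-for-word in the source comments. The prompt wording, function names, and sample data are illustrative assumptions, not the exact prompts or tooling used at Ohio State.

```python
# Illustrative sketch of a "verbatim mandate" guardrail prompt and an audit
# step. All names, prompt text, and data here are hypothetical.

GUARDRAIL_PROMPT = (
    "Identify the major themes in the faculty comments below. "
    "Support each theme ONLY with verbatim quotes copied exactly from the "
    "comments, and return a table mapping each theme to the specific "
    "comments that informed it. Do not paraphrase or invent text."
)

def audit_verbatim(ai_themes: dict, comments: list) -> list:
    """Return any quoted excerpts that do NOT appear verbatim in the comments."""
    failures = []
    for theme, quotes in ai_themes.items():
        for quote in quotes:
            if not any(quote in comment for comment in comments):
                failures.append(f"{theme}: {quote!r}")
    return failures

# Example: one fabricated comment and a fabricated AI output to audit.
comments = ["The lab facilities are outdated, and mentoring support is uneven."]
ai_output = {
    "Facilities": ["The lab facilities are outdated"],
    "Mentoring": ["mentoring support is uneven"],
}
print(audit_verbatim(ai_output, comments))  # an empty list means every quote checks out
```

The audit is deliberately mechanical: any excerpt that cannot be found as an exact substring of a source comment is flagged for human review, which is what makes the "verbatim" instruction enforceable rather than merely aspirational.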
The Power of Multi-Theme Analysis
One of the most significant advantages discussed in the briefing was the AI’s ability to handle "multi-thematic" comments. Traditionally, a single comment from a faculty member might touch on several issues – such as departmental climate, leadership support and physical facilities. Manual coders often struggle to categorize these efficiently, but generative AI can tag and cross-reference these nuances simultaneously.
The process remains deeply iterative. Human verification is the most critical component. Researchers cross-validate AI results against multiple tools and benchmark them against traditional manual coding. Expert oversight ensures the software identifies complex faculty concerns that traditional automated tools might miss.
Key Takeaways for Future University Leaders
For leaders looking to replicate this success with their own institutional data, the webinar highlighted four essential strategies:
Prioritize a Secure Environment: Before uploading any faculty data, ensure your institution uses an enterprise-grade AI tool. This protects the privacy of your community and ensures that faculty feedback is not used to train external, public models.
Fuel the System With "Clean" Data: The accuracy of AI outputs is directly tied to the quality of the inputs. Using a professionally redacted and organized dataset — like those provided by COACHE — removes the noise and allows the AI to focus on generating actionable insights rather than correcting errors.
Establish a "Human-in-the-Loop" Workflow: AI should be viewed as an assistant, not a replacement. Assign institutional research experts to audit AI-generated themes against raw data to ensure the AI tool’s interpretation aligns with the authentic faculty experience.
Use AI for Real-Time Responsiveness: The greatest value of AI in qualitative analysis is speed. Use the time saved to move immediately into action phases – such as town halls or policy shifts – while the faculty feedback is still fresh and the issues are current.
As higher education faces increasing pressure to support and retain faculty, the ability to listen – and act – at scale is no longer just a convenience. It is a strategic necessity. This partnership demonstrates that when combined with human oversight, strict methodology and organized data, generative AI can help institutions turn their COACHE data into a fast-moving strategic roadmap.
About the Authors: Jeannie Kim, Ph.D., is the interim director and interim principal investigator of the Collaborative on Academic Careers in Higher Education (COACHE) at the Harvard Graduate School of Education, and Michele Hansen is the associate vice president for institutional research and planning at The Ohio State University.