Genevieve Martinez, Matthew Frangul, Chose Tran / Psychology / Faculty Mentor: Logan Watts

This study examines whether artificial intelligence (AI) can code qualitative responses as effectively as human coders, using two reinforcement-based learning methods. Because qualitative coding is a time-intensive process that demands significant human effort, AI's potential to match human expertise in rating responses presents an opportunity to enhance research efficiency.
To investigate this, we used ChatGPT-4o and assessed its rating accuracy against human-coded responses. In the first reinforcement method, the AI was given a rubric and then received human feedback on its ratings to test whether it could identify and categorize responses appropriately. In the second method, the AI received the same grading materials plus examples of human-scored knowledge ratings alongside the rubric, before any human feedback was given. By comparing these techniques, we examined whether AI could improve its rating accuracy. Results indicate that AI ratings can closely approximate human ratings, particularly when the AI is provided with clear guidelines and reinforcement mechanisms.
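Comparisons like the one described above are commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The abstract does not specify which metric the study used, so the sketch below is illustrative only: it computes kappa between two hypothetical sets of ratings (the rating values are invented, not the study's data).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    Assumes both lists have equal length and at least two rating categories
    (so expected agreement is below 1).
    """
    n = len(rater_a)
    # Observed proportion of items where the two raters agree exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings on a 1-3 rubric scale (illustrative, not study data).
human = [1, 2, 3, 2, 1, 3, 2, 2]
ai    = [1, 2, 3, 2, 2, 3, 2, 1]
print(round(cohens_kappa(human, ai), 2))  # → 0.6
```

Kappa values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance, which is why chance-corrected statistics are preferred over raw percent agreement when evaluating a new rater against an established one.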
These findings have significant implications for research efficiency: AI can serve as a coding tool that reduces researchers' manual workload, allowing scholars to allocate more time to complex analytical tasks.