Introducing: AI Essentials for Instructors

Post by Dr. Jess Kahlow, Instructional Designer in the Center for Distance Education & Adjunct Associate Professor in the College of Education (Faculty Profile)

Dr. Jess Kahlow debuts our new AI Essentials for Instructors Canvas course in this blog post.

Image of Jess Kahlow folding her arms in a black shirt.

TL;DR

Curious about using AI in your teaching? Our new AI Essentials for Instructors course is your one-stop guide to the effective, ethical, and practical use of AI at UTA. The course takes about two hours, and you’ll earn an “AI Essentials” badge!

Self-enroll in the course now!

Key Ideas

Image of a laptop displaying “AI Essentials”

My background is primarily in writing and communication, so the impact of technologies like Copilot and ChatGPT has interested me from the start. Because of that, it shouldn’t be surprising that I started using these technologies in every way I could think of and allowed AI in my courses.

I often use AI tools like Copilot, NotebookLM, and ChatGPT. As I started experimenting with AI, I first focused on understanding what these tools are really good at (we knew they were bad at a lot of things, but finding things they were consistently good at was a bit more challenging). Soon, I realized that AI is good at restructuring information; for example, it was good at turning a bunch of information into a nice, short overview or general summary. This was great for writing chapter or article summaries and even for writing module overviews.

Then, I applied this same idea to using AI to make rubrics. Since a rubric is essentially a differently formatted version of the assignment, AI is a very helpful tool for doing this (and you can learn more about it in my chapter on Using AI for Rubrics!). From there, I became more curious about how AI could support instructors, and in my chapter on Instructors’ Use of AI in Teaching, I talk more about how AI can be used to develop assignments, discussions, and quizzes.

While writing this, I came across the idea of AI-proof assignments; I realized that AI-proofing basically just means creating meaningful and authentic assessments. I had already been doing this, but it gave me a more persuasive argument for getting the faculty I work with in my role as an instructional designer to include more meaningful and authentic assessments in their courses.

Then, once I figured out how I was using AI, I started addressing AI in my courses, explicitly telling students that they could use it. But allowing AI in my courses wasn’t as simple as just saying, “Go ahead!” In my courses, I used one of the syllabus statements provided by UTA that says students can use AI as long as they cite that they used it. I included it in the syllabus and on the “Academic Integrity” page in my courses. The first semester I used this statement, I noticed students immediately started disclosing that they used ChatGPT for some of their assignments. That’s fine—I expected as much when I decided to allow it. However, simply citing that they used ChatGPT was not at all helpful to me as an instructor. Most students just included a vague statement like “ChatGPT was used to complete this assignment,” while others provided an APA citation resembling “Copilot. (2025, April 25). Conversation.” Neither of those gave me enough information to know how students actually used the AI tool, nor any sense of how much of the thinking was their own. Did they use it for brainstorming general ideas? Did ChatGPT write the whole assignment? I honestly had no idea.

The following semester, I adjusted my approach. I kept the same syllabus statement, but I added information about how to cite AI, and I asked students to include AI tools in their in-text citations and in their reference list. I also asked them to include links to the conversation they had with ChatGPT and the prompt they entered. You’ll never guess what happened. (Seriously, try.) Students completely stopped citing ChatGPT. I’m naive, but not so naive that I believed none of my students used ChatGPT or other AI tools. I could still see evidence of it in their writing (e.g., random bolded words throughout paragraphs, the overuse of words like “delve” and “journey”). So, by imposing stricter directions for citing AI use, I had either discouraged its use altogether or simply dissuaded students from disclosing it. Neither of those outcomes was great.

Feeling defeated, and like the Cybermen had returned yet again, I started looking into how other instructors and universities were navigating AI in courses. That’s when I stumbled across the idea of AI use statements. Up until this point, I was missing a way for students to tell me how they were engaging with AI on their assignments. At a minimum, these statements explain what AI was used to create and how it was used, which already goes beyond what a basic citation asks. I then took it a step further and asked students to describe how their use of the tool aligns with the honor code; to give specific information about the prompt, including the prompt itself and the number of iterations; to include a link to the conversation (or a printout); and even to add a reflection on how the AI tool contributed to the quality, clarity, or depth of their submission. (I now call them “AI use supplements” because they ended up becoming a bit more than a “statement.”)

Somehow, seemingly doubling down on AI use directions made students start disclosing their use of AI tools again, and they’re being more specific about how they use them. This shows that clear and specific directions help students understand exactly what my expectations are and how to meet them when using AI; it’s about making sure students have the resources they need to use AI thoughtfully, not out of convenience.

While AI use statements (or supplements) were helpful, it became clear that students probably aren’t very aware of what they are or why they should include them. Since I was writing directions anyway, I ended up creating the AI Essentials for Students Guide, which provides an overview of how students can effectively and ethically use generative AI in academic work. It introduces the basics of AI technology, ethical considerations, and privacy concerns when using AI tools. It also walks students through university policies on AI use, practical applications for academic tasks, and how to properly cite AI-generated content.

However, once the student-facing guide was done, I couldn’t help but feel like something was missing. The student guide was great for students, but it left out everything I had done as the instructor to navigate this new AI landscape. This led to the final piece: a guide just for instructors. With the support of the Center for Distance Education and the Center for Research on Teaching and Learning Excellence, the AI Essentials for Instructors course was born. The course combines everything I learned while figuring out how to let students use AI in my courses (while making sure they’re still learning and thinking for themselves) with the common concerns and questions I hear from the faculty I work with as an instructional designer.

AI Essentials for Instructors is a quick, practical course to help you make informed decisions about using AI in your courses (so you don’t have to go through all of these iterations like I did!). The purpose of this course isn’t to talk you into (or out of) using AI—it’s about recognizing that AI is a powerful tool that we need to be aware of. The AI Essentials course does just that; it’s there to help you make informed, thoughtful decisions about what works best for you, your course, and your students. Whether you end up fully integrating AI or barely touching it, you’ll walk away with strategies, examples, and guidance to help you navigate this new terrain with confidence.

References

Center for Distance Education, & Center for Research on Teaching and Learning Excellence. (2025). AI Essentials for Instructors. University of Texas at Arlington. CC BY-NC-ND. https://uta.instructure.com/enroll/J4BXH8

Kahlow, J. (2025). Artificial Intelligence (AI) Essentials for Students. [H5P]. University of Texas at Arlington, Center for Distance Education. CC BY-NC-ND. https://utarlington.h5p.com/content/1292539455670252558

Kahlow, J. (2024). The Alchemy of Assessment and Evaluation: From Lead to Gold. Mavs Open Press. https://uta.pressbooks.pub/thealchemy

Magruder, K., Cavallo, A. M. L., & Clark, A. M. (Eds.). (2025). AI-Powered Education: Innovative Teaching Strategies to Elevate Student Learning. Mavs Open Press. https://uta.pressbooks.pub/aipowerededucation

#ICYMI AI Sessions, 4/28 and 4/29/25: Dr. Veletsianos on the Future of AI and Henrik Skaug Sætra on AI and Sustainability

Flyer for the event with pictures of the two keynote speakers

Dr. George Veletsianos

#ICYMI on April 28 with Dr. George Veletsianos. A recap summarized by Microsoft Copilot with slight edits is below:

Key Topics:

  • Introduction and Setup: George Veletsianos and Peggy Semingson introduced the session and set up the meeting. They confirmed the agenda and the structure of the day’s sessions.
  • Generative AI in Education: George Veletsianos discussed the promises, tensions, and challenges surrounding generative AI in education. He emphasized a balanced approach without hype or panic, grounded in the history of educational technology and reflective practice.
  • Historical Context of Educational Technology: George Veletsianos highlighted the recurring narrative of technology transforming education. He stressed the importance of questioning this idea critically, noting that technology often amplifies existing systems and inequalities rather than fundamentally changing education.
  • Future of Higher Education: Participants were asked to imagine higher education in 2035, considering factors like online learning, AI integration, and environmental disruptions. This exercise aimed to explore potential futures and the implications for education.
  • Tensions in AI Adoption: George Veletsianos outlined several tensions in AI adoption, including scholarly critique vs. pragmatic need, technological skepticism vs. innovation, and reflective pedagogy vs. efficiency gains. These tensions highlight the complexities of integrating AI into education.
  • Assessment Strategies: The discussion included the need to rethink assessment strategies, focusing on the learning process rather than just the final product. This shift aims to better understand students’ learning journeys and the role of AI in supporting this process.
  • Speculative Fiction Workshop: George Veletsianos led a workshop on creating speculative fiction to envision utopian AI educational futures. Participants engaged in brainstorming and writing exercises to imagine positive futures for education with thoughtful AI integration.
  • Scenario Planning Activity: In the afternoon session, George Veletsianos introduced a scenario planning activity. Participants used challenge cards to discuss potential future scenarios for UTA and how the institution might respond to various challenges and opportunities.
  • Group Discussions and Sharing: Participants in both in-person and online groups discussed the challenge cards and shared their insights. They considered risks, opportunities, and strategies for UTA to navigate potential future scenarios.
  • Workshop Conclusion: George Veletsianos concluded the workshop by encouraging participants to reflect on how the discussions expanded their thinking about AI in education. He emphasized the importance of imagining positive futures and taking small steps toward those visions.
  • Resources: Discussion Questions: UTA-Workshop – Google Drive

Henrik Skaug Sætra

Tuesday, April 29, 2025: Henrik Skaug Sætra presented “Hybrid and Collective Intelligence and Connections to Climate.” A recap summarized by Microsoft Copilot with slight edits is below:

Generated by AI. Be sure to check for accuracy.

Meeting notes:

  • Meeting Setup: Henrik and Pete discussed the technical setup for the meeting, including audio and slide sharing, and confirmed the participation of Peggy, who would join virtually later.
    • Technical Setup: Henrik and Pete discussed the technical setup for the meeting, including ensuring the audio was working correctly and that Henrik could share his slides. They also addressed some initial technical issues, such as pixelation and frame drops, and Henrik adjusted his settings to improve the connection.
    • Peggy’s Participation: Peggy was confirmed to join the meeting virtually at around 10:40. Pete mentioned that she might assist in the second part of the meeting, depending on the group size and the need to break into smaller groups.
  • Workshop Structure: Henrik and Pete outlined the structure of the workshop, including a keynote presentation followed by a workshop session, with a focus on artificial intelligence and its implications for academia.
    • Keynote Presentation: Henrik’s keynote presentation focused on the impact of artificial intelligence on academia, particularly in research and education. He discussed the potential changes and challenges posed by AI technologies.
    • Workshop Session: The workshop session was designed to be interactive, allowing participants to engage in exercises and discussions about the implications of AI in academia. Henrik planned to use collaborative online tools to gather feedback and insights from the participants.
  • AI in Academia: Henrik presented on the impact of intelligence technology on academia, focusing on research and education, and discussed the potential changes and challenges posed by AI.
    • Impact on Research: Henrik discussed how AI technologies could significantly impact research processes, including data gathering, analysis, and dissemination. He highlighted the potential for AI to automate various stages of research, which could lead to increased efficiency but also raised concerns about the loss of human involvement in critical thinking and analysis.
    • Impact on Education: Henrik explored the implications of AI in education, particularly in higher education. He discussed how AI could be used to enhance learning experiences, provide personalized education, and support educators in administrative tasks. However, he also noted the challenges of integrating AI into educational systems and the potential risks of over-reliance on technology.
    • Challenges and Concerns: Henrik raised several concerns about the widespread adoption of AI in academia, including issues related to data privacy, ethical considerations, and the potential for AI to perpetuate biases. He emphasized the need for careful regulation and oversight to ensure that AI technologies are used responsibly and ethically.
  • Historical Context of AI: Henrik provided a historical overview of AI development, highlighting key milestones and the evolution of AI technology over the years.
    • Early Developments: Henrik traced the history of AI back to the 1940s and 1950s, mentioning key figures such as McCullough and Pitts, who proposed the concept of artificial neurons, and the Dartmouth Conference in 1956, which is often considered the birthplace of AI as a field of study.
    • Key Milestones: Henrik highlighted significant milestones in AI development, including the creation of the first AI programs, the development of expert systems in the 1980s, and the victory of IBM’s Deep Blue over chess champion Garry Kasparov in 1997. He also mentioned more recent advancements, such as the success of AlphaGo in defeating human champions in the game of Go.
    • Generative AI: Henrik discussed the emergence of generative AI technologies, such as ChatGPT, which have brought AI into mainstream awareness. He noted that these technologies have demonstrated impressive capabilities in generating human-like text and performing various tasks, leading to increased interest and investment in AI research and applications.
  • Perspectives on AI: Henrik discussed various perspectives on AI, including its role as a tool, its impact on collective intelligence, and the concept of hybrid intelligence, where humans and machines collaborate.
    • AI as a Tool: Henrik explained the perspective of AI as a tool that can be used to enhance human capabilities. He emphasized that while AI can be a powerful tool, it is essential to consider how it is used and the potential consequences of its application.
    • Collective Intelligence: Henrik discussed the concept of collective intelligence, where AI systems can contribute to the collective knowledge and problem-solving abilities of groups. He highlighted examples of how AI can be integrated into collaborative efforts to achieve better outcomes than individual efforts alone.
    • Hybrid Intelligence: Henrik introduced the idea of hybrid intelligence, where humans and AI systems work together in a complementary manner. He described how AI can augment human intelligence by providing computational power, pattern recognition, and data analysis capabilities, while humans contribute creativity, empathy, and contextual understanding.
  • Future Scenarios: Henrik presented three potential future scenarios for AI in academia: collaborative intelligence, business as usual, and technological acceleration, and discussed their implications.
    • Collaborative Intelligence: Henrik described a future scenario where AI and humans collaborate closely, with AI systems augmenting human capabilities and providing support in research and education. This scenario emphasizes the importance of explainable AI and the need for regulation to ensure ethical and responsible use of AI technologies.
    • Business as Usual: In the business as usual scenario, Henrik outlined a future where AI adoption continues at its current pace, with incremental improvements and integration into existing systems. This scenario assumes that AI will be used to optimize and automate various processes, but without significant changes to the overall structure of academia.
    • Technological Acceleration: Henrik discussed a more radical future scenario where AI development accelerates rapidly, leading to the emergence of superhuman AI researchers and significant disruptions in academia. In this scenario, AI systems take on more autonomous roles, potentially surpassing human capabilities in research and innovation, and raising concerns about the implications for human researchers and educators.
  • Research Automaton: Henrik introduced the concept of the research automaton, exploring the potential for AI to automate various stages of the research process, and discussed the benefits and challenges of such automation.
    • Automation Potential: Henrik explored the potential for AI to automate different stages of the research process, including ideation, literature review, data gathering, analysis, writing, and dissemination. He highlighted the benefits of increased efficiency and the ability to handle large volumes of data.
    • Challenges of Automation: Henrik discussed the challenges associated with automating the research process, such as the risk of losing critical thinking and creativity, the potential for biases in AI-generated outputs, and the ethical considerations of relying heavily on AI for research.
    • Human Involvement: Henrik emphasized the importance of maintaining human involvement in the research process to ensure the quality and integrity of research. He argued that while AI can be a valuable tool, it should not replace the human elements of curiosity, intuition, and ethical judgment in research.
  • Interactive Exercise: Henrik led an interactive exercise where participants evaluated the impact of AI on different stages of the research process, using a collaborative online tool to gather feedback.
    • Exercise Overview: Henrik introduced an interactive exercise where participants used a collaborative online tool to evaluate the impact of AI on various stages of the research process. Participants were asked to place stamps on a matrix to indicate their views on the benefits and challenges of using AI in each stage.
    • Participant Feedback: Participants provided feedback on the use of AI in different research stages, highlighting areas where AI could be beneficial, such as transcription and data analysis, and areas where it might be problematic, such as peer review and theoretical development. The exercise facilitated a discussion on the potential and limitations of AI in research.
  • Concerns and Resistance: Henrik expressed concerns about the potential negative impacts of AI on research and academia, advocating for a cautious approach and resistance to full automation.

Resources shared in the chat in Henrik’s session:

https://www.anthropic.com/research/exploring-model-welfare

https://link.springer.com/article/10.1007/s12124-020-09523-6

https://journals.sagepub.com/doi/10.3233/FRL-200023

https://link.springer.com/article/10.1007/s43681-021-00092-x

https://a16z.com/the-techno-optimist-manifesto

https://ai-2027.com

AI podcasts currently in Michael Schmid’s rotation 04-24-2025:

Slides from 4/24/25 Pondering AI session (11 am): AI COP April 24 2025 11 AM.pptx [requires UTA login to access]

Curating content and staying current with AI is crucial! Michael Schmid of University Analytics shares his favorite technology and AI-focused podcasts. –Peggy Semingson, Interim Director of CRTLE at UT Arlington.

From Michael Schmid (Learning Analytics Director at UTA University Analytics):

Free time? Yeah, probably just as rare for you as it is for me. We all juggle multiple priorities every day. And I’m preaching to the choir—my fellow educators—to say we all learn and consume information differently. Podcasts on my commute work for me.

So today, in our UTA AI Community of Practice session, we’re each sharing resources that work for us. Certain topics and media may resonate more with you than others, and that’s okay. On this page, the first six items are some of my favorite AI-related podcasts.

A few dive deeper into the weeds, but most are great for staying current with the latest AI developments—or even helping you drift off to sleep at night. If you’ve got other helpful AI resources, let me know and we’ll add them to the list!

Podcast name, host(s), and brief summary:

  • Pondering AI (Kimberly Nevala): Offers reflective, in-depth discussions on AI’s ethical, societal, and cultural impacts, inviting listeners to ponder technology’s future.
  • The Artificial Intelligence Show (Paul Roetzer & Mike Kaput): Breaks down AI trends weekly, providing clear, non-technical insights into the business and real-world impacts of AI.
  • AI for Humans (Kevin Pereira & Gavin Purcell): Explores practical AI applications for everyday users, demystifying the technology and highlighting its real-world impact. Sometimes irreverent.
  • The AI Breakdown (Nathaniel Whittemore): Provides concise, daily news analysis on breakthrough AI trends, industry disruptions, and practical implications across sectors.
  • Your Undivided Attention (Tristan Harris & Aza Raskin): Critically examines how digital platforms shape human attention and behavior, offering insights on ethical design and societal effects.
  • Hard Fork (Kevin Roose & Casey Newton): A tech news podcast that covers AI trends and broader technology impacts with a balanced, investigative tone. Relentlessly upbeat. (NYT)
  • Worklab (Worklab Team): Investigates the evolving workplace through emerging AI technologies, offering actionable strategies to enhance work culture and productivity. (Microsoft)
  • Lex Fridman Podcast (Lex Fridman): Long-form interviews with top AI experts. Discussions blend technical depth with broader philosophical and practical insights.
  • The TWIML AI Podcast (Sam Charrington): In-depth conversations with researchers and industry leaders on machine learning, deep learning, and practical AI applications.
  • The AI Podcast (NVIDIA; Noah Kravitz): 25-minute interviews with innovators exploring the impact of AI across research, business, and society. Yes, that NVIDIA.
  • The AI Daily Brief (Nathaniel Whittemore): A daily news analysis show that delivers concise updates on breakthrough AI trends and industry disruptions.
  • This Day in AI Podcast (Michael & Chris Sharkey): A weekly discussion on AI news and real-world applications, presented in a casual and accessible style.
  • Data Skeptic (Kyle Polich): A thematic deep-dive into AI, machine learning, and data science concepts, broken down over several episodes.
  • Practical AI (Chris Benson & Daniel Whitenack): Focuses on real-world AI applications and practical tips to implement AI tools in everyday scenarios.
  • AI in Business Podcast (Daniel Faggella): Interviews with top AI executives that uncover trends and use cases to help non-technical leaders integrate AI.
  • Eye on AI (Craig S. Smith): Engaging interviews with AI experts that explore emerging research trends and the broader societal impact of AI.
  • AI Today Podcast (Kathleen Walch & Ronald Schmelzer): Explores current AI applications, industry news, and expert opinions to distinguish hype from real-world use.
  • AI & I (Dan Shipper): Interviews with a range of creatives and entrepreneurs on how they use AI tools to boost creativity and productivity.
  • No Priors (Elad Gil & Sarah Guo): Conversations with AI entrepreneurs and researchers that discuss innovative tech and business opportunities in AI.
  • Talking Machines (Neil Lawrence & Katherine Gorman): A mix of interviews, reviews, and discussions on machine learning challenges and ethical implications in AI.
  • Last Week in AI (various hosts): A weekly recap that condenses the major AI news and trends into a brief, informative update.
  • Me, Myself and AI (Sam Ransbotham & Shervin Khodabandeh): Showcases success stories and practical strategies from companies that have effectively integrated AI. (MIT Sloan & Boston Consulting Group)
  • Everyday AI Podcast (Jordan Wilson and team): Offers daily insights and tips on leveraging AI for productivity and career growth in everyday tasks.

One other resource you shouldn’t miss is Professor Ethan Mollick at The Wharton School in Philadelphia. Perhaps the best way to consume his insights into AI is to subscribe to his Substack via his website: https://www.oneusefulthing.org/ or buy his (I don’t get a cut) New York Times bestselling book, Co-Intelligence.

I hope this helps, as we navigate these challenges and opportunities together!

Michael

Michael Schmid, MBA

Director of Analytics Solutions & AI Engagement

University Analytics, The University of Texas at Arlington

Preview Materials for April 28 and 29 AI Events

AI Special Guest Speakers: Artificial (Un)Intelligence: Critical Realities and Critical Futures. Sponsored by the Academic Partnerships Endowed Chair, UTA Office of the Provost, University Analytics, and CRTLE.
Agenda
University Analytics, the Center for Research on Teaching and Learning Excellence, the Academic Partnerships Endowed Chair, and the Provost’s Office are co-sponsoring a two-day event about the future of education in the world of Generative AI. 

 
Day 1 – April 28 – On Campus (Trinity 205) & Virtual via Teams 
 
• 9:00am–10:15am: Dr. George Veletsianos, “GenAI, Imagination, and Education Futures”. Trinity Hall, Room 205
 
• 10:30am–12:00pm: Workshop 1 – “Creating Speculative Fiction to Envision Utopian AI Educational Futures”. Trinity Hall, Room 205
 
• 12:00pm–1:50pm: Lunch Break (on your own) 
 
• 2:00pm–3:30pm: Workshop 2 – “Navigating Possible Futures with Emerging Technologies”. Trinity Hall, Room 205
 
Day 2 – April 29 – Virtual Only 
 
• 9:00am–10:15am: Dr. Henrik Skaug Sætra, “Hybrid and Collective Intelligence and Connections to Climate” 
 
• 10:30am–12:00pm: Workshop – “The Research Automaton”

Dr. George Veletsianos on GenAI, Imagination, and Education
Monday, April 28
Trinity Hall, Room 205 and via Microsoft Teams


Articles from George Veletsianos to preview

Browse his website: https://www.veletsianos.com/

Article: Zero Hours Pre-print
Podcast: (Audio file below)
Picture of George Veletsianos wearing a blue jacket.
Dr. Henrik Skaug Sætra on Hybrid and Collective Intelligence and Connections to Climate
Tuesday, April 29
Virtual Only via Microsoft Teams
Keynote: 9:00–10:15 am
Breakouts at 10:30 am and 1:30 pm each day with small groups of faculty.
We invite you to a timely and crucial discussion on the future of education in the age of generative AI. You can attend part or all of the events. RSVP is required; space is limited.

To preview before the keynote and sessions:

Article (click here)
Picture of Henrik Skaug Sætra

UT-AI?: Talking to Students about Artificial Intelligence

Post by Dr. Karen Magruder, School of Social Work (Faculty Profile)

In this informative and practical blog post, award-winning faculty member Dr. Karen Magruder shares insights into getting started with teaching students about AI. –Peggy Semingson, Interim Director of CRTLE

A few additional resources:

Picture of Dr. Karen Magruder

It’s no secret that artificial intelligence (AI) is rapidly transforming the landscape of higher education. Serious and valid concerns about AI have been raised, including academic dishonesty, misinformation, and bias. On the other hand, AI can boost efficiency, enhance creativity, and provide personalized learning experiences. Regardless of whether you deem AI a friend or foe, it is our professional obligation to equip our learners with digital literacy skills to navigate a technology that is increasingly being embraced in today’s workplaces. As educators, it’s essential to navigate this new technological frontier with clear expectations and guidelines for AI use in the classroom.

Setting Clear AI Expectations
Opinions on AI use vary widely among students and faculty alike, and we should not hold students accountable to expectations we haven’t clearly articulated. While an AI statement in the syllabus is an important first step, AI expectations should also be reviewed in detail through an in-class discussion or video announcement. A syllabus quiz or academic integrity attestation can also ensure accountability. Beyond telling students what they should or should not use AI for, it’s critical to explain why these boundaries exist; “because I said so” does not cut it! Focusing on how avoiding overreliance on AI will aid them in their careers, with specific and relevant examples, can increase buy-in. While the nuances of AI ethics are complex, and binary advice doesn’t apply to all situations, having memorable guidelines can help. Transparency and context are two guideposts that can help students understand when AI use is appropriate.

AI Guideposts: Context and Transparency

  1. Context: Understanding the Purpose of Education vs. Professional Practice.

Academics
In academic settings, we are tasked with measuring students’ mastery of learning outcomes. Have scholars developed the knowledge and skills that will be critical for their professional success? Overreliance on AI to complete assignments can undermine this purpose. Therefore, some tasks that should be completed independently in school may eventually be supplemented with AI after the skill is mastered. For example, just as students are first taught to perform calculations by hand to build a strong foundation in math before being allowed to use calculators, students should develop core writing, analysis, and problem-solving skills independently before incorporating AI tools to enhance efficiency and productivity.

Workplace
In some career settings, AI is prohibited. For example, we know that some academic publishers have strict rules about AI-generated content. On the other hand, in many professional environments, AI is already being embraced as a valuable tool for tasks such as drafting emails, creating presentations, or analyzing data.

In contexts where AI is acceptable, its use can be highly beneficial. Encouraging students to explore AI tools now can help them develop valuable skills that are increasingly in demand in the workforce, such as prompt engineering, critical evaluation of AI-generated content, and effective integration of AI into workflows. Just as professionals use AI to streamline tasks like content creation and client communication, students can benefit from learning how to leverage these tools responsibly. By allowing opportunities to experiment with AI in low-stakes assignments or as a supplement to their own work, educators can help students build confidence and competence in using AI thoughtfully and ethically.

2. Transparency: Would You Be Comfortable if Everyone Knew You Used AI?
A useful metric for students is the “transparency test”: If everyone knew you used AI to complete an assignment, would you be comfortable with that? If the answer is no, it might suggest that AI is being used inappropriately. Transparency promotes honesty and accountability, encouraging students to use AI as an “above board” tool for learning rather than a sly shortcut to bypass effort.

The AI Assistant Analogy: Guidance and Training Required
A helpful way to think about AI is to imagine it as an intern—eager to help but needing guidance and training. Just as it’s perfectly acceptable to brainstorm with a colleague or ask them to review a draft, it’s also reasonable (in some contexts) to use AI to assist with idea generation or proofreading. However, a student outsourcing an entire academic assignment to AI is akin to asking a classmate to write the paper for them—compromising the authenticity of their work and the integrity of their learning.

Effective use of AI requires us to set boundaries and train it to work according to our standards, much like onboarding a new assistant. This means refining AI prompts, critically reviewing its outputs, and ensuring that the final work reflects the student’s unique voice.

Are your assignments AI-proof?
Even with clear guidance and rationale, the temptation to take an AI shortcut is strong. AI-proofing involves designing assessments (both low-stakes and graded) to minimize the risk of students relying solely on AI to complete them while maximizing opportunities for authentic learning and critical thinking. Activities like oral presentations, synchronous discussions, in-class writing, fast-paced interactive polling games like Kahoot, and role-plays can reduce AI’s utility.

Modeling Appropriate AI Use

Banning AI is becoming increasingly unrealistic. As instructors, we can model appropriate and ethical use of AI to enhance, not diminish, learning. For examples of cross-disciplinary AI-infused teaching activities with step-by-step implementation guidance, check out UTA’s newly released OER AI Powered Education: Innovative Teaching Strategies to Elevate Student Learning. [link]

Conclusion: Embracing AI Ethically and Responsibly
AI in both work and education is not a passing trend but a growing reality. Embracing its potential responsibly involves recognizing its drawbacks, setting clear guidelines, and helping students understand the ethics of its use. By focusing on transparency and context, educators can empower students to embrace AI appropriately—maximizing its benefits while maintaining the integrity of their learning journey.

RECAP:  CRTLE Faculty Lounge, 3/19/2025 

OER and the Open Access Journey: New Frontiers for Faculty 

Recording Link (requires UTA credentials to access): March Faculty Lounge on OER (led by Rosie Kallie)-20250319_120422-Meeting Recording.mp4

Facilitator:  Rosie Kallie (RK), Industrial, Manufacturing, and Systems Engineering 

Special Guest:  Megan Zara (MZ), Open Educational Resources (OER) Librarian 

Panelists:  Kimberly Breuer (KB) (History and Geography), Karishma Chatterjee (KC) (Communication), Jessica Kahlow (JK) (Instructional Design), Shelley Wigley (SW) (Communication) 

Motivation:  When assessing new course books and materials for classes, we likely use our faculty perspective at first glance: topics covered, types of examples presented, scope and sequence. The price may seem reasonable to us. From the student perspective, students like to see copious examples at a low cost. The example Rosie Kallie presented was an engineering textbook priced at $124.95, reasonable from a faculty perspective. However, for a student who is homeless or juggling other obligations, that price may be out of reach.

Open Access (OA) and Open Educational Resources (OER):  Zara gave a brief introduction to the world of OA and OER. OA grants free, unrestricted access to peer-reviewed research outputs such as journal articles and books, but licensing may be restrictive. OERs are free resources focused on teaching and learning; additionally, OERs may have more open licensing, allowing them to be customized and shared. Examples of OERs include textbooks, videos, and more. UTA Libraries offers grant funding for faculty to pursue OA/OER projects. Cost savings for UTA students last fall exceeded $5 million.

Panelist Questions with excerpts of responses:

*Regarding the Impact of OA and OER:   How has the OER resource(s) impacted your students?  How has the OER journey impacted you as an educator? 

As committed educators, the panelists were searching for relevant course materials with the appropriate scope and sequence of topics while also engaging their students throughout the course. 

JK remarked how students liked that the OER text could be saved and was easy to access throughout the semester. As an educator holding a wealth of best practices, JK found that creating a new OER book gave her an incentive to write everything down as book content.

SW stated that the impact on students is positive since students themselves created content for the book. Thus, students have more buy-in and more dedication to the assignments they are creating. They can point potential employers to their contributions within the OER. They feel empowered because their work will help students, here and abroad, who use their OER. Last, SW noted that working on the OER has re-energized her creativity as an educator.

KC mentioned how students loved saving money by using an OER book (zero cost), loved the easy access to the OER book, and appreciated seeing communication examples that cater to engineering and science students. Last, as an educator, KC stated that her OER journey is still evolving; the next step is reviewing this semester’s data from faculty and students, then tweaking the OER.

KB noted that the History Department began looking for lower-cost resources for its online students as an alternative to expensive traditional textbooks. She began searching for online books, if they existed, and located a short textbook on U.S. History on the State Department website. The department has since expanded the zero-cost materials to other courses as well. As an educator, KB remarked that the OER journey is both gratifying and terrifying. Last, the process takes longer than you think.

*Regarding Motivation toward OA and OER:  How did you get started with OER or Open Access?  As you review the insights, you will note a common theme of availability.

JK spoke of not being able to find desired resources for a course, “Assessment and Evaluation in Online Learning”: a book that would combine theory, best practices, and implementation.

KB was looking for resources that could replace a $100 textbook for “History of Science and Technology,” along with the add-ons that publishers advertise, such as question banks.

KC was not able to find just the right book for a Communications course focused on science and engineering students, a large service course of 20 sections each semester. The book then in use was expensive (over $100) and expansive; professors felt overwhelmed by its content, yet only a small portion of the book was actually being used. A different approach was to create a shorter, focused OER with specific content.

SW did not find an existing book to fit the needs of the “Public Relations Campaigns” course, the experiential capstone course required for PR majors. Creating an OER that would serve as a guidebook for quick reference throughout the semester would be a better fit for the students.

*Regarding Obstacles or Advice on an OA and OER journey:   What obstacles did you face, and how did you overcome them?  and/or What advice, tips, or considerations would you share with others?  

Possible obstacles:  Time management, since an OER project takes a lot of time (set a schedule to write every day, e.g., 20 minutes); getting students to a place where they understand how they want their work to be licensed; published versus unpublished resources; and collaborators changing jobs.

Advice:  Interact with the OER librarian more, e.g., to help make the OER more interactive. For moral support, join the Professional Learning Community (PLC) for Open Education and Open Access. Get grad students to help. If stuck, a change of scenery helps (e.g., eat outside). Once you find your first OER source, it opens up your thinking to how you can update your education practices in other ways.

Some Key Take-Aways:  There are lots of resources available. There is support for you; it is not a lonely journey. The OA/OER journey is different for everyone. You can start small with homework, chapter readings, etc. You will learn something new in the process.

Recap of Faculty Lounge (2/12/25): Beyond the Test: Alternative Assessments in an AI World

Blog post by Drs. Jeff Witzel, Ivy Hauser, Christy Spivey, Laurel Stvan, and Kevin Carr.

Beyond the Test: Alternative Assessments in an AI World. Join us for a panel discussion led by Dr. Jeff Witzel to learn about alternative assessment methods that are AI-proof or that actively incorporate AI.

Overview

Last week, on February 12, 2025, the Center for Research on Teaching and Learning Excellence (CRTLE) held a Faculty Lounge panel session on alternative assessment — “Beyond the Test: Alternative Assessments in an AI World.” The panelists were Dr. Kevin Carr (Marketing), Dr. Ivy Hauser (Linguistics & TESOL), Dr. Christy Spivey (Economics), and Dr. Laurel Stvan (Linguistics & TESOL). The session focused on the following key questions: How can we use various forms of assessment to evaluate our students while also stimulating productive engagement with course content? And how can we do this, particularly when AI tools make it possible for students to complete some assessments without engaging with course content in substantive ways?

General Resources (Slide Deck and CRTLE Resource)

Here is the slide deck from the panel event: Click here (requires UTA login)

CRTLE offers an overview of many effective alternative assessment methods here:

Alternative Assessments

However, rather than go over all of those methods, the panel focused on several techniques that are being used effectively by UTA faculty.

Examples from Faculty:

Dr. Ivy Hauser presented on skills-based grading, which involves identifying core skills that students will master and assessing these skills regularly through low-stakes assignments. One of the key features of this technique is that students have many opportunities to demonstrate their developing skills throughout the course (rather than the points for skills being tied to a fixed set of assignments in which they are assessed).

Dr. Laurel Stvan presented on using Wikipedia editing to help students develop their research and writing skills, while also contributing to this widely used online encyclopedia. For more information on how to integrate these assignments in your courses, please see:

Audience members asked about some examples of web pages that students had worked on. Here are a few from Dr. Stvan’s classes:

https://en.wikipedia.org/wiki/Eva_Haji%C4%8Dov%C3%A1
https://en.wikipedia.org/wiki/Carmen_Rodr%C3%ADguez_Armenta
https://en.wikipedia.org/wiki/Anna-Brita_Stenstr%C3%B6m
https://en.wikipedia.org/wiki/Danielle_Forward
https://en.wikipedia.org/wiki/Laurel_J.Brinton

https://en.wikipedia.org/wiki/Chungcheong_dialect

https://en.wikipedia.org/wiki/Backchannel_(linguistics)
https://en.wikipedia.org/wiki/Phonetic_space
https://en.wikipedia.org/wiki/Fis_phenomenon

Dr. Christy Spivey then introduced a range of tools that can be used for ongoing assessment generally, and in online asynchronous courses in particular. These tools include

H5P (https://h5p.org/)
Moblab (https://moblab.com/)
Perusall (https://www.perusall.com/)
Piktochart (https://piktochart.com/)
Wikiedu (see above)

She discussed requiring students to use AI for discussion assignments to encourage ethical use of AI and to practice prompting skills. If faculty do not want students to use AI, Perusall is a good alternative for discussions.

Creating infographics can serve as an alternative to writing assignments. Some infographic platforms now have AI generators; however, while these create generic infographics, they do not yet cite research or provide detailed or specific research findings.

Finally, Dr. Kevin Carr presented low-stakes assignments that faculty can use to help students self-assess and develop their verbal communication skills. Using the “speech-to-text” feature within generative AI platforms (such as ChatGPT or Copilot), students can practice their verbal communication skills and receive feedback on their verbal delivery in areas such as the use of filler words, verbal clarity, conciseness, specificity, and organization. Faculty can provide a prompt for students to use for this assignment, or they can work with students to develop their own prompt for such an exercise.

He emphasized how these tools offer students a safe space and positive interactive assessment, a form of “deliberate practice,” through which students can refine verbal communication skills and develop confidence as they prepare for high-stakes assignments or situations such as job interviews or verbal presentations.

Reflection Questions

The session concluded with an engaging discussion of these techniques with both in-person and online attendees, but we would love to hear your thoughts on/experiences with alternative assessment as well.

  1. What are some successful examples of assessments that encourage critical thinking and creativity that you’ve implemented or observed in different modalities?
  2. In what ways can AI be integrated into assessment as a tool for learning rather than just a challenge to academic integrity?
  3. How do we encourage students to use AI responsibly and effectively while also ensuring that they develop essential skills independently?
  4. What are some “AI-resistant” assessment strategies that ensure authentic student engagement and skill development?

Email us with comments or questions! crtle@uta.edu

Image of CRTLE UTA with images of five CRTLE leadership staff.

ICYMI: Recap of the 1/15/25 “Starting the Semester Strong” Workshop:

In case you missed it or want a refresher of this workshop, read on!

This workshop marked the launch of the first CRTLE faculty gathering for the Spring 2025 semester, and we were off to a great start!

We had a fabulous workshop on “Starting the Semester Strong” at the UTA Center for Research on Teaching and Learning Excellence with engaging faculty presenters, networked knowledge, and interactive dialogue about starting the semester strong and focused! Thanks to everyone who came and shared!

Dr. Larry Nelson shared about how the engagement ideas from Dave Burgess’s Teach Like a Pirate program have inspired his teaching in kinesiology! Dr. Nelson especially loves the aesthetic engagement “hook” techniques.

We also heard from Dr. Andrew Clark, myself, Dr. Beth Fleener, Dr. Kevin Carr, and Dr. Jeff Witzel.

The slide deck is here: Slide deck [accessible to UTA-affiliated with login]

Dr. Rosie Kallie (below), Associate Professor of Instruction in Industrial, Manufacturing, and Systems Engineering, engaged faculty with her advice on providing effective and interesting instruction in week 1! She wore a “clean room” lab outfit to talk with the future engineers in her course about her previous job. She and other faculty facilitators and CRTLE staff presented topics and facilitated dialogue in our session “Starting the Semester Strong and Focused.” Stay tuned for future workshops and resources!

Image of Dr. Rosie Kallie wearing a hard hat and white lab coat with center director Dr. Peggy Semingson, who is wearing a blue blazer and black skirt.

Voices from the Faculty: Seeking Feedback from Students

In our inaugural teaching post for the Pedagogy Next “Voices from the Faculty” series, Dr. Peter Nkhoma shares insights about seeking feedback from his students. Check out his suggestion here.

I use this simple assessment-for-learning strategy in my smaller classes: I provide students with small booklets where they can record questions about concepts or ideas they didn’t understand or want to explore further. I address these questions in subsequent lectures. Students also use the booklets to note anything they found particularly interesting, share their thoughts on how the lesson went, and provide feedback on which teaching activities were effective or should be adjusted.

This approach has not only helped me connect with students and demonstrated that I value their participation, ideas, and learning, but it has also opened up opportunities for enriching class discussions. Additionally, it has provided insights into students’ thinking and introduced new perspectives that I did not anticipate. The technique fosters a sense of ownership in the learning process, encourages reflective thinking, and helps create a more inclusive and responsive classroom environment. Moreover, it provides me with valuable feedback to continuously refine my teaching methods and tailor lessons to meet the needs of the class more effectively. However, it may be challenging to implement in larger classes.

This teaching suggestion is from Dr. Peter R. Nkhoma. 

Image of faculty member Peter Nkhoma. He is standing with his arms crossed in front.

The image above is of Visiting Assistant Professor Dr. Peter Nkhoma, History and Geography Department, College of Liberal Arts

Dr. Nkhoma’s Bio:

I am a Visiting Assistant Professor of Geography in the Department of History and Geography. Previously, I taught in the School of Geosciences and Honors College at the University of South Florida. My teaching experience spans secondary and higher education in Africa, the UK, and the US.

Disclaimer:

All viewpoints are individual faculty members’ perspectives, not those of The University of Texas at Arlington or the Center for Research on Teaching and Learning Excellence (CRTLE).

We want to hear faculty voices! Contribute Teaching Ideas to the Pedagogy Next Blog.

Are you UT Arlington faculty and do you have a teaching idea you want to write up (200-1000 words) for a post to the Pedagogy Next blog?

Logo with blue UTA and the Center for Research on Teaching and Learning Excellence, Office of the Provost displayed

Email your idea or draft blog post to us at: crtle@uta.edu or peggys@uta.edu with your teaching suggestion!