How Faculty are Using AI in Teaching

Ideas shared at the CRTLE AI Course Redesign Institutes

Faculty who presented at the CRTLE AI Course Redesign Institutes (August 2025 and October 2025) shared concrete, classroom‑tested ways they are using AI to support learning rather than replace it. Instructors described designing assignments in which students use AI to generate study question banks from course content, training discipline‑specific models on students’ own work to support creative iteration in design studios, and using AI‑supported feedback tools to help students reflect on performance and improve skills. Others highlighted using generative tools to refine assignment prompts, build rubrics, and scaffold complex projects while maintaining clear expectations for transparency and ethical use. Across disciplines, these faculty emphasized that AI works best when it is intentionally framed—as a collaborator that helps students practice analysis, reflection, and problem‑solving—while keeping human judgment, disciplinary knowledge, and learning goals firmly at the center.

CRTLE AI Course Redesign Institute (October 31, 2025): Presenter Summaries


Nikki Dickens

Unit: Career Services

Bio: Nikki Dickens is Associate Director in Career Services at the University of Texas at Arlington, where she leads career readiness initiatives focused on ethical, workforce-aligned AI use.

Nikki Dickens’ work centers on preparing students for the realities of AI use in hiring, career development, and professional practice. Drawing on her role in Career Services, she brings a pragmatic and ethically grounded perspective to AI adoption. Dickens candidly describes her evolution from skepticism to strategic use, noting, “I didn’t really want them to use AI… but I can’t deny that it is extremely helpful.” Her teaching and advising emphasize transparency, ethical boundaries, and professional judgment, especially as employers increasingly screen candidates with AI-driven systems. She highlights workforce data showing that a strong majority of employers expect graduates to be AI-ready and stresses that misuse carries real consequences. As she cautions, “Submitting an AI-generated cover letter word for word… inventing accomplishments you never achieved… that’s unethical.” Dickens’ work reframes AI as a tool for augmentation—not substitution—helping students learn how to use AI to enhance resumes, interviews, and communication while maintaining authenticity and integrity.


Amy Hodges, Ph.D.

Department of English

Bio: Amy Hodges is an Assistant Professor of English at the University of Texas at Arlington whose teaching and scholarship focus on writing, rhetoric, and responsible AI literacy.

Amy Hodges’ teaching and scholarship focus on responsible AI use within writing, communication, and professional identity formation. In her course Responsible AI and the Future of Work, she integrates AI literacy, ethics, and critical thinking across disciplines, particularly English and computer science. Hodges prioritizes helping students understand what AI is and is not, arguing that “once you learn exactly what it’s designed for, you stop making some of those assumptions.” Her pedagogy treats AI as a collaborative partner rather than a shortcut, encouraging students to reflect on how they work alongside AI tools. She explicitly rejects a surveillance-based approach to AI policing, stating that AI-generated text is “not the hill I’m dying on,” and instead evaluates student work based on meaning, reasoning, and rhetorical effectiveness. Through discussion-driven assignments and real-world case studies, Hodges’ work models how writing instruction can evolve without abandoning core human skills such as judgment, ethics, and voice.


Erdogan Kaya, Ph.D.

Department of Teacher and Administrator Preparation

Bio: Erdogan Kaya is an Assistant Professor in Teacher and Administrator Preparation at the University of Texas at Arlington, specializing in AI literacy and applied machine learning education.

Erdogan Kaya’s work advances AI literacy through hands-on, accessible introductions to machine learning that demystify how AI systems function. His teaching emphasizes conceptual understanding over technical specialization, making AI approachable for students across disciplines. As he explains, “My goal today is not to make you AI specialists, but to give you a practical, accessible tool.” Using platforms such as Google’s Teachable Machine, Kaya helps students experience how models learn from data, recognize patterns, and generate outputs. A central theme in his instruction is correcting misconceptions about AI cognition; he reminds learners that AI systems are “pattern-matching machines, not thinking beings.” Kaya’s approach foregrounds responsible use by stressing that the key issue is not whether students will use AI—“they already are”—but whether they understand its limitations and implications. His work provides faculty with replicable, low-barrier strategies for integrating AI literacy into existing curricula.


Sharmeen Yousif, Ph.D.

College of Architecture, Planning, and Public Affairs (CAPPA)

Bio: Sharmeen Yousif is an Associate Professor of Architecture at the University of Texas at Arlington whose research and teaching explore generative AI and computational design pedagogy.

Sharmeen Yousif’s work explores AI as a creative and performative partner in architectural design education. Drawing from her research in generative and deep learning models, she develops studio-based pedagogies that treat AI as part of a choreographed design workflow rather than an autonomous creator. As she describes it, her work is “about choreographing intelligence,” where multiple AI systems are intentionally connected and constrained to preserve human intent. She emphasizes that successful AI integration depends on maintaining conceptual continuity, explaining that “maintaining the concept and how the models are connected is a metric for success.” In her teaching, students use AI to explore environmental performance, spatial iteration, and speculative form-making while retaining authorship through staged prompts, evaluation metrics, and model weighting. Yousif’s work demonstrates how AI can expand creative possibility without eroding disciplinary rigor or design agency.


Pete Smith, Ph.D.

University Analytics and Modern Languages

Bio: Pete Smith is Vice Provost and Chief Analytics and Data Officer at the University of Texas at Arlington, with extensive experience in learning analytics, language technologies, and institutional data strategy.

Pete Smith brings a critical, systems-level perspective to AI in higher education, grounded in decades of experience with language technologies and analytics. His work interrogates the political economy of AI, focusing on what he terms the “AI bubble” and the proliferation of low-quality automated content, or “AI slop.” Smith highlights the financial instability of major AI companies, noting that “very few of these companies are bragging about how much money they’re making from AI… because they aren’t making any.” He warns that the rapid expansion of AI infrastructure may outpace sustainable business models, with broad implications for institutions and individuals. At the same time, his scholarship addresses epistemic risk, arguing that the overproduction of synthetic content threatens research quality, trust, and learning ecosystems. As he observes, “If you’re in a fight on social media today, chances are you’re fighting with a bot.” Smith’s work equips educators to engage AI critically—neither rejecting it outright nor accepting it uncritically—while foregrounding ethics, sustainability, and long-term impact.


Peggy L. Semingson, Ph.D.

Department of Linguistics and TESOL/CRTLE

Bio: Peggy L. Semingson is an Associate Professor of Linguistics and TESOL and Interim Director of CRTLE at the University of Texas at Arlington, where she directs faculty development and teaching innovation initiatives.

Peggy Semingson frames AI adoption at UTA as a community-centered, faculty-supported process grounded in experimentation with clear guardrails. Her work focuses less on a single tool and more on building sustainable teaching cultures around AI. She emphasizes psychological safety and incremental change, encouraging instructors to experiment without feeling overwhelmed. As she explains, “we may have a little bit of controlled chaos with AI, and that’s OK… but we still have guardrails in place to keep students and faculty safe.” Her approach positions AI not as a mandate or threat, but as a shared institutional learning journey. She repeatedly reinforces the value of starting small, noting that faculty should “try one thing in your course this semester… don’t feel like you have to take it all on.” Through this framing, Semingson’s leadership work situates AI within faculty development, ethical policy design, and long-term community building rather than short-term technical adoption.