Lesson Plan (Grades 9-12): AI Detective Lab - Tracing Bias, Hallucinations, and Source Credibility in Machine-Generated Answers
High school AI Detective Lab lesson plan on bias, hallucinations, source credibility, and responsible AI use through evidence-based research.
Focus: Engage students in a rigorous AI literacy investigation where they analyze machine-generated answers for bias, hallucinations, and source credibility. Students compare AI responses with verified sources, evaluate evidence, identify questionable claims, and communicate conclusions through discussion, written reflection, or a policy recommendation about responsible AI use.
Grade Level: 9-12
Subject Area: ELA • Media Literacy • Technology/Digital Literacy • Inquiry/Skills
Total Unit Duration: 1 core lesson with 2 optional extension lessons
I. Introduction
Students step into the role of AI detectives in a lesson that blends critical reading, source evaluation, and real-world digital literacy. In the core investigation, students examine one or more AI-generated responses to a prompt, then compare those responses against trusted articles, databases, or primary sources to determine what is accurate, what is misleading, and what may be completely fabricated. As they work, students look for hallucinations (false or invented claims), bias in framing or omissions, and differences in credibility between machine-generated content and human-vetted sources. The topic is timely and relevant, but the lesson remains rigorous because it centers evidence, reasoning, and responsible communication.
Essential Questions
- How can we tell whether an AI-generated answer is accurate, incomplete, biased, or fabricated?
- What makes a source credible, and how do we verify claims made by AI tools?
- How can bias appear in machine-generated responses, even when the writing sounds confident or neutral?
- What are the risks of trusting AI-generated information without checking it against reliable evidence?
- What does responsible use of AI look like in school, research, and daily life?
II. Objectives and Standards
Learning Objectives — Students will be able to:
- Analyze an AI-generated response and identify claims that need verification.
- Compare AI-generated content with credible sources to determine which statements are accurate, incomplete, biased, or fabricated.
- Evaluate the credibility, relevance, and trustworthiness of sources used to verify claims.
- Identify examples of bias, hallucination, and missing context in machine-generated answers.
- Develop a clear claim about the reliability of an AI response and support it with evidence from verified sources.
- Communicate conclusions through discussion, a written reflection, or a short policy recommendation about responsible AI use.
Standards Alignment
- CCSS.ELA-LITERACY.RI.11-12.8
- Delineate and evaluate the reasoning in seminal U.S. texts, including the application of constitutional principles and use of legal reasoning, and the premises, purposes, and arguments in works of public advocacy.
- CCSS.ELA-LITERACY.W.11-12.1
- Write arguments to support claims in an analysis of substantive topics or texts, using valid reasoning and relevant and sufficient evidence.
- CCSS.ELA-LITERACY.SL.11-12.1
- Initiate and participate effectively in a range of collaborative discussions with diverse partners on grades 11-12 topics, texts, and issues, building on others’ ideas and expressing their own clearly and persuasively.
- ISTE Standards for Students 1.3.c
- Students curate information from digital resources using a variety of tools and methods to create collections of artifacts that demonstrate meaningful connections or conclusions.
- ISTE Standards for Students 1.3.d
- Students build knowledge by actively exploring real-world issues and problems, developing ideas and theories, and pursuing answers and solutions.
Success Criteria — Student Language
- I can identify claims in an AI-generated answer that should be checked.
- I can compare AI responses with credible sources and explain what is accurate, misleading, or false.
- I can explain how bias or missing context can shape an answer.
- I can use evidence from reliable sources to support my judgment about an AI response.
- I can communicate a clear position about responsible AI use in writing or discussion.