Data, Media, and the Question Behind the Conclusion — A Nine-Week Friday Arc
Students practice distinguishing between what a data source measured, who produced it, and what interests are served by a particular interpretation. Quantitative evidence is always produced by someone for a purpose — and the purpose is part of the evidence.
Social media companies say their platforms connect people. What does the data actually show — and why does it matter who funded the study?
Intellectual patience. Before evaluating a claim, you have to actually see it as a claim — not neutral information, not obvious truth, but an assertion someone made for reasons.
A one-paragraph question statement. Name your chosen question, name one study you have found that takes one side, and state what that study actually measured — not just what it claimed to prove.
Students view three screenshots without sourcing or context: a platform's own "About" page; a news headline reporting increased teen loneliness; congressional testimony from a tech executive. Write individually for 5 minutes: What is each of these trying to make you believe? Share aloud. Teacher does not evaluate — the goal is surfacing the intuitive, uncritical reading most students bring.
Introduce the concept: every piece of information about social media was produced by someone who had something to gain or lose. This is not cynicism — it is the basic epistemic situation of modern life.
Introduce the six-phase Inquiry Process. This week is Phase 1: Question. Students are not looking for answers yet — they are looking for a question worth asking.
Students receive a list of eight candidate research questions, rank their top three, and write one sentence explaining what makes their top-ranked question genuinely contested — not just "people disagree," but what specifically would need to be true for each side to be right.
Students present their top question and their contestedness sentence. Teacher surfaces two or three framings that reveal hidden assumptions and models what a sharper framing looks like: not "people disagree about whether social media is good or bad" but the specific evidential dispute at stake.
Resist the urge to correct or affirm during the opening share-out. Just listen and note who is already asking "who made this?" — that student will be a resource in Week 4.
Students often overcorrect after the case studies and conclude any industry-funded study is worthless. Introduce the correction early: the funding question is one factor — it is not a verdict. A study funded by a social media company may still be methodologically sound.
A student who wrote "people just disagree on whether social media is good or bad" has not yet identified the evidential dispute. Ask: What would need to be true about the data for each side to be right? If they can't answer that, they don't have a contested claim — they have a topic.
Pick two or three question statements from the share-out that are almost there and model the revision publicly. Show the class the difference between "people disagree about teen mental health" and "the studies that find harm consistently rely on self-report measures that cannot establish causation."
The habit of reading methodology before conclusions. Most people read the abstract, skim the findings, and stop. The question behind the conclusion lives in the methods section.
A one-page source analysis of one of your two studies. Cover: what it measured, who funded it, what it cannot prove, and what the methodology assumes.
Students share one-paragraph question statements in pairs. Partner gives one piece of feedback: Did you name what the study measured, or only what it concluded? A study that concludes "social media causes loneliness" may have measured self-reported loneliness scores on a college-student survey over eight weeks. Those are not the same as the conclusion.
Walk through a real study together as a class — ideally one students have already found. Read aloud, stopping at each methodological choice: What did this decision assume? What does it make impossible to measure?
Students receive two short methodology excerpts (1–2 pages each) from real but contrasting studies on the same question. Working individually, they answer: What did this study actually measure? What population was studied? Who funded this research? What is one thing this study cannot tell you, based on how it was designed?
Students begin locating their second study, the one that takes the opposing or complicating position, and must have it in hand by the end of the session. Teacher circulates: Do you have two studies that genuinely conflict, or do they just use different language to say the same thing?
One student presents their two studies and names the conflict in one sentence. Teacher asks: Is the conflict about values, or about evidence? This distinction matters and will be revisited throughout the arc.
Listen to what partners say to each other. Students who can already name what a study measured (vs. concluded) are ahead of the formation curve. Students who treat "the study found" and "the study measured" as synonyms need more time with the methodology section today.
Pick one students have a reasonable chance of having encountered in the news — ideally one with a dramatic headline and a methodology that cannot support it. The Haidt/Twenge vs. Przybylski/Odgers divide is useful here: same basic data, different analytic choices, opposite headlines.
The most common error: students find two studies that appear to conflict based on their titles but actually studied different populations, timeframes, or outcomes. Genuine conflict means: same basic question, meaningfully different findings, not explained by trivial methodological differences.
Some students will have chosen questions where the conflict is fundamentally about values rather than evidence. These students need to reframe their question as an empirical one or select a different question. Week 4 becomes impossible without a genuine evidential dispute.
AI-generated content is a source with all the properties of any other source — an agenda it cannot fully see, a method with limitations, and a perspective shaped by its training data. Evaluating it is not a technical skill. It is the same skill applied to a new kind of source.
A structured comparison of your two studies. A table with rows: Research Question, Sample Population, Measurement Method, Funding Source, Conclusion, and What This Study Cannot Prove.
Teacher selects two or three source analyses (anonymized) and reads excerpts aloud. Class identifies: Did the student describe what the study measured, or what it concluded? The goal is sharpening the distinction, not evaluating the student.
Frame as a research methodology question: when you don't have the expertise to evaluate a source yourself, you often turn to a large language model. What are the properties of that kind of help?
Teacher runs a live demonstration, prompting an AI model with the driving question phrased three different ways.
Class evaluates together: What did the model get right? What did it flatten or omit?
Students submit three prompts about their research question to an AI model, documenting each prompt and response in writing. Goal: not to evaluate the AI for accuracy, but to observe how it handles a contested empirical claim. What does it say confidently? What does it hedge? What does it leave out? This documentation becomes part of the final deliverable.
Students share one observation from their AI probe: What did the model do that surprised you? What did it do that confirmed your prior assumptions?
The AI probe will generate widely varying results depending on which model students use and how they prompt it. That variation is the lesson. Resist the urge to give students identical prompts — the messiness of the results is precisely what they need to evaluate.
Students may expect this session to be about AI tools and how to use them better. Correct that expectation early: the skill being cultivated is the same skill they applied to the peer-reviewed paper in Week 2. The question is always: Who produced this? What did they optimize for? What can this source not tell me?
Some students will conclude AI is useless for research. Others will conclude it is fine because it got the basic facts right. Neither is the formation outcome. The target: students treat AI output with the same calibrated skepticism they apply to any other source — useful, limited, purpose-produced.
Intellectual charity — the discipline of engaging the strongest version of the opposing view. This is not diplomacy. It is the only way to find out if your position is actually sound.
A one-paragraph position statement. State the claim you will defend, name the evidence that most supports it, and identify the one objection you consider most serious.
Students submit comparison tables. Teacher reviews quickly for one common error: conflating methodological weakness with ideological bias. A study funded by a social media company may still be methodologically sound. The funding question is one factor — it is not a verdict.
Definition: To steelman a position is to construct the strongest possible version of it — the version a smart, informed, good-faith proponent would actually endorse. This is different from the strawman, which is the weakest, least charitable version.
Walk through the steelman of both major positions using real researchers as exemplars: Jonathan Haidt and Jean Twenge on one side; Candice Odgers and Andrew Przybylski on the other. These are smart people with access to the same data reaching different conclusions. Why?
Write a steelman of the position you find least compelling, the strongest version a smart, informed, good-faith proponent would actually endorse. The test: would someone who holds this view recognize themselves in what you wrote?
Students exchange steelman drafts. Reviewer answers: Did the writer actually argue for this position, or did they just describe it? Reviewers mark the specific sentence where they felt the steelman weaken.
Teacher asks: Did any of you come out of writing the steelman with a different view of the opposing position? What changed? No pressure to have moved, but honest accounting of what happened in the writing.
Many students will produce a description of the opposing position rather than an actual argument for it. Do not rescue them by explaining the difference again. Ask: Would someone who holds this view recognize themselves in what you wrote? If not, try again. The discomfort of not being rescued is the formation moment.
Before peer review begins, model what "skepticism leaking through" looks like: a qualifying phrase ("even if we accept…"), a hedge ("some might argue…"), or a distancing move ("proponents claim…"). These are signals the student is describing, not arguing.
Haidt/Twenge vs. Odgers/Przybylski is not an abstract debate. These are specific people who have exchanged specific arguments in the published record. Naming them makes the steelman exercise concrete: students are not engaging an imaginary opponent. They are engaging a real one.
Intellectual courage — the willingness to commit to a defensible claim rather than retreat into "it's complicated." A position held loosely is not held at all.
An outline of the argument. Three evidence points, one major objection, and a one-sentence answer to that objection.
Teacher selects three or four position statements (anonymized) and reads them aloud. Class evaluates each: Is this a position, or a description of the debate?
Students revise their position statements using the four-part framework (claim, supporting evidence, strongest objection, answer to the objection), then begin outlining their argument: What three pieces of evidence most support this position? What is the strongest objection? How do they answer it?
Walk through the deliverable requirements. The infographic must include: the driving question; Study 1 (title, funder, sample, conclusion); Study 2 (title, funder, sample, conclusion); the student's own position stated plainly; and the open question — the thing neither study settles. Students choose their design tool (Canva recommended).
Each student states their position in one sentence to the group. Teacher asks one follow-up per student: What would change your mind?
Most students will have submitted something that describes the debate rather than takes a position. Reading examples anonymously gives the class permission to name the problem without anyone feeling singled out.
A student who cannot answer this does not actually have a position — they have a preference. Watch for two failure modes: (1) "Nothing would change my mind" — stubbornness, not intellectual courage; (2) "Pretty much anything" — not humility, but a position held loosely. The target: a specific, falsifiable answer.
Students will want to omit the open question or treat it as a "more research is needed" hedge. Hold the line. A student who can name what their inquiry did not settle has understood something most adults never learn to do.
An argument is a structure, not a feeling. It can be evaluated, tested, and improved. This week students build that structure explicitly.
Complete draft of the argument text for the infographic — all panels written, position stated, open question identified.
Students share outlines in small groups (3 students). Group task: find the weakest link — where does the argument depend on an assumption doing more work than it can carry? Each student receives one written note: "Your weakest point is ____."
Options for engaging the opposing study: (1) Challenge the methodology; (2) Accept the finding but limit its scope; (3) Accept the finding and revise the position — the hardest and most intellectually honest move.
Three paragraphs minimum: evidence point 1, evidence point 2, engagement with opposing study. Teacher circulates and asks one question per student: Are you arguing, or are you summarizing?
Two students volunteer to read their argument section aloud. Class gives structured feedback: one thing the argument does well, and one question the argument does not yet answer.
Vague peer feedback ("your argument could be stronger") produces nothing. Require each group to produce a written note: "Your weakest point is ____." The specificity is the formation — students have to name the assumption, not just sense that something is off.
"Are you arguing, or are you summarizing?" is the question to ask every student. If they cannot explain how their paragraph advances the claim — as opposed to reporting what a study found — they are summarizing. Ask them to read it aloud and explain what it proves.
When a student accepts the opposing study's finding and revises their position accordingly, acknowledge it explicitly to the class. Most students will try to avoid this move. Naming it as a strength rather than a concession is important.
The discipline of compression. An infographic does not have room for hedges, qualifications, or throat-clearing. Every word must earn its place. This is an argument skill, not a design skill.
Near-final infographic. Oral walk-through notes (not a script — bullet points are fine).
Teacher highlights one pattern visible across multiple argument drafts: most students bury their position somewhere in the middle. The infographic must not do this. Where you put things on a visual is a claim about what matters.
Walk through a model infographic structure on-screen, annotating the argumentative choices.
Students build their infographics in Canva (or chosen tool). Teacher circulates with one question: Does your headline make a claim, or does it describe a topic? Students who finish a first draft of the layout begin writing their oral walk-through notes.
Students exchange infographic drafts. Reviewer answers: What is the argument? State it in one sentence. If the reviewer cannot, the infographic is not yet making an argument.
Ask every student to read their headline aloud. You can tell in five seconds whether it makes a claim or describes a topic. Run this as a quick whole-group round before students return to their infographics — it sets the standard for the work session.
Students will again want to soften the open question into "more research is needed," the same pull noted in Week 5. Hold the line here too: the open question must appear on the infographic and name, specifically, what the inquiry did not settle.
If a reviewer cannot state the argument in one sentence, the infographic has not yet made its argument visually legible. Train reviewers to attempt the sentence first, before anything else; that attempt is more useful feedback than "looks good" or "the colors are nice."
A position presented to a real audience under real conditions is different from a position stated on paper. The pressure to defend reveals what you actually know versus what you assumed you knew.
Completed infographic (final version) + one-page written reflection. Reflection due Week 9, using feedback note from this session.
Each student presents their infographic with a structured oral walk-through — not a reading of the infographic, but a live argument. Students explain the driving question, walk through both studies and their funding contexts, state their position, and name the open question.
Two students must ask a genuine question — not a compliment, not a clarification, but a challenge to the argument. If a student is stumped, the honest answer is enough: "That's a gap in my argument — I would need to look at that." That is a formation outcome, not a failure.
Model what a genuine challenge looks like before presentations begin. The distinction: "Can you say more about Study 2?" (clarification) vs. "Your position depends on the self-report critique — but your own study also uses self-report. How do you handle that?" (challenge).
If a student is stumped, resist the urge to rephrase or soften the question. Wait. If the student says "I don't know," ask: "Is that a gap in your argument, or a question you could answer with more research?" That distinction is the formation moment.
After each presentation, teacher gives one written note: what the argument did well, and one question left open. Return these before Week 9 so students can use them in their reflection. The feedback note is not a grade — it is a prompt for honest reckoning.
Reflection is not summary. It is the discipline of honest reckoning: what did you believe at the start? What do you believe now? What moved you, and was that movement earned?
Completed infographic (final version) + one-page reflection: where you started, what moved you, where you landed — including what you still don't know.
Teacher returns written feedback notes from Week 8. Students read in silence, then write for five minutes: What is the one question from your presentation that you could not fully answer? What would you need to do to answer it?
Walk through the three things a reflection must contain: where you started, what moved you, and where you landed, including what you still don't know.
What a reflection must not contain: a list of skills learned, a summary of the project, praise for the experience.
Students write their reflections in class. Teacher circulates and asks one question per student: What did you believe at the start that you no longer believe, or believe less confidently?
Each student shares one sentence: the thing their thinking moved on. No elaboration required — just the honest sentence. Teacher notes common patterns across the group.
Teacher names the pattern for the group: the question you all started with was this. Here is the range of where you landed. Here is what neither study settled — and that is not a failure of the inquiry. That is what genuine inquiry looks like.
"What did you believe at the start that you no longer believe?" requires students to admit they were wrong, that their certainty was unearned, or that they still don't know something they assumed they would. This is hard. Don't soften it. Wait through the discomfort.
Before students write, name what a reflection is not: (1) "I learned a lot about how to evaluate sources" — a skill list, not a reckoning; (2) "The project showed me that social media research is complicated" — a summary; (3) "It was really interesting" — praise. A reflection says where something moved, names what moved it, and is honest about what remains unsettled.
Students need to hear, from the teacher, that the thing neither study settled is the point — not a failure of the project. Genuine inquiry produces better questions, not just better answers. That is the formation outcome of the entire arc. Say it plainly.