Driving, Not Being Driven

At North Star Academy Washington Park High School in Newark, seniors are taking a new A.I. literacy class built around one core question: are they steering the technology, or letting it steer them? The course treats A.I. like driver’s ed, aiming to make students active, critical users instead of passive passengers of chatbots and algorithms.
The first lesson asks students to reflect on times when social media feeds or music recommendation tools quietly guided their choices versus moments when they deliberately selected what to watch or read. One student describes using a chatbot to check math homework, while another notes slipping into “passenger mode” with Spotify’s A.I. DJ that plays what it thinks she wants to hear.
Across the United States, schools are rapidly rolling out similar A.I. literacy efforts, which some educators call a “driver’s license” for A.I. The goal is to teach young people to evaluate powerful tools, understand how they work, and use them responsibly as A.I. shapes everything from writing to hiring.
Some schools emphasize hands-on practice with major chatbots like Google’s Gemini and Microsoft’s Copilot, teaching students how to craft prompts. Others build lessons around A.I.’s social impact, including the dangers of deepfake nude images and other manipulative content that can harm teenagers.
This push comes amid a national argument over whether A.I. will improve or damage education, even as President Trump has ordered schools to teach “foundational A.I. literacy” from kindergarten onward. Supporters say students must learn A.I. skills to stay competitive and ready for future jobs, while researchers warn about hallucinations, cheating, and weakened critical thinking.
Recent research suggests A.I. can undercut comprehension: one study found students who took notes on readings outperformed those who used chatbots for help. A Brookings Institution report concluded that, at least for now, the risks of A.I. in classrooms outweigh its benefits, underscoring the need for cautious, structured use.
At Washington Park, teachers Mike Taubman and Scott Kern try to chart a nuanced path by framing A.I. as a vehicle students must learn to drive safely. They want teenagers to examine what’s “under the hood,” set personal rules, and imagine what laws and norms should govern A.I. in their city and country.
Kern, a U.S. history teacher, has built custom chatbots using a nonprofit platform to deepen students’ argumentative writing. In a lesson on the 1919 Chicago Race Riot, students first analyze primary sources, then test their interpretations on a class chatbot that challenges them to be more specific and precise.
Students say these bots can sharpen their thinking by pushing back on vague claims rather than simply giving answers. Even so, Kern insists the most important learning—initial reading, critical thinking, and peer discussion—should remain A.I.-free so technology does not crowd out human interaction.
Taubman’s career exploration class uses simulation chatbots to let students practice in imagined professional settings, such as counseling virtual patients. One student refining a project on a mental health nonprofit used an A.I. bot to narrow her broad idea to a more targeted focus on teens facing both depression and substance abuse.
Students describe a shift in how they relate to A.I.: instead of simply asking it for recipes or workout plans, they are learning to pose better questions that guide them toward their own answers. The new elective, with 18 students in its first run, formalizes this approach and explores thorny debates over authorship and intellectual property when A.I. generates creative work.
Kern and Taubman acknowledge their driving metaphor has limits, especially while chatbots still lack safety features as reliable as seatbelts and airbags. They hope today’s students will eventually help design safer, fairer, more environmentally responsible A.I. systems, and students already say the class makes them feel more prepared for an A.I.-saturated future.