This AI Literacy Review features Stanford’s 2025 AI Index Report, the EDSAFE AI Alliance Team’s call for public infrastructure for AI in education, Harvard Business Review’s report on people’s use of Gen. AI, Lloyds Bank’s AI training program, AI literacy workshop activities, a children’s book about AI bias, a printable board game for kids, AI literacy initiatives in a science museum and a college library, reflections on the ASU+GSV Summit in San Diego, London students’ manifesto for assessment in the age of AI, a review of AI guidance in US states, UC Davis and UC Irvine’s Gen. AI surveys, Anthropic’s Education Report, AI literacy and algebra, AI skills for MBA students, OER and AI education, and more.

General

Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) publishes The 2025 AI Index Report with 12 takeaways, including that AI and computer science education is expanding, albeit with gaps; AI performance continues to improve; AI is embedded in everyday life; business is all in on AI; and AI optimism is rising, but not evenly. The report notes that 81% of K-12 computer science teachers in the US think AI should be part of foundational computer science education, but less than half of them feel equipped to teach it.

The EDSAFE AI Alliance Team publishes Opportunity at Scale: The Case for Public Infrastructure for AI in Education which sets out a case for investments in public infrastructure, including research and development and scalable computing resources, to secure America’s AI leadership. 

In Harvard Business Review’s How People Are Really Using Gen AI in 2025, Marc Zao-Sanders looks at how people are using Generative AI based on what they’re posting on Reddit, Quora, and elsewhere. Therapy emerges as the top use case, along with use cases new since 2024 such as organizing one’s life and finding purpose, showing a shift from technical uses to more emotional ones.

Lloyds Bank is partnering with Cambridge Spark for a 6-month training program for senior leaders to boost their AI skills, with hands-on sessions along with real-world projects.

Ethan Mollick points out that AI models have preferences, and that this might be a bigger deal than people realize, noting for example that Claude prefers the company 1-800-FLOWERS. He links to Zack Witten’s article Measuring Models’ Special Interests, in which Witten asks different Claude models about their preferences on topics including Buddhism, silence, lucid dreaming, and more. (see Ethan Mollick’s LinkedIn post)

Dan Fitzpatrick, in Forbes’ Inside OpenAI’s Ambitious AI Academy, raises questions about neutrality and critical thinking as big tech companies push into AI education, pointing to OpenAI Academy’s launch of free learning modules and community spaces for connection.

Lucas Wright shares an AI literacy workshop activity called the AI in Education Challenges Bank that helps educators create teaching materials such as custom tutor bots, problem sets, rubrics, Canvas pages, diagrams, alt text for images, and more.

In Psychology Today’s The First Generation of AI Natives, Jason Tougaw writes about Cornell post-doctoral fellow Dr. Avriel Epps’s new book, A Kids Book About AI Bias, and its user-friendly explanations of AI technology that aim to help children develop agency and understanding.

Museums & Libraries

Rose Basom from the Science Museum of Virginia discusses in a video how the museum is offering a community AI literacy initiative as a third space where people can either share their AI expertise or come to learn more about what AI is.

In Everyone Deserves to Speak AI: Building Literacy for a Digital Future, Russell Michalak discusses how Goldey-Beacom College Library has built an AI literacy program that introduces students to tools such as Grammarly and ImageFX, while also making the library interface easier to use. 

Education

Joseph South offers a reflection on the ASU+GSV Summit held in San Diego, with the positives being greater awareness in the edtech market of the needs of all learners, discussions about digital well-being, and AI as a steam engine in every pocket, and the negatives being AI solutions focused on efficiency rather than learning transformation, plus investment and private-equity pressures that threaten edtech companies’ survival. (see Joseph South’s LinkedIn post)

In A Student Manifesto for Assessment in the Age of AI, students at the London School of Economics and Political Science propose nine principles for assessment in the age of AI, including clear guidance and AI literacy for responsible AI use. They call for the school to recognize that students will use AI even when it is restricted, and for teachers not to default to exams simply because they seem the easiest assessment method. (see Claire Gordon’s LinkedIn post about the project)

The Center for Democracy and Technology’s Looking Back at AI Guidance Across State Education Agencies and Looking Forward by Maddy Dwyer reviews which U.S. states have released guidance for the responsible use of AI in public education, identifying four trends: alignment on potential benefits, acknowledgment of baseline risks, the need for human oversight, and a lack of coverage of critical AI-related topics.

UC Davis’ Generative AI Student Survey, sent to over 31,000 undergraduate students (1,361 responded), shows that most students deeply consider the ethics of using Gen. AI for academic work, don’t use it frequently, are cautious about trusting its accuracy, and believe it will be important to their future work.

UC Irvine’s study of parents, teachers, and teenagers in the US about AI finds that 45% of teens have recently used AI tools such as ChatGPT, 69% of teens said Gen. AI had helped them learn something new, and parents had mixed feelings about AI and education.

Anthropic releases its Education Report – a large-scale analysis of how students are using its chatbot, Claude – finding that STEM students use Claude more often than business, health, or humanities students, and that students tend to use AI for the higher-order functions of creating and analyzing in Bloom’s Taxonomy.

Joanne McGovern posts about a printable board game for 7-12-year-olds that builds digital and AI literacy by teaching AI basics, how to spot fake information, how to think critically, and more. It can be used in classrooms as well as for home learning and STEM clubs. (see Joanne McGovern’s LinkedIn post with the board game)

In Education Week’s Why This School System Is Integrating AI Literacy With Algebra 1, Lauraine Langreo reports on a trial of a supplemental AI-in-math certification program for middle and high school students in Florida that addresses both AI literacy and math skills improvement.

In AI Education Is the New Space Race. Here’s How America Must Respond, Loni Mahanta from aiEDU writes about AI literacy initiatives in several countries and the advantage these may provide in AI-driven economic growth and military advancement.

In Essential AI Skills Every MBA Student Should Learn in 2025, Federico Blank from Millenia Atlantic University outlines how students should grasp concepts such as supervised and unsupervised learning, neural networks, and predictive analytics, and should know tools such as Power BI, ChatGPT, DeepSeek, Google Analytics, and Salesforce Einstein, among many others.

The On Tech Ethics Podcast – Fostering AI Literacy by CITI Program features co-hosts Daniel Smith and Alexa McClellan with guest Sarah Florini speaking about the topic of fostering AI literacy in higher education and research. 

The In AI We Trust? podcast episode AI Literacy Series: Bridging the Gap Between Technology and Communities features Susan Gonzalez of AIandYou and hosts Miriam Vogel and Rosalind Wiseman discussing what AI literacy is, how AI is affecting the workforce, and what the role of the government could be in promoting AI literacy.

In the article Defining, enhancing, and assessing artificial intelligence literacy and competency in K-12 education from a systematic review, authors Xinyan Zhou, Yan Li, Ching Sing Chai and Thomas K. F. Chiu conduct a review of definitions of AI literacy and competency in K-12 education over the past ten years.

In the article How Do AI Educators Use Open Educational Resources? A Cross-Sectoral Case Study on OER for AI Education, authors Florian Rampelt et al. explore how open educational resources (OER) can enhance AI education through case studies and a survey of 260 educators from Germany, Austria, and Switzerland.

Amy Burroughs in EdTech Magazine’s How Schools Are Blazing a Trail for AI in K–12 writes about some case studies of schools incorporating AI and experimenting with tools such as Copilot, Canva, and MagicSchool, and how important it is to teach AI literacy so users can be responsible and ethical in their AI usage.

Pablo Sanguinetti from IE University offers a class exercise in Times Higher Education that uses metaphor to help students better understand what AI is and reflect on how metaphors about AI shape reality, leading to better or worse conceptions of the technology.

Giorgio Chianello in AI tools for education, learning and research: a session designed to build and enhance AI literacy shares information about his interactive skills-based workshop at Queen Mary University of London designed to help staff and students build AI literacy.

In Our Students Need Us, But It’s Not To Teach Them Biology. They Need us to Reinvent Education and “Teach” (About) “AI”, Stefan Bauschard argues that students don’t need more human-delivered content; rather, they need to learn how to navigate an AI era in which machines exceed human intelligence, a task he says requires a ground-up reinvention of K-12 education.


➡️ Subscribe to our newsletter for a monthly recap of AI Literacy Reviews and announcements of resources, courses, and events.