This AI Literacy Review covers global surveys on AI skills and workforce impacts, research showing faster adoption of AI writing tools in less-educated regions, gender gaps in Gen. AI use, AI literacy definitions and practices, prompting approaches, AI uptake by marketers, AI for foreign policy and legal work, AI literacy for librarians, OpenAI’s education partnerships, students’ and children’s use of Gen. AI, US and international AI education strategies, Stanford’s AI literacy lessons for high school and its third AI+Education Summit, and students’ perspectives on AI and learning.
General
Workday’s global study Elevating Human Potential: The AI Skills Revolution surveyed 2,500 workers across 22 countries and found that 83% of respondents think AI will increase the importance of human skills and enhance human creativity, and 93% believe that AI lets them focus on higher-level responsibilities such as strategy and problem-solving. Meanwhile, 90% think that AI can significantly enhance an organization’s transparency and accountability. Confidence in using AI for complex work varied by region: 93% of respondents in APAC and EMEA expressed confidence, compared to only 82% in North America.
Another Workday report, The Global State of Skills, surveyed 2,300 business leaders around the world and found that 51% are concerned about future talent shortages, 55% have started a transition to a skills-based organization, and 41% think that AI will help mitigate skills shortages, including through automation of routine tasks. The top skills needed include digital skills such as Gen. AI (cited by 65%), which leaders also identified as often missing from their current workforce.
Darktrace’s 2025 State of AI Cybersecurity report found that 78% of Chief Information Security Officers agreed that AI-powered threats were significantly impacting their organizations, while only 45% have a formal AI oversight and governance function and 37% regularly monitor AI usage.
The Haas School of Business and the Berkeley AI Research Lab at the University of California, Berkeley publish Responsible Use of Generative AI: A Playbook for Product Managers & Business Leaders, which includes analysis of survey responses from product managers and recommendations on how leadership can integrate AI responsibly into everyday work. The report recommends governance frameworks and regular risk assessments, among other measures. There is also an associated article, Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices, by Genevieve Smith et al.
In Ars Technica’s Researchers surprised to find less-educated areas adopting AI writing tools faster, Benj Edwards reports on Stanford University-led research that tracked word usage patterns in 300 million text samples, including consumer complaints, corporate and UN press releases, and job ads. The researchers suggest that regions with lower education levels are using AI writing tools more frequently, and that signs of AI assistance in writing increased after the release of ChatGPT but may have stabilized or become harder to detect by 2024. The paper The Widespread Adoption of Large Language Model-Assisted Writing Across Society by Weixin Liang et al. details the study and the statistical framework used to estimate AI usage.
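To make the idea of estimating AI usage from word patterns concrete, here is a toy sketch, not the framework from Liang et al., and with invented numbers: if a marker word appears at a known baseline rate in human writing and at a much higher rate in LLM-generated text, a simple mixture assumption lets you back out what share of documents were AI-assisted from the observed rate alone.

```python
# Toy illustration (not the Liang et al. method): under a simple mixture
# assumption, the share of AI-assisted documents can be estimated from how
# often a marker word appears. All rates below are hypothetical placeholders.

def estimated_ai_share(observed_rate: float,
                       human_baseline: float,
                       ai_baseline: float) -> float:
    """Solve observed = a * ai + (1 - a) * human for the mixture weight a."""
    return (observed_rate - human_baseline) / (ai_baseline - human_baseline)

# Hypothetical per-document rates for a marker word such as "delve":
human_rate = 0.004     # assumed pre-ChatGPT baseline
ai_rate = 0.060        # assumed rate in known LLM-generated text
observed_rate = 0.015  # assumed rate observed in a 2024 document sample

print(f"Estimated AI-assisted share: "
      f"{estimated_ai_share(observed_rate, human_rate, ai_rate):.1%}")
```

The published framework is far more sophisticated, but the underlying intuition of comparing observed word frequencies against human and AI baselines is similar.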
An updated version of the working paper Global Evidence on Gender Gaps and Generative AI by Nicholas G. Otis et al. synthesizes data from 18 studies covering 140,000 people around the world, showing that the gender gap in Gen. AI use holds nearly everywhere and that further research and policy efforts are needed to close it.
In AI Literacies and the Advancement of Opened Culture: Global Perspectives and Practices, authors Angela Gunder et al. examine AI literacies and open educational practices with insights from 34 educators worldwide.
The We Are Open Co-Op’s AI Literacy site curates articles and research on digital and AI literacies with a view of AI literacy as contextual and plural, drawing on Doug Belshaw’s digital literacy doctoral research.
In The GenAI and Expertise Paradox: Why It Makes Expert Work More Important But Harder, Arizona State University professor Punya Mishra unpacks the much-discussed Microsoft Research paper The Impact of Generative AI on Critical Thinking and its findings that experts expend more cognitive effort when using Gen. AI. Mishra also applies the insights to AI and education and the challenge of developing learners and novices into experts.
In Prompting Science Report 1: Prompt Engineering is Complicated and Contingent, authors Lennart Meincke, Ethan Mollick, Lilach Mollick, and Dan Shapiro from the University of Pennsylvania report on prompting approaches that may or may not improve performance and suggest that benchmarking AI performance requires more nuance than a one-size-fits-all approach.
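As an illustration of why single-shot comparisons can mislead, here is a minimal sketch of repeated-trial benchmarking in the spirit of the report (not its actual methodology): each prompt variant is scored over many trials so that run-to-run variability, not just a single score, is visible. The `run_model` function is a simulated stand-in rather than a real model call, and the accuracy figures are invented.

```python
# Minimal sketch of repeated-trial prompt benchmarking (illustrative only).
import random
import statistics

def run_model(prompt_variant: str, question: str) -> bool:
    """Stand-in for a model call; returns whether the answer was correct."""
    base_accuracy = {"plain": 0.62, "step_by_step": 0.65}[prompt_variant]
    return random.random() < base_accuracy  # simulated, not a real model

def benchmark(prompt_variant: str, questions: list[str], trials: int = 20) -> list[float]:
    """Score one prompt variant over many full passes of the question set."""
    per_trial = []
    for _ in range(trials):
        correct = sum(run_model(prompt_variant, q) for q in questions)
        per_trial.append(correct / len(questions))
    return per_trial

questions = [f"question {i}" for i in range(50)]  # placeholder benchmark items
for variant in ("plain", "step_by_step"):
    scores = benchmark(variant, questions)
    print(f"{variant}: mean={statistics.mean(scores):.2f} "
          f"stdev={statistics.stdev(scores):.2f}")
```

Seeing the spread across runs makes it easier to judge whether an apparent advantage for one phrasing is real or within noise.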
The Northwestern University Center for Human-Computer Interaction + Design hosts a virtual panel on what AI means for the future of work featuring David Autor from MIT and Eric Horvitz from Microsoft, drawing on their contributions to the National Academies’ Artificial Intelligence and the Future of Work report.
Canva’s The State of Marketing and AI Report includes findings from its survey of 2,400 marketing leaders in the US, UK, France, Germany, Spain, and Australia, with 94% of marketers allocating AI budgets in 2024, 78% seeing AI as essential to long-term strategies, 61% struggling to integrate tools into workflows, 85% saving a full workday every two weeks, 85% being open to letting teams experiment with AI, and 92% believing AI literacy will be a must-have skill in 2-4 years.
In How To Start Helping Your Teens Become AI Literate, Aviva Legatt reviews research from OpenAI and Pew about the gap between AI usage and AI literacy in young people and suggests that parents introduce their teens to free AI tools like ChatGPT and talk about when it’s okay to use AI and how they’re using it for schoolwork.
Government
The European Commission is developing a Living Repository of AI Literacy Practices from various organizations, including Booking.com, to support implementation of the EU AI Act’s AI literacy requirement under Article 4. Each entry answers questions such as how the organization is training staff and what the impact and challenges have been.
In AI Biases in Critical Foreign Policy Decisions, Yasir Atalan, Ian Reynolds, and Benjamin Jensen from the Center for Strategic and International Studies (CSIS) Futures Lab discuss findings from a study of how large language models perform on international relations and foreign policy decision-making. They suggest that national security leaders, military planners, and intelligence analysts need AI literacy training so they understand the strengths and limitations of insights from AI systems.
Law
In AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice, authors Daniel Schwarcz et al. discuss their study of law students completing six legal tasks (including drafting persuasive letters and analyzing complaints) using AI tools. They find that using these tools significantly enhanced the quality of legal work and boosted productivity, with observed gains of 38-115% and 34-140%.
Libraries
In Developing AI Literacy: Reflections on Teaching a Six-week Course for Library Workers, Nicole Hennig reflects on providing hands-on practice with Gen. AI tools to library staff and on their reactions and learning throughout the course.
The ACRL AI Competencies for Library Workers Task Force is asking for librarians’ feedback on their draft of AI Competencies for Academic Library Workers by March 26, 2025.
The LIBRA.I. project funded by the European Union has the goal of integrating AI into media and learning literacy training in public libraries. Librarians in select locations will participate in workshops in mid-2025, and the project will create a framework in five languages to be released in 2026.
In Artificial Intelligence and Information Literacy: On the Art of Quartering Broccoli, Lukas Tschopp from the University Library of Zurich uses a food comparison to discuss the continuing relevance of the American Library Association’s Framework for Information Literacy for Higher Education, particularly its concept that authority is constructed and contextual.
Education
UNESCO issues a call for think pieces (max 1,500 words) on “AI and the Future of Education: Disruptions, Dilemmas, and Directions” to be published in IdeasLAB. The deadline is April 30, 2025.
OpenAI partners with 15 top educational institutions, including Caltech, CSU, Duke, Harvard, Howard, MIT, and Oxford, as part of its NextGenAI initiative, which offers $50 million in research funding, compute power, and other benefits to students and educators to advance AI research.
Estonia is partnering with OpenAI to roll out ChatGPT Edu to students and teachers in its secondary school system; Estonia already ranks among the top 15 countries for ChatGPT usage and has launched its AI Leap 2025 initiative focused on AI in education.
Ireland’s AI Advisory Council releases its Advice Paper on AI and Education, which considers the implications of Gen. AI across education sectors and reviews principles for usage by teachers and students.
A survey of over 1,000 undergraduate students on their Gen. AI usage from the UK think tank HEPI, titled Student Generative AI Survey 2025, shows 92% of students are using AI, up from 66% the previous year, and 88% are using Gen. AI for assessments, up from 53%. Students’ main uses are to explain concepts, summarize, and get research ideas, and they said AI saves time and improves the quality of their work. There was a gender gap, with women more worried about being accused of misconduct and about receiving biased results. Only 36% of students said they received support from their institution to develop AI skills.
In How Colleges Can Scale AI Readiness: Lessons from a First-Year Experience Program, Joel Gladd discusses ways of preparing first-year college students and faculty for AI, including covering how AI models work, the basics of prompting, and usage principles. He also recommends surveying students and faculty to understand where people are starting from, and offers links to open educational resource (OER) AI training materials.
Ruopeng An’s white paper Harnessing Global AI Education Strategies to Transform U.S. Schools Today explores how other countries are incorporating AI into their education systems and compares U.S. adoption to these global models (see Ruopeng An’s LinkedIn post, which includes the white paper).
The 2025 Common Sense Census: Media Use by Kids Zero to Eight reveals that digital media habits for children 8 and under have shifted post-pandemic, with 40% having a tablet by age 2 and almost a quarter having a cellphone by age 8. Gaming time has increased by 65% in four years, and 29% of children are using AI for school-related material, although many parents report that AI tool usage hasn’t had any impact on their child’s understanding of schoolwork or their creativity.
Northeastern University President Joseph E. Aoun says that higher education needs to look at a fourth world, AI, after having spent hundreds of years focused on the physical, biological, and social worlds. He says universities need to go beyond looking at technology in terms of adoption and consider the impact AI will have on knowledge and learning.
In Navigating AI in Education with AI Literacy, North Carolina State University’s Paul Fyfe discusses the shift in literacy frameworks and his preferred framework from UNC Charlotte, which is built on four pillars of AI literacy.
Connecticut’s Hartford Public Schools holds the inaugural Playlab AI Professional Learning Community and plans a train-the-trainer model so participants can share their knowledge across the district and promote AI literacy among educators.
Stanford updates its free AI literacy resources with new lessons for high school teachers and students, including AI or not AI? and Can AI help us communicate with whales?
Stanford’s third annual AI+Education Summit was guided by four key questions: What is the future of learning we aspire to? What makes us uniquely human? How do we center humanity in our learning and AI ecosystems? How can research best support a bright future of learning? Panelists discussed the concept of being AI literate, how to teach AI literacy, and ethical dilemmas. University of Michigan’s James DeVaney writes about the key messages he took away from the summit for Inside Higher Ed in AI and Education: Shaping the Future Before It Shapes Us, including that “AI literacy is urgent—but we lack consensus on what it means.”
Student organization AI Consensus, sponsored by Minerva University, features several students’ perspectives on AI and learning in How is AI Changing What it Means to Learn? Pt. 1, with students reflecting on the pros and cons of how the tech is impacting their learning journey.
In the YouTube interview College of Western Idaho integrating AI literacy and AI tools for students, staff and students discuss the implementation of an AI tool with guardrails that keeps students on track and won’t do their assignments for them, but can explain concepts in terms they can relate to, such as Dungeons and Dragons narratives.
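Guardrails like these are often implemented at least in part through a system prompt that constrains the assistant to tutoring behavior. The sketch below is a hypothetical illustration of that pattern, not the actual configuration of the College of Western Idaho’s tool; the prompt wording and the `build_messages` helper are invented for this example.

```python
# Hypothetical sketch of tutoring guardrails via a system prompt; not the
# configuration of any specific institution's tool.
TUTOR_SYSTEM_PROMPT = """You are a study tutor.
- Explain concepts using analogies the student relates to (e.g., Dungeons and Dragons).
- Never write or complete the student's graded assignment for them.
- If asked to do the assignment, decline and offer to explain the underlying concept instead.
"""

def build_messages(student_question: str) -> list[dict]:
    """Assemble a chat payload with the guardrail system prompt prepended."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

print(build_messages("Can you explain recursion like a D&D quest?"))
```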
Bournemouth University shares an example (AI for Generating Ideas) from its Student AI Literacy series of exercises that teachers can use in the classroom to improve students’ prompting and evaluation skills.
The ICT department at Mahidol University in Thailand hosts a workshop on AI literacy and skills and how staff can use Microsoft Copilot for emails, reports, content creation, and data analysis.