This AI Literacy Review looks at new courses on AI from higher education and top AI companies, Anthropic’s “AI biology” research into Claude’s workings, Educate AI magazine’s latest issue, how Generative AI reshapes teamwork, tech workers and execs pretending to know more about AI than they do, AI and neurodiversity, Meta’s pirated books training data, first clinical trial of an AI chatbot for mental health, more AI literacy frameworks for education and healthcare, EU’s call for more AI literacy examples, teens and AI literacy, new AI-powered literature search, Cengage’s AI in education survey, the role of chatbots in doctoral supervision, and more.

General

Caltech’s Center for Technology and Management Education is offering a free four-week course called AI Tools for Everyone: A Hands-On Learning Lab starting May 6, covering how to use AI tools for content creation, research, design, and coding.

The OECD is hosting a webinar, How can countries encourage training in AI literacy?, on April 24, covering how policymakers can foster AI training for workers around the world.

Google DeepMind launches an AI Safety Course on YouTube with the first lesson titled “We are on a path to superhuman capabilities”.

The OpenAI Academy offers a free online resource hub to support AI literacy for people from all backgrounds through a mixture of online and in-person events with partners including education institutions, workforce organizations, and nonprofits. New topics include AI for nonprofits, business automation, and K-12 educators.

Drew Bent from Anthropic announces the company’s launch of Claude for Education and a new Learning Mode that prioritizes the Socratic method. (see Drew Bent’s LinkedIn post)

In Tracing the thoughts of a large language model, Anthropic discusses their research and papers on how their AI model Claude works behind the scenes and offers a tour of their “AI biology” findings.

Educate AI magazine’s Vol. 1 Issue 5 includes a variety of articles on AI literacy, AI in education, AI in professional development, agentic AI, and more featuring Angelo Biasi, Jeanne Beatrix Law, Jerry Crisci, Amanda Bickerstaff, and others in the AI space.

In the working paper The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise, Fabrizio Dell’Acqua et al. (including the Mollicks) detail their study of 776 workers at a global company, which found that individuals working with AI could match the performance of teams working without it. Ethan Mollick notes in a LinkedIn post that AI use in companies does not always show up through IT; instead, their study found workers raising their productivity by using chatbots and experimenting on their own.

A survey of 1,200 technology workers and executives in the U.S. and UK by training company Pluralsight finds that 79% of workers pretend to know more about AI than they actually do and that 91% of executives have faked AI knowledge. Despite this, both groups believed they had the skills to use AI in their jobs but doubted that their colleagues did.

Daren White develops A Parent’s Guide to large language models, showing parents and grandparents where to start and including example prompts. (see Daren White’s LinkedIn post)

In I Wrote This – or Did I? Generative AI as a Key to Unlocking Neurodiverse Voices in Academia, Kelly Webb-Davies writes about her experience using Generative AI to assist her writing and gain freedom from some of the constraints of text-based writing. 

The Atlantic’s The Unbelievable Scale of AI’s Pirated-Books Problem by Alex Reisner raises concern among writers whose copyrighted books were available on the pirate site LibGen, which Meta used to train its AI systems.

Healthcare

Nabil Zary’s pre-print work AI Literacy Framework (ALiF): A Comprehensive Approach to Developing AI Competencies in Educational and Healthcare Settings introduces a framework for educational and healthcare settings with five components: technical understanding, critical evaluation, practical application, ethical considerations, and data literacy, across three progression levels.

In the article Randomized Trial of a Generative AI Chatbot for Mental Health Treatment, authors Michael V. Heinz et al. discuss the first randomized controlled trial of a Generative AI chatbot, Therabot, with 210 adults who had depression, anxiety, or eating disorders. After a 4-week intervention, participants who used Therabot had significantly greater reductions in symptoms and rated the experience as comparable to working with human therapists, suggesting the potential of chatbots for personalized mental health treatment.

Government

The European Union’s AI Office launches a survey open to all organizations that want to share their AI literacy experience, particularly as it relates to the EU AI Act’s Article 4. Contributions will be screened before acceptance into the existing living repository of AI literacy practices.

Education 

Poynter’s MediaWise and PBS launch a series called AI Unlocked with five videos and accompanying lesson plans for educators to use to help teens develop AI literacy. Topics include how Generative AI works, how to use AI ethically, prompt engineering, and evaluating AI tools, and the videos are hosted by peers in middle and high school. 

The Digital Education Council publishes an AI literacy framework with five dimensions, including understanding AI and data, critical thinking, ethical AI use, and emotional intelligence.

The Allen Institute for Artificial Intelligence (which runs Semantic Scholar) launches the Ai2 Paper Finder, an AI-powered literature search system that aims to mirror the research process of a human researcher: breaking a question into parts, searching for papers, following citations, evaluating information for relevance, and asking follow-up questions. The team is actively gathering feedback to improve the new tool.

Ohio State University creates a research and development program dedicated to AI in education and training that will help educators and learning designers integrate AI technologies into their work.

Carleton University’s Teaching and Learning Services launches new modules on AI as part of the Fusion initiative, with the goal of teaching students how to manage their digital footprint and use AI effectively and ethically. Educators can bring the 4-hour modules into their own courses, or students can self-enroll.

The Day of AI for educators in New Hampshire, sponsored by the state Department of Education, explored when and how teachers can use AI in the classroom.

In What does ‘age appropriate’ AI literacy look like in higher education? in Times Higher Education, Fun Siong Lim from Nanyang Technological University reflects on examples of how to support a range of AI literacy skills for university students. 

Google launches more AI literacy initiatives, including new courses for educators in K-12 and higher education, lesson plans for Gemini, and a $1 million grant to the MIT RAISE Initiative.

The Digital Data Design Institute at Harvard launches the upskilling program Future Proof with AI for mid-career professionals, with modules including marketing, finance, HR, and agentic AI.

In the Nature article Development and effectiveness verification of AI education data sets based on constructivist learning principles for enhancing AI literacy, authors Seul-Ki Kim, Tae-Young Kim, and Kwihoon Kim confirm the importance of AI education in supporting students’ AI literacy and explore the need for constructivist-oriented datasets.

Cengage’s AI in Education survey asked over 3,000 higher education students and educators and over 1,000 K-12 educators in the U.S. about their perceptions of Generative AI, finding that around half have positive perceptions, that K-12 teachers are more likely to have incorporated it into their teaching, and that almost all believe it is important to include AI literacy in courses. Among students, 84% see proficiency with AI skills as important for their future employment.

In Calming the Noise: How AI Literacy Efforts Foster Responsible Adoption for Educators, Bree Dusseault, Jared Hurwitz, and Michael Berardino discuss how the Center on Reinventing Public Education at Arizona State University is studying 22 early-adopter school systems and their efforts to build system-wide AI literacy.

In Feedback encounters in doctoral supervision: the role of generative AI chatbots, authors Lasse X Jensen et al. compare supervisor and chatbot feedback on doctoral work and show that the chatbot offers students more agency and can provide beneficial support but cannot replace supervision.

Maurie Beasley posts about students working on a group project at a cafe who were using AI as a collaborator rather than having it do the assignment for them, a reminder that teaching AI literacy prepares students to use AI thoughtfully, critically, and creatively now and in the future. (see Maurie Beasley’s LinkedIn post)

Holly Clark argues that educators should interact directly with foundational LLMs rather than with shortcut tools, which amount to AI with training wheels, in order to build their AI literacy and understand AI’s capabilities and how tokens truly work. (see Holly Clark’s LinkedIn post)
