This AI Literacy Review covers MIT’s 700+ AI risk database, research on AI literacy competencies, AI training for bank employees and tennis fans, the Generative AI in Libraries conference, a guidebook for AI in scholarly writing, the first law school to offer a degree focused on AI literacy, Canadian and Australian Government initiatives, AI demands at Harvard and college writing centers, AI activities and prompts for educators, and more!

General 

MIT researchers have launched an AI Risk Repository that curates over 700 AI risks from 43 existing frameworks into categories, offering a variety of ways to explore the data. The database is free to copy and use, and it includes examples of how different audiences, including policymakers, academics, and industry professionals, might use it. 

The paper Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies by Anuj Gupta, Yasser Atef, Anna Mills, and Maha Bali examines metaphors as a way of advancing critical AI literacy and of reflecting on how we anthropomorphize technology.

In Generative AI Literacy: Twelve Defining Competencies, researchers from Switzerland introduce a competency-based model for AI literacy ranging from foundational knowledge to prompt engineering and programming skills. The competencies are offered as a roadmap for individuals, researchers, and policymakers.  

In Quest for AI Literacy in the journal Nature Methods, Vivien Marx reviews various approaches to AI at different educational institutions and organizations.  

JPMorgan Chase is rolling out a Generative AI assistant to its 60,000 employees and teaching them how to prompt in a way that relates to their domain, hoping the tech will augment virtually every job at the bank in time.

In Employees have forged ahead with generative AI while companies lag behind, McKinsey finds, Laurel Kalser reports on new research recommending that companies offer comprehensive training on skills such as data analysis and interpreting AI-generated output, and that they prepare to move from individual experimentation to organizational strategy.

Rachel Woods emphasizes that people should learn prompting, that is, communicating with AI, rather than prompt engineering, which she defines as requiring testing systems, probability charts, and statistical tests that most people don’t need. (see Rachel Woods’ LinkedIn post)

Google Cloud-commissioned research of 2,500 executive leaders shows that 46% have a plan to invest in upskilling their workforce and attracting new talent with AI expertise. 

Sheri Popp asks whether there’s a course on giving constructive feedback to AI, noting that she feels bad when she has to correct AI systems like Claude to get better results. (see Sheri Popp’s LinkedIn post)

Lance Cummings writes about the importance of structure in working with AI, showing how long-standing communication principles apply to a new digital landscape, for example in structuring prompts to guide AI. (see Lance Cummings’ LinkedIn post)

Sports

In partnership with the USTA Foundation and ahead of the US Open, IBM SkillsBuild launches a Learn Tennis with AI micro course and guidebook that promotes AI literacy for tennis fans. It includes an interactive microlearning session on how AI is transforming the game of tennis through features such as real-time player analytics and electronic line calling. 

Libraries

Recordings from the Generative AI in Libraries (GAIL) Conference are available on YouTube. Sessions include Crafting Critical Historians: Incorporating AI Literacy into Primary Source Literacy, ChatGPT, Gemini, & Copilot, Oh My! Using Generative AI as a Tool for Information Literacy Instruction, and Promoting AI Literacy in Library Research Guides: Practical and Ethical Considerations.  

Education

Maarit Jaakkola publishes Academic AI Literacy: Artificial Intelligence in Scholarly Writing, Editing, and Publishing under a Creative Commons license as a short guidebook for doctoral students and junior scholars in media and communication sciences, covering a set of competences that an academic needs to master to do their work. The guidebook looks at scientific editing and publishing, research communication, and productivity and professional development. 

UC Berkeley School of Law is likely the first law school to offer a degree focused on AI literacy for lawyers. Its AI-focused master of laws degree is open for applications from those with JD degrees and will cost around $73,000. 

The Canadian Government has renewed the national digital skills training program CanCode with $39.2 million and a new focus on AI literacy. Its aim is to reach 1.5 million students and train 100,000 teachers to incorporate new digital skills and tech into classrooms.

In a survey of Harvard undergraduates, over half said they wished Harvard would offer more classes on the future impacts of AI. 87.5% are using Generative AI (95% of those use ChatGPT), and for a third of them AI is replacing Wikipedia and Google searches as a place to get information. 30% are paying for premium subscriptions and relying less on university resources, raising questions about equity for students who cannot afford to spend money on AI tools, as well as about the future of library usage and even academic publishing. 

In College Writing Centers Worry AI Could Replace Them, Maggie Hicks reports on the state of AI literacy at writing centers, noting that some campuses are taking the position that writing center tutors can help students develop AI literacy, especially if the tutors are first trained in it themselves. For example, Sarah Z. Johnson at Madison College describes how tutors this year are learning AI literacies, such as how AI models work, bias, and prompt writing, that they can pass on to the students they work with.

The Australian Government’s document The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action by Jason M. Lodge at the University of Queensland calls for everyone involved in teaching and learning in higher education to engage with and understand Generative AI technologies. Drawing on the work of Leon Furze and colleagues, the document recognizes that the calculator analogy is inadequate: “The human-machine relationship with generative AI is more interactive than transactional, and this difference is important for those in the higher education community to come to terms with. Generative AI is not like a calculator at all.”

Phillip Dawson presents Assessment design for a time of artificial intelligence at EduTECH Australia, with a key point being that assessment needs to prepare students for a world where AI is everywhere. (see Phillip Dawson’s LinkedIn post with link to slides)

Jason Gulya points to research by Gartner showing a sizable risk that AI projects will be abandoned by next year and concludes that schools should invest in training rather than platforms to build AI literacy and competency, no matter what happens to the AI companies they may work with. (see Jason Gulya’s LinkedIn post)

Mike Kentz’s AI activity Interviewing a Fictional Character is accepted into the Harvard AI Pedagogy Project as a way of teaching AI literacy in the context of an existing curriculum. He also provides a companion resource in his article Harvard AI Pedagogy Project: Safe and Effective Use of Personality Bots in the Classroom.

In Workshop Resources for Student Supports, Lance Eaton looks at how student support staff in areas such as academic advising, disability services, and international education might benefit from Gen. AI, offering prompts to try based on his workshop at Suffolk University.
