This week’s AI literacy review covers discussions on expertise, prompting, the role of technology in jobs, how AI might help or harm students, and several resources and academic papers about AI literacy.

General 

Natalie Monbiot writes about the advantage that experts in a field have in both judging AI output and writing the best prompts. (see Natalie Monbiot’s post on LinkedIn)

Reed Hepler writes about interacting with AI tools as “conversation steering” rather than prompt engineering, essentially keeping the tools focused and loaded with necessary data. Following frameworks can help users learn how to develop generalized skills for working with AI that transcend any one particular chatbot. (see Reed Hepler’s Substack post)

In an episode of the Everyday AI podcast titled “5 Laws for Success in the AI Era,” Jordan Wilson speaks with Isar Meitis about the historic transition of certain professions into skills: “typist” and “computer” were once job titles, for example, but technology has turned such professions into skills, and there are few career typists now. They discuss how AI tools will similarly enable people who aren’t graphic designers, coders, and so on to do that work without being professionals.

Business

Conor Grennan discusses the skills needed for a Head of AI role, arguing that it isn’t a tech role but rather one about driving change and getting buy-in. He recommends hiring critical thinkers who can use Gen AI rather than hiring for administrative skills. (see Conor Grennan’s post on LinkedIn)

In a Fortune piece, “There is a perception that AI is going to threaten the very nature of creativity. Here’s why I disagree,” Google’s Chief Marketing Officer, Lorraine Twohill, says: “So now is the time when everyone can experiment and start playing around. AI tools are already available–and there’s no need for advanced training or coding knowledge to take part. But not enough of us are doing this right now.” She goes on to discuss the terrifying blank page and how creatives can “put the horror of the first terrible draft on an AI,” then be brilliant. Twohill is correct about the low barrier to entry for using AI tools, but this increasingly common rhetoric about the blank page seems to gloss over the function of drafting as a way of processing your thoughts and getting to the better ideas.

Education

The nonprofit AIandYou launches a bilingual (English and Spanish) platform to increase understanding about AI for underrepresented communities. It covers AI basics, AI literacy, and specific topics such as AI and elections and AI and the workforce.

Tom Vander Ark of Getting Smart curates a list of AI literacy resources for teachers and students, covering topics from starting with fun to useful guides for schools, content for K-12 students, and more.

Barnard College (New York) staff develop a framework to guide the development of AI literacy among staff and students. Their framework includes four levels: Understand AI, Use and Apply AI, Analyze and Evaluate AI, and Create AI. 

The World Bank’s EduTech podcast features an episode on AI literacy for students and teachers, with Education Technology Specialist Maria Barron hosting guests Pati Ruiz, Senior Director of EdTech at Digital Promise, and Jennifer Rubin, Principal Investigator of the Digital Technologies and Education Lab at foundry10. They discuss AI literacy research and resources from their organizations.

Common Sense Education offers AI literacy lessons for grades 6 to 12 that are each under 20 minutes and provide an intro to AI and its social and ethical impacts.

A study of 215 kindergarten children, using an AI literacy assessment tool, found that younger children scored lower than older ones and that girls scored slightly higher than boys after completing the curriculum. (paywalled article in the Education and Information Technologies journal)

The journal article “What are artificial intelligence literacy and competency? A comprehensive framework to support them” defines AI competency and proposes a framework with five components: technology, impact, ethics, collaboration, and self-reflection. The authors also identify five learning experiences to foster confidence and abilities relating to AI.

The journal article “AI literacy for ethical use of chatbot: Will students accept AI ethics?” examines how a curriculum could teach the ethical use of large language models like ChatGPT, and finds that students felt uneasy about how their personal information was used even when they had given consent.

Tim Dasey launches a Substack about education and facets of AI, noting in the first issue that he plans to avoid AI cheerleading or blanket criticisms and take a deeper look at AI literacy beyond prompting.

Adam Pacton raises the question of how graduate programs will prepare students to integrate AI into their research process, and whether these tools could make more time available for original thinking by reducing time spent on literature reviews. (see Adam Pacton’s post on LinkedIn)

Sam Yousefifard reviews a paper on AI dependency that shows negative effects such as increased laziness, misinformation, and lower creativity. He then reflects on ways to reduce such dependency, including thought-processing journals, discussion panels, and raising awareness of the signs of AI dependency so students can recognize how it may affect their learning. (see Sam Yousefifard’s post on LinkedIn)

Michelle Kassorla writes about “AI speech,” or words that AI tools tend to use, such as “delve” or “nuanced,” that students may want to avoid in their writing. The concept raises questions about how human writing may evolve as it attempts to distinguish itself from machine-generated content. (see Michelle Kassorla’s post on LinkedIn)
