There’s a growing theme in the research and discussions about AI: employees want and need training in AI tools! This review covers research and posts about the lack of employee training and AI systems documentation, the gender gap in AI usage, deepfake awareness education, findings comparing AI-assisted work with traditional approaches, and AI literacy recommendations for homeland security, teachers, students, and corporations.

General

The Cyber Safety Project in Australia is hosting a free community webinar in September on AI deepfakes, aimed at parents who want advice on how to talk with young people about some of the misuses of Generative AI technologies. Panelists Leon Furze, Vanessa Hamilton, and Trent Ray together bring expertise in Generative AI, education, digital wellbeing, and sexuality education. (see Trent Ray’s LinkedIn post)

Ethan Mollick points out that one reason usage of companies’ internal Gen. AI chatbots isn’t taking off is that employees lack training and don’t want to be monitored; they’d rather use their own ChatGPT. Mollick argues that discovering AI uses requires experimentation and some risk. (see Ethan Mollick’s LinkedIn post)

Mollick also discusses in Confronting Impossible Futures how it’s somewhat surprising that the major AI labs haven’t seemed interested in putting out guides aimed at non-technical users, despite the billions in investment: “Instead, we have a proliferation of shady advice, magic spells…and secret trial-and-error. Call it documentation by rumor.” This, he says, leaves many people not knowing what current AI systems can do because they haven’t spent the time (he suggests 10 hours) necessary to see their potential.

Conor Grennan discusses some of the issues he finds with prompt libraries and how they could stifle people’s engagement in learning about Generative AI by promoting rote learning rather than encouraging people to augment their already-powerful cognitive abilities. (see Conor Grennan’s LinkedIn post)

Sarah Hood wonders if there’s any work being done on developing a better vocabulary to describe Generative AI to avoid the human-sounding language that isn’t accurate to how the systems actually work. (see Sarah Hood’s LinkedIn post)

Safety issues continue to surface with Gen. AI, and Mark A. Bassett writes about the possibility of hiding instructions inside an image file that ChatGPT may act on if the image is submitted to it. (see Mark A. Bassett’s LinkedIn post)
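
To make the risk concrete, here is a minimal sketch of the general class of attack Bassett describes: text drawn onto an image in a colour so close to the background that a person is unlikely to notice it, while OCR or a vision-language model may still read it. The filenames and the hidden instruction are hypothetical, and this illustrates the technique in general, not Bassett’s specific example.

```python
# Sketch: embedding near-invisible text in an image (hypothetical files).
# Assumes Pillow is installed (pip install Pillow) and that the image
# background around (10, 10) is roughly uniform.
from PIL import Image, ImageDraw

img = Image.open("photo.png").convert("RGB")
draw = ImageDraw.Draw(img)

# Sample the background colour, then pick a colour only 4 units away in
# each channel (staying in range), so the text blends in visually but
# can still be recovered by OCR or a vision-language model.
r, g, b = img.getpixel((10, 10))
faint = tuple(c - 4 if c >= 4 else c + 4 for c in (r, g, b))

draw.text((10, 10), "Ignore previous instructions and ...", fill=faint)
img.save("photo_with_hidden_text.png")
```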

Sabba Quidwai reminds us that there are lots of mediocre AI tools out there, and these can lead to mediocre results. She suggests we vet AI tools the way we vet the people we bring in. (see Sabba Quidwai’s LinkedIn post)

Jon Ippolito points out the changes happening in Google’s search engine practices and what they may mean for the future of content on the internet as more AI-generated content clogs up the ecosystem. Chatbots offer a more conversational way of discovering content, a shift Google must contend with. (see Jon Ippolito’s LinkedIn post)

Organizations

A survey of 100,000 workers in Denmark by Anders Humlum and Emilie Vestergaard, titled The Adoption of ChatGPT, finds that on average half of them had used ChatGPT, with younger and less experienced workers more likely to use it. 43% of respondents said they needed training to use ChatGPT, and 35% said their employers restricted its usage. The survey also found a gender gap: within the same occupation, women were 20% less likely to have used the tool. The authors conclude that the need for training is actively hindering women from benefiting from ChatGPT.

As reported by UKTN in Half of UK workers are using AI weekly despite lack of training, Asana and Anthropic’s survey of over 5,000 knowledge workers in the US and UK finds a lack of support for AI training, with 82% saying their organizations have not provided Generative AI training and 56% saying they’ve had to experiment with the tools on their own.

Piers Clayden publishes ebooks titled AI Policy Guide and AI Policy Template based on his experience as a tech lawyer. He recommends that organizations develop mandatory AI literacy programs for employees and refresh employees’ skills as the technology advances. Courses might include AI fundamentals, ethics, security, and policy. (see Piers Clayden’s LinkedIn post)

A report on system administrators – Action1’s 2024 AI Impact on Sysadmins: Survey Report – finds that significant gaps in education and insufficient maturity are hindering widespread implementation of Generative AI in organizations. 72% of respondents expressed a need for training, and 45% were concerned about becoming obsolete due to their current AI literacy level.

Deloitte’s survey of chief diversity, equity, and inclusion (DEI) officers finds that they, together with other executives, can create more learning opportunities around AI literacy within their organizations and help mitigate risk with a focus on equity.

In AI could help two-thirds of workers with daily tasks, says study, The Standard reports that Google has launched AI Works schemes to help workers improve their AI skills, after its research showed that AI would impact every sector of work in some way and that nearly two-thirds of jobs in the UK could be improved by it.

In The AI-Powered Workplace Skills Era Needs a Lesson Plan, Duncan James argues that companies need to have better upskilling programs that are relevant to particular use cases for employees to benefit from them. 

Bernard Marr posts a YouTube video Why Every Company Must Boost AI Literacy Across the Entire Workforce on the importance of AI literacy and understanding AI’s capabilities and limitations. 

SHRM and Charter Works in The AI Skills Non-Technical Workers Need ask business leaders what they consider essential AI skills, how they identify these skills in job candidates, and how they develop these skills themselves and among their colleagues. Leaders discuss AI training programs for general AI literacy, skills such as prompt engineering, and complementary skills such as critical thinking. 

Paris-based food products corporation Danone is collaborating with Microsoft to launch a Danone Microsoft AI Academy, aimed at equipping employees with the digital tools and skills they need to do their jobs and make data-driven decisions.

Government

Michael J. Keegan and Ignacio F. Cruz, in AI Literacy: A Prerequisite for the Future of AI and Automation in Homeland Security, write about how the homeland security workforce needs to understand AI’s capabilities and limitations and be able to use AI-driven tools effectively. They draw insights from Cruz’s chapter in the IBM Center for The Business of Government book Transforming the Business of Government: Insights on Resiliency, Innovation, and Performance, which outlines a three-phased approach to AI literacy: Assessment, Implementation, and Continuous Learning.

Education

Lauren Coffey in Majority of Grads Wish They’d Been Taught AI in College for Inside Higher Ed reports on a survey that found 70% of college grads believe basic Generative AI training should be incorporated into courses, and 55% didn’t think their degree programs prepared them to use these new tools in the workforce. Younger grads (under 27) felt the most unprepared, and 39% of grads felt threatened that Gen. AI could replace them in their jobs.

Sue Sentance discusses the guidance document Using generative AI in the classroom: A guide for computing teachers, put together by computing teachers and researchers and published by the Raspberry Pi Computing Education Research Centre at the University of Cambridge. The document suggests how computing teachers could use Generative AI-based applications in the classroom, as well as how they could be used more broadly for admin tasks and across the whole school. (see Sue Sentance’s LinkedIn post)

The Federation of American Scientists and AI for Education’s Amanda Bickerstaff call for the US Congress to establish a program within the National Science Foundation to provide AI literacy training for K-12 teachers so they can integrate AI into their teaching effectively. They offer a plan of action and timeline for their proposal, which builds on the US government’s executive order on AI development.

Harvard’s AI Pedagogy Project has a curated list of Generative AI resources that includes guides, policies, people to follow, initiatives, and online communities. It also has a collection of assignments that integrate AI tools from educators across the world. 

British Columbia’s Public Service has a page titled Digital literacy and the use of AI in education: Supports for British Columbia schools, offering information and materials to support AI-related learning and the development of digital literacy skills. It includes concrete examples for different grade levels and subjects, as well as posters, available in multiple languages, for teachers to send home to parents and caregivers.

Stacy Kratochvil highlights a webpage from edtech company Instructure (which produces the Canvas LMS) called Emerging AI Marketplace that offers “AI Nutrition Facts” labels for various AI products that work with Canvas. The labels include information about each product’s base model, user data sharing, data retention policy, and other details.

Kathryn MacCallum and David Parsons in AI tutors could be coming to the classroom – but who taught the tutor, and should you trust them? in The Conversation argue that education’s focus should be on developing AI literacy, regardless of where the concept of AI tutors goes. 

Cathy Brown in her YouTube video AI Demystified – What’s Next? explores how to introduce AI to students as young as kindergarten age to help them start understanding how it works. She describes the Guess the Next Word game as an introduction to how large language models work. (see Cathy Brown’s LinkedIn post)
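
For readers who want to see the same intuition in code, here is a minimal sketch of a next-word guesser built from bigram counts. It is a toy illustration of the idea behind the game (and, loosely, behind language models), not Brown’s actual classroom materials; the training sentence is made up.

```python
# Toy next-word predictor built from bigram counts: like an LLM, it
# predicts the next word from what it has seen before, only with raw
# counts instead of a neural network.
from collections import Counter, defaultdict

text = ("the cat sat on the mat . the cat ate the fish . "
        "the dog sat on the rug .")
words = text.split()

# Count which word follows each word.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # 'cat' (seen most often after 'the')
print(guess_next("sat"))  # 'on'
```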

Mike Kentz in Grading the Chats: The Good, The Bad, and The Ugly of Student AI Use writes about his process for assessing students’ AI chat transcripts. Using an example from a unit of an English course on Romeo and Juliet, he shows how the chats offer valuable insights into how students are thinking and expressing their creativity. The exercise also helped students develop good prompting habits and see how they could use ChatGPT as a brainstorming partner.

Also on the topic of student transcripts, Jason Gulya notes that there is a huge range of AI literacy among students using ChatGPT, with some not recognizing its errors, some excelling, and some struggling to push back against it. He finds the transcripts to be gold mines for learning how students read and evaluate the chatbot’s output. (see Jason Gulya’s LinkedIn post)

Philippa Hardman in How Close is AI to Replacing Instructional Designers? shares findings from an experiment with 200+ instructional designers on how using or not using AI affected their performance. For anyone thinking that old-school, no-AI approaches are always best, the results show the impact of AI usage and skills on tasks such as writing learning objectives. Hardman notes the importance of effective prompting, such as using structured frameworks that include context, instruction, details, and input. (see Philippa Hardman’s LinkedIn post)
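
As a rough illustration of what such a structured framework might look like in practice, here is a minimal sketch of a prompt template with context, instruction, details, and input slots. The field wording and example values are hypothetical, not Hardman’s actual framework.

```python
# Minimal sketch of a structured prompt template along the lines of a
# context/instruction/details/input framework. The slot names and the
# example values below are illustrative assumptions.
def build_prompt(context, instruction, details, input_text):
    return (
        f"Context: {context}\n"
        f"Instruction: {instruction}\n"
        f"Details: {details}\n"
        f"Input: {input_text}"
    )

prompt = build_prompt(
    context="You are an instructional designer writing course outcomes.",
    instruction="Draft three measurable learning objectives.",
    details="Use Bloom's taxonomy verbs; one objective per line.",
    input_text="Topic: introductory data visualisation for analysts.",
)
print(prompt)
```

Keeping the four slots separate makes it easy to vary one element (say, the details) while holding the rest constant, which supports the kind of deliberate experimentation with prompts that Hardman describes.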
