Sunday, May 10, 2026

AI Literacy Across the United States Workforce

https://blog.citp.princeton.edu/2026/05/05/make-america-ai-ready-strengths-weaknesses-and-recommendations/

What Does It Do Well?

It’s accessible. The choice of SMS for delivery maximizes reach. It meets people where they are, requiring no app installation, account creation, or navigation of unfamiliar web platforms. The 10-minute-a-day pacing is practical.

It emphasizes verification of AI outputs. The course consistently stresses that AI output must be checked, not blindly trusted. The example of looking up a restaurant only to find that a nail salon has opened in its place is memorable (Lesson 6, below). The course also thoughtfully extends this skepticism to AI-generated images, video, and audio.

It centers human responsibility. The quiz question about a coworker submitting an AI-generated report with fabricated statistics (Lesson 2, below) returns a sensible response: the human is responsible. This point is repeated throughout the course and is one of its most important messages.

It’s honest about AI’s limitations. The course doesn’t shy away from the fact that AI can be confidently wrong. The term “hallucination” is introduced clearly, the concept of training data cutoffs is explained, and the course repeatedly emphasizes that AI predicts rather than knows or understands. For a 101-level course, this is appropriately calibrated.

What could be fixed in AI 101?

There are some things we’d recommend fixing about the course.

The course repeatedly contradicts its own privacy and security advice

The course contains a serious inconsistency when it comes to data privacy and security. On the last day of the course it offers common-sense advice, stating “PROTECT your private info. Never share passwords, Social Security numbers, medical records, or confidential work data with AI tools,” later adding not to share “income data.” But some of the advice and exercises leading up to that point had already prompted users to input some of these “never share” types of data.
• On Day 3, the course urges the user to input a photo, PDF, or recording of their own voice.
• On Day 4, it says that a “power move” is for users to “give AI your own data to work with,” including instructions to “paste your resume” and “share your monthly expenses.”
• On Day 5, the course says that a good use case for AI is entering “medical symptoms” to learn medical terms and prepare questions for a doctor.
• On Day 6, it tells the user to share their address to find a restaurant near them.

These self-contradictions expose a central tension: AI tools can be more useful when they know more about you, so a blanket prohibition against sharing private information will limit their usefulness. Unfortunately, there is no simple answer to the question of how to protect your privacy when using AI, and no single approach will work for everyone. It requires critical thinking based on an understanding of different threat models, including prompt injection risks, traditional cybersecurity risks, legal risks, AI companies’ eagerness to train on user data, and workplace policies that vary between organizations. We recognize that this level of nuance would be too much for an introductory course. We would recommend that the privacy protection lesson come earlier in the course and include information about privacy settings that AI tools offer, such as temporary or incognito chats. Instead of the “never share” language, giving people at least a rudimentary understanding of what could go wrong would be more helpful, along with links to resources where they can learn more.

The quizzes adopt a right-wrong dichotomy

The quiz questions often ask the user for an explanation of AI’s failure modes and social effects. While it is important to face these head-on, the questions consistently have one “obviously correct” answer that maps to the course’s framing.
Several wrong answers are absurd strawmen (“AI likes making things up to test you,” “AI’s internet connection was slow”). This limits the potential to build genuine understanding or critical thinking about AI’s functioning and societal implications. We would recommend an approach that highlights known issues without pretending that the explanations are simple. Flexibility in how issues are framed will allow course participants to grapple with them in a manner that is relevant to the skills they are building. More open-ended quiz questions might include: “Your employer starts mandating that all workers use AI. This may enable your employer to monitor your productivity. What are your options?” or “You are about to apply for a loan. How can you find out whether and how AI will be used in evaluating your application?”

What could DOL build upon in AI 201?

Expanding upon the introductory materials in the 101 course, there are several opportunities for content development that we would recommend.

The course misses how AI is reshaping work

For a course offered by the Department of Labor, there is very little content on the subject of work: the course frames AI solely as a productivity tool workers can use. The Department of Labor exists to protect workers, their wages, their safety, and their rights, yet the course largely skips over the ways AI is already reshaping hiring, performance monitoring, and layoffs across many sectors. An AI 201 course could provide more information on these developments and inform citizens about legitimate reasons they may have for calling for regulation. It could also go into more depth on the privacy question. Finally, AI 201 could reckon with the broader societal consequences of this technology: for instance, bias, surveillance, and the concentration of power in the hands of a few large technology companies. Workers who understand these dynamics are not just AI-literate; they are better equipped to advocate for themselves.
Deepening Technical Explanations

The 101 course keeps its terminology simple, which is important. But sometimes it oversimplifies. An AI 201 course could deepen the explanation of how models are trained, make inferences, and deliver human-interpretable results. The course’s technical explanation (AI finds patterns and makes predictions) serves as the entire mental model. This framing makes AI sound more mechanistic and less opaque than it actually is. On Day 3, the language of pattern and prediction drops out, with the language of “instruction” and “results” substituting for the human input and predicted output of AI. The current course also equates predicting with guessing and AI training with “studying”: analogies that might be a useful starting point but are quite limiting. For an AI 201 course, the connections between AI learning, model weights, and predictions, as well as the connections between all of these things and the results generated from instructions, could be deepened. Indeed, how AI can be biased, can hallucinate, and can otherwise make errors is easier to comprehend when one understands a bit of the math behind machine learning.

More Active Learning Engagement

The quizzes in AI 101 are based on reputable learning science. Often a quiz will introduce a new concept or ask the user to stretch what they just learned to cover a new situation. There is good evidence that this sort of “pre-assessment,” followed quickly by lessons teaching the correct answer, improves retention in general. But as we noted above, the AI 101 quiz questions consistently have one “obviously correct” answer that maps to the course’s framing, limiting the potential to challenge the user’s understanding. Additionally, we found minimal tailoring of text-message responses to the user’s quiz answers, despite the affordances of the interactive platform.
If one user selects what is considered a right answer while another selects a wrong one (we tested this), the course responds with similar if not identical information. Better quizzes in AI 201 could perhaps be assessed by an LLM, with adaptive responses that meet users where they are and stretch their understanding once they have acquired a solid base.

The daily challenges in AI 101 (Quick Draw, Udio music generation, fridge photo recipes) are well designed to get people past the intimidation barrier. They’re low-stakes, fun, and demonstrate AI capabilities concretely. But in AI 201 they could be leveraged more effectively to show people how AI can actually be useful in their work and daily lives, and can (as AI 101 promises) “save them 5 hours per week.”

Who created the course, and how?

The DOL’s press release announcing the course points to a collaboration with a private partner called Arist. Arist’s website at the time of writing states that “Arist is the #1 enablement AI. Arist’s agents orchestrate creation, delivery, and analytics, end-to-end.” While the DOL announcement gives little detail as to the nature of the collaboration, if the company co-developed actual course content using generative AI, this fact should be disclosed. One of us ran selected course content through Pangram, a tool that purports to detect AI-generated content, and the results suggested the content was 100% AI-generated. Without putting too much stock in that result, we began to suspect that some of the faults in the course could be explained this way. The simplistic framing of how AI generates results (patterns/predictions, instructions/results) could come from AI: since LLMs are trained on older explanations of how LLMs work, they may reach for framings that are not up to date. Also, if each module and quiz was generated separately, that could explain abrupt changes in terminology and the contradictions we identified regarding the sharing of private information.
The use of AI for content creation isn’t a problem per se, but the failure to disclose it was a missed opportunity for a teachable moment on the utility and risks of generative content. Moreover, the contradictions regarding security and privacy, which we discussed earlier, should have been caught by human oversight. Going forward, transparency about how commercial partners are involved can foster wider adoption of and trust in course materials and DOL initiatives.

The final lesson of the course refers users to an Arist-sponsored AI summit featuring Tony Robbins and Dean Graziosi. While the summit appeared to be free, it raises the question of what other paid AI-enablement sessions or products these well-known coaches might offer. Graziosi has drawn attention for his role in other problematic training programs. Users deserve to know who benefits from pursuing the recommendations made by a Federal agency.

Conclusion

Make America AI Ready offers significant insight into the Federal government’s priorities in pursuing widespread AI literacy across the United States workforce. Although we have suggested several areas for development, the course content and the manner in which it was released are a useful start toward achieving this aim.