AI images defame a California elementary school. Now the state is pushing for new protections – The Mercury News

Written by Khari Johnson, CalMatters
In December, fourth-graders at Delevan Drive Elementary School in Los Angeles were given a homework assignment: Write a book report about Pippi Longstocking, and then draw or use artificial intelligence to create a book cover.
When Jody Hughes’ daughter asked Adobe Express Education, the graphic design software provided by her teacher, to produce a picture of “a tall stocking red-headed girl with straight braids,” it produced nothing like the Swedish children’s book character she had accurately described. Instead, using newly added artificial intelligence features, it produced pornographic images of women in underwear and bikinis. Hughes quickly contacted other parents, who said they were able to reproduce similar results on their school-issued Chromebooks. Days later, the parent group Schools Beyond Screens told the LA school board that it opposed the continued use of Adobe’s software.
The incident raised questions not only about the LA school district’s use of a particular AI product but also about the guidance state administrators are giving schools across California on how to safely embrace the technology. A few weeks after the incident, the state’s Department of Education published a new set of guidelines, which had been in the works for several months with the help of a group of 50 teachers, administrators and experts. The guidance came in response to instructions from the Legislature, which passed two laws in 2024 telling the department, in effect, to get a handle on the spread of AI among students, teachers and administrators.
Critics wonder whether the guidelines would have helped avoid what parents call Pippigate; the episode, they say, shows that districts, schools and parents, who often lack the time or resources to vet software tools for harmful output, need more support from the state. They add that the guidelines are vague in places and don’t do enough to spell out guardrails for how teachers use AI in the classroom.
The gaps in the guidelines call into question whether the department can carry out elected officials’ instructions to protect students from a technology that, according to the guidance itself, can leave children isolated and with narrowed perspectives.
As AI spreads rapidly through society, managing the technology effectively has become an urgent issue. Although OpenAI’s ChatGPT popularized generative AI just three years ago, surveys show that the majority of teachers and students across the country now use the technology in some way.
While AI can help save teachers time, personalize learning, and support students who do not speak English or have disabilities, it can also grade their papers inaccurately and produce images that perpetuate stereotypes or sexualize women, especially women of color. The majority of California K-12 students are people of color. Since AI adoption took off, educators who spoke with CalMatters have felt torn between the need to prepare their students for a future where AI is everywhere and the fear that AI tools could enable cheating on tests and erode students’ thinking and reasoning skills.
“Educators have a small window to set norms before they become entrenched,” said LaShawn Chatmon, CEO of the National Equity Project, an Oakland-based group that helps teachers produce equitable outcomes. “Local education agencies that use this opportunity to shape learning and policy together with students and families can help determine the role AI plays in learning and in our lives.”
A district spokesperson told CalMatters that the images generated by the AI model did not meet district standards and “we are working with Adobe to address this issue.” Adobe’s VP of Education Charlie Miller said the company made changes to address the issue within 24 hours of hearing about the incident. Miller did not answer questions about how the tool was tested before release.
As a result of her child’s experience, Hughes thinks students should not be told to use text-to-image generators for homework. But she sees no effort to impose such restrictions in the state Department of Education’s guidance.
“These technology companies are marketing things to children that are not fully vetted,” she said. “I don’t know where to draw the line but elementary school is still too young because it can be really bad as we saw with the Grok stuff,” she added, referring to a recent exploit of the Grok AI system to remove clothing from photos of women and children without consent.
Problems with the AI guide
The guide provides a list of unacceptable uses of AI by students, such as cheating, and urges teachers to incorporate real-world situations and case studies into discussions to help students apply ethics in real-world situations. It also says students should be taught to “think critically and intelligently” about the “benefits and challenges” of AI tools.
Julie Flapan, director of the Computer Science Equity Project at UCLA’s Center X, said the Pippi Longstocking incident recalled a 2024 study that found Black and Latino youth are more likely to use generative AI than white youth. That data, coupled with historical disparities in access to computer science education, she said, means some parents and students will need help thinking more deeply about AI.
“We tend to think of technological advances as ways to level the playing field,” she said. “But the truth is we know they are increasing inequality.”
Flapan said it makes sense that the guidelines encourage critical thinking and evaluation of AI tools before implementation and urge education leaders to involve communities in decision-making. But, she added, the guidance doesn’t specify how to do that.
Charles Logan, a former teacher who now researches educational technology at Northwestern University, said the guidelines fall short by not giving teachers and parents clear guidance on how to opt out of using the technology. A Brookings Institution study released in January, based on interviews with students, teachers and administrators in 50 countries, concluded that the risks of AI in classrooms currently outweigh the benefits and “could undermine children’s basic development.”
Mark Johnson, head of government affairs at Code.org, praised the guidelines but said the state should provide more support for AI training for teachers and make AI and computer science coursework a graduation requirement. Johnson’s organization’s latest report found four states adopted such graduation requirements after issuing AI guidelines.
Katherine Goyette, who worked as a computer science consultant at the Department of Education until January, when asked about the Longstocking incident, referred to parts of the guidance that emphasize the importance of involving families, communities and school board members when testing AI tools. She also said that critical thinking is important in preventing such outcomes, pointing to a directive urging administrators to consider possible harms before adoption.
More guidance is on the way on how to implement the newly released guidance: the department’s AI working group will present policy recommendations based on the guidance in July.
The AI inevitability narrative
The latest version of the California Department of Education’s AI guidelines comes as local education agencies move away from the AI bans proposed after OpenAI’s 2022 release of ChatGPT. Instead, districts are looking to decide when and how students and teachers can use the technology. Those local decisions will be critical to how the technology is actually used in schools, as the state will not require school districts to adopt its guidelines.
Even the largest school districts in California can run into serious problems when deploying AI. In June 2024, the superintendent of Los Angeles Unified promised the world’s best AI tool for students, only to pull it from implementation weeks later. A week later, news broke that most board members of San Diego Unified, the state’s second-largest district, had approved a curriculum contract they didn’t know included an AI grading tool.
The move toward state and local AI guidance, rather than prohibition, reflects a broader sense among policymakers that the technology’s adoption is inevitable. In his October veto of a bill that would have banned the use of some chatbots by children, Gov. Gavin Newsom said AI is already shaping the world and that “We cannot prepare our youth for a future where AI is everywhere by preventing their use of these tools completely.”
Logan, who recently advised San Diego parents on how to resist and reject the use of AI in classrooms, pushes back against that idea. He says the California Department of Education’s guidance should address situations where parents may want to prevent their children from using AI at all.
“It’s striking that the guidance wants even preschoolers to become knowledgeable AI users and leaves no room for saying no or opting out,” he said by phone.
The statewide AI guidance joins a series of efforts to protect children from AI, including bills now before the Legislature that would temporarily ban chatbot-equipped toys and update student privacy protections for the age of AI. Common Sense Media and OpenAI are working to get an online child safety measure on the ballot in the November election.



