AI resolution aims to help build ‘guardrails’

The AFT’s executive council is calling for social media, generative AI and machine learning models to be developed and employed ethically, with developers held accountable for any real-world harm. In adopting a resolution last week, council members agreed to take first steps toward learning about emerging best practices in AI law and regulation, and to work directly with industry leaders on safeguards.


“Generative AI―ChatGPT, which is the first kind of on-the-shelf version of it―is a complete game changer,” AFT President Randi Weingarten told council members meeting June 1 in Washington, D.C. They agreed that rapid advances in artificial intelligence, such as ChatGPT and other generative programs in writing, art and video, along with social media and related technologies, are creating both awe and apprehension.

They cited warnings from hundreds of scientists and experts about the risk of human extinction from AI, and they acknowledged the technology’s growing impact on schools, public services and other workplaces.

“The initial conversation I’m sure many of you have had is that a student would go to ChatGPT,” Weingarten said. “You gave somebody an assignment about the War of 1812, and they would come back with a GPT-written essay that they would submit. So that was the original concern, about plagiarism.”

A larger danger, she added, is that many tech companies have prioritized profits over people, allowing misinformation, disinformation, political instability and extreme bullying on their platforms.

What’s more, there now exist virtually no checks on the power of generative AI models like ChatGPT, which can draw on the entire internet. These technologies often overlook the fair use of intellectual property and ignore privacy rights. And, when the requested information doesn’t exist, ChatGPT simply fabricates it—a phenomenon tech developers call “hallucinating.” So far, the creators of AI models cannot fully explain why their technologies generate false data, nor can they offer transparency into how the models make decisions.

Other ominous considerations for generative AI, as with technology generally, are its well-documented racial and cultural biases and the likelihood that the cost of purchasing AI software will restrict access for marginalized people.

The new AFT resolution calls for advanced technologies to be developed and employed ethically. It urges governments to implement strict regulations protecting privacy, security and well-being. And it calls for social media and AI technologies that adhere to principles of equity, fair access and social accountability.

Matters of concern

The pandemic made clear that hands-on nursing, teaching and supporting roles in healthcare and education are essential and can’t be performed well remotely, Weingarten told the council. But she noted that the threat of AI is already upon us—it is animating the current strike by the Writers Guild of America, whose members are asking studios for contract language ensuring that they won’t be replaced by AI, that their scripts won’t be used to train AI, and that they won’t be hired simply to liven up robotic language.

The threat is real. Not only do certain tech developers and public health experts see AI as an existential threat to humanity, but in the nearer term, AFT leaders in higher education are concerned that fiscal pressures may push public colleges and universities toward even more online learning. And employees in federal, state and local public service fear their jobs may be increasingly outsourced to chatbots and other robots.

In education, the immediate challenge posed by ChatGPT is students trying to pass off AI-generated work as their own. School districts that try to ban the technology will fail, Weingarten warned, because hundreds of versions of generative AI exist and their output is becoming harder to detect.

“What you see in this statement is the absence of us saying or pretending that it will go away. It will not go away,” she said, adding that generative AI should be used as a new tool. “We have to figure out how to safeguard it, how to regulate it, what rules of the road need to happen nationally, legally. And frankly, Europe is well ahead of us in terms of doing this work.”

Weingarten said AFT leaders will tackle AI issues together with the AFL-CIO’s new Technology Institute and with the International Society for Technology in Education. She is pulling together a small team of vice presidents and AFT staff who will collaborate with legislators and regulators to begin erecting “guardrails” that can help keep generative AI from running off the road. For instance, U.S. Rep. Ritchie Torres (D-N.Y.) plans to introduce a bill this week that would require any content produced by generative AI to include a disclaimer noting the content’s source.

By July, the AFT team plans to have studied protections enacted by the European Union, which recently imposed a $1.3 billion fine against Meta, the parent company of Facebook and Instagram.

Based on last week’s new resolution and the formative AI resolution that preceded it, AFT leaders hope to devise a strong response.

The AFT’s executive council also adopted a resolution last week in support of healthcare workers and patients affected by abortion bans. Both resolutions are available, along with all AFT resolutions, on the AFT website.

[Annette Licitra / Photo illustration: Userba011d64_201 / iStock / Getty Images Plus]