The most immediate problem posed by AI, however, is not job loss but the potential for misuse. As the 2024 presidential election nears, experts fear that AI will be used to create misinformation. AI tools could be prompted to produce fake videos, images, and news articles that look just like the real thing. That content could then be shared to influence voters.
To help prevent that, seven major tech companies—including Amazon, Google, Meta, and OpenAI—have agreed to follow voluntary AI safety rules. Among them: labeling AI-generated content with a mark or stamp, called a watermark. That way, people will know where it came from.
U.S. lawmakers are discussing ways to regulate AI, including possibly creating a government agency to oversee the technology. But those ideas are still in the early stages. In the meantime, AI is rushing forward—and even its creators aren’t sure exactly what’s next.
“I think if this technology goes wrong, it can go quite wrong,” Sam Altman, head of OpenAI, told Congress over the summer. “We want to work with the government to prevent that from happening.”
—additional reporting by Emma Goldberg, Cade Metz, and Kevin Roose of The New York Times