A Pennsylvania lawmaker wants all content generated by artificial intelligence (AI) to be labeled and is drafting legislation to that end. 

State Representative Chris Pielli (D-West Chester) insisted consumers should expect to know whether they are accessing human-created or electronically produced information. He said people will have a harder time fulfilling this expectation as AI becomes more advanced and commonly used. 

An inability to confirm the origin of text, images or other media, Pielli (pictured above) fears, could frustrate many Pennsylvanians’ efforts to avoid scams and misinformation. 

“This disclosure will give people who are reading or viewing this content the information they need to make informed decisions and not be misled,” the representative wrote in an initial description of his emerging bill. “We cannot allow a future where people are unaware if they are interacting with a computer program or another person.”

Private companies are already labeling some AI-generated content. Most prominently, technology giant Google announced earlier this month that it will alert users when images are created with its AI models. 

Problems with certain AI-aided creations pervade the news. Earlier this year, a California attorney using the AI chatbot ChatGPT for research turned up a fabricated news report claiming that famed legal scholar Jonathan Turley had sexually harassed a student. And New York lawyer Steven A. Schwartz currently faces possible court-imposed sanctions for submitting a legal brief that cited nonexistent cases he had used ChatGPT to find. 

Spence Purnell, a technology policy analyst at the free-market Reason Foundation, said, however, that states should not force entities to label their computer-generated output. A major difficulty with doing so, he said, would be deciding the threshold for deeming an item to be AI-generated.

“Consumers don’t need the state’s involvement,” he told The Pennsylvania Daily Star via email. “Legally, there is no right to know if something is generated by AI and it is tough to see this law helping consumers.”

Many human-crafted images, videos or texts, Purnell explained, contain some electronically produced components. This has been the case for many years as creators have availed themselves of advanced editing software as well as more basic tools like spellcheck. Conceivably, he reasoned, laws requiring AI-content tagging could eventually cover nearly all media.

This, he expects, will create a perilous legal environment even for honest companies and creatives. 

“With this law, the legislature risks hindering the development of useful AI, hurting content creators who may use AI in ways the legislature doesn’t mean to prevent or label, and exposing creators and businesses to excessive liability without helping consumers,” Purnell wrote. “Forcing content creators or AI operators to change their technology or processes to accommodate the proposed law creates more problems than it solves.”

He added that consumers can already evaluate news and information sources by assessing those sources’ credibility. And a consumer concerned about the extent to which an item contains AI-generated components can use AI-detection software to check. 

In instances when electronic creation violates current laws, Purnell argued, public officials can focus on enforcing those existing statutes. 

“AI technology is in an early phase and Pennsylvania state lawmakers should resist the urge to impose regulations on something they aren’t sure about what it will do or how it will be used,” he wrote.

 – – – 

Bradley Vasoli is managing editor of The Pennsylvania Daily Star. Follow Brad on Twitter at @BVasoli. Email tips to [email protected].
Photo “Chris Pielli” by Rep. Chris Pielli.