by Nick Givas
The government’s campaign to fight “misinformation” has expanded to adapting military-grade artificial intelligence once used to silence the Islamic State (ISIS) so that it can quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.
The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.
DARPA said those tools were used “to help identify misinformation or deception campaigns and counter them with truthful information,” beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.
The initial idea was to track dissidents who were interested in toppling U.S.-friendly regimes or to follow any potentially radical threats by examining political posts on Big Tech platforms.
DARPA set four specific goals for the program:
- “Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
- Recognize persuasion campaign structures and influence operations across social media sites and communities.
- Identify participants and intent, and measure effects of persuasion campaigns.
- Counter messaging of detected adversary influence operations.”
Mike Benz, executive director of the Foundation for Freedom Online, has compiled a report detailing how this technology is being developed to manipulate the speech of Americans via the NSF and other organizations.
“One of the most disturbing aspects of the Convergence Accelerator Track F domestic censorship projects is how similar they are to military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad,” reads the report.
“DARPA’s been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home,” Benz told Just the News. “They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015.”
The new version of this technology, he added, is openly targeting two groups: those wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results.
“The terrifying thing is, as all of this played out, it was redirected inward during 2016 — domestic populism was treated as a foreign national security threat,” Benz said.
“What you’ve seen is a grafting on of these concepts of mis- and disinformation that were escalated to such high intensity levels in the news over the past several years being converted into a tangible, formal government program to fund and accelerate the science of censorship,” he said.
“You had this project at the National Science Foundation called the Convergence Accelerator,” Benz recounted, “which was created by the Trump administration to tackle grand challenges like quantum technology. When the Biden administration came to power, they basically took this infrastructure for multidisciplinary science work to converge on a common science problem and took the problem of what people say on social media as being on the level of, say, quantum technology.
“And so they created a new track called the track F program … and it’s for ‘trust and authenticity,’ but what that means is, and what it’s a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed. And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media.”
Benz went on to describe intricate flows of taxpayer cash funding the far-flung, public-private censorship regime. The funds flow from the federal government to universities and NGOs via grant awards to develop censorship technology. The universities or nonprofits then share those tools with news media fact-checkers, who in turn assist private sector tech platforms and tool developers that continue to refine the tools’ capabilities to censor online content.
“This is really an embodiment of the whole of society censorship framework that departments like DHS talked about as being their utopian vision for censorship only a few years ago,” Benz said. “We see it now truly in fruition.”
Members of the media, along with fact-checkers, also serve as arbiters of what is acceptable to post and what isn’t, by selectively flagging content for those social media sites and issuing complaints against specific narratives.
There is a push, said Benz during an appearance on “Just The News No Noise” this week, to fold the media into branches of the federal government, dissolving the Fourth Estate in favor of an Orwellian and incestuous partnership that destroys the independence of the press.
The advent of COVID led to “normalizing censorship in the name of public health,” Benz recounted, “and then in the run to the 2020 election, all manner of political censorship was shoehorned in as being okay to be targetable using AI because of issues around mail-in ballots and early voting drop boxes and issues around January 6th.
“What’s happened now is the government says, ‘Okay, we’ve established this normative foothold in it being okay to [censor political speech], now we’re going to supercharge you guys with all sorts of DARPA military grade censorship, weaponry, so that you can now take what you’ve achieved in the censorship space and scale it to the level of a U.S. counterinsurgency operation.'”
One academic institution involved in this tangled web is the University of Wisconsin, which received a $5 million grant in 2022 “for researchers to further develop” its Course Correct program, “a precision tool providing journalists with guidance against misinformation,” according to a press release from the university’s School of Journalism and Mass Communication.
WiseDex, a private company receiving grants from the Convergence Accelerator Track F, openly acknowledges its mission: building AI tools to enable content moderators at social media sites to more easily regulate speech.
In a promotional video for the company, WiseDex explains how the federal government is subsidizing these efforts to provide Big Tech platforms with “fast, comprehensive and consistent” censorship solutions.
“WiseDex helps by translating abstract policy guidelines into specific claims that are actionable,” says a narrator, “for example, the misleading claim that the COVID-19 vaccine suppresses a person’s immune response. Each claim includes keywords associated with the claim in multiple languages … The trust and safety team at a platform can use those keywords to automatically flag matching posts for human review. WiseDex harnesses the wisdom of crowds as well as AI techniques to select keywords for each claim and provide other information in the claim profile.”
WiseDex, in effect, compiles massive databases of banned keywords and empirical claims that it then sells to platforms like Twitter and Facebook. Such banned-claims databases are then integrated “into censorship algorithms, so that ‘harmful misinformation stops reaching big audiences,’” according to Benz’s report.
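WiseDex’s internal code is not public, but the keyword-flagging workflow its promotional video describes (per-claim keyword lists matched against post text, with hits queued for human review) can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration in Python; the claim ID, the keywords, and names like CLAIM_KEYWORDS and flag_posts_for_review are assumptions made for demonstration, not WiseDex’s actual data or API.

```python
# Hypothetical claim profile: maps a claim ID to keywords associated with it.
# Illustrative only; not WiseDex's real data model.
CLAIM_KEYWORDS = {
    "covid-vaccine-immune-suppression": [
        "vaccine suppresses immune",
        "suppresses a person's immune response",
    ],
}

def flag_posts_for_review(posts, claim_keywords=CLAIM_KEYWORDS):
    """Return (post_id, claim) records for posts whose text matches any
    keyword tied to a tracked claim, queuing them for human review."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        for claim_id, keywords in claim_keywords.items():
            if any(kw in text for kw in keywords):
                flagged.append({"post_id": post["id"], "claim": claim_id})
                break  # one match is enough to queue the post for review
    return flagged

# Example: a post matching a tracked claim gets queued; others pass through.
posts = [
    {"id": 1, "text": "I read that the vaccine suppresses immune response."},
    {"id": 2, "text": "Beautiful weather today."},
]
print(flag_posts_for_review(posts))
# -> [{'post_id': 1, 'claim': 'covid-vaccine-immune-suppression'}]
```

In a production system the simple substring match would presumably be replaced by multilingual keyword indexing and machine-learning classifiers, but the flag-then-review pipeline is the core of the approach the video describes.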
Just the News reached out to the University of Wisconsin and WiseDex for comment, but neither had responded by press time.
The NSF is acting, in one sense, as a kind of cutout for the military, Benz explained, allowing the defense establishment to indirectly stifle domestic critics of Pentagon spending without leaving fingerprints. “Why are they targeting right-wing populists?” he asked. “Because they’re the only ones challenging budgets for [defense agencies].”
He added: “These agencies know they’re not supposed to be doing this. They’re not normally this sloppy. But they won’t ever say the words ‘remove content.’”
The NSF, with an annual budget of around $10 billion, asked Congress for an 18.7% increase in appropriations in its latest budget request.
In a statement to Just the News, DARPA said:
“That program ended in March 2017 and was successful in developing a new science of social media analysis to reduce adversaries’ ability to manipulate local populations outside the U.S.
“DARPA’s role is to establish and advance science, technology, research, and development. In doing so we employ multiple measures to safeguard against the collection of personally identifiable information, in addition to following stringent guidelines for research dealing with human subjects. Given the significance of the threat posed by adversarial activities on social media platforms, we are working to make many of the technologies in development open and available to researchers in this space.”
DARPA then followed up with an additional message saying: “As a point of clarification, our response relates only to your questions about the now-complete SMISC program. We are not aware of the NSF research you referenced. If you haven’t already, please contact NSF for any questions related to its research.”
Mike Pozmantier and Douglas Maughan, who serve at NSF as Convergence Accelerator program director and office head, respectively, did not respond to requests for comment.
 – – –
Nick Givas is a reporter at Just the News.