by Ben Weingarten

 

This summer the Supreme Court will rule on a case involving what a district court called perhaps “the most massive attack against free speech” ever inflicted on the American people. In Murthy v. Missouri, plaintiffs ranging from the attorneys general of Missouri and Louisiana to epidemiologists from Harvard and Stanford allege that the federal government violated the First Amendment by working with outside groups and social media platforms to surveil, flag, and quash dissenting speech – characterizing it as mis-, dis-, and mal-information – on issues ranging from COVID-19 to election integrity.

The case has helped shine a light on a sprawling network of government agencies and connected NGOs that critics describe as a censorship industrial complex. That the U.S. government might aggressively clamp down on protected speech, certainly at the scale of millions of social media posts, is a relatively recent development. Reporting by RCI and other outlets – including Racket News’ new “Censorship Files” series, and continuing installments of the “Twitter Files” series to which it, Public, and others have contributed – and congressional probes continue to reveal the substantial breadth and depth of contemporary efforts to quell speech that authorities deem dangerous. But the roots of what some have dubbed the censorship industrial complex stretch back decades, born of an alliance between government, business, and academia that Democratic Sen. J. William Fulbright – building on President Eisenhower’s formulation – termed the “military-industrial-academic complex” in a 1967 speech.

RCI reviewed public records and court documents and interviewed experts to trace the origins and evolution of the government’s allegedly unconstitutional censorship efforts. It is a rich history that includes the battles to defeat America’s adversaries in World War II and the Cold War; the development of Silicon Valley; the post-9/11 War on Terror; the Obama administration’s transition to targeting domestic violent extremism broadly; and the rise of Donald Trump.

If there is one ever-present player in this saga, it is the storied institution of Stanford University. Its idyllic campus has served as the setting over the last 70-plus years for a pivotal public-private partnership linking academia, business, and the national security apparatus. Stanford’s central place, particularly in developing technologies to thwart the Soviet Union during the Cold War, would persist and evolve through the decades, leading to the creation of an entity called the Stanford Internet Observatory that would serve as the chief cutout – in critics’ eyes – for government-driven censorship in defense of “democracy” during the 2020 election and beyond.

Stanford’s Rise to Military-Industrial-Academic Complex Powerhouse

Although it bears the name of Leland Stanford, the railroad magnate who founded the school in 1885, the powerhouse university we know today represents the vision of another man: Frederick Terman.

The son of a Stanford psychology professor, Terman began his tenure at the campus where he was reared teaching electrical engineering during the 1920s and 1930s. He also harbored ambitions to turn the university and its surrounding area into a major high-tech hub to rival that of MIT on the East Coast.

Like his MIT colleagues, Terman was also deeply connected to the government’s budding national security apparatus. During World War II he was tabbed to head Harvard’s Radio Research Laboratory, established by the U.S. Office of Scientific Research and Development to develop countermeasures against enemy radar. The lab’s work is credited with saving an estimated 800 Allied bomber aircraft.

Returning to Stanford with the insights and contacts he had developed during the war, Terman took over as the dean of the engineering school in 1946 determined to implement an ambitious plan: to use government funding to erect “steeples of excellence” in critical disciplines that would continually attract new investments in a virtuous cycle that would raise Stanford to preeminence among research institutions.

Terman would win Pentagon contracts to help fund Stanford’s Electronics Research Laboratory and the Applied Electronics Laboratory, which included work on classified military programs, bringing Stanford firmly into the military-industrial-academic complex fold. Additional labs – some engaged in basic or theoretical research, and others applied research – followed, deepening the school’s ties to the national security state during the Cold War.

While reportedly advising every major branch of the military, Terman cultivated ties with private industry. He encouraged graduates to start firms in nearby communities that would come to be known as Silicon Valley, and urged professors to consult.

In 1951, Terman helped establish the Stanford Industrial Park, a high-tech cooperative on university land that would attract electronics firms and defense contractors – the first such university-owned industrial park in the world. Its tenants would include Hewlett-Packard, GE, Eastman Kodak, and a host of other notables, later including the likes of Facebook and Tesla. Lockheed would relocate its Missile Systems Division to the area in 1956 and go on to serve as Silicon Valley’s largest industrial employer during the Cold War.

Under Terman’s leadership, first as engineering school dean and then as provost, Stanford and the firms it helped incubate and attract generated advances in everything from microwave electronics and electronic warfare to missiles, satellites, semiconductors, and computers – meeting the demands of military and civilian consumers alike.

Stuart Leslie, author of “The Cold War and American Science: The Military-Industrial Complex at MIT and Stanford,” wrote that “[b]y nearly every measure” Terman achieved his goal of challenging “MIT for leadership” in the sciences. The relationship Terman fostered between the feds and Silicon Valley companies would be responsible for producing “all of the United States Navy’s intercontinental ballistic missiles, the bulk of its reconnaissance satellites and tracking systems, and a wide range of microelectronics that became integral components of high-tech weapons and weapons systems” during the Cold War, according to one study.

Leslie Berlin, formerly a historian of the Silicon Valley Archives at Stanford University, would write that “All of modern high tech has the US Department of Defense to thank at its core, because this is where the money came from” underwriting research and development.

One Stanford institution to which the money flowed with an indirect link to current controversies regarding social media censorship was the Stanford Research Institute (SRI). Incorporated on campus as a nonprofit in 1946, it would pursue lucrative contracts for often-classified military R&D projects. By 1969, SRI ranked third among think tanks in total value of defense contracts garnered.

Anti-war activists helped force Stanford to divest from the outfit in 1970 – though it would continue to work with government on an array of initiatives. Among them was one building on a Pentagon-backed project to network computers, known as ARPANET. In 1977, an Institute van would transmit data in what is regarded as the first internet connection.

Stanford would open an Office of Technology Licensing in 1970 to manage the university’s growing IP portfolio. The office would execute thousands of licenses covering many thousands more inventions – sometimes in tandem with the security state. For example, Google was built in part on National Science Foundation-supported research; its development has also been tied to work done under a joint NSA and CIA grant.

Terrorism Rejuvenates and Transforms the Military-Industrial-Academic Complex

The 9/11 terror attacks in 2001 would reinvigorate and fundamentally transform a military-industrial-academic complex that had demobilized to an extent following the Cold War, during which it had been largely foreign-facing. It would come to see not only foreign clandestine communications but public conversations between Americans promoting disfavored viewpoints as national security concerns.

To combat jihadists, Washington demanded sophisticated new surveillance tools and weapons. When combined with the explosion in communications technology, and the creation of massive new reams of digital data that could be collected and analyzed, Big Tech would prove a natural supplier.

The advent of social media – including Facebook (2004), YouTube (2005), and Twitter (2006) – would significantly impact these efforts.

To the public, social media platforms comprised a digital public square that empowered citizens as journalists and enabled the free flow of ideas and information.

But governments and non-state actors, including terrorist groups, realized they could harness the power of such platforms, and use them for intelligence gathering, waging information warfare, and targeting foes.

Initially U.S. authorities focused almost exclusively on foreign jihadist organizations’ exploitation of social media. That began to change when the Obama administration created a series of policies and associated entities – most of which worked closely with Big Tech and academia – targeting a broader array of adversaries.

In 2011, the Obama administration deployed its “Empowering Local Partners to Prevent Violent Extremism in the United States” strategy. While identifying Al-Qaeda as “our current priority,” the policy broadened the national security apparatus’s focus to “all types of extremism that leads to violence, regardless of who inspires it.”

That same year, the State Department stood up an entity aimed at “supporting agencies in Government-wide public communications activities targeted against violent extremism and terrorist organizations” that in 2016 would morph into the Global Engagement Center (GEC). It would serve as a broader “interagency entity” that would not only partner to build “a global network of positive messengers against violent extremism” including NGOs, but leverage data analytics “from both the public and private sectors to better understand radicalization dynamics online.”

Also that year, the Defense Department announced its Social Media in Strategic Communication program, launched to “track ideas and concepts to analyze patterns and cultural narratives” as part of an effort “to develop tools to help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries’ ability to manipulate events.” Millions of dollars flowed to both Big Tech and academic hubs in connection with the project.

In conjunction with these programs, the Obama administration also consulted with outside advisors to study how jihadist groups engaged in online disinformation campaigns. Included among the advisors was Renée DiResta, future technical research manager of the Stanford Internet Observatory – which would later play a key role in the government’s effort to identify and quell speech disfavored by the government.

With terrorist organizations increasingly exploiting social media platforms to spread propaganda and pursue other malign ends, Silicon Valley came to play a key role in U.S. counterterrorism efforts. As Kara Frederick wrote in a 2019 report for the Center for a New American Security, Facebook, Twitter, and other social media companies:

… hired talent to fill gaps in their counterterrorism expertise, created positions to coordinate and oversee global counterterrorism policy, convened relevant players in internal forums, and instituted a combination of technical measures and good old-fashioned analysis to root out offending users and content. Major and minor tech companies coordinated with each other and with law enforcement to share threat information, drafted policies around preventing terrorist abuse of their platforms, updated their community guidelines, and even supported counter-speech initiatives to offer alternative messaging to terrorist propaganda.

Frederick, now at the Heritage Foundation, would know. A counterterrorism analyst at the Department of Defense from 2010 to 2016, she departed for Facebook, where she helped create and lead its Global Security Counterterrorism Analysis Program.

Facebook’s chief security officer during Frederick’s tenure, Alex Stamos – future founder of the Stanford Internet Observatory – would boast that “there are several terrorist attacks that you’ve never heard of because they didn’t happen because we caught them … some local law enforcement agency … took credit for it, but it was actually our team that found it and turned it over to them with a bow on it.”

“Once clearly public sector responsibilities,” Stamos would add, “are now private sector responsibilities.”

Trump’s Election Catalyzes the Creation of the Censorship Industrial Complex

With government broadening its focus to domestic violent extremism and its nexus to social media, and a revolving door opening between the national security apparatus and the platforms, Donald Trump’s election would prove a catalyzing event in the creation of what critics would describe as the censorship industrial complex.

His victory, which followed Brexit, another populist uprising that stunned Western elites, sent shockwaves from Washington, D.C., to Silicon Valley.

A narrative quickly arose that social media was to blame for Trump’s unexpected win. It held that dark forces, especially Russia, had manipulated voters through dishonest posts, and that the platforms enabled Trump’s victory by allowing supporters to advance corrosive conspiracy theories.

The national security apparatus sprang to action.

In January 2017, outgoing Obama DHS Secretary Jeh Johnson made protecting election infrastructure part of his agency’s mandate. Subsequently:

  • DHS would develop a Countering Foreign Influence Task Force focusing on “election infrastructure disinformation.”
  • The State Department’s Global Engagement Center would broaden its interagency mandate to counter foreign influence operations.
  • The FBI would establish a Foreign Influence Task Force to “identify and counteract malign foreign influence operations targeting the United States,” with an explicit focus on voting and elections.

These key components of what would come to be known as the censorship industrial complex – one that would ultimately target the speech of Trump’s own supporters and the president himself – emerged at the very time he was fending off the Trump-Russia collusion conspiracy theory that gave rise to them.

Government concerns over foreign meddling in domestic politics would drive demand for putatively private sector actors, often with extensive government ties and funding, to engage in what the NGOs cast as research and analysis of such malign operations on social media.

In 2018, the Senate Select Intelligence Committee would solicit research, including from DiResta, on Russia’s social media meddling – research that would bolster something of a pressure campaign against social media companies to get them to quit dithering on content moderation.

The committee also commissioned Graphika, a social media analytics firm founded in 2013, to co-author a report on Russian social media meddling. Graphika lists DARPA and the Department of Defense’s Minerva Initiative, which funds “basic social science research,” on a company website detailing its clients and research partners. It would serve as one of the four partners that would make up the Stanford Internet Observatory-led Election Integrity Partnership – a key cog in government-driven speech policing during and after the 2020 election.

Another entity that would join the Stanford-led quartet is the Atlantic Council’s Digital Forensic Research Lab, established in 2016. Funded in part by the Departments of State – including through the Global Engagement Center – and Energy, the think tank counts former CIA chiefs and Defense secretaries among its directors. The lab’s senior director is Graham Brookie, a former top aide to President Obama on cybersecurity, counterterrorism, intelligence, and homeland security issues. In 2018, Facebook announced an election partnership with the lab wherein the two parties would work on “emerging threats and disinformation campaigns from around the world.”

The third of four entities later to join the Election Integrity Partnership was the University of Washington’s Center for an Informed Public, formed in 2019. Stanford grad and visiting professor Kate Starbird co-founded the Center. The National Science Foundation and the Office of Naval Research have provided funding for Dr. Starbird’s social media work.

That same year, the Stanford Internet Observatory emerged. Founded by Alex Stamos, who had led substantial research on Russia’s social media operations while Chief Security Officer at Facebook and routinely interfaced with national security agencies throughout his cybersecurity career, the Observatory would serve as a “cross-disciplinary initiative comprised of research, teaching and policy engagement addressing the abuse of today’s information technologies, with a particular focus on social media … includ[ing] the spread of disinformation, cybersecurity breaches, and terrorist propaganda.”

The Observatory is a program of Stanford’s Cyber Policy Center, which counts former Obama National Security Council official and U.S. ambassador to Russia Michael McFaul among the notables on its faculty list with backgrounds in or ties to the security state.

Stamos stood up the Observatory with a $5 million gift from Craig Newmark Philanthropies – which also gave $1 million to Starbird’s work. The Craigslist founder’s charitable vehicle contributed some $170 million to “journalism, countering harassment against journalists, cybersecurity and election integrity,” between 2016 and 2020, areas he argued constituted the “battle spaces” of information warfare – information warfare waged implicitly against President Trump and his supporters.

The National Science Foundation also provided large infusions of money to the sprawling network of academic entities, for-profit firms, and think tanks that would emerge in the “counter-disinformation space.”

This network produced a mass of research and analysis redefining and expanding the perceived threat of free and open social media. It argued America was plagued by a pandemic of “misinformation,” “disinformation,” and “malinformation,” with a nexus to domestic violent extremism that could be created and disseminated by almost anyone – thereby making everyone a potential target for surveillance and censorship.

Ideas authorities found troubling would come to be treated as tantamount to national security threats to be neutralized – as the future Biden administration would codify in the first-of-its-kind National Strategy for Countering Domestic Terrorism.

DiResta described this paradigm shift in a 2018 article for Just Security – a publication incidentally also funded by Newmark.

“Disinformation, misinformation, and social media hoaxes have evolved from a nuisance into high-stakes information war,” DiResta wrote.

She continued:

…Traditional analysis of propaganda and disinformation has focused fairly narrowly on understanding the perpetrators and trying to fact-check the narratives (fight narratives with counter-narratives, fight speech with more speech). Today’s information operations, however, are … computational. They’re driven by algorithms and are conducted with unprecedented scale and efficiency. … It’s time to change our way of thinking about propaganda and disinformation: it’s not a truth-in-narrative issue, it’s an adversarial attack in the information space. Info ops are a cybersecurity issue.

This re-definition of what arguably amounts to speech policing of social media as security policy could be seen a year later when NATO Secretary General Jens Stoltenberg urged that “NATO must remain prepared for both conventional and hybrid threats: From tanks to tweets.” (Emphasis RCI’s)

The Censorship Industrial Complex Mobilizes for the 2020 Election

In the run-up to the 2020 election, DHS’ Cybersecurity and Infrastructure Security Agency (CISA), which took as its mandate protecting election infrastructure, would expand its focus to include combatting misinformation and disinformation perceived as threatening the security of elections – regardless of its source. This would ultimately encompass the protected political speech of Americans, including speculation and even satire to the extent it called into question or undermined state-approved narratives about an unprecedented mass mail-in election.

Social media companies, chastened after coming under withering political and media attack for their content moderation policies during the 2016 election, would recruit dozens of ex-security state officials to fill the “Trust and Safety” teams that police speech and combat this purported threat.

Frederick told RealClearInvestigations that Silicon Valley leaders believed the teams’ past focus on Islamic terror, which receded under Trump, reflected a bias, requiring platforms to “reorient toward domestic extremism” – the new target of the political establishment.

Combining the platforms’ political leanings with the tools they had developed to take on jihadists, in Frederick’s words, would create a “powder keg” threatening to obliterate Americans’ speech.

Still, the Constitution stood in the way to the extent the government wanted to police the platforms’ speech. In the run-up to the 2020 election, both federal authorities and like-minded NGOs recognized a “gap”: No federal agency had “a focus on, or authority regarding, [identifying and targeting for suppression] election misinformation originating from domestic sources,” as the Stanford Internet Observatory-led Election Integrity Partnership would put it. DiResta acknowledged any such project faced “very real First Amendment questions.”

In response, the government helped create a workaround via that very Election Integrity Partnership – an enterprise driven, advised, and coordinated by the government but run by NGOs to surreptitiously surveil and seek to censor speech that did not comport with government-favored narratives on election administration and outcomes.

One hundred days before the 2020 election, the Stanford Internet Observatory, alongside Graphika, the Atlantic Council’s Digital Forensic Research Lab, and the University of Washington’s Center for an Informed Public, launched the EIP as a “model for whole-of-society collaboration” aimed at “defending the 2020 election against voting-related mis- and disinformation.”

As RCI previously reported, the project had two main objectives:

First, EIP lobbied social media companies, with some success, to adopt more stringent moderation policies around “content intended to suppress voting, reduce participation, confuse voters as to election processes, or delegitimize election results without evidence. …”

Second, EIP surveilled hundreds of millions of social media posts for content that might violate the platforms’ moderation policies. In addition to identifying this content internally, EIP also collected content forwarded to it by external “stakeholders,” including government offices and civil society groups. EIP then flagged this mass of content to the platforms for potential suppression.

As many as 120 analysts, records show, created tickets identifying social media content they deemed objectionable. They forwarded many tickets to officials at platforms including Google, Twitter, and Facebook, which “labeled, removed, or soft blocked” thousands of unique URLs – content shared millions of times.

An RCI review of nearly 400 of those tickets produced to the House Homeland Security Committee found that government agencies – including entities within the FBI, DHS (CISA), and State Department (GEC) – involved themselves in nearly a quarter of the censorship tickets. Those tickets almost uniformly covered domestic speech, and speech from the political right; in dozens of instances, the project made “recommendations” to social media companies to take action.

The tickets RCI reviewed illustrated the project’s efforts to push social media platforms to silence President Trump and other elected officials.

One EIP analyst would say of the effort that it “was probably the closest we’ve come to actually preempting misinformation before it goes viral.”

In response to RCI’s inquiries in connection with this story, CISA Executive Director Brandon Wales shared a statement reading in part: “CISA does not and has never censored speech or facilitated censorship. Such allegations are riddled with factual inaccuracies.”

Given “concerns from election officials of all parties regarding foreign influence operations and disinformation that may impact the security of election infrastructure,” Wales said, “CISA mitigates the risk of disinformation by sharing information on election security with the public and by amplifying the trusted voices of election officials across the nation” – work he indicated is conducted while protecting Americans’ liberties.

Dr. Starbird told RCI that:

Falsehoods about elections – whether accidental rumors about when and how to vote or intentional disinformation campaigns meant to sow distrust in election results – are issues that cut to the core of our democracy. Identifying and communicating about these issues isn’t partisan and, despite an ongoing campaign to label this work as such, isn’t ‘censorship.’

The Censorship Industrial Complex Persists Despite Scrutiny

All had come full circle. Stanford had once again connected the security state to Silicon Valley for a project involving both basic and applied research aimed at perceived foes – studying how narratives emerged, and then seeking to get offending ones purged.

That project would again garner new funding from the security state in the form of a $3 million grant from the National Science Foundation split between the Stanford Internet Observatory and the University of Washington’s Center for an Informed Public for “rapid-response research to mitigate online disinformation.” Their partners in the EIP would receive millions more from the federal government under the Biden administration.

The relationship between DHS’ Cybersecurity and Infrastructure Security Agency and EIP would only grow. As RCI reported:

In the days following Nov. 3, 2020, with President Trump challenging the integrity of the election results, CISA rebuked him in a statement, calling the election “the most secure in American history.” The president would go on to fire CISA’s director, Christopher Krebs, by tweet.

Almost immediately thereafter, Krebs and Stamos would form a consultancy, the Krebs Stamos Group. In March 2021, Krebs would participate in a “fireside chat” when EIP launched its 2020 report.

CISA’s top 2020 election official, Matt Masterson, joined SIO as a fellow after leaving CISA in January 2021. Krebs’ successor at CISA, Director Jen Easterly, would appoint Stamos to the sub-agency’s Cybersecurity Advisory Committee, established in 2021, for a term set to expire this month.

Director Easterly would appoint Kate Starbird … to the committee. Starbird chaired the advisory committee’s since-abolished MDM (Mis-, Dis-, and Mal-Information) Subcommittee, focusing on information threats to infrastructure beyond elections.

SIO’s DiResta served as a subject matter expert for the now-defunct subcommittee. DHS scrapped the entity in the wake of the public furor over DHS’ now-shelved “Disinformation Governance Board.”

Starbird, her University of Washington colleagues, and a former student member of the Stanford Internet Observatory who had moved on to the Krebs Stamos Group would publish a report in June 2022 building on their EIP efforts, titled “Repeat Spreaders and Election Delegitimization: A Comprehensive Dataset of Misinformation Tweets from the 2020 U.S. Election.” Its publication coincided with, and seemed aimed at buttressing, the partisan House January 6 Select Committee’s second public hearing.

Documents obtained via FOIA from the University of Washington and recently published by Matt Taibbi’s Racket News and Substacker UndeadFOIA suggest the committee’s chief data scientist met with Starbird and DiResta in January of that year to discuss the report the EIP produced following the 2020 election and its underlying data – a report that linked mis-, dis-, and mal-information regarding the 2020 election to the Capitol riot.

In the interim, EIP would morph into the Virality Project, which would be used to target dissent from public health authorities during the COVID-19 pandemic – dissent those authorities argued could lead people to die, just as dissenting views on the 2020 election had, in their telling, spurred the Capitol riot.

Among those targeted by the government for silencing, and whom social media companies would censor, in part for his opposition to broad pandemic lockdowns, was Stanford’s own Dr. Jay Bhattacharya – a plaintiff in Murthy v. Missouri (Dr. Bhattacharya and Taibbi were recipients of RealClear’s first annual Samizdat Prize honoring those committed to truth and free speech). As he sees it, the Virality Project helped “launder” a “government … hit list for censorship,” which he finds “absolutely shocking” and at odds with Stanford’s past commitments to academic freedom and general “sort of countercultural opposition to government overreach.”

As chilling as these efforts were, worse may lie ahead, a House Homeland Security Committee aide told RCI:

EIP and VP were largely comprised of college interns running basic Google searches. Imagine a similar effort leveraging artificial intelligence to sweep up and censor ever greater swaths of our online conversations. We are at the beginning of the problem, not the end, which is why it is so vital to get right today because without action, tomorrow could be far worse.

It is unclear whether such action is forthcoming. Oral arguments in Murthy, heard this past March, suggested the justices may diverge from the lower courts. A federal district court found, and an appellate court concurred, that in coordinating and colluding with third parties and social media companies to suppress disfavored speech, government agencies had likely violated the First Amendment. Those courts barred such contact between agencies and social media companies during the pendency of the case – an injunction the nation’s highest court stayed over the objections of Justices Alito, Thomas, and Gorsuch.

At least one companion case, Hines v. Stamos, which targets the likes of the Stanford Internet Observatory and its Election Integrity Partnership and Virality Project as alleged co-conspirators with the federal government in violating Americans’ speech rights, is pending.

GOP legislation to deter and/or defund the activities illustrated in these cases has languished in Congress, but oversight efforts have raised the cost for NGOs to continue partnering with the government.

When asked in June 2023 about the Stanford Internet Observatory’s future plans, Stamos told the House Judiciary Committee, which has been probing alleged public-private censorship efforts, that “Since this investigation has cost the university now approaching seven figures legal fees, it’s been pretty successful I think in discouraging us from making it worthwhile for us to do a study in 2024.”

Bhattacharya responded in an interview with RCI, “Why is Stanford putting so much of its institutional energy into [defending] this [the Observatory]?”

“It seems like they are putting their thumbs on the scale partly because they’re so closely connected with government entities.”

Months later, according to his LinkedIn profile, Stamos would depart from the Observatory, while remaining a part-time Stanford Adjunct and Lecturer in Computer Science.

On the eve of oral arguments in the Murthy case, Stanford University and its observatory castigated critics for promoting “false, inaccurate, misleading, and manufactured claims” regarding its “role in researching and analyzing social media content about U.S. elections and the COVID-19 vaccine.”

Stanford called on the Supreme Court to “affirm its right to share its research and views with the government and social media companies.”

It vowed the Internet Observatory would continue its work on “influence operations.”

Starbird has echoed Stanford. In response to a series of questions from Taibbi pertaining to the trove of FOIA’d documents Racket obtained, she said:

Our team has fielded dozens of public records requests, producing thousands of emails. Not one confirms the central claims of your thesis falsely alleging coordination with government and platforms to “censor” social media content. But, instead of acknowledging that fact, abuse continues of the Washington State public records law to smear and spread falsehoods based on willful misreadings of innocuous emails, ignorance about scientific research, and, in several instances, a lack of reading comprehension.

She too vowed: “At the Center for an Informed Public, our research into online rumoring about election procedures and our work to rapidly identify and communicate about harmful election rumors will continue in 2024.”

Stanford’s Internet Observatory and the University of Washington’s Center for an Informed Public will not be spearheading the Election Integrity Partnership for 2024 or future election cycles, however, per a link to the EIP’s website to which a Stanford spokesperson referred RCI as its sole response to our queries.

Some experts are doubtful alleged social media censorship is going away anytime soon. “I don’t know how to ‘put the genie back in the bottle,'” said Frederick.

“There’s a thing about intel analysts in general where you have a sense of superiority because you have access to things that the plebes don’t. But, you know, these people have taken their G-d complexes to the next level and turned it against their neighbor.”

Of the alleged speech police, she said “they’re drunk with power obviously and they think they know what’s best for us.”

Amb. Alberto Fernandez, vice president at MEMRI and a former leader of the precursor to the State Department’s GEC – an Observatory stakeholder that had itself funded adjacent efforts – told RCI “there needs to be transparency and preferably, a ‘firewall’ of some sort between the Feds and social media.”

In May, Senate Intelligence Committee Chairman Mark Warner (D-Va.) – who had himself submitted an amicus brief siding with the agencies in the case, contra Republican colleagues led by House Judiciary Chairman Jim Jordan – revealed that in the wake of the oral arguments in Murthy, federal agencies had resumed communications with social media companies.

Sen. Eric Schmitt (R-Mo.), who had originally brought the Murthy case as Missouri attorney general, replied: “It appears DHS, FBI and potentially other agencies are quietly ramping up their efforts to censor Constitutionally protected speech ahead of the 2024 election.”

– – –

Ben Weingarten is a writer for RealClearInvestigations. 
