How the Media Gets It Wrong

The 8:1 Problem in Iran Studies

The Number

Maryam Sinaiee reported for the Financial Times from inside Iran during the Woman, Life, Freedom uprising — one of the few Western correspondents who stayed in Tehran as the crackdowns intensified, the internet was cut, and sources began to disappear. She filed dispatches while plainclothes officers patrolled the streets below her window and the sound of protest chants carried across the rooftops. She understood Iran not because she held the right theoretical framework, but because she lived in the country, spoke the language, and could tell the difference between what officials announced in press conferences and what was happening in the neighborhoods she walked through every day.

The professors at Western universities who trained the diplomats who shaped the policies that determined Maryam’s safety had a different kind of knowledge — institutionally validated, peer-reviewed, and produced within an ecosystem whose shape can be measured.

Eight to one. In the political science departments that produce the scholars who write the reports that brief the officials who shape the policy that determines what happens to ninety-three million Iranians — the ratio of Democrats to Republicans among faculty is eight to one. In History departments, seventeen to one. In Sociology, more than twenty to one.

These numbers come from the TRIP — Teaching, Research, and International Policy — survey conducted by William & Mary, and from voter registration studies by Mitchell Langbert and colleagues.[1] They describe the political composition of the professoriate that trains the analysts, journalists, and diplomats who constitute the Western “Iran expert” class.

The ratio does not mean eight out of nine Iran scholars are wrong. It means the professional ecosystem in which they operate — the hiring committees, the peer review boards, the editorial gates, the conference panels — is calibrated to a single political frequency. Arguments that resonate at that frequency travel freely. Arguments that do not face friction at every stage.


The Exception

Not every discipline is captured equally. Economics stands apart.

Faculty D:R Ratios by Discipline:

Discipline             D:R Ratio      Notes
Sociology              >20:1          Critical Theory dominant
Industrial Relations   ~20:1          Labor / Marxist-influenced
History                ~17:1          Revisionist / Post-colonial
Political Science      8:1 to 10:1    Institutionalist / Constructivist
Economics              3:1 to 4.5:1   Neoclassical / Rational Choice

Sources: Langbert (2016), Langbert, Quain & Klein (2016), TRIP Survey. At elite research universities, History and Political Science ratios exceed 10:1. Among female faculty, ratios approach 25:1.

Economics is still tilted — three to one is not parity. But its relative balance correlates with something important: quantitative methodology. Economists test hypotheses against data, model incentives, measure effects. Their framework resists the normative capture that distorts qualitative disciplines because numbers are harder to spin than narratives.

This matters for Iran analysis. The scholars who study sanctions efficacy — an economics question — produce work that is measurably more rigorous than those who study Iran’s “political culture” or “civil society” — questions rooted in qualitative disciplines where the seventeen-to-one ratio operates with full force. When you read a report concluding that sanctions “failed,” check whether the author is an economist modeling trade flows or a political scientist modeling the regime’s “reform potential.” The methodology predicts the conclusion more reliably than the evidence does.


The Collapse

The tilt is not stable. It is accelerating. Longitudinal data shows the percentage of faculty identifying as “moderate” has collapsed from approximately forty-six percent in 2014 to roughly fifteen percent in more recent datasets.[2] The center is hollowing out. What remains is a bimodal distribution heavily weighted toward the political left — with a small, increasingly marginalized conservative minority and a vanishing moderate core.

For Iran analysis, the collapse of the center means the disappearance of the scholars most likely to hold multiple frameworks simultaneously — to see the regime as both a product of legitimate grievance AND a brutal security state, to support diplomacy AND acknowledge that the engagement premise was wrong, to criticize American foreign policy AND call the Islamic Republic what it is. These scholars are not extinct. They are outnumbered seventeen to one in History departments, and the ratio is getting worse.


The Pipeline

The TRIP survey reveals something more consequential than voting patterns: ideology predicts theoretical paradigm. And paradigm determines conclusion.

Realists — who emphasize the anarchic international system, state power, and security competition — are the most conservative faction in international relations. But even Realists are centrist relative to the general population.[3] Liberals — who emphasize institutions, democracy promotion, and interdependence — lean significantly further left. Constructivists — who emphasize the social construction of reality, norms, and identity — lean further still. Post-Positivist scholars — Critical Theorists, Post-Structuralists, Feminist IR scholars — are almost exclusively located on the political left.

This creates a pipeline from ideology to conclusion that operates before any data is examined. A Constructivist analyzing Iran’s nuclear program will frame it as a social construction shaped by identity and norms — emphasizing diplomatic solutions that reshape incentives. A Realist will frame it as a security competition requiring deterrence. A Post-Positivist will frame it as a product of Western hegemony requiring decolonized analysis. Each paradigm admits different evidence. Each excludes what contradicts its premises.

The consumer of Iran analysis faces a double bind: a paper presenting itself as objective theoretical observation is statistically likely to originate from an ideological commitment that predetermined its conclusion. The theoretical debate in the journal serves as a proxy for a deeper ideological contest — and the consumer has no way to detect it without checking the author’s institutional affiliation and paradigmatic commitments.

If you have ever read two expert analyses of the same Iranian event that reached opposite conclusions — one calling it a “reform opening” and the other a “regime tactic” — you have witnessed the pipeline in action. The disagreement is not about the evidence. It is about the paradigm each analyst brought to the evidence before examining it.


The Gatekeepers

If the professoriate is eight to one, then journal editors, peer reviewers, and tenure committees — drawn from that same professoriate — are eight to one. Research challenging liberal internationalist assumptions or employing frameworks associated with conservative thought — deterrence theory, regime nature analysis, cultural essentialism — faces a significantly higher barrier to publication.

This is not conspiracy. It is groupthink. When a specific worldview becomes hegemonic, it is mistaken for neutrality. Deviations are flagged not as “disagreements” but as “methodological flaws” or “lack of rigor.” A paper arguing that the Islamic Republic is structurally incapable of reform is not rejected because its evidence is wrong. It is rejected because its conclusion is professionally uncomfortable — it closes the “diplomatic space” that the engagement thesis requires.

Young scholars learn the lesson fast. Tenure requires publication. Publication requires passing through the gates. The gates are staffed by scholars whose careers were built on the engagement framework. The rational response is self-censorship: tailor research questions to fit prevailing orthodoxy, soften conclusions, add the ritual caveat about “hardliners undermining reformers.” The tenure file grows. The analysis drifts further from the country it claims to describe.

Imagine that every expert on your profession had been trained in departments where your worldview was outnumbered seventeen to one — and that you were then told their analysis of your field was objective. Imagine that the gatekeepers who decided which research about your industry was publishable all shared the same political commitments, and that scholars who disagreed learned to stay quiet or find another career. That is the Iran expertise ecosystem. The Iranian-Americans who live with the consequences of this analysis — the most educated immigrant community in the United States — have noticed. The academy has not.


The Cartels

Bibliometric analysis reveals the mechanism that makes the drift self-sustaining: citation cartels.[4] Scholars primarily cite within their own ideological and theoretical silo. The citations multiply. The underlying ideas do not. A paper in Critical Security Studies cites foundational texts that are themselves ideological critiques of traditional security analysis — and builds new scholarship entirely on these critiques without engaging mainstream realist or strategic literature. The bibliography looks rigorous. The intellectual gene pool is shallow.

In extreme cases, the closure produces what scholars call the “Campist” phenomenon — a reflexive alignment with any geopolitical actor opposed to the “Western camp,” regardless of that actor’s conduct. The term comes from dividing the world into “camps” and supporting whichever camp opposes the West. Campism is not mere contrarianism. It is an epistemological commitment: only evidence produced by “friendly” sources is accepted. Western intelligence is dismissed as propaganda. Independent investigations are tainted by association with “Western funding.” The documented atrocities of anti-Western regimes — Uyghur genocide, Assad’s chemical weapons, the January 2026 massacres — are denied or minimized on procedural grounds.

Campism does not describe most Iran scholars. But it describes the tail of the distribution — the scholars most likely to publish provocative takes, attract media attention, and provide the regime with useful quotes. And in an eight-to-one ecosystem, that tail operates without the professional counterweight that would exist in a more balanced field.

How Citation Cartels Work: Scholar A publishes a paper arguing Iran is reformable. Scholar B cites A. Scholar C cites A and B. A review article cites A, B, and C as “the literature.” The conclusion — Iran is reformable — now has three citations. No one has produced new evidence. The density of references creates the illusion of verification. The map replaces the territory.
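The inflation mechanism above can be written down as a toy model. The papers, their citation links, and the counts are illustrative, not drawn from any real bibliometric dataset; the point is only that citations of a claim can grow while the stock of independent evidence stays flat.

```python
# Toy model of citation inflation: each later paper cites earlier papers in
# the silo, so the original claim accumulates citations even though no paper
# after the first contributes new evidence.

papers = {
    "A":      {"cites": [],              "new_evidence": True},   # original claim
    "B":      {"cites": ["A"],           "new_evidence": False},
    "C":      {"cites": ["A", "B"],      "new_evidence": False},
    "Review": {"cites": ["A", "B", "C"], "new_evidence": False},  # "the literature"
}

# Citations received by the original claim vs. papers with independent evidence.
citations_of_claim = sum("A" in p["cites"] for p in papers.values())
evidence_count = sum(p["new_evidence"] for p in papers.values())

print(f"citations of the claim: {citations_of_claim}")  # 3
print(f"papers with new evidence: {evidence_count}")    # 1
```

Three citations, one piece of evidence: the density of references creates the appearance of verification without adding any.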


The Correction

The data documented here is not a call to purge universities or replace left-leaning scholars with right-leaning ones. Politicized purges produce politicized institutions — in either direction. The correction is methodological, not ideological.

The Economics Exception points the way. Economics departments, with their three-to-one ratio, are not balanced. But their commitment to quantitative methodology — hypothesis testing, falsifiability, empirical verification — creates a partial inoculation against the worst distortions.[5] The corrective for Iran scholarship is not more conservatives. It is more rigor: more GAMAAN-style encrypted surveys, more OSINT verification, more counterfactual discipline, more engagement with evidence from outside the citation loop.

The LLM Red Team Literature Review — A Practical Tool:

If you use an AI research assistant to study Iran, the training data reflects the same 8:1 skew documented here. The dominant narrative will be overrepresented. Dissenting views will be underrepresented. Use this framework to correct for the tilt:

  1. Map the Divide: Identify the dominant academic consensus on your topic. Then explicitly search for credible dissenting views — Realist, conservative, non-Western perspectives.
  2. Agency Audit: Rewrite all descriptions into active voice. Ensure agency is assigned equally to all actors.
  3. Steelman the Other: Identify the “villain” in the dominant narrative. Construct the strongest possible argument for their behavior based on their own stated reasoning.
  4. Data Hygiene: Differentiate verified independent data from government-supplied data. Flag statistics from closed societies as low confidence.
  5. Counterfactual: For every policy critique, generate a plausible cost of inaction.
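The five steps above can be turned into a reusable set of follow-up prompts for an AI research assistant. This is a minimal sketch: the function name and the exact prompt wordings are illustrative, not part of any established tool.

```python
# Sketch: render the five red-team steps as follow-up prompts for a given
# research topic. Prompt wordings paraphrase the steps above and are
# illustrative only.

RED_TEAM_STEPS = [
    ("Map the Divide",
     "State the dominant academic consensus on {topic}, then list credible "
     "dissenting views (Realist, conservative, non-Western)."),
    ("Agency Audit",
     "Rewrite your summary of {topic} in active voice, assigning agency "
     "equally to all actors."),
    ("Steelman the Other",
     "Construct the strongest possible argument for the 'villain' in the "
     "dominant narrative on {topic}, using their own stated reasoning."),
    ("Data Hygiene",
     "Separate verified independent data on {topic} from government-supplied "
     "data; flag statistics from closed societies as low confidence."),
    ("Counterfactual",
     "For each policy critique related to {topic}, state a plausible cost "
     "of inaction."),
]

def red_team_prompts(topic: str) -> list[str]:
    """Fill the topic into each step's prompt template, in order."""
    return [f"{name}: {template.format(topic=topic)}"
            for name, template in RED_TEAM_STEPS]

for prompt in red_team_prompts("Iran sanctions efficacy"):
    print(prompt)
```

Running the five prompts in sequence against the assistant's first answer forces it to surface the underrepresented side of each question rather than restate the dominant narrative.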

The Paradigm Test (applicable to any field): When two experts analyzing the same event reach opposite conclusions, the disagreement is rarely about the facts. It is about the paradigm — the theoretical lens that determines which facts are admissible, which questions are asked, and which conclusions are professionally rewarded. Before trusting any expert analysis, ask: what paradigm produced this? And what would a different paradigm see?

The goal is not neutrality — which is often the status quo wearing a mask. The goal is objectivity: the disciplined commitment to testing assumptions against evidence, including the assumptions shared by the entire professional ecosystem that produced the analysis.

The 8:1 ratio is not a secret. It is published data. The question is whether the institutions that train Iran experts will treat it as a fact requiring structural response — or as an inconvenient number to be explained away by precisely the analytical habits it describes.



This article is part of Why Your Iran Expert Might Be Wrong. For the practical audit tool, see The Bias Detection Toolkit. For the ten structural filters that shape Western coverage, see Ten Filters.

Footnotes

  1. TRIP (Teaching, Research, and International Policy) Survey, William & Mary, multiple waves 2004–2023; Mitchell Langbert, “Homogeneous: The Political Affiliations of Elite Liberal Arts College Faculty,” Academic Questions, 2016; Langbert, Quain & Klein, “Faculty Voter Registration in Economics, History, Journalism, Law, and Psychology,” Econ Journal Watch, 2016

  2. TRIP Survey longitudinal data, William & Mary, comparing 2014 wave with subsequent waves through 2023

  3. TRIP Survey, William & Mary (correlation analysis between self-reported political orientation and IR theoretical paradigm: Realism, Liberalism, Constructivism, Post-Positivism)

  4. Bibliometric analysis of citation patterns in international relations journals, documented in TRIP Survey methodology papers, William & Mary

  5. Langbert, Quain & Klein, “Faculty Voter Registration in Economics, History, Journalism, Law, and Psychology,” Econ Journal Watch, 2016 (Economics D:R ratio of 3:1 to 4.5:1 compared to 17:1 in History)