The political compass has been a fixture of internet culture for as long as I can remember. Quizzes in the early aughts told me where I fit on the left-right spectrum, forever planting the idea in my head that every political concept or debate carries a latent charge of how "liberal" or "conservative" it is, a latent state you could infer by watching which party's electeds were for or against something. If most Democrats supported it and most Republicans opposed it, it was liberal, which in those days meant something like Keynesian economics plus whatever the socially progressive position was on the cultural debate du jour. That definition has quietly shifted several times since, which is itself a window into the deeper structure I'll explore in the companion piece to this one. So if, as with the 2003 authorization of force in Iraq, virtually all Republicans voted yes and about half of Democrats did too, then opposing it put you on the far left. Indeed, I remember "against the war in Iraq" being coded as a very liberal position.
Later, the two-dimensional diamond-shaped compass came along and added a libertarian-authoritarian axis (and I discovered that, like Stalin, I was a left "statist," which struck me as odd since I was not, to my knowledge, a fan). You answer some questions, geometry happens, and you get a coordinate to compare with your friends.
More recently, the political compass has gotten the post-irony, trending slop treatment, capturing political vibes through what amount to PCA-like heuristics, where the trick is finding a single declared belief that happens to be unusually predictive. One quiz I came across shows the flags of Palestine, Ukraine, and Taiwan, each paired with its geopolitical antagonist (Israel, Russia, and the PRC, respectively), and infers your ideological disposition from which flag you pick in each pair. I've recreated it below with the labels it assigns.
I decided to build my own political compass for two reasons. In keeping with the only consistent theme of this blog, the first is that I want to make something generative (a goal whose difficulty I would not fully appreciate until I had sunk enough time into it to make quitting psychologically untenable).
What I mean by generative: rather than classifying your beliefs by matching them against a menu of archetypal positions (which is more of a sorting exercise than inference), I want to infer the upstream factors that produce your political beliefs from your responses. Most political compasses essentially sum up and read back to you the information you gave them. Understanding what latent structure of temperament and experience generates the positions you hold lets the model predict positions you haven't been asked about.
Suppose you build a very good classifier for current American political alignment. It asks about guns, pronouns, student debt, masks, Elon Musk, and whether you think the New York Times is biased. This will probably work to identify who you voted for in the last election. But most of what it learns are the correlational decorations of your voting coalition, not the upstream causal factors that produced your vote. That's fine if your goal is short-term prediction. But the real prize in modeling isn't to describe; it's to predict under conditions you haven't seen yet, which is counterfactual reasoning. A non-generative model can interpolate inside its training world. It struggles to extrapolate outside it. So I set out to build more of an engine than a map, one that models the causal forces upstream of political behavior.
descriptive models be like
The second reason was to take a run at improving what surveys and quizzes are presumably trying to do, which is to derive insight about the beliefs that are directly upstream of behavior from stated preferences. Self-reported beliefs are notoriously lossy: ask someone to explain why they made a decision and you get an answer that sounds reasonable and is almost completely inconsistent with what actually drove the decision. One explanation is that people don't introspect on their actual decision process; they construct a narrative that sounds like one and report that instead. The survey response you get back isn't a degraded version of the truth; it's a different signal entirely, one that tells you more about what the respondent thinks a person like them is supposed to believe than about the machinery that actually produced their behavior.
The obvious workaround in the real world is to observe the person's behavior rather than query their beliefs. If the thing you're trying to predict is behavior, you'd rather not travel through the lossy intermediary of stated preferences when you could just examine behavior directly and build a model from there.
In a quiz, obviously, I still have to rely on self-report. There's no practical alternative. But I think self-report can be improved in clever ways that better approximate observable action. The key is asking questions so that latent dispositions emerge from the pattern of answers rather than from a person's explicit self-description. In practice, that means attacking a topic from more than one angle: asking not just what you believe, but how you react when competing values are forced into tension, how much that issue matters to your politics at all, and sometimes what kind of person or coalition you take yourself to belong to. If I ask "do you believe in free speech?" almost everyone says yes. If I ask whether a university should disinvite a speaker whose views most students find repugnant, I force a tradeoff between principles people prefer to claim they hold simultaneously. The direction you break tells me more about the hierarchy of priorities you actually operate under than the one you'd recite at a dinner party. And if I then ask a second question hitting the same latent trait from a different side (say, whether convicting an innocent person is worse than letting a guilty person go free, or how much rules and procedures matter to your politics), I can start to tell whether I'm seeing a stable disposition rather than a one-off opinion. One question finds the narrative. Two or three questions, aimed at the same construct from different angles, begin to triangulate the underlying structure. The gap between the abstract principle, the situated judgment, and the salience of the issue is where the signal lives.
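To make the triangulation idea concrete, here's a minimal sketch of how several angle-specific answers might be combined into one latent-trait estimate. The item names, trait label, loadings, and scaling are all invented for illustration; this is not the actual quiz's scoring.

```python
from statistics import fmean

# Hypothetical items that all probe the same latent trait
# ("proceduralism") from different angles. Loadings are invented:
# a negative loading means agreement pulls the estimate down.
ITEMS = {
    "disinvite_speaker_ok": -0.7,   # situated tradeoff question
    "acquit_ten_guilty":     0.8,   # competing-values question
    "rules_matter_to_me":    0.5,   # salience question
}

def triangulate(responses: dict[str, float]) -> float:
    """Combine several angle-specific answers (each scaled to [-1, 1])
    into one latent-trait estimate by loading-weighted averaging."""
    return fmean(ITEMS[item] * answer for item, answer in responses.items())

# One respondent claims to value rules (salience high) but breaks toward
# disinvitation when the tradeoff is forced -- the answers partially cancel,
# which is exactly the gap between narrative and disposition.
estimate = triangulate({
    "disinvite_speaker_ok": 1.0,
    "acquit_ten_guilty": 0.5,
    "rules_matter_to_me": 1.0,
})
```

The point of the multiple angles is visible in the arithmetic: any single item would have given a confident, one-sided answer, while the combination lands near zero.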
With the goal of predicting political behavior established, it's worth drawing a line between what most political compasses do and what's needed to predict who actually turns out for whom in an election (or, for that matter, what political behavior you'd observe in a place that doesn't hold elections). Political compasses focus almost exclusively on ideology, specifically on the ends you prefer on particular issues: safe and legal abortion vs. abortion restricted, progressive taxation vs. flat tax, and so on. Ideology matters, obviously. But it's only one reason people vote, and for some voters it drives significantly more of their political behavior than for others.
There are people who vote on vibes. There are people who vote on identity (not "identity politics" in the culture-war sense, but the basic question of which team they belong to, updated less by policy positions than by social signaling and group membership). There are people who vote on one issue and one issue only, and there are people who barely vote at all but would if the right kind of candidate said the right kind of thing. A model that captures only ideology (the declared policy preferences) misses the engine underneath. It gets the bumper sticker but not the car.
This is where the project got genuinely hard. And where I ran into problems I didn't anticipate.
the xor problem of identity
The thing I want the generative model to predict is political behavior. The validation strategy is straightforward enough: backtest against elections the respondent has already voted in (or affirmatively abstained from) to confirm the model interpolates, then forecast how they might behave under counterfactual conditions: if a candidate with X attributes ran against a candidate with Y attributes, or how a respondent would have behaved in an election they didn't live through.
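The interpolation half of that validation strategy can be sketched as a simple scoring loop: before trusting the model's counterfactuals, check what fraction of the respondent's reported past behavior it reproduces. The feature dicts, labels, and toy model below are all hypothetical.

```python
# A sketch of the backtest step: `predict` maps an election's features to a
# predicted action ("D", "R", "abstain"); `history` pairs each past
# election's features with the respondent's reported action.
def backtest(predict, history: list[tuple[dict, str]]) -> float:
    """Return the fraction of past elections the model reproduces."""
    hits = sum(1 for features, reported in history if predict(features) == reported)
    return hits / len(history)

# A deliberately dumb model (always predicts abstention), scored against
# two reported elections. It reproduces one of the two.
accuracy = backtest(
    lambda features: "abstain",
    [({"year": 2016}, "abstain"), ({"year": 2020}, "D")],
)
```

Only once a model clears this bar on elections the respondent actually lived through does it make sense to ask it about hypothetical candidate matchups.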
But there's a deep problem at the intersection of identity and belief, and the interaction between these two factors is genuinely difficult to put into a causal graph. Does identity lead to belief, or belief to identity? For most people it probably goes both ways, which is exactly the kind of thing that makes causal modeling miserable.
Ultranationalists are a useful case study here (as are their inverse, tankies, who are worth their own essay). Take an American whose political disposition manifested in 2002 as what I'd call a "compassionate conservative." Call him Steve. Steve is upper-middle-class and lives in a beautiful suburban house. He's always flown the American flag outside, lowered it to half-mast after 9/11 and left it there for months. Steve defends the tenets of America to anyone who'll listen: the Constitution, democracy, freedom of speech, the rule of law. He loves it all and says so, loudly and proudly.
Here is the question that is wildly difficult to reveal but wildly important for a causal model of Steve's politics: is Steve's nationalism an expression of identity or of ideology?
If it's ideology: if Steve genuinely believes in constitutional democracy, individual rights, the rule of law, and American exceptionalism as a set of ideas, then his political behavior should be predictable from those ideas. When a politician undermines the rule of law, ideological Steve opposes them regardless of party. When the Constitution is selectively invoked to justify something it clearly doesn't support, Steve notices. His nationalism is downstream of principles, and the principles do the causal work.
If it's identity: if Steve's flag-waving is fundamentally an expression of belonging to Team America, of being a certain kind of person from a certain kind of place, then his political behavior follows the team. When the team's leader says the election was stolen, Steve might go along, because the flag on his porch was never really about the Constitution. It was about the tribe the Constitution symbolized. His nationalism is downstream of group membership, and the group does the causal work.
Same bumper sticker. Same yard sign. Same survey response to "How patriotic are you?" on a 1-to-7 scale. Completely different causal structure, completely different predictions about what Steve does when the coalition reshuffles. A political compass that treats "patriotism: 7" as a primitive has no way to distinguish these two Steves. A generative model has to.
This is, in a precise sense, an XOR problem. Identity and ideology are the two input bits. Observed political behavior is the output. When they agree (identity says R, ideology says R), the output is easy to predict. When they disagree (identity says R, ideology says "wait, this violates everything I claimed to believe"), the outputs are not linearly separable. You can't draw a straight line through them. You need a hidden layer. In this model, that hidden layer is the deeper dispositional structure: how much of Steve's politics is driven by principle versus by belonging, and what happens when those two forces pull in opposite directions.
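For the record, here is the textbook construction: one hidden layer with hand-set weights computes XOR, where no single linear threshold on the two input bits can. This is a generic illustration of the XOR point, not PRISM's actual architecture.

```python
def step(x: float) -> int:
    """Threshold activation: fire (1) iff input is positive."""
    return 1 if x > 0 else 0

def xor_mlp(identity: int, ideology: int) -> int:
    """XOR via one hidden layer: h1 fires if either bit is on (OR),
    h2 fires only if both are (AND); the output fires for h1 AND NOT h2,
    i.e. exactly when the two inputs disagree."""
    h1 = step(identity + ideology - 0.5)   # OR gate
    h2 = step(identity + ideology - 1.5)   # AND gate
    return step(h1 - h2 - 0.5)             # h1 AND NOT h2 = XOR

# The full truth table: 0 where identity and ideology agree (easy cases),
# 1 where they conflict. No weights w1, w2 and threshold t make
# step(w1*a + w2*b - t) reproduce this table, which is why a flat
# "patriotism: 7" score can't separate the two Steves.
table = {(a, b): xor_mlp(a, b) for a in (0, 1) for b in (0, 1)}
```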
The 2016-2024 period in American politics was, among other things, a natural experiment that sorted the Steves. The ones whose nationalism was ideological broke from the coalition. The ones whose nationalism was identity-based stayed and adapted their stated beliefs to match, and if you'd surveyed them in 2002, they would have given you identical answers.
Figure 5: Two Steves Divergence
Ideological Nationalist Steve and Identity Nationalist Steve start identical in 2002, both at ~10% likelihood of voting Democratic, same flag on the porch, same survey responses. As political events accumulate, Ideological Steve's principles pull him away from the coalition while Identity Steve stays loyal to the tribe. By 2024, their voting behavior has completely diverged despite starting from the same declared beliefs.
The same problem shows up everywhere in politics. A progressive might support immigration out of universal human concern, anti-nationalism, identification with immigrants as an in-group, class solidarity, a growth-oriented economic model, or simply because those are the positions their coalition treats as morally serious. A conservative might support law and order out of a genuine preference for procedural stability, deference to hierarchy, threat sensitivity, resentment of perceived disorderly out-groups, or because that's what the respectable right in their milieu says one ought to support. In both cases the observable policy preference is the same. The generators are completely different.
So a major design challenge of this quiz was trying to separate: what people want, how they justify wanting it, what identity they experience themselves as belonging to, and whether that identity is the cause or the consequence of their beliefs. That turns out to be much harder than it sounds. People are generally good at telling you what they believe and remarkably bad at telling you whether the belief is doing the causal work or merely decorating a prior allegiance.
belief bundles are overdetermined
A related problem: many political issue bundles are downstream of several latent traits at once. Take support for speech restrictions. That could come from low tolerance for social harm, high trust in expert moderation, strong in-group identification, low commitment to procedural neutrality, or simply a strategic calculation that the institutions currently doing the censoring happen to be on your side. Those are all different causal paths to the same policy position. Collapse them into one "speech-regulation" variable and you've conflated several distinct generators into a single score.
This is why I ended up thinking in terms of nodes rather than ideologies. Some nodes concern ends: what kind of social order a person prefers. Some concern means: what sort of process, rhetoric, or evidence they regard as legitimate. Some concern reality: whether the world is fundamentally zero-sum, whether hierarchy is natural, whether human beings are fixed or improvable, whether complex systems are controllable or emergent. And some concern self: how fused politics is with identity, how tribal someone is, how disposed they are toward engagement at all.
A person's political behavior isn't the output of any one of these. It's the output of their interaction. The reason the quiz asks so many apparently weirdly paired questions is that I'm trying to locate the respondent somewhere in that interaction space. A question about immigration is never only about immigration. A question about disinviting a speaker is never only about speech. A question about whether your close friends vote like you is not sociological trivia. Each is a partial glimpse of a deeper structure, and the structure only comes into focus when you have enough angles on it.
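A minimal sketch of that idea, with invented node names and loadings: each item carries evidence about several nodes at once, so every answer nudges multiple coordinates of the respondent's profile rather than a single issue score.

```python
# A toy version of "a question about immigration is never only about
# immigration": each item loads on several hypothetical nodes, and a
# response contributes loading-weighted evidence to every node it touches.
LOADINGS = {
    "open_immigration":     {"moral_circle": 0.6, "zero_sum": -0.4, "tribalism": -0.2},
    "disinvite_speaker":    {"procedural_neutrality": -0.7, "harm_sensitivity": 0.5},
    "friends_vote_like_me": {"tribalism": 0.8, "identity_fusion": 0.6},
}

def update_nodes(responses: dict[str, float]) -> dict[str, float]:
    """Accumulate evidence for each node across all answered items
    (answers scaled to [-1, 1])."""
    nodes: dict[str, float] = {}
    for item, answer in responses.items():
        for node, weight in LOADINGS[item].items():
            nodes[node] = nodes.get(node, 0.0) + weight * answer
    return nodes

# Two answers already touch four nodes; "tribalism" is triangulated by both.
profile = update_nodes({"open_immigration": 1.0, "friends_vote_like_me": -0.5})
```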
the same issue means different things to different people
One more problem, and then we can get to the model itself. Issue positions are not stable symbols. The same position can mean different things in different eras and to different people. Being "for free trade" in 1995, in 2016, and in 2025 are not the same political signal. Being "for democracy" can mean reverence for institutions, faith in mass participation, suspicion of elite technocracy, or merely dislike of whoever currently holds power. A useful model needs to tolerate the fact that object-level beliefs drift while latent dispositions remain stable: the surface of politics is noisy and the depth of it is slow-moving.
This is why I think the real target is not ideology in the way most compasses use the word. It's political temperament plus identity plus experienced world-model, the structure that makes certain beliefs feel intuitive, certain coalitions feel natural, and certain forms of political behavior feel obligatory. People don't choose their politics the way they choose a meal from a menu. They discover their politics the way they discover their taste: gradually, socially, and with far less conscious deliberation than they'd like to believe.
The fourteen nodes are basis dimensions, not themselves exhaustive political identities. Certain recurring combinations of them generate thicker orientations that are less fundamental than the nodes themselves but still politically real: realism, idealism, technocracy, populism, cosmopolitanism, parochialism, moralism, and so on. A realist, for instance, might combine high zero-sum sensitivity, pessimism about human nature, and a narrower moral circle. An idealist might combine a wide moral circle, low zero-sum thinking, and optimism about human possibility. These are not primitive axes. They are higher-order composites, stable patterns that emerge from the interaction of more basic dimensions. The nodes are the grammar. These broader orientations are common sentences built from it.
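Under the same invented-weights caveat, those higher-order composites can be sketched as projections of a node profile onto fixed weight vectors, following the realist and idealist recipes described above:

```python
# Orientations as composites of basis nodes, per the text: a "realist"
# combines high zero-sum sensitivity, pessimism about human nature, and a
# narrow moral circle; an "idealist" is roughly the reverse. The weights
# are illustrative, not PRISM's actual parameters.
COMPOSITES = {
    "realism":  {"zero_sum": 0.5, "human_nature_pessimism": 0.3, "moral_circle": -0.2},
    "idealism": {"zero_sum": -0.3, "human_nature_pessimism": -0.3, "moral_circle": 0.4},
}

def score_composites(nodes: dict[str, float]) -> dict[str, float]:
    """Project a node profile onto each composite orientation."""
    return {
        name: sum(w * nodes.get(node, 0.0) for node, w in weights.items())
        for name, weights in COMPOSITES.items()
    }

# A profile high in zero-sum thinking and pessimism, with a narrow moral
# circle, scores as strongly realist and anti-idealist.
scores = score_composites(
    {"zero_sum": 0.8, "human_nature_pessimism": 0.6, "moral_circle": -0.5}
)
```

Note that the composites are derived, not measured: nothing in the quiz asks "are you a realist?", which is the sense in which the nodes are the grammar and these are sentences.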
I know this about myself. I assume as much for other people.
What I set out to build, then, is not a better sorting hat for current issue positions. It is an attempt to get a little closer to the thing upstream of them: the cause of our politics.
take the prism quiz
PRISM maps your political temperament across 14 dimensions using Bayesian adaptive inference. About 39 questions, ~12 minutes.
take the prism quiz →