
AI & Technology

The questions I’ve spent years studying — how harmful narratives gain legitimacy, how exclusionary movements co-opt credible frameworks, how radicalisation pathways form and accelerate — are now inseparable from the systems that mediate them. Recommendation algorithms shape which content reaches whom and in what sequence. Large language models generate text that can replicate the rhetorical strategies I’ve documented in extremist discourse. Automated content moderation systems make millions of decisions daily about what speech is permitted and what is removed, often without the contextual understanding that makes those distinctions meaningful. My work in this space starts from a conviction that the people best positioned to anticipate how AI systems interact with political movements, gendered power structures, and human rights frameworks are not the engineers building those systems alone — they are the people who understand the dynamics those systems encounter, provided they also understand the mechanisms producing the outputs.

Where My Research Meets AI Systems

Radicalization Pathways and Recommendation Architecture

My academic work has traced how far-right movements build platforms and recruit adherents through specific narrative strategies — the weaponisation of women’s rights rhetoric, the appropriation of democratic language to advance exclusionary agendas, the use of grievance framing to normalise extremist positions. These strategies do not operate in a vacuum. They operate within information environments shaped by algorithmic systems that optimise for engagement, surface content based on predicted relevance, and create feedback loops between user behaviour and content visibility.

Understanding how a recommendation algorithm selects and sequences content is not a separate question from understanding how radicalisation pathways form. It is the same question asked at a different layer. The narrative dynamics I have studied at the political and sociological level — escalation patterns, in-group reinforcement, gateway content that appears moderate but leads to increasingly extreme material — map directly onto observable platform behaviours that emerge from optimisation decisions made during system design. Analysing these dynamics requires both the domain knowledge to recognise what is happening in the content and the mechanistic understanding to explain why the system surfaced it.
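To make the feedback loop concrete, here is a minimal toy simulation in Python. Everything in it is an illustrative assumption rather than a description of any real platform: it assumes an engagement model in which content slightly more extreme than a user's current taste is predicted to be the most engaging, and a ranker that simply surfaces whatever scores highest.

```python
# Toy simulation of an engagement-optimised feedback loop. All values,
# names, and the engagement model are illustrative assumptions, not
# measurements of any real platform's ranker.

CATALOGUE = [i / 100 for i in range(101)]  # items indexed by "extremity", 0.00 to 1.00

def predicted_engagement(extremity: float, taste: float) -> float:
    """Assumed model: content slightly more extreme than the user's
    current taste scores highest (peak at taste + 0.1)."""
    return 1.0 - abs(extremity - (taste + 0.1))

def session(taste: float) -> float:
    """The ranker surfaces the top-scoring item; the user's taste then
    drifts toward what was surfaced -- the feedback loop."""
    surfaced = max(CATALOGUE, key=lambda e: predicted_engagement(e, taste))
    return 0.8 * taste + 0.2 * surfaced

taste = 0.05  # the user starts far from extreme content
for step in range(1, 41):
    taste = session(taste)
    if step % 10 == 0:
        print(f"after {step} sessions: taste = {taste:.2f}")
```

Under those two assumptions alone, the surfaced content and the user's taste ratchet upward together; no component of the system "intends" escalation, which is precisely why the pattern is hard to see from inside the design.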

Content Moderation and the Limits of Classification

At Outright International, I monitored how anti-SOGI (sexual orientation and gender identity) and anti-SRHR (sexual and reproductive health and rights) constituencies operated within the UN system — deploying language carefully calibrated to appear within the bounds of legitimate political discourse while advancing agendas designed to exclude. This is structurally identical to one of the central challenges in automated content moderation: distinguishing between protected speech and harmful speech when the harmful speech is deliberately constructed to resemble the protected kind.

Automated classification systems — whether keyword-based filters, machine learning classifiers, or large language model evaluators — operate on surface features of language. They can identify slurs, flag known harmful phrases, and detect patterns associated with previously labelled harmful content. What they struggle with is context, intent, and the kind of strategic ambiguity that sophisticated actors exploit. My research on how extremist movements weaponise the language of rights and legitimate concern is directly relevant to understanding why these systems fail at the cases that matter most, and where human judgment, policy design, and system architecture need to compensate for those limitations.
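A minimal sketch makes the failure mode visible. The blocklist, the placeholder terms, and the example sentences below are all invented for illustration; real systems are far more sophisticated, but the structural problem, classifying on surface features of language, survives the added sophistication.

```python
# Sketch of why surface-feature moderation misses strategic ambiguity.
# The blocklist, placeholder terms, and examples are invented for
# illustration; they are not a real moderation system or dataset.

BLOCKLIST = {"<slur-1>", "<slur-2>"}  # explicit terms a filter can match

def keyword_filter(text: str) -> bool:
    """Flag text containing a blocklisted term. Pure surface matching:
    no context, no intent, no speaker or audience history."""
    tokens = (tok.strip(".,!?") for tok in text.lower().split())
    return any(tok in BLOCKLIST for tok in tokens)

examples = [
    "get <slur-1> out of here",                        # explicit: caught
    "we simply have concerns about who belongs here",  # coded exclusion: missed
    "protecting our women means keeping them out",     # co-opted rights language: missed
]

for text in examples:
    verdict = "FLAGGED" if keyword_filter(text) else "passed"
    print(f"{verdict:7} | {text}")
```

The second and third examples are exactly the calibrated language described above: nothing in their surface form distinguishes them from legitimate political speech, which is why the hard cases fall to human judgment, policy design, and system architecture rather than to the classifier.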

Gendered Harm and AI System Behaviour

My MPhil dissertation examined how women’s rights rhetoric is co-opted within right-wing movements to justify xenophobic narratives — a process in which the language of gender equality is repurposed to serve exclusionary ends. This dynamic has direct parallels in how AI systems handle gendered content. Training data reflects the biases present in the text it was drawn from: patterns of representation, association, and framing that encode gendered assumptions into model behaviour. The outputs that result — from hiring tool recommendations to generated text that reproduces stereotypical associations — are not random failures. They are the predictable consequences of specific data and design choices.

Understanding the mechanism matters because it determines what interventions are possible. A system that produces biased outputs because of training data composition requires a different intervention than one that produces biased outputs because of optimisation objectives or post-training alignment choices. The analytical instinct I bring from gender justice research — asking not just what is happening but how it is being produced and whose interests the framing serves — applies directly to examining AI system behaviour with the specificity needed to recommend actionable changes.
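As a toy illustration of the first mechanism, training data composition, the sketch below counts gendered co-occurrences in a small invented corpus. The corpus and word lists are placeholders; the point is only that the skew a model later reproduces is already visible, and measurable, in the data it learned from, which is what makes a data-side intervention the right lever in that case.

```python
from collections import Counter

# Toy illustration of bias as a predictable consequence of training
# data composition. The corpus and word lists are invented placeholders.

corpus = [
    "she worked as a nurse at the clinic",
    "he worked as an engineer at the plant",
    "she was hired as a secretary",
    "he was hired as a manager",
    "the engineer said he would check the design",
    "the nurse said she would check the chart",
]

GENDERED = {"she": "f", "he": "m"}
ROLES = {"nurse", "engineer", "secretary", "manager"}

# Count (role, gender) co-occurrences within each sentence.
counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    genders = {GENDERED[t] for t in tokens if t in GENDERED}
    for t in tokens:
        if t in ROLES:
            for g in genders:
                counts[(t, g)] += 1

for role in sorted(ROLES):
    print(f"{role:10} f={counts[(role, 'f')]} m={counts[(role, 'm')]}")
```

A model trained on text with this composition will tend to reproduce these role–gender associations; rebalancing or augmenting the data addresses that directly, whereas the same fix does nothing for bias introduced later by optimisation objectives or alignment choices.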


Writing & Analysis

I’m building a body of writing that examines how AI systems interact with political extremism, human rights frameworks, and governance challenges.

Current sample

The Roblox Product Policy Crosswalk Memo applies platform policy to coded misogyny, extremist dog whistles, and moderation edge cases on Roblox. It reflects the kind of work I’m interested in doing more of: product-grounded analysis that moves from policy language to concrete enforcement decisions.