I finished my MPhil at Trinity College Dublin in September 2025 and am now back in the Seattle area. I’m targeting roles in San Francisco and the Bay Area, but I’m open to opportunities elsewhere. Right now I’m focused on the work I’m building next.
Building AI Literacy
Status: Getting Started
I’m beginning a structured, independent programme of study in AI systems — not to become an engineer, but to develop the mechanistic understanding required to engage seriously with how these systems produce the outcomes they do. The aim is straightforward: to be able to look at an AI output — a moderation decision, a recommendation, a generated text — and make a grounded claim about the mechanism that produced it and where the points of intervention are.
The programme is designed around four parallel tracks: understanding what AI systems actually are and how they work; how they go wrong — bias, amplification, radicalization pipelines, gendered harm; how people try to fix them — alignment techniques, content moderation, regulatory frameworks; and how they can be used well — for advocacy, research, access to justice. My existing research in radicalization pathways and opposition monitoring provides a lens that most people approaching AI from a purely technical direction don’t carry. The technical fluency I’m building is what makes that lens operational rather than observational.
Seeking Roles
Status: Active
I’m looking for roles in trust and safety, AI governance, policy, and research — positions where understanding how harmful narratives function and how technical systems mediate them creates tangible value. The organisations I’m most drawn to are those that treat content moderation and AI safety as core institutional functions rather than compliance exercises, and that understand the importance of having people in the room who know what to look for — not just technically, but in terms of the political movements, rhetorical strategies, and human rights frameworks that give harmful content its real-world consequence.
My immediate targets include policy and governance roles at NGOs, foundations, and civil society organisations working on human rights, gender justice, and democratic accountability, as well as trust and safety and responsible AI positions at technology companies where my background in extremism research and UN advocacy is directly applicable. I’m particularly interested in environments that support continued technical development rather than siloing subject matter expertise away from the systems it needs to engage with.
Writing & Analysis
Status: Building
I’m developing an analytical practice at the intersection of AI systems and the political and social dynamics they interact with — the kind of writing that takes a real incident or system behaviour and explains the technical mechanism behind it alongside the human consequence, rather than just observing that something went wrong.
The work I want to produce falls into a few categories: examining specific AI system behaviours through the lens of extremism research, content moderation challenges, and human rights frameworks; comparative analysis of how different AI systems handle scenarios drawn from real-world contexts; and critical engagement with governance documents, safety research, and regulatory frameworks — not summaries but genuine analysis of what they’re trying to do, whether the mechanisms actually address the problem, and what they miss.
This is as much about developing my own analytical voice at this intersection as it is about any individual piece. The goal is a growing body of work that demonstrates how I think about these problems — grounded, specific, and informed by both the technical mechanisms and the human dynamics they interact with.
The first completed piece in that body of work is the Roblox Product Policy Crosswalk Memo. It works through coded misogyny, extremist dog whistles, and moderation edge cases on Roblox, and it reflects the kind of product-facing policy analysis I want to keep producing.
OSINT & Monitoring
Status: Developing
My time at Outright International involved structured opposition monitoring — tracking the progression of UN General Assembly resolutions, logging member state voting behaviour and floor statements, and synthesising patterns in how anti-SOGI and anti-SRHR constituencies organised their advocacy. That work required rigorous source evaluation, structured logging, and the discipline to separate what the data showed from what we hoped it showed.
I’m expanding those capabilities into open-source research and monitoring methodologies applicable to tracking extremist narratives, disinformation campaigns, and coordinated influence operations across online spaces. This builds naturally on my academic research into radicalization pathways and the role of online media in spreading far-right ideology, and connects directly to the trust and safety roles I’m building toward — where structured monitoring of harmful actors and narrative trends is a core operational function.
Current focus areas include developing more systematic approaches to source triangulation and credibility assessment across platforms, and building familiarity with the tools and workflows used in professional OSINT and threat intelligence contexts.
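To make "structured logging" and source triangulation concrete, here is a minimal sketch in Python. It is illustrative only: the schema, field names, credibility scale, and corroboration threshold are hypothetical assumptions for this example, not a reconstruction of any tool I used at Outright or plan to adopt wholesale.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative schema for a structured monitoring entry.
# Field names and the credibility scale are hypothetical.
@dataclass
class MonitoringEntry:
    observed_on: date
    source: str              # e.g. a UN floor statement, platform post, or press release
    actor: str               # member state, organisation, or account being tracked
    claim: str               # the narrative or position being advanced
    credibility: int         # analyst-assigned rating, 1 (weak) to 5 (strong)
    corroborations: list[str] = field(default_factory=list)  # independent confirming sources

def triangulated(entry: MonitoringEntry, minimum_sources: int = 2) -> bool:
    """Treat a claim as triangulated once enough independent sources confirm it."""
    return len(entry.corroborations) >= minimum_sources

# Example: a logged floor statement corroborated by two independent records.
entry = MonitoringEntry(
    observed_on=date(2024, 11, 20),
    source="UNGA Third Committee floor statement",
    actor="Member state delegation",
    claim="Opposition to SOGI language in the draft resolution",
    credibility=4,
    corroborations=["UN Web TV recording", "Official meeting summary"],
)
print(triangulated(entry))  # True
```

The point of a schema like this is discipline: every claim carries its sources and an explicit credibility judgement, so patterns are drawn from what was actually logged rather than from impressions.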
Featured Work
Roblox Product Policy Crosswalk Memo
This crosswalk memo demonstrates how Roblox's current policy frameworks and language apply to real-world scenarios. It focuses on coded extremist language and symbols in usernames, avatars, and groups, then works through how those signals complicate enforcement in practice.
What the sample shows
The memo provides recommendations for enforcement in line with Roblox's Community Standards, identifies edge-case scenarios that expose ambiguities in policy enforcement, and follows those ambiguities through to concrete moderation decisions.
EX-01
Determination: Violation
Scenario: Avatar T-shirt featuring Spotify label 'This is Kanye West' with a photo of Anne Frank
Rationale: Mocks the victims of an act of mass violence and human rights violations.

EX-02
Determination: Further inspection
Scenario: Avatar features displaying the Reichskriegsflagge
Rationale: Violation if additional context suggests glorification rather than historical documentation.

EX-03
Determination: Closer inspection, then violation
Scenario: User display name 'HonkWonk' for a character in the Elections Simulation experience
Rationale: Report as a violation of Safety due to glorification of Nazism.
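The same discipline can be expressed as a triage step. The sketch below is illustrative only: the determination tiers mirror the examples above, but the function, parameters, and decision logic are hypothetical assumptions, not the memo's recommendations or Roblox's enforcement tooling.

```python
from enum import Enum

# Determination tiers taken from the examples above; everything else is hypothetical.
class Determination(Enum):
    VIOLATION = "Violation"
    FURTHER_INSPECTION = "Further inspection"
    NO_ACTION = "No action"

def triage(signal_is_explicit: bool,
           context_reviewed: bool = False,
           context_suggests_glorification: bool = False) -> Determination:
    """Explicit signals are enforced outright; coded signals are held for context
    review and enforced only if the surrounding context supports glorification."""
    if signal_is_explicit:
        return Determination.VIOLATION
    if not context_reviewed:
        return Determination.FURTHER_INSPECTION
    return Determination.VIOLATION if context_suggests_glorification else Determination.NO_ACTION

# EX-02 above: a Reichskriegsflagge avatar is coded rather than explicit, so it is
# held for context review before any enforcement decision.
print(triage(signal_is_explicit=False))            # Determination.FURTHER_INSPECTION
print(triage(signal_is_explicit=False,
             context_reviewed=True,
             context_suggests_glorification=True))  # Determination.VIOLATION
```

What this is meant to show is the shape of the decision, not its implementation: coded signals get routed to context review rather than forced into an immediate keep-or-remove call.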