Researcher, founder, and host at the intersection of AI governance, feminist philosophy, and perpetrator intervention.
Information architectures determine who holds authority over representation, whose subjectivity is encoded or erased, whose behavior is shaped and who does the shaping. When structured by concentrated authority and extraction logic, they encode dehumanizing representations of women, transmit them at scale, and produce measurable downstream harm.
The animating question: can we understand the conditioning mechanism precisely enough to reverse it, and can AI, itself a vector of harm, be redesigned as a tool of repair?
Information architectures are authority structures with identifiable decision-makers, economic interests, and enforcement mechanisms. Concentrated authority over representation produces predictable harm: objectification, dehumanization, and violence against those with least power in the system.
Ostrom's polycentric governance is the counter-model: distributed stewardship, transparent rules, accountability, commons protection. The EU AI Act Article 12 (August 2026 enforcement) is Polycentria's first commercial entry point.
Objectification is not an attitude. It is a practice enacted through information systems that strip interiority, deny autonomy, and encode bodies as instruments. Nussbaum's seven features. Langton's sexual solipsism. Haslam's dual model of mechanistic and animalistic dehumanization.
Both forms are documented downstream of pornography consumption at scale. The pipeline is empirical, not speculative.
Dehumanization is always first a governance act: withdrawal of moral recognition that licenses harm. The politics of recognition establishes that subjecthood requires acknowledgment from others. Systematic denial is political, not merely personal.
AI companions designed as perfectly compliant objects train users in habits of dominance. The counter-design question: what does AI look like when built on polycentric principles that support mutual recognition?
Batterer intervention programs produce small-to-medium effect sizes. The dominant Duluth Model has limited empirical justification. The field has run 8 RCTs in 40 years versus 300+ for PTSD interventions.
ACT is the most empirically promising approach, targeting psychological inflexibility and experiential avoidance. NVC and authentic relating are applied deconditioning methodologies. The research gap: AI-assisted perpetrator intervention before court mandate.
Pornography is the largest unregulated behavioral conditioning system in human history. Wright's 3AM model: sexual scripts are acquired from media exposure, activated by subsequent cues, and applied attitudinally and behaviorally.
Algorithmic recommendation systems are behavioral modification infrastructure. Zuboff: behavioral data extracted from human experience is sold to influence future behavior. AI companions are the next stage of this infrastructure.
Polycentria builds protocol infrastructure for ethical AI cognition transfer. The core product is SHD-CCP: Symmetrical High-Dimensional Context Compression Protocol. SHD-CCP encodes cognitive and emotional states geometrically on a Trefoil Knot Manifold, achieves 720:1 structural compression, and provides a cryptographic audit trail native to the packet format. The mathematical invariant QᵀQ = I guarantees interoperability across heterogeneous AI systems.
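A minimal sketch of what an orthogonality invariant like QᵀQ = I buys in a compression protocol: if the compression basis Q has orthonormal columns, projection preserves inner products on the retained subspace, so any receiver that shares the basis convention can decode without system-specific calibration. All names and dimensions below are illustrative assumptions, not SHD-CCP internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an orthonormal basis Q (columns) via QR decomposition of a
# random matrix. Dimensions are illustrative only: a 720-dim state
# projected onto a 64-dim subspace, not the protocol's actual ratio.
A = rng.standard_normal((720, 64))
Q, _ = np.linalg.qr(A)  # Q is 720x64 with orthonormal columns

# The invariant: QᵀQ = I (the 64x64 identity).
assert np.allclose(Q.T @ Q, np.eye(64))

# "Compression": project a high-dimensional state onto the subspace.
state = rng.standard_normal(720)
code = Q.T @ state          # 64 numbers transmitted
reconstructed = Q @ code    # receiver's (lossy) reconstruction

# Orthonormality means the code's norm equals the norm of the state's
# component inside the subspace -- geometry survives the transfer, which
# is what makes the encoding interpretable across heterogeneous systems.
assert np.allclose(np.linalg.norm(code), np.linalg.norm(reconstructed))
```

The point of the sketch is the interoperability claim: with QᵀQ = I, decoding is just Q applied to the code, with no per-system inverse to negotiate.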
The name draws from Elinor Ostrom's polycentric governance. Cognitive state transfer infrastructure is a commons. It should be governed as one.
polycentria.com
U.S. Provisional Patent App. No. 63/876,451 via Cooley LLP. Continuation deadline: September 5, 2026.
Every act of violence against a woman was first an act of the mind. Before the control, the coercion, the harm: a moment when another person's consciousness stopped counting. That moment is not random. It is produced by information architectures, governance structures, and cultural systems that encode dehumanization as normal and distribute it at scale.
A research-driven long-form interview podcast examining the production pipeline of harm and the possibility of its reversal. Not true crime. Not therapy. Public philosophy with primary sources.
Men who have exited extremist, incel, and manosphere communities. First-person accounts of radicalization and exit.
Objectification, dehumanization, moral psychology, media effects, algorithmic radicalization. Empirical accounts of the pipeline.
ACT practitioners, BIP facilitators, authentic relating teachers, IPV intervention specialists. The practice of deconditioning.
AI ethics, platform governance, content moderation, recommendation system design. Engineers accounting for what they made.
Consciousness, moral status, feminist political theory, governance, objectification. The conceptual scaffolding for the whole project.
EU AI Act, platform regulation, DV law reform, perpetrator program standards. The institutional question.
Available for research collaboration, podcast guest inquiries, advisory roles, and consulting engagements. Podcast guest pitches from researchers, clinicians, technologists, and individuals with lived experience in the thesis domain are welcome.