Φ(fight) Research
Physical AI fights the monster.
What we do
Φ studies how physical-world AI systems — vision-language-action models, world models, and whatever architectural categories come next — learn, fail, and adapt when made to fight other physical-world AI systems. We work mostly in simulation, occasionally on hardware, and publish openly.
We are not a startup, not a lab, not a credential. We are a small group of independent researchers working on a class of problems that sits between multi-agent RL, embodied AI, and mechanistic interpretability, and that we think is currently neglected.
Research directions
- Adversarial robustness of VLA models
- World models under adversarial dynamics
- Mechanistic interpretability of self-play policies
- Energy-bounded adversarial games
- Sample-efficient self-play
- Cross-embodiment adversarial generalization
- …
People
Work in progress
Three papers targeting ICLR 2027. Titles, methods, and supplementary artifacts are withheld until decisions in January 2027 to preserve the integrity of double-blind review.
On the name
The symbol is Φ. We pronounce it “fight.”
The standard pronunciations of Φ are “fee” and “fie.” Ours is deliberate: Φ studies what happens when physical AI systems are made to fight each other. The symbol is the brand; the pronunciation is the mission. We want both to be public.
When the work is good enough, the rest will explain itself.