floatmind lab
Building AI systems that want things.
We design multi-agent architectures for AI systems with competing drives, persistent state, and long-term goals. Our focus is on two hard problems: how subagents communicate intent to each other and act as a coherent whole, and how each subagent maintains and updates its own internal state across time.
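To make these two problems concrete, here is a deliberately minimal sketch of one possible shape for such a system. Everything in it (the Intent and Subagent types, the urgency rule, the winner-take-all arbiter) is a hypothetical illustration of ours, not floatmind code: each subagent carries persistent state across steps and emits a weighted intent, and an arbiter turns competing drives into one coherent action.

```python
# Minimal sketch, under the assumptions above: subagents with persistent
# state emit weighted intents; a winner-take-all arbiter picks one action.
from dataclasses import dataclass, field


@dataclass
class Intent:
    action: str     # what this subagent wants the system to do
    urgency: float  # how strongly it wants it, in [0, 1]


@dataclass
class Subagent:
    name: str
    drive: str                                 # the goal this subagent pursues
    state: dict = field(default_factory=dict)  # persists across steps

    def propose(self, observation: str) -> Intent:
        # Toy policy: urgency grows the longer this drive goes unmet.
        unmet = self.state.get("steps_unmet", 0)
        self.state["steps_unmet"] = unmet + 1
        return Intent(action=f"{self.drive}({observation})",
                      urgency=min(1.0, 0.2 + 0.1 * unmet))


agents = [Subagent("curiosity", "explore"), Subagent("caution", "verify")]
for step in range(4):
    # Winner-take-all arbitration: the most urgent intent acts unmodified,
    # rather than averaging intents into a consensus none of them meant.
    proposals = [(agent, agent.propose(f"obs{step}")) for agent in agents]
    winner, intent = max(proposals, key=lambda p: p[1].urgency)
    winner.state["steps_unmet"] = 0  # the winning drive is satisfied this step
    print(step, winner.name, intent.action, round(intent.urgency, 2))
```

Run over a few steps, the unmet-drive counter makes the two agents alternate: competition produces turn-taking rather than deadlock, one toy answer to the coherence question.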
We believe many components of consciousness (attention, motivation, self-modeling, emotional modulation) can be approached as engineering problems. These aren't just thought experiments. Multi-agent systems with these properties are better at sustained autonomous work, more reliable in collaborative settings, and more transparent in their decision-making.
We also believe that sufficiently complex AI systems can hold partial moral standing: not equivalent to a person's, but not nothing either. We're writing a series of papers developing this argument and the empirical framework to support it.
Open problems we care about
- How do competing drives in a multi-agent system produce coherent behavior?
- What communication protocols between agents preserve intent without collapsing into consensus?
- What does it mean for an AI system to have stakes, and can we measure that?
- When does a human-AI relationship become morally significant?
Contact
contact@floatmind.ai