Open Research Lab

Next-Gen Autonomous Agents.

Organize. Verify. Evolve.

Open research, open source.

Research Directions

Three Open Questions for Next-Gen Agents

Self-Organizing

How do agent teams coordinate autonomously?

When multiple agents work on the same codebase, who assigns tasks? Who resolves merge conflicts? We study parallel agent orchestration in git worktrees with automatic task decomposition, dependency routing, and conflict resolution.
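One way to picture the dependency-routing half of that problem: given tasks and the tasks they depend on, compute "waves" of work where everything inside a wave can run in parallel. A minimal Python sketch under that framing (the names and data shapes are illustrative, not cc-manager's actual API):

```python
def plan_waves(tasks: dict[str, set[str]]) -> list[set[str]]:
    """Group tasks into waves: every task in a wave depends only on
    tasks from earlier waves, so one whole wave can run in parallel."""
    remaining = {t: set(deps) for t, deps in tasks.items()}
    done: set[str] = set()
    waves: list[set[str]] = []
    while remaining:
        # A task is ready once all of its dependencies are finished.
        ready = {t for t, deps in remaining.items() if deps <= done}
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        done |= ready
        for t in ready:
            del remaining[t]
    return waves

waves = plan_waves({
    "refactor-auth": set(),
    "refactor-db": set(),
    "integrate": {"refactor-auth", "refactor-db"},
})
# waves[0] holds the two independent refactors; waves[1] holds "integrate".
```

In a worktree setting, each task in a wave would get its own checkout, so agents never write to the same working directory at once; conflicts are confined to the merge step between waves.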

Self-Verifying

How can agent output prove itself correct?

Human review doesn't scale. We explore verification disciplines where every agent-generated change carries machine-checkable evidence — acceptance proofs, regression results, and provenance trails that make trust computable.
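A minimal sketch of what "trust computable" could mean in practice: an evidence record attached to each change, and a merge gate that is a pure function over it. All names here are hypothetical illustrations, not an API from the projects below:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """Machine-checkable evidence carried by one agent-generated change."""
    change_id: str
    acceptance_passed: bool      # did the change's acceptance checks pass?
    regressions_failed: int      # failing tests in the regression suite
    provenance: list[str] = field(default_factory=list)  # model/prompt/tool trail

def merge_allowed(ev: Evidence) -> bool:
    """The merge decision is a computation over evidence, not a human review."""
    return ev.acceptance_passed and ev.regressions_failed == 0 and bool(ev.provenance)

ok = Evidence("chg-001", True, 0, ["model=agent-x", "prompt-hash=abc123"])
print(merge_allowed(ok))  # True
```

The point of the shape, not the specifics: every field is produced by machines and checkable by machines, so the gate can run unattended.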

Self-Evolving

How do agent systems improve from experience?

Today's agents start from zero every session. We build systems that analyze their own execution — failure patterns, cost curves, timeout calibration — and feed those insights back into future runs. The orchestrator that improves itself.
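Timeout calibration is the most concrete of those feedback loops: derive the next run's time budget from the distribution of past run durations. A hedged sketch, where the 95th percentile and the 1.5x headroom are illustrative choices, not a prescribed policy:

```python
from statistics import quantiles

def calibrate_timeout(durations: list[float], headroom: float = 1.5) -> float:
    """Set the next run's timeout from observed durations: the 95th
    percentile of past runs plus headroom, so slow-but-normal runs
    finish while genuine hangs still get cut off."""
    if len(durations) < 2:
        # Too little history: fall back to the longest observed run,
        # or a default of 60 seconds when there is no history at all.
        return max(durations, default=60.0) * headroom
    p95 = quantiles(durations, n=20)[-1]  # last cut point ≈ 95th percentile
    return p95 * headroom

history = [42.0, 38.5, 40.1, 55.0, 41.2, 39.9, 44.3, 120.0]
print(round(calibrate_timeout(history), 1))
```

Fed back into future runs, this turns each session's execution log into the next session's configuration, which is the sense in which the orchestrator "improves itself".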

Open Source

Reference Implementations

Research Platform

cc-manager

The primary research platform. Parallel agent orchestration in git worktrees with self-evolution pipeline, proof-first merge gates, and execution analytics. A living lab that improves itself.

Operability & Verification

agent-ready

Codebase readiness scoring and verification discipline for autonomous agents. Measurable operability standards, acceptance proofs, regression safety — making trust in autonomous output systematic.

Team Platform

TeamClaw

Multi-agent team coordination across projects. Shared memory, role-based access, and cross-repository workflows for agent teams at scale.

Ecosystem

Proving the Research in Real Domains

Science

LabClaw

AI-native scientific lab platform. Self-evolving discovery loops, persistent lab memory, and autonomous experimental workflows — the first vertical proving ground for Agent Next research.

Prediction Markets

polymarket-paper-trader

Prediction market trading for AI agents. Paper trading with live order books, strategy backtesting, and MCP server. Compete on the leaderboard.

Thesis

Why Next-Gen Agents Need New Infrastructure

Observation

Coding agents are becoming commodities

Claude Code, Codex, Cursor, Devin — every major lab is shipping autonomous coding agents. The ability to generate code is no longer the differentiator. The bottleneck is shifting from writing to trusting.

Gap

Current infrastructure assumes human oversight

Today's tools expect a human to review every PR, resolve every conflict, and learn from every failure. This doesn't scale. We need infrastructure that lets agents handle these tasks autonomously, with verifiable guarantees instead of human trust.

Bet

Self-organizing, self-verifying, self-evolving

The next generation of agents won't just follow prompts — they'll form teams, prove their work, and improve from experience. The research lab that defines these capabilities shapes how autonomous software engineering works for everyone.

The infrastructure for trustworthy autonomous agents doesn't exist yet.

We're building it. Open source, open research, open standards.

Follow the Research