We’re building Athyr, a runtime that treats AI agents as distributed microservices rather than in-process objects. It’s early days, and we’re looking for feedback.
The Problem We’re Solving
Building multi-agent systems is hard. You need to solve:
- Service discovery - how do agents find each other?
- Message routing - how do agents communicate?
- State management - where does agent memory live?
- LLM provider management - how do you handle multiple providers, rate limits, and failover?
Most teams end up building custom infrastructure that’s tightly coupled to their use case. We think there’s a better way.
Why We’re Not Just Using LangGraph or CrewAI
Current agent frameworks are designed for single-machine, single-process Python applications. They work well for demos and prototypes, but they assume:
- Shared memory between agents
- Instant function calls
- A single failure domain
- Everyone writes Python
When you need agents across machines, polyglot teams, or fault isolation, you end up fighting these frameworks rather than building on them.
Athyr takes a different approach. Distribution isn’t an afterthought—it’s the foundation. Agents are network services from day one, which means the patterns that work on your laptop scale to distributed deployments without architectural rewrites.
What We’ve Built
Athyr is a single-binary runtime. Agents connect to it and focus on business logic while the platform handles infrastructure.
How it works:
- Agents connect as independent services (gRPC or HTTP)
- The platform manages memory, messaging, LLM routing, and state
- Agents contain only business logic—no embedded infrastructure code
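To make that split concrete, here is a minimal sketch of an agent as a plain HTTP service. Everything here is illustrative: the `/task` route and payload shape are hypothetical, not Athyr's actual wire protocol, and the code uses only the Python standard library rather than the SDK. The point is that the agent owns one pure function of business logic and nothing else.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_task(payload: dict) -> dict:
    """Pure business logic: the only code the agent owns."""
    text = payload.get("text", "")
    return {"summary": text[:80], "length": len(text)}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Hypothetical route; a real deployment would follow the SDK's protocol.
        if self.path != "/task":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        result = handle_task(json.loads(body))
        out = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

# To run as a standalone service:
# HTTPServer(("localhost", 8081), AgentHandler).serve_forever()
```

Memory, retries, and LLM routing never appear in the agent code; in this model they live on the platform side of the connection.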
What’s working today:
- Single Binary - No dependencies, no external services required.
- Language Agnostic - SDKs for Go and Python. Any language with gRPC/HTTP support can connect.
- LLM Gateway - Multi-provider routing with circuit breakers, automatic retry, and failover.
- Platform-Managed Memory - Conversation history with rolling windows and automatic summarization.
- Agent Registry - Capability-based discovery and pub/sub messaging between agents.
- Orchestration Patterns - Pipeline, fan-out, handoff routing, and group chat patterns in the Go SDK.
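The LLM Gateway bullet bundles three behaviors. As a rough mental model only (a sketch of the standard circuit-breaker pattern, not Athyr's implementation), per-provider breakers plus ordered failover look something like this:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; half-opens after `cooldown` seconds."""
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def available(self):
        if self.opened_at is None:
            return True
        # Half-open: allow a probe request once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown

    def record(self, ok):
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def complete(providers, prompt):
    """Try providers in priority order, skipping any with an open breaker."""
    for name, call, breaker in providers:
        if not breaker.available():
            continue
        try:
            result = call(prompt)
            breaker.record(ok=True)
            return name, result
        except Exception:
            breaker.record(ok=False)
    raise RuntimeError("all providers unavailable")
```

A failed provider trips its breaker and traffic flows to the next one in the list; after the cooldown, a single probe request decides whether the breaker closes again.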
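Platform-managed memory with a rolling window and summarization can be pictured as follows. This is an illustrative sketch under assumed semantics, not the actual implementation; the `summarize` callback stands in for an LLM call and defaults to naive concatenation here so the example is self-contained.

```python
class RollingMemory:
    """Keeps the last `window` turns verbatim; older turns fold into a summary."""
    def __init__(self, window=4, summarize=None):
        self.window = window
        # In a real system this would call an LLM; here it just concatenates.
        self.summarize = summarize or (
            lambda old, turns: (old + " " + " ".join(t for _, t in turns)).strip()
        )
        self.summary = ""
        self.turns = []  # list of (role, text)

    def append(self, role, text):
        self.turns.append((role, text))
        if len(self.turns) > self.window:
            overflow = self.turns[: -self.window]
            self.turns = self.turns[-self.window:]
            self.summary = self.summarize(self.summary, overflow)

    def context(self):
        """What an agent would receive: the summary plus the recent window."""
        head = [("system", "Summary: " + self.summary)] if self.summary else []
        return head + self.turns
```

Because this lives on the platform, every agent gets bounded context for free instead of each codebase reimplementing window management.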
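The orchestration patterns ship in the Go SDK; as a language-neutral illustration of two of them (Python here for brevity, with agents reduced to plain callables, which is not how the SDK models them), pipeline and fan-out are:

```python
import concurrent.futures

def pipeline(stages, payload):
    """Pipeline: each agent's output becomes the next agent's input."""
    for stage in stages:
        payload = stage(payload)
    return payload

def fan_out(agents, payload, combine):
    """Fan-out: send the same payload to every agent in parallel, then combine."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(payload), agents))
    return combine(results)
```

Handoff routing and group chat follow the same shape: the topology lives in the orchestration layer, while each agent stays a single-responsibility service.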
Where We Think This Shines
The core idea is that your agents become simpler. No embedded LLM clients, no retry code, no memory management—that’s the platform’s job.
We’re particularly interested in feedback from teams working on:
- Distributed deployments - Agents across machines, regions, or at the edge
- Polyglot teams - Mixed Go, Python, and TypeScript codebases
- On-prem or air-gapped environments - Where cloud dependencies are a problem
We’d Love Your Feedback
Athyr is early. The core is working, but we’re still figuring out what matters most. If you’re building multi-agent systems and hitting infrastructure pain, we’d love to hear from you:
- What problems are you solving with agents?
- What’s painful about your current setup?
- What would make Athyr useful to you?
Open an issue on the Go SDK or Python SDK, or just try it out and tell us what breaks.
Try It Out
```shell
# Install Athyr
curl -sSL https://athyr.tech/install.sh | sh

# Start the server
athyr serve
```
Check out the documentation to learn more, or dive into the Go SDK or Python SDK to build your first agent.