Orchex — Brand Guidelines

Version: 3.0
Last Updated: 2026-02-23
Scope: Brand, messaging, tone, positioning
Audience: Documentation, marketing, contributors


1. Brand Essence

Brand Name

Orchex

Core Idea

Turn a plan into parallel AI agents that can't break each other's code.

The orchestration engine inside your AI coding assistant. Paste a plan doc and Orchex splits it into parallel streams with file-ownership enforcement, self-healing failures, and multi-LLM routing. Works with OpenAI, Gemini, Claude, DeepSeek, and Ollama. Your AI assistant is the driver. Orchex is the engine.


2. What Orchex Is (and Is Not)

Orchex *IS*

  • A parallel orchestration engine for AI-assisted development
  • Multi-LLM — works with OpenAI, Gemini, Claude, DeepSeek, and local models (Ollama)
  • Ownership enforcement — streams can only modify their declared files
  • Self-healing — 10 error categories with intelligent retry and auto-fix streams
  • An MCP-based tool that runs multiple AI agents simultaneously
  • BYOK (Bring Your Own Key) — you control your API costs with any provider
  • A BSL 1.1-licensed npm package with optional paid cloud features

Orchex *IS NOT*

  • An autonomous AI agent that makes decisions for you
  • A replacement for understanding your code
  • An enterprise compliance tool (we're for solo devs and small teams)
  • A black-box system — you see exactly what each stream does
  • A managed API key service — you always use your own key

Golden Rule: You define the streams. Orchex runs them in parallel.
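To make the ownership model concrete, here is a minimal sketch of a stream definition with an ownership check. The field names (`id`, `task`, `owns`) are illustrative assumptions for this document, not the actual Orchex schema:

```typescript
// Hypothetical sketch of a stream definition with file ownership.
// Field names (id, task, owns) are assumptions, not the real Orchex schema.
interface Stream {
  id: string;
  task: string;
  owns: string[]; // the only paths this stream may modify
}

// Ownership enforcement: a stream may only touch files it declares.
function canModify(stream: Stream, path: string): boolean {
  return stream.owns.includes(path);
}

const authStream: Stream = {
  id: "auth",
  task: "Migrate auth module to the new session API",
  owns: ["src/auth/session.ts", "src/auth/tokens.ts"],
};

console.log(canModify(authStream, "src/auth/session.ts")); // true
console.log(canModify(authStream, "src/payments/charge.ts")); // false
```

Because each stream declares its files up front, two streams that pass this check can never write to the same file, which is what allows them to run in parallel safely.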


3. Positioning

Primary Positioning

The orchestration engine inside your AI coding assistant. MCP-first. Parallel execution with ownership enforcement.

Key Differentiators

Orchex stands out through:

  • Multi-LLM support — works with OpenAI, Gemini, Claude, DeepSeek, and local models (Ollama)
  • Ownership enforcement — streams can only modify files in their owns array
  • 10 error categories — intelligent retry based on error type (timeout, test failure, lint error, etc.)
  • Self-healing — failed streams auto-generate fix streams with error context
  • Parallel execution — multiple streams running simultaneously
  • Dependency awareness — pieces assembled in the right order
  • Pattern learning — telemetry-based optimization over time

Competitive Positioning

Claude Code does tasks one at a time and hits rate limits after 20 minutes. Orchex makes Claude Code up to 10x faster by spreading work across parallel streams with fresh contexts that never degrade. Five LLM providers are supported, so you can route each orchestration to the best model for the job. Unlike Agent Teams, Orchex enforces file ownership boundaries. Unlike Devin, Orchex is transparent and auditable. Your competitors are already running parallel agents.


4. Target Audience

Primary: Multi-LLM Developers

  • Using OpenAI, Gemini, or local LLMs for development
  • Want parallel orchestration without provider lock-in
  • Need reliable, repeatable automation
  • Value file safety and ownership enforcement

Primary: Solo Developers

  • Building side projects, indie products, or freelance work
  • Using any AI coding tool (not just Claude)
  • Frustrated by sequential AI sessions
  • Want speed without losing control

Secondary: Small Teams (2-5 developers)

  • Shipping features faster than competitors
  • Need consistent AI-assisted workflows across different LLM providers
  • Value transparency and auditability
  • Budget-conscious (the BYOK model works with any provider)

Not Targeting (Yet)

  • Enterprises with complex regulatory compliance needs
  • Organizations needing managed API keys
  • Large teams (20+) with custom requirements

5. Tone of Voice

Overall Tone

  • Direct — Say what it does, not what it might do
  • Technical — Developer-to-developer communication
  • Honest — Acknowledge limitations, no overpromising
  • Helpful — Focus on solving real problems

Avoid

  • Hype-driven language ("revolutionary", "game-changing")
  • Buzzword stacking ("AI-powered intelligent automation")
  • Vague promises ("just works", "magic")
  • Enterprise jargon ("enterprise-grade", "mission-critical")

Examples

Good:

"Orchex runs 4 AI agents in parallel with any LLM. Each stream can only modify its declared files. When tests fail, it automatically generates a fix."

Good:

"Works with GPT-4, Gemini, or Claude. You choose the model; Orchex handles the orchestration."

Bad:

"Orchex revolutionizes your development workflow with cutting-edge AI orchestration technology."

Bad:

"The only orchestrator you'll ever need." (vague, overpromising)


6. Messaging Hierarchy

When space is limited, follow this order:

  1. Problem — "Your AI assistant does tasks one at a time. Large changes take hours."
  2. Solution — "Parallel AI agents that can't break each other's code"
  3. Magic moment — "orchex learn turns a plan doc into executable parallel streams"
  4. Differentiator — "Ownership enforcement + self-healing + multi-LLM"
  5. Urgency — "Your competitors are already running parallel agents"
  6. Model — "Feature-gated local, paid cloud. BYOK."

Never lead with technical details. Always lead with the problem.


7. Key Messages

One-Liner

"Turn a plan into parallel AI agents that can't break each other's code."

Elevator Pitch (30 seconds)

"Your AI coding assistant does tasks one file at a time. Orchex is the engine that makes it do all of them at once — safely. Paste a plan doc and Orchex splits it into parallel streams with file ownership. Each agent can only modify its declared files. Self-healing with 10 error categories. Works across 5 LLM providers. A 40-file migration that took 2 hours of serial prompting now takes 12 minutes. Your competitors are already running parallel agents. Are you?"

Value Props (ordered by importance)

  1. orchex learn — Markdown plan → executable parallel streams. No other tool does this.
  2. Parallel execution — 5-10 streams per wave. 5-10x faster than serial prompting.
  3. Ownership enforcement — Each stream locked to its owns files. Zero conflicts.
  4. Self-healing — 10 error categories with targeted fix streams.
  5. Multi-LLM — 5 providers. Spread load. Avoid rate limits. Optimize cost.

8. Visual Direction

Color Palette

  • Primary Background: Rich black (#0d1117) — GitHub dark theme
  • Accent: Electric cyan (#00D9FF) — vibrant, not muted
  • Secondary: Cool white (#F8FAFC) — for contrast
  • Success: Green (#238636) — GitHub green
  • Links: Blue (#58a6ff) — GitHub link blue
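One way to keep the palette consistent across site and docs tooling is a single design-token object. The sketch below encodes the colors above; the token names are assumptions for illustration, since no official Orchex token file exists:

```typescript
// Sketch: the brand palette above as design tokens.
// Token names are assumptions; Orchex ships no official token file.
const orchexColors = {
  background: "#0d1117", // rich black (GitHub dark theme)
  accent: "#00D9FF",     // electric cyan, vibrant not muted
  secondary: "#F8FAFC",  // cool white, for contrast
  success: "#238636",    // GitHub green
  link: "#58a6ff",       // GitHub link blue
} as const;

console.log(orchexColors.accent); // "#00D9FF"
```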

Visual Style

  • Minimal, clean, developer-focused
  • Dark theme default (matches IDE environments)
  • Monospace fonts for code
  • Clear hierarchy, no visual clutter
  • Inline SVG icons (no external dependencies)

Imagery

  • Flow diagrams showing parallel execution
  • Before/after comparisons (sequential vs parallel)
  • Terminal screenshots (real usage)
  • No stock photos, no abstract AI imagery

9. Forbidden Language

Never use these phrases:

  • "Autonomous AI" — implies lack of control. Instead say: "parallel AI execution"
  • "Magic" / "Just works" — overpromises. Instead say: "automated" / "handles it for you"
  • "Enterprise-grade" — not our target. Instead say: "built for developers"
  • "Revolutionary" — hype. Instead say: "faster" / "parallel"
  • "AI takes over" — scary and inaccurate. Instead say: "AI assists"
  • "Unlimited potential" — vague. Instead: name specific capabilities
  • "Cutting-edge" — meaningless. Instead: describe the actual feature

10. Beta Messaging

Beta Promise

"Turn a plan into parallel AI agents that can't break each other's code."

For beta users:

  • Feature-gated local (5 streams, 2 waves, 1 provider)
  • Cloud trial ($5 credit, 30 days, full features)
  • Multi-LLM support (OpenAI, Gemini, Claude, DeepSeek, Ollama)
  • Ownership enforcement — file safety guaranteed
  • Direct access to the developer
  • Shape what gets built next

What we ask:

  • Try it on a real project with your preferred LLM
  • Report bugs via email: support@orchex.dev
  • Share your use case (optional)

Beta Targets

  • Closed Beta: 50 users (invite-only, high-touch)
  • Open Beta: 100 users (public signup, self-serve)
  • GA: 500+ users (statistical patterns, product-market fit)

11. Content Guidelines

Documentation

  • Problem-first: Start with what the user is trying to do
  • Code examples: Real, runnable, copy-pasteable
  • No assumptions: Explain MCP, streams, waves for newcomers
  • Quick wins: Get users to first success in <5 minutes

Landing Page

  • Hero: Problem statement, not feature list
  • How it works: Visual 3-step flow
  • Pricing: Clear, no hidden costs, BYOK emphasized
  • CTA: "Get Started Free" (not "Sign Up")

Error Messages

  • What went wrong (specific)
  • Why it happened (if known)
  • What to do next (actionable)
  • No blame (never phrase it as "you did X wrong")
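The four rules above can be captured in a single message shape. This is a sketch; the names (`StreamError`, `formatError`) are illustrative, not Orchex APIs:

```typescript
// Sketch of an error-message shape following the guidelines above.
// Names (StreamError, formatError) are illustrative, not Orchex APIs.
interface StreamError {
  what: string;  // what went wrong (specific)
  why?: string;  // why it happened (if known)
  next: string;  // what to do next (actionable)
}

function formatError(e: StreamError): string {
  const lines = [`Error: ${e.what}`];
  if (e.why) lines.push(`Cause: ${e.why}`);
  lines.push(`Fix: ${e.next}`);
  return lines.join("\n");
}

console.log(
  formatError({
    what: "Stream 'auth' tried to modify src/payments/charge.ts",
    why: "The file is outside the stream's declared ownership",
    next: "Add the file to the stream's owned paths, or move the change to the owning stream",
  })
);
```

Note the structure carries no blame: it names the event, the cause, and the next step, never the user.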

12. Pre-Publish Checklist

Before publishing any content, ask:

  • Does this lead with the problem, not the feature?
  • Would a solo developer understand this without context?
  • Is the tone direct and helpful, not hype-driven?
  • Does this accurately represent what Orchex can do today?
  • Is BYOK/free local use clear?
  • Would this still be true in 6 months?

13. Brand Assets

Current

  • GitHub repo: github.com/wundam/orchex (private)
  • npm package: orchex (local MCP engine only, cloud code excluded)
  • Staging: via ORCHEX_STAGING_URL env var (URL removed from source)

Needed

  • Domain: orchex.dev (active)
  • Social: @orchex handles
  • Logo: Simple, developer-friendly
  • Favicon: Inline SVG (Phase 6.5)

End of document.

This document governs all external communication about Orchex. When in doubt, be direct, honest, and helpful.