
How Titan Works

A visual deep dive into TitanPL's request lifecycle, compilation process, and Gravity Runtime execution.

πŸ” The Big Picture

Titan transforms your JavaScript/TypeScript code into a high-performance native Rust server. Here's how it all works together.


πŸ—οΈ Build-Time: From Code to Binary

1. Action Discovery

Titan scans your app/ directory for action files:

app/
β”œβ”€β”€ actions/
β”‚   β”œβ”€β”€ user.js          β†’ /actions/user
β”‚   └── auth/
β”‚       └── login.js     β†’ /actions/auth/login
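
The discovery rule above (a file path under app/ becomes an HTTP route) can be sketched as a small helper. This is an illustrative sketch, not Titan's actual implementation; actionRoute is a hypothetical name:

```javascript
// Hypothetical sketch of the action-discovery mapping shown above:
// a file path relative to app/ becomes the action's HTTP route.
function actionRoute(filePath) {
  return '/' + filePath
    .replace(/\\/g, '/')        // normalize Windows separators
    .replace(/\.(js|ts)$/, ''); // strip the file extension
}

console.log(actionRoute('actions/user.js'));       // -> /actions/user
console.log(actionRoute('actions/auth/login.js')); // -> /actions/auth/login
```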

2. Bundling with esbuild

Each action is bundled independently:

  • Tree-shaking: Only used code is included
  • Dependency resolution: All imports are inlined
  • Type stripping: TypeScript types are removed
  • ES6 module output: Clean, modern JavaScript
// Before bundling (user.js)
import { validateEmail } from '../../utils/validation';
export const getUser = defineAction((req) => { ... });

// After bundling (optimized, dependencies inlined)
const validateEmail = (email) => { ... };
export const getUser = defineAction((req) => { ... });

3. Rust Code Generation

Titan generates Rust code that:

  • Registers each action's route
  • Maps HTTP paths to action functions
  • Creates the routing table
// Generated Rust (simplified)
pub fn register_actions(router: Router) -> Router {
    router
        .route("/actions/user", post(execute_user_action))
        .route("/actions/auth/login", post(execute_login_action))
}
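
The codegen step can be pictured as string templating over the discovered actions. A hedged sketch (generateRouterCode and the action-list shape are assumptions, not Titan's real code generator):

```javascript
// Illustrative sketch: emit Rust routing code like the snippet above
// from a list of discovered actions. Names here are hypothetical.
function generateRouterCode(actions) {
  const routes = actions
    .map(a => `        .route("${a.route}", post(${a.handler}))`)
    .join('\n');
  return [
    'pub fn register_actions(router: Router) -> Router {',
    '    router',
    routes,
    '}',
  ].join('\n');
}

console.log(generateRouterCode([
  { route: '/actions/user', handler: 'execute_user_action' },
  { route: '/actions/auth/login', handler: 'execute_login_action' },
]));
```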

4. Compilation

The Rust server is compiled into a native binary:

cargo build --release
   Compiling titan-server v26.12.1
   Finished release [optimized] target(s) in 12.3s

Result: a single native executable containing Gravity (the embedded V8 runtime) and all compiled action bundles.


⚑ Gravity Runtime: Request to Response

Phase 1: HTTP Server (Rust Axum)

When a request arrives at http://localhost:3000/actions/user:

  1. Axum receives the HTTP request (highly efficient Rust async I/O)
  2. Router matches the path to /actions/user
  3. Request is dispatched to an available worker

Phase 2: Worker Execution (Gravity Runtime)

Drift Execution Flow

Each worker:

  1. Receives the request (headers, body, query params)
  2. Enters a V8 isolate (managed by the multi-threaded Gravity runtime)
  3. Calls the action function
  4. Executes synchronously; if drift() is encountered, the worker is freed during the I/O wait
  5. Returns the response back to Axum
// Your action code
export const getUser = defineAction((req) => {
  // This runs inside a V8 isolate
  const userId = req.query.id;
  
  // Non-blocking call to database via Drift
  const user = drift(t.db.query(`SELECT * FROM users WHERE id = ?`, [userId]));
  
  return {
    status: 200,
    body: { user }
  };
});

From JavaScript's perspective:

  • Everything is synchronous-looking (via Drift)
  • No async/await, no callbacks, no promises
  • Linear execution, top to bottom

Key Architectural Principles:

  • Async Rust Core: The Axum server is fully asynchronous for maximum network I/O efficiency.
  • Synchronous Workers: Each worker executes JavaScript synchronously to ensure predictability.
  • Worker Guard: Native blocking calls (without drift()) are disallowed and will trigger a runtime error to preserve concurrency.
  • Worker Availability: Once an action completes or a drift() suspends it, the worker returns to the pool for the next request.

Phase 3: Response

The result flows back:

JavaScript Result β†’ Rust Handler β†’ Axum Router β†’ HTTP Response

🧡 Worker Pool Architecture

Multi-Threaded Architecture

Initialization (Server Start)

When the server starts:

  1. Worker pool is created (e.g., 8 workers on an 8-core machine)
  2. Each worker initializes:
    • Creates a V8 isolate
    • Loads and compiles all action bundles
    • Registers Gravity runtime APIs (t.fetch, t.db, etc.)
  3. Workers enter idle state, waiting for requests

Request Distribution

                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
Incoming Requests β†’ β”‚ Rust Dispatcher β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                             β”‚
              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β–Ό              β–Ό              β–Ό
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚Worker 1 β”‚   β”‚Worker 2 β”‚   β”‚Worker N β”‚
         β”‚ [Busy]  β”‚   β”‚ [Idle]  β”‚   β”‚ [Idle]  β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Dispatcher Logic:

  • Maintains a queue of available workers
  • Assigns requests to idle workers
  • If all workers are busy, request waits in queue

Zero Lock Contention:

  • Workers never share state
  • No global locks
  • No mutex contention

πŸ”„ Development Mode

In development (titan dev), Titan adds:

Hot Reload

  1. File watcher monitors app/ directory
  2. On change detected:
    • Re-bundle affected actions
    • Reload action code in all workers
    • No server restart needed

Error Reporting

Titan captures:

  • Syntax errors (esbuild compilation)
  • Runtime errors (JavaScript exceptions)
  • Stack traces (with source maps)

These are displayed in a Next.js-style error overlay.


🎯 Cold Start Characteristics

TitanPL Cold Start: ~3-5ms

Why so fast?

  1. No disk I/O: Actions are pre-loaded in memory
  2. No module resolution: Everything is bundled
  3. Pre-compiled bytecode: V8 compiles on startup, not per-request
  4. Gravity embedded: No separate process spawning

Compare to:

  • AWS Lambda (Node.js): 100-500ms cold start
  • Traditional Node.js server: 50-200ms per worker spawn

πŸ“Š Data Flow Diagram

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Your Code   β”‚
β”‚ (app/)      β”‚
β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  esbuild    β”‚ ← Bundle each action
β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Rust Codegenβ”‚ ← Generate routing table
β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚cargo build  β”‚ ← Compile to native binary
β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Native Binary (executable)    β”‚
β”‚                                  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚  Axum HTTP Server (async)  β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β”‚              β”‚                   β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚   Worker Pool (V8s)       β”‚  β”‚
β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”       β”‚  β”‚
β”‚  β”‚  β”‚Workerβ”‚  β”‚Workerβ”‚  ...  β”‚  β”‚
β”‚  β”‚  β”‚  1   β”‚  β”‚  2   β”‚       β”‚  β”‚
β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”˜       β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
   HTTP Responses

🧠 Memory Model

Per-Worker Memory

Each worker owns:

  • V8 Heap: ~40-80MB (configurable)
  • Compiled Actions: Pre-compiled bytecode
  • Gravity APIs: Native bindings to Rust

Workers are completely isolated:

  • No shared heap
  • No shared objects
  • Garbage collection is per-worker

Total Memory Usage

For an 8-worker server:

Base Rust Server:     ~10-20 MB
Worker 1-8 (V8):      ~320-640 MB (8 Γ— 40-80 MB)
Total:                ~330-660 MB
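
The totals above follow from a simple linear estimate. A hypothetical helper (the 40-80 MB per-worker and 10-20 MB base figures are the ranges quoted above):

```javascript
// Back-of-envelope memory estimate from the figures above.
// Ranges are [min, max] in MB; the defaults mirror the quoted numbers.
function estimateMemoryMB(workers, perWorker = [40, 80], base = [10, 20]) {
  return [
    base[0] + workers * perWorker[0],
    base[1] + workers * perWorker[1],
  ];
}

console.log(estimateMemoryMB(8)); // total range: 330 to 660 MB
```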

Trade-off:

  • More memory usage than single-threaded Node.js
  • But true parallelism and zero lock contention

πŸš€ Why This Architecture?

Design Goals

  1. Performance: Native Rust + V8 = maximum speed
  2. Simplicity: Synchronous code = easier debugging
  3. Scalability: Multi-threading = linear scaling
  4. Isolation: Per-worker isolates = crash safety
  5. Developer Experience: Hot reload + great errors

Inspired By

  • Chrome's process-per-tab: Isolation for stability
  • Deno's multi-isolate approach: Secure, sandboxed execution
  • Actix/Hyper (Rust): High-performance HTTP servers
  • V8 Engine: Proven JavaScript performance
