Tutorials & Guides

Step-by-step projects you can build today. Real code, real tools, real results.

Build a REST API with Rust and Axum

Rust's Axum framework makes building fast, type-safe APIs surprisingly ergonomic. We'll build a complete CRUD API with PostgreSQL, proper error handling, and structured logging.

Step 1: Project Setup

Create a new Rust project and add dependencies:

cargo new rust-api && cd rust-api

# Add to Cargo.toml [dependencies]
# axum = "0.8"
# tokio = { version = "1", features = ["full"] }
# serde = { version = "1", features = ["derive"] }
# serde_json = "1"
# sqlx = { version = "0.8", features = ["runtime-tokio", "postgres"] }
# tower-http = { version = "0.6", features = ["cors", "trace"] }
# tracing = "0.1"
# tracing-subscriber = "0.3"
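
Note that the handlers in the next step assume a tasks table exists, and sqlx's query_as! macro verifies queries against a live database at compile time (via the DATABASE_URL environment variable), so create the table first. A minimal schema matching the fields used below:

```sql
-- Matches the Task struct: id, title, completed
CREATE TABLE tasks (
    id        SERIAL PRIMARY KEY,
    title     TEXT NOT NULL,
    completed BOOLEAN NOT NULL DEFAULT false
);
```
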

Step 2: Define the Router and Handlers

use axum::{
    extract::{Path, State},
    http::StatusCode,
    routing::{get, post},
    Json, Router,
};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;

#[derive(Serialize, Deserialize)]
struct Task {
    id: i32,
    title: String,
    completed: bool,
}

#[derive(Deserialize)]
struct CreateTask {
    title: String,
}

async fn list_tasks(
    State(pool): State<PgPool>,
) -> Result<Json<Vec<Task>>, StatusCode> {
    let tasks = sqlx::query_as!(Task, "SELECT id, title, completed FROM tasks")
        .fetch_all(&pool)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(tasks))
}

async fn create_task(
    State(pool): State<PgPool>,
    Json(input): Json<CreateTask>,
) -> Result<(StatusCode, Json<Task>), StatusCode> {
    let task = sqlx::query_as!(
        Task,
        "INSERT INTO tasks (title) VALUES ($1) RETURNING id, title, completed",
        input.title
    )
    .fetch_one(&pool)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok((StatusCode::CREATED, Json(task)))
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();
    let db_url = std::env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgres://localhost/rust_api".to_string());
    let pool = PgPool::connect(&db_url).await.unwrap();

    let app = Router::new()
        .route("/tasks", get(list_tasks).post(create_task))
        .with_state(pool);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

Step 3: Run and Test

# Start the server
cargo run

# Create a task
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn Rust"}'

# List tasks
curl http://localhost:3000/tasks

Why Axum? It's built on top of Tokio and Tower, giving you access to the entire Tower middleware ecosystem. Type-safe extractors catch errors at compile time. And it's fast - Axum consistently ranks near the top of the TechEmpower web framework benchmarks.


Docker Multi-Stage Builds Done Right

Most Docker images are 5-10x larger than they need to be. Multi-stage builds let you use full build tools during compilation but ship only the runtime. Here's how to do it properly for Node.js, Go, and Rust.

Node.js - From 1.2GB to 150MB

# Stage 1: Install dependencies and build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && cp -R node_modules prod_modules
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production image
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

# Copy only what we need
COPY --from=builder /app/prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./

# Non-root user
RUN addgroup -g 1001 -S app && adduser -S app -u 1001
USER app

EXPOSE 3000
CMD ["node", "dist/index.js"]

Go - From 800MB to 12MB

# Build stage
FROM golang:1.23-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o server .

# Final stage - distroless for minimal attack surface
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]

Key Principles

  • Use Alpine or distroless base images - Alpine is ~5MB vs ~120MB for Debian
  • Copy dependency files first - Docker caches layers, so unchanged dependencies won't re-download
  • Run as non-root - Always. No exceptions in production.
  • Use .dockerignore - Exclude node_modules, .git, *.md, test files
  • Pin versions - Use node:22.4-alpine not node:latest
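
A starting-point .dockerignore along those lines (adjust the patterns to your project):

```
# .dockerignore
node_modules
.git
dist
coverage
*.md
**/*.test.*
.env
Dockerfile
docker-compose.yml
```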

Modern TypeScript Project Setup (2026)

Setting up a TypeScript project with the right tooling saves hours of debugging later. Here's a production-ready setup with Biome (replacing ESLint + Prettier), Vitest for testing, and strict TypeScript config.

Step 1: Initialize and Configure

mkdir my-project && cd my-project
npm init -y
npm install -D typescript @types/node vitest @biomejs/biome

# Initialize TypeScript
npx tsc --init

Step 2: Strict tsconfig.json

{
  "compilerOptions": {
    "target": "ES2024",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "exactOptionalPropertyTypes": true,
    "outDir": "dist",
    "rootDir": "src",
    "declaration": true,
    "sourceMap": true,
    "skipLibCheck": true
  },
  "include": ["src"],
  "exclude": ["node_modules", "dist"]
}

noUncheckedIndexedAccess is the most underused strict flag - it makes array/object access return T | undefined instead of T, catching real bugs.
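
As a small illustration (a hypothetical helper, not part of the config above), here's the kind of bug the flag surfaces:

```typescript
// With noUncheckedIndexedAccess enabled, words[0] has type string | undefined,
// so the compiler forces a check before calling a string method on it.
function firstUpper(words: string[]): string | null {
  const first = words[0]; // string | undefined, not string
  if (first === undefined) return null;
  return first.toUpperCase(); // narrowed to string here
}

console.log(firstUpper(["hello"])); // "HELLO"
console.log(firstUpper([]));        // null, instead of a runtime TypeError
```

Without the flag, `words[0]` is typed as plain `string` and the empty-array case compiles silently.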

Step 3: Biome for Linting + Formatting

npx @biomejs/biome init

Biome replaces ESLint + Prettier with a single Rust-based tool that's dramatically faster. It handles formatting, linting, and import sorting in one pass.

// biome.json
{
  "formatter": { "indentStyle": "space", "indentWidth": 2 },
  "linter": {
    "rules": {
      "complexity": { "noForEach": "warn" },
      "suspicious": { "noExplicitAny": "error" }
    }
  }
}

Step 4: Vitest for Testing

// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    coverage: { provider: 'v8', reporter: ['text', 'html'] },
  },
});

// src/math.test.ts
import { describe, it, expect } from 'vitest';
import { add } from './math';

describe('add', () => {
  it('adds two numbers', () => {
    expect(add(1, 2)).toBe(3);
  });
});
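
The test file imports `add` from `./math`, which isn't shown above; a minimal matching implementation would be:

```typescript
// src/math.ts - the module the example test imports from
export function add(a: number, b: number): number {
  return a + b;
}
```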

Vitest is Jest-compatible but faster (native ESM, Vite-powered). It's the default testing choice for most new TypeScript projects.


PostgreSQL Performance Essentials

PostgreSQL is the most capable open-source database, but it needs tuning to perform well. Here are the optimizations that make the biggest difference.

Indexing Strategy

-- B-tree index for equality and range queries (most common)
CREATE INDEX idx_users_email ON users (email);

-- Partial index - only index what you query
CREATE INDEX idx_orders_pending ON orders (created_at)
  WHERE status = 'pending';

-- Covering index - includes all columns needed by the query
CREATE INDEX idx_products_search ON products (category_id, price)
  INCLUDE (name, image_url);

-- GIN index for full-text search and JSONB
CREATE INDEX idx_posts_search ON posts USING GIN (to_tsvector('english', title || ' ' || body));
CREATE INDEX idx_events_data ON events USING GIN (metadata jsonb_path_ops);
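
A GIN index is only used when the query's expression matches the indexed one; for the two indexes above, matching queries look like this (search terms and JSON values are placeholders):

```sql
-- Full-text search: the expression must match the indexed to_tsvector(...) exactly
SELECT id, title FROM posts
WHERE to_tsvector('english', title || ' ' || body)
      @@ plainto_tsquery('english', 'performance tuning');

-- JSONB containment: jsonb_path_ops indexes support the @> operator
SELECT * FROM events
WHERE metadata @> '{"type": "signup"}';
```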

Query Analysis

-- Always use EXPLAIN ANALYZE to understand query plans
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = 42 AND status = 'completed'
ORDER BY created_at DESC
LIMIT 20;

-- Look for:
-- Seq Scan → needs an index
-- Nested Loop with high row counts → consider JOIN strategy
-- Sort → consider adding ORDER BY columns to index
-- Buffers: shared read → data not in cache, may need more shared_buffers

Connection Pooling

PostgreSQL creates a new process per connection (typically 5-10MB each). For web applications, always use a connection pooler:

  • PgBouncer: Lightweight, battle-tested. Use transaction pooling mode for web apps.
  • Supavisor: Supabase's Elixir-based pooler with built-in tenant isolation.
  • Application-level: Most ORMs and drivers (Prisma, SQLAlchemy, SQLx) have built-in pools. Set the pool size to roughly 2-3x your CPU cores, not higher.
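
For example, a minimal PgBouncer setup in transaction mode (database name, addresses, and paths are placeholders):

```ini
; pgbouncer.ini
[databases]
; clients connect to "myapp" on port 6432; PgBouncer proxies to Postgres
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction      ; return server connections to the pool between transactions
max_client_conn = 1000       ; many cheap client connections...
default_pool_size = 20       ; ...multiplexed onto a few server connections
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```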

Essential Configuration

# postgresql.conf - for a server with 16GB RAM, 4 cores
shared_buffers = 4GB              # 25% of RAM
effective_cache_size = 12GB       # 75% of RAM
work_mem = 64MB                   # Per-operation sort/hash memory
maintenance_work_mem = 1GB        # For VACUUM, CREATE INDEX
random_page_cost = 1.1            # For SSD storage (default 4.0 is for HDD)
effective_io_concurrency = 200    # For SSD
wal_buffers = 64MB
max_wal_size = 4GB

Quick win: Run PGTune with your server specs to get a baseline configuration. Then adjust based on your workload.

More Tutorials Coming

We're adding new tutorials every week. Topics in the pipeline: Kubernetes deployment patterns, building CLI tools with Go, authentication with Lucia, and real-time apps with WebSockets.