affaan-m/database-migrations


Database migration best practices for schema changes, data migrations, rollbacks, and zero-downtime deployments across PostgreSQL, MySQL, and common ORMs (Prisma, Drizzle, Kysely, Django, TypeORM, golang-migrate).

global
v2.1 · Saved Apr 20, 2026

Database Migration Patterns

Safe, reversible database schema changes for production systems.

When to Activate

  • Creating or altering database tables
  • Adding/removing columns or indexes
  • Running data migrations (backfill, transform)
  • Planning zero-downtime schema changes
  • Setting up migration tooling for a new project

Core Principles

  1. Every change is a migration — never alter production databases manually
  2. Migrations are forward-only in production — rollbacks use new forward migrations
  3. Schema and data migrations are separate — never mix DDL and DML in one migration
  4. Test migrations against production-sized data — a migration that works on 100 rows may lock on 10M
  5. Migrations are immutable once deployed — never edit a migration that has run in production

Migration Safety Checklist

Before applying any migration:

  • Migration has both UP and DOWN (or is explicitly marked irreversible)
  • No full table locks on large tables (use concurrent operations)
  • New columns have defaults or are nullable (never add NOT NULL without default)
  • Indexes created concurrently (not inline with CREATE TABLE for existing tables)
  • Data backfill is a separate migration from schema change
  • Tested against a copy of production data
  • Rollback plan documented
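Parts of this checklist can be automated as a CI pre-flight step. A minimal sketch in TypeScript; `checkMigrationSql` is a hypothetical helper, and the regexes are deliberately naive (they will miss and misflag edge cases, e.g. `ON UPDATE` clauses):

```typescript
// check-migration.ts — naive regex-based pre-flight check for risky DDL.
// Hypothetical helper, not part of any tool mentioned in this document.

export function checkMigrationSql(sql: string): string[] {
  const warnings: string[] = [];
  const normalized = sql.toUpperCase();

  // NOT NULL added without a DEFAULT rewrites the table on older Postgres
  if (/ADD COLUMN\s+\S+\s+\S+[^;]*NOT NULL(?![^;]*DEFAULT)/i.test(sql)) {
    warnings.push("ADD COLUMN ... NOT NULL without DEFAULT");
  }

  // A plain CREATE INDEX blocks writes on large tables
  if (normalized.includes("CREATE INDEX") && !normalized.includes("CONCURRENTLY")) {
    warnings.push("CREATE INDEX without CONCURRENTLY");
  }

  // Mixing DDL and DML violates the schema/data separation principle
  const hasDdl = /\b(ALTER|CREATE|DROP)\b/.test(normalized);
  const hasDml = /\b(UPDATE|INSERT|DELETE)\b/.test(normalized);
  if (hasDdl && hasDml) {
    warnings.push("schema (DDL) and data (DML) mixed in one migration");
  }

  return warnings;
}
```

A check like this catches the obvious cases; it does not replace testing against production-sized data.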

PostgreSQL Patterns

Adding a Column Safely

-- GOOD: Nullable column, no lock
ALTER TABLE users ADD COLUMN avatar_url TEXT;

-- GOOD: Column with default (Postgres 11+ is instant, no rewrite)
ALTER TABLE users ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT true;

-- BAD: NOT NULL without default on existing table (requires full rewrite)
ALTER TABLE users ADD COLUMN role TEXT NOT NULL;
-- This locks the table and rewrites every row
-- Safer: add the column nullable, backfill in batches, then
-- ALTER TABLE users ALTER COLUMN role SET NOT NULL;

Adding an Index Without Downtime

-- BAD: Blocks writes on large tables
CREATE INDEX idx_users_email ON users (email);

-- GOOD: Non-blocking, allows concurrent writes
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- Note: CONCURRENTLY cannot run inside a transaction block
-- Most migration tools need special handling for this

Renaming a Column (Zero-Downtime)

Never rename directly in production. Use the expand-contract pattern:

-- Step 1: Add new column (migration 001)
ALTER TABLE users ADD COLUMN display_name TEXT;

-- Step 2: Backfill data (migration 002, data migration)
UPDATE users SET display_name = username WHERE display_name IS NULL;

-- Step 3: Update application code to read/write both columns
-- Deploy application changes

-- Step 4: Stop writing to old column, drop it (migration 003)
ALTER TABLE users DROP COLUMN username;

Removing a Column Safely

-- Step 1: Remove all application references to the column
-- Step 2: Deploy application without the column reference
-- Step 3: Drop column in next migration
ALTER TABLE orders DROP COLUMN legacy_status;

-- For Django: use SeparateDatabaseAndState to remove from model
-- without generating DROP COLUMN (then drop in next migration)

Large Data Migrations

-- BAD: Updates all rows in one transaction (locks table)
UPDATE users SET normalized_email = LOWER(email);

-- GOOD: Batch update with progress
-- (COMMIT inside DO requires Postgres 11+ and the block must be invoked
--  outside an explicit transaction block)
DO $$
DECLARE
  batch_size INT := 10000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE users
    SET normalized_email = LOWER(email)
    WHERE id IN (
      SELECT id FROM users
      WHERE normalized_email IS NULL
      LIMIT batch_size
      FOR UPDATE SKIP LOCKED
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    RAISE NOTICE 'Updated % rows', rows_updated;
    EXIT WHEN rows_updated = 0;
    COMMIT;
  END LOOP;
END $$;
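The same control flow can be driven from application code instead of PL/pgSQL. A sketch of the batching loop, generic over a `runBatch` callback (the name and signature are illustrative) that performs one bounded UPDATE and returns the affected row count:

```typescript
// Generic batched-backfill driver: keeps calling runBatch until a batch
// touches zero rows, returning the total number of rows updated.
export async function backfillInBatches(
  runBatch: (batchSize: number) => Promise<number>,
  batchSize = 10_000,
  pauseMs = 50, // breathing room between batches to reduce lock pressure
): Promise<number> {
  let total = 0;
  for (;;) {
    const updated = await runBatch(batchSize);
    if (updated === 0) break;
    total += updated;
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
  return total;
}
```

With Kysely, for example, `runBatch` could execute the same `UPDATE ... WHERE id IN (SELECT ... LIMIT n FOR UPDATE SKIP LOCKED)` statement shown above as raw SQL.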

Prisma (TypeScript/Node.js)

Workflow

# Create migration from schema changes
npx prisma migrate dev --name add_user_avatar

# Apply pending migrations in production
npx prisma migrate deploy

# Reset database (dev only)
npx prisma migrate reset

# Generate client after schema changes
npx prisma generate

Schema Example

model User {
  id        String   @id @default(cuid())
  email     String   @unique
  name      String?
  avatarUrl String?  @map("avatar_url")
  createdAt DateTime @default(now()) @map("created_at")
  updatedAt DateTime @updatedAt @map("updated_at")
  orders    Order[]

  @@map("users")
  @@index([email])
}

Custom SQL Migration

For operations Prisma cannot express (concurrent indexes, data backfills):

# Create empty migration, then edit the SQL manually
npx prisma migrate dev --create-only --name add_email_index

-- migrations/20240115_add_email_index/migration.sql
-- Prisma cannot generate CONCURRENTLY, so we write it manually
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users (email);

Drizzle (TypeScript/Node.js)

Workflow

# Generate migration from schema changes
npx drizzle-kit generate

# Apply migrations
npx drizzle-kit migrate

# Push schema directly (dev only, no migration file)
npx drizzle-kit push

Schema Example

import { pgTable, text, timestamp, uuid, boolean } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  name: text("name"),
  isActive: boolean("is_active").notNull().default(true),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});

Kysely (TypeScript/Node.js)

Workflow (kysely-ctl)

# Initialize config file (kysely.config.ts)
kysely init

# Create a new migration file
kysely migrate make add_user_avatar

# Apply all pending migrations
kysely migrate latest

# Rollback last migration
kysely migrate down

# Show migration status
kysely migrate list

Migration File

// migrations/2024_01_15_001_create_user_profile.ts
import { type Kysely, sql } from 'kysely'

// IMPORTANT: Always use Kysely<any>, not your typed DB interface.
// Migrations are frozen in time and must not depend on current schema types.
export async function up(db: Kysely<any>): Promise<void> {
  await db.schema
    .createTable('user_profile')
    .addColumn('id', 'serial', (col) => col.primaryKey())
    .addColumn('email', 'varchar(255)', (col) => col.notNull().unique())
    .addColumn('avatar_url', 'text')
    .addColumn('created_at', 'timestamp', (col) =>
      col.defaultTo(sql`now()`).notNull()
    )
    .execute()

  await db.schema
    .createIndex('idx_user_profile_avatar')
    .on('user_profile')
    .column('avatar_url')
    .execute()
}

export async function down(db: Kysely<any>): Promise<void> {
  await db.schema.dropTable('user_profile').execute()
}

Programmatic Migrator

import { Migrator, FileMigrationProvider } from 'kysely'
import { promises as fs } from 'fs'
import * as path from 'path'
// ESM only — CJS can use __dirname directly
import { fileURLToPath } from 'url'
const migrationFolder = path.join(
  path.dirname(fileURLToPath(import.meta.url)),
  './migrations',
)

// `db` is your Kysely<any> database instance
const migrator = new Migrator({
  db,
  provider: new FileMigrationProvider({
    fs,
    path,
    migrationFolder,
  }),
  // WARNING: Only enable in development. Disables timestamp-ordering
  // validation, which can cause schema drift between environments.
  // allowUnorderedMigrations: true,
})

const { error, results } = await migrator.migrateToLatest()

results?.forEach((it) => {
  if (it.status === 'Success') {
    console.log(`migration "${it.migrationName}" executed successfully`)
  } else if (it.status === 'Error') {
    console.error(`failed to execute migration "${it.migrationName}"`)
  }
})

if (error) {
  console.error('migration failed', error)
  process.exit(1)
}

Django (Python)

Workflow

# Generate migration from model changes
python manage.py makemigrations

# Apply migrations
python manage.py migrate

# Show migration status
python manage.py showmigrations

# Generate empty migration for custom SQL
python manage.py makemigrations --empty app_name -n description

Data Migration

from django.db import migrations

def backfill_display_names(apps, schema_editor):
    User = apps.get_model("accounts", "User")
    batch_size = 5000
    users = User.objects.filter(display_name="")
    while users.exists():
        batch = list(users[:batch_size])
        for user in batch:
            user.display_name = user.username
        User.objects.bulk_update(batch, ["display_name"], batch_size=batch_size)

def reverse_backfill(apps, schema_editor):
    pass  # Data migration, no reverse needed

class Migration(migrations.Migration):
    atomic = False  # run outside one big transaction so each batch commits independently
    dependencies = [("accounts", "0015_add_display_name")]

    operations = [
        migrations.RunPython(backfill_display_names, reverse_backfill),
    ]

SeparateDatabaseAndState

Remove a column from the Django model without dropping it from the database immediately:

class Migration(migrations.Migration):
    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.RemoveField(model_name="user", name="legacy_field"),
            ],
            database_operations=[],  # Don't touch the DB yet
        ),
    ]

golang-migrate (Go)

Workflow

# Create migration pair
migrate create -ext sql -dir migrations -seq add_user_avatar

# Apply all pending migrations
migrate -path migrations -database "$DATABASE_URL" up

# Rollback last migration
migrate -path migrations -database "$DATABASE_URL" down 1

# Force version (fix dirty state)
migrate -path migrations -database "$DATABASE_URL" force VERSION

Migration Files

-- migrations/000003_add_user_avatar.up.sql
ALTER TABLE users ADD COLUMN avatar_url TEXT;
-- NOTE: CONCURRENTLY cannot run inside a transaction. If your driver wraps
-- each migration file in one, move this index into its own migration.
CREATE INDEX CONCURRENTLY idx_users_avatar ON users (avatar_url) WHERE avatar_url IS NOT NULL;

-- migrations/000003_add_user_avatar.down.sql
DROP INDEX IF EXISTS idx_users_avatar;
ALTER TABLE users DROP COLUMN IF EXISTS avatar_url;

Zero-Downtime Migration Strategy

For critical production changes, follow the expand-contract pattern:

Phase 1: EXPAND
  - Add new column/table (nullable or with default)
  - Deploy: app writes to BOTH old and new
  - Backfill existing data

Phase 2: MIGRATE
  - Deploy: app reads from NEW, writes to BOTH
  - Verify data consistency

Phase 3: CONTRACT
  - Deploy: app only uses NEW
  - Drop old column/table in separate migration

Timeline Example

Day 1: Migration adds new_status column (nullable)
Day 1: Deploy app v2 — writes to both status and new_status
Day 2: Run backfill migration for existing rows
Day 3: Deploy app v3 — reads from new_status only
Day 7: Migration drops old status column
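During the dual-write window (Day 1 through Day 3 above), the read/write logic is easiest to audit when isolated in one helper. A sketch using the `status`/`new_status` names from the timeline; the row shape and function names are hypothetical:

```typescript
// Row shape during the dual-write window; both columns exist until CONTRACT.
interface StatusColumns {
  status: string;      // old column, dropped in the CONTRACT phase
  new_status: string;  // new column added by the EXPAND migration
}

// Every write populates BOTH columns, so either readout stays valid.
export function dualWriteStatus(value: string): StatusColumns {
  return { status: value, new_status: value };
}

// Reads flip per phase: old column during EXPAND, new column (with a
// fallback for rows the backfill has not reached yet) after MIGRATE.
export function readStatus(
  row: Partial<StatusColumns>,
  phase: "expand" | "migrate",
): string | undefined {
  return phase === "migrate" ? row.new_status ?? row.status : row.status;
}
```

Concentrating the dual-write in one function makes the CONTRACT-phase cleanup a one-line diff instead of a scavenger hunt.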

Anti-Patterns

| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Manual SQL in production | No audit trail, unrepeatable | Always use migration files |
| Editing deployed migrations | Causes drift between environments | Create new migration instead |
| NOT NULL without default | Locks table, rewrites all rows | Add nullable, backfill, then add constraint |
| Inline index on large table | Blocks writes during build | CREATE INDEX CONCURRENTLY |
| Schema + data in one migration | Hard to rollback, long transactions | Separate migrations |
| Dropping column before removing code | Application errors on missing column | Remove code first, drop column next deploy |

Overall Score

88/100 · Grade A (Excellent)

Safety: 90 · Quality: 88 · Clarity: 90 · Completeness: 82
Summary

This skill guides database migration best practices across PostgreSQL, MySQL, and multiple ORM frameworks (Prisma, Drizzle, Kysely, Django, TypeORM, golang-migrate). It teaches safe schema changes, data migrations, rollback strategies, and zero-downtime deployment patterns with concrete code examples and anti-patterns.

Static Analysis Findings

1 finding

Patterns detected by deterministic static analysis before AI scoring.

Environment Access

SEC-050 · Sensitive Environment Variable · 3x in 1 file

Environment variable read for sensitive values

SKILL.md · $DATABASE_URL · 3x

Detected Capabilities

  • PostgreSQL schema modification patterns
  • MySQL migration guidance
  • Prisma migration workflow (dev, deploy, reset)
  • Drizzle ORM schema generation and migration
  • Kysely migration file creation and execution
  • Django data migration with batching
  • golang-migrate version control
  • Zero-downtime deployment strategy (expand-contract pattern)
  • SQL batching for large updates
  • Index creation without downtime

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

database migration · schema change · column rename · zero-downtime deployment · index creation · data backfill · migration rollback · prisma migration · django migration · postgresql alter table

Risk Signals

INFO

Environment variable DATABASE_URL referenced in golang-migrate examples

SKILL.md | golang-migrate section | migrate command examples

Use Cases

  • Planning and executing schema changes in production databases
  • Creating reversible migrations with proper rollback strategies
  • Implementing zero-downtime column renames using expand-contract pattern
  • Batching large data migrations to avoid table locks
  • Setting up migration tooling for new projects across different ORMs
  • Safely adding indexes without blocking concurrent writes
  • Handling concurrent index creation in PostgreSQL

Quality Notes

  • Excellent structural clarity: organized by framework with consistent sections (Workflow, Schema Example, Custom SQL)
  • Strong pedagogical value: each pattern includes GOOD vs BAD examples showing the trade-offs (e.g., locking vs non-blocking index creation)
  • Comprehensive framework coverage: covers five different migration tools with framework-specific code examples
  • Zero-downtime strategy is well-explained with a three-phase timeline and realistic example spanning days
  • Safety checklist provided as a pre-flight checklist — practical and actionable
  • Anti-patterns table is exceptionally useful, mapping common mistakes to consequences and solutions
  • Core principles are clearly stated and justified (immutability, separation of concerns, testing requirements)
  • Batch update pattern shown for large migrations (Django, PostgreSQL PL/pgSQL) with detailed implementation
  • Expand-contract pattern explained with both abstract and concrete timeline example
  • All code examples are syntactically correct and directly runnable
  • Migration tool command syntax is accurate for all frameworks (Prisma, Drizzle, Kysely, Django, golang-migrate)
  • Limitations are documented: CONCURRENTLY cannot run in transaction blocks, migrations must be immutable after production deployment
  • Minor: no error handling guidance for migration failures — could expand on how to recover from dirty states or failed migrations
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews

No reviews yet

Version History

v2.1 (Latest) · 2026-04-20
Content updated

v2.0 · 2026-04-12
Contract changed: description

v1.0 · 2026-03-16
Seeded from github.com/affaan-m/everything-claude-code
