
Database Isolation Levels

Controlling what concurrent transactions see

Isolation is an ACID property that ensures concurrent transactions don’t interfere with each other. Each transaction sees a consistent snapshot of the data, even when other transactions are running simultaneously.


Isolation levels form a spectrum from weakest (most concurrency, most anomalies) to strongest (least concurrency, no anomalies):


Read Uncommitted is the lowest isolation level. Transactions can read uncommitted data from other transactions.


Example: Transaction 1 updates balance to 200 but hasn’t committed. Transaction 2 reads 200. Then Transaction 1 rolls back. Transaction 2 saw dirty (uncommitted) data that never actually existed!
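
This interleaving is easy to express in code. A minimal sketch with two psycopg2 connections and an accounts table; note that PostgreSQL silently runs Read Uncommitted as Read Committed, so the final read still sees 100 there, while an engine that honors the level (e.g. SQL Server) would return the dirty 200:

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_READ_UNCOMMITTED

conn1 = psycopg2.connect("dbname=test")  # hypothetical connection string
conn2 = psycopg2.connect("dbname=test")
conn2.set_isolation_level(ISOLATION_LEVEL_READ_UNCOMMITTED)

with conn1.cursor() as c1:
    # Transaction 1 updates the balance but does NOT commit yet.
    c1.execute("UPDATE accounts SET balance = 200 WHERE id = 1")

    with conn2.cursor() as c2:
        c2.execute("SELECT balance FROM accounts WHERE id = 1")
        # On an engine with true Read Uncommitted: 200 (dirty read).
        # On PostgreSQL: 100, because it runs this as Read Committed.
        print(c2.fetchone()[0])

conn1.rollback()  # the 200 that transaction 2 may have seen never existed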

When to Use: Almost never. Only for read-only analytics where accuracy isn’t critical. (PostgreSQL doesn’t offer real dirty reads at all: requesting Read Uncommitted there behaves as Read Committed.)


Read Committed prevents dirty reads. Transactions can only read committed data.


Key Characteristic: Each read sees the latest committed value at the time of the read.


Example: Transaction 1 reads balance = 100. Transaction 2 updates it to 200 and commits. Transaction 1 reads again and gets 200. Same transaction, different values: a non-repeatable read!
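
A sketch of this sequence, assuming psycopg2 and an accounts table; both connections run at the default Read Committed level:

import psycopg2

# Two independent connections act as two concurrent transactions.
conn1 = psycopg2.connect("dbname=test")  # hypothetical connection string
conn2 = psycopg2.connect("dbname=test")

with conn1.cursor() as c1:
    c1.execute("SELECT balance FROM accounts WHERE id = 1")
    first_read = c1.fetchone()[0]  # e.g. 100

    # Transaction 2 commits an update while transaction 1 is still open.
    with conn2.cursor() as c2:
        c2.execute("UPDATE accounts SET balance = 200 WHERE id = 1")
    conn2.commit()

    c1.execute("SELECT balance FROM accounts WHERE id = 1")
    second_read = c1.fetchone()[0]  # 200: the value changed mid-transaction
conn1.commit()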

When to Use: Default in most databases (PostgreSQL, SQL Server). Good balance of consistency and performance.


Repeatable Read prevents dirty reads and non-repeatable reads. A transaction sees a consistent snapshot throughout its lifetime.


Key Characteristic: Transaction sees the same data on repeated reads, even if other transactions commit changes.


Example: Transaction 1 counts active orders (gets 5). Transaction 2 inserts a new active order and commits. Transaction 1 counts again (gets 6). A phantom row appeared! (The SQL standard permits phantoms at Repeatable Read; some engines are stricter, as covered below.)
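
The same sequence as a sketch, assuming psycopg2 and an orders table. One caveat: PostgreSQL’s snapshot-based Repeatable Read would return 5 both times; the phantom is what the SQL standard allows at this level:

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ

conn1 = psycopg2.connect("dbname=test")  # hypothetical connection string
conn1.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)
conn2 = psycopg2.connect("dbname=test")

with conn1.cursor() as c1:
    c1.execute("SELECT count(*) FROM orders WHERE status = 'active'")
    first_count = c1.fetchone()[0]  # 5

    with conn2.cursor() as c2:
        c2.execute("INSERT INTO orders (status) VALUES ('active')")
    conn2.commit()

    c1.execute("SELECT count(*) FROM orders WHERE status = 'active'")
    second_count = c1.fetchone()[0]
    # SQL standard: may be 6 (a phantom). PostgreSQL: still 5, because the
    # snapshot taken at the first query hides the newly inserted row.
conn1.commit()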

When to Use: When you need consistent reads of the same rows (e.g., calculating totals, validating constraints).


Serializable is the highest isolation level. Transactions execute as if they ran one at a time (serially).


Key Characteristic: Database ensures transactions produce the same result as if they ran serially (one after another).

When to Use: Critical financial transactions, inventory management, any scenario where perfect consistency is required.


Isolation Level | Dirty Read | Non-Repeatable Read | Phantom Read | Concurrency | Use Case
Read Uncommitted | ❌ Possible | ❌ Possible | ❌ Possible | ✅ Highest | Almost never
Read Committed | ✅ Prevented | ❌ Possible | ❌ Possible | ✅ High | Default (most DBs)
Repeatable Read | ✅ Prevented | ✅ Prevented | ❌ Possible | ⚠️ Medium | Consistent reads
Serializable | ✅ Prevented | ✅ Prevented | ✅ Prevented | ❌ Lowest | Critical operations
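
Most databases let you set the level per session or per transaction and inspect the current one. A minimal check, assuming PostgreSQL and psycopg2:

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)
with conn.cursor() as cur:
    cur.execute("SHOW transaction_isolation")  # PostgreSQL-specific setting
    print(cur.fetchone()[0])  # 'serializable'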

Example 1: Bank Balance Check (Read Committed)


Isolation Level: Read Committed
Why: User wants to see current balance, doesn’t need perfect consistency for a simple read.


Example 2: Inventory Count (Repeatable Read)


Isolation Level: Repeatable Read
Why: Need to read inventory multiple times and ensure it doesn’t change between reads.


Example 3: Financial Transfer (Serializable)


Isolation Level: Serializable
Why: Financial transactions require perfect consistency. Can’t have race conditions or anomalies.


How isolation levels affect your code design:

Setting Isolation Level
import psycopg2
from psycopg2.extensions import (
    ISOLATION_LEVEL_READ_COMMITTED,
    ISOLATION_LEVEL_REPEATABLE_READ,
    ISOLATION_LEVEL_SERIALIZABLE,
)

class DatabaseService:
    def __init__(self, connection_string):
        self.conn = psycopg2.connect(connection_string)

    def read_balance(self, account_id: int):
        # Read Committed (default) - good for simple reads
        with self.conn.cursor() as cursor:
            cursor.execute(
                "SELECT balance FROM accounts WHERE id = %s",
                (account_id,)
            )
            balance = cursor.fetchone()[0]
        self.conn.commit()  # close the implicit transaction psycopg2 opened
        return balance

    def process_order(self, order_id: int):
        # Repeatable Read - need consistent reads
        self.conn.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)
        try:
            with self.conn.cursor() as cursor:
                # Read inventory twice - will get the same value
                cursor.execute(
                    "SELECT quantity FROM inventory WHERE product_id = %s", (1,)
                )
                qty1 = cursor.fetchone()[0]
                # Do validation...
                cursor.execute(
                    "SELECT quantity FROM inventory WHERE product_id = %s", (1,)
                )
                qty2 = cursor.fetchone()[0]
                # qty1 == qty2 guaranteed!
            self.conn.commit()
        finally:
            # The level can only be changed between transactions, so make
            # sure none is still open before resetting it.
            self.conn.rollback()
            self.conn.set_isolation_level(ISOLATION_LEVEL_READ_COMMITTED)

    def transfer_money(self, from_id: int, to_id: int, amount: float):
        # Serializable - critical operation
        self.conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)
        try:
            with self.conn.cursor() as cursor:
                cursor.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, from_id)
                )
                cursor.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, to_id)
                )
            self.conn.commit()
        except Exception:
            self.conn.rollback()
            raise
        finally:
            self.conn.set_isolation_level(ISOLATION_LEVEL_READ_COMMITTED)

Deep Dive: Advanced Isolation Considerations


Deadlocks: The Hidden Cost of High Isolation


Deadlock occurs when two transactions wait for each other’s locks indefinitely.


Production Impact:

  • Frequency: Deadlocks hit roughly 0.01-0.1% of transactions in high-concurrency systems
  • Detection: Databases detect deadlocks automatically (usually in under 1 second)
  • Recovery: One transaction is aborted and retried by the application (see the retry sketch below)
  • Cost: The aborted transaction’s work is wasted, plus retry overhead
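
A minimal retry wrapper for the recovery step, assuming psycopg2 and a caller-supplied transaction body (the do_transfer callable here is hypothetical):

import time
from psycopg2 import errors

def run_with_deadlock_retry(conn, do_transfer, max_attempts=3):
    # Retry when the database picks this transaction as the deadlock victim.
    for attempt in range(max_attempts):
        try:
            do_transfer(conn)  # hypothetical callable running the transaction body
            conn.commit()
            return
        except errors.DeadlockDetected:
            conn.rollback()  # discard the aborted transaction's work
            time.sleep(0.05 * (2 ** attempt))  # back off before retrying
    raise RuntimeError("transaction kept deadlocking after retries")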

Strategy 1: Lock Ordering

Pattern: Always acquire locks in the same order.

Example:

def transfer_money(self, from_id, to_id, amount):
    # Always lock the lower ID first
    first_id = min(from_id, to_id)
    second_id = max(from_id, to_id)
    with self.lock(first_id):
        with self.lock(second_id):
            ...  # Transfer logic

Benefit: Prevents circular wait → no deadlocks


Strategy 2: Lock Timeout

Pattern: Set timeout on lock acquisition.

Example:

def transfer_money(self, from_id, to_id, amount):
    acquired = []
    try:
        # Try to acquire both locks, failing fast on timeout
        for account_id in (from_id, to_id):
            if not self.try_lock(account_id, timeout=5):
                raise LockTimeout("Could not acquire lock")
            acquired.append(account_id)
        ...  # Transfer logic
    finally:
        # Only release locks that were actually acquired
        for account_id in acquired:
            self.unlock(account_id)

Benefit: Fails fast instead of deadlocking


Strategy 3: Reduce Isolation Level

Pattern: Use lower isolation when possible.

Example:

  • Serializable: Highest deadlock risk
  • Repeatable Read: Medium deadlock risk
  • Read Committed: Lower deadlock risk (fewer locks)

Trade-off: Lower isolation = fewer deadlocks but more anomalies


Performance Impact by Isolation Level

Read Committed:

  • Locks: Short-lived (released after read)
  • Contention: Low
  • Throughput: High (10K-50K TPS)
  • Deadlock Risk: Low

Repeatable Read:

  • Locks: Held for transaction duration
  • Contention: Medium
  • Throughput: Medium (5K-20K TPS)
  • Deadlock Risk: Medium

Serializable:

  • Locks: Range locks, held for transaction duration
  • Contention: High
  • Throughput: Low (1K-5K TPS)
  • Deadlock Risk: High

Production Benchmark (PostgreSQL):

Isolation Level | Read Latency | Write Latency | Throughput | Deadlock Rate
Read Committed | 5ms | 10ms | 20K TPS | 0.01%
Repeatable Read | 8ms | 15ms | 10K TPS | 0.05%
Serializable | 15ms | 30ms | 3K TPS | 0.2%

Key Insight: Each step up in isolation costs roughly 2-3x in throughput.


Snapshot Isolation (MVCC)

Snapshot Isolation is used by many databases (PostgreSQL, MySQL InnoDB) as their default “Repeatable Read” implementation.

How it works:

  • Each transaction sees a snapshot of data at transaction start
  • Writes create new versions (multi-version concurrency control - MVCC)
  • No locks for reads (read from snapshot)
  • Locks only for writes (prevent conflicts)
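
A toy illustration of the version-chain idea (a teaching sketch, not any real engine’s storage format): every write appends a new version stamped with a logical timestamp, and a reader only sees versions that existed at its snapshot:

# Toy MVCC store: each key maps to a list of (commit_ts, value) versions.
class MVCCStore:
    def __init__(self):
        self.versions = {}   # key -> [(commit_ts, value), ...]
        self.clock = 0       # logical timestamp

    def write(self, key, value):
        # Writes append a new version instead of overwriting in place.
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot(self):
        # A transaction's snapshot is just the timestamp at which it started.
        return self.clock

    def read(self, key, snapshot_ts):
        # Readers see the newest version committed at or before their
        # snapshot, so they never block on concurrent writers.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("balance", 100)
snap = store.snapshot()      # reader starts here
store.write("balance", 200)  # concurrent writer commits a new version
print(store.read("balance", snap))  # 100 - the reader's snapshot is stable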

Benefits:

  • No read locks: Readers don’t block writers (reads come from the snapshot)
  • Non-blocking reads: Writers don’t block readers, who keep seeing the old version until the write commits
  • Better concurrency: Higher throughput than lock-based isolation

Trade-offs:

  • ⚠️ Storage overhead: Must store multiple versions
  • ⚠️ Vacuum required: Old versions must be cleaned up

Production Example: PostgreSQL

  • Uses MVCC for all isolation levels
  • Vacuum process: Runs periodically to clean old versions
  • Performance: 2-5x better than locking-based isolation
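
Old row versions appear as “dead tuples” until vacuum reclaims them. A quick way to watch the backlog, assuming psycopg2 and read access to PostgreSQL’s pg_stat_user_tables view:

import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
with conn.cursor() as cur:
    # n_dead_tup counts old versions waiting for vacuum to reclaim them.
    cur.execute(
        "SELECT relname, n_dead_tup, last_autovacuum "
        "FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 5"
    )
    for table, dead_tuples, last_vacuum in cur.fetchall():
        print(table, dead_tuples, last_vacuum)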

Dirty Read: Reporting Impact

Scenario: Financial reporting

Diagram

Impact:

  • Financial reports: Wrong numbers → regulatory issues
  • Analytics: Incorrect insights → bad decisions
  • Auditing: Compliance violations

Prevention: Read Committed or higher


Non-Repeatable Read: Business Logic Impact


Scenario: Inventory validation


Impact:

  • Overselling: Sell more than available
  • Double-booking: Book same resource twice
  • Race conditions: Business logic fails

Prevention: Repeatable Read or higher


Phantom Read: Aggregation Impact

Scenario: Count operations


Impact:

  • Statistics: Wrong averages, totals
  • Reports: Incorrect aggregations
  • Business metrics: Wrong KPIs

Prevention: Serializable (the only level the SQL standard guarantees to prevent phantoms)


Isolation Level Selection: Production Guidelines


Production Rules:

  1. Start with Read Committed (default, works for 90% of cases)
  2. Upgrade to Repeatable Read if you read same row twice
  3. Upgrade to Serializable only if phantoms cause problems
  4. Monitor deadlock rate - if high, consider lowering isolation

Database-Specific Isolation Implementations


PostgreSQL

Read Committed (Default):

  • Uses MVCC (Multi-Version Concurrency Control)
  • Snapshot: Taken at start of each statement
  • Locks: Only for writes
  • Performance: Excellent

Repeatable Read:

  • Uses MVCC with transaction-level snapshot
  • Snapshot: Taken at transaction start
  • Locks: Only for writes
  • Note: Actually prevents phantoms (stricter than SQL standard!)

Serializable:

  • Uses Serializable Snapshot Isolation (SSI)
  • Detection: Detects serialization conflicts
  • Abort: Aborts conflicting transactions, which the application then retries (see the sketch below)
  • Performance: Good (better than locking-based)
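
Under SSI, the abort surfaces to the client as a serialization failure, so serializable transactions should be written as retry loops; a minimal sketch, assuming psycopg2 and the accounts table from earlier:

import time
import psycopg2
from psycopg2 import errors
from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE

def transfer_with_retry(conn, from_id, to_id, amount, max_attempts=5):
    conn.set_isolation_level(ISOLATION_LEVEL_SERIALIZABLE)
    for attempt in range(max_attempts):
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, from_id),
                )
                cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, to_id),
                )
            conn.commit()
            return
        except errors.SerializationFailure:
            # SSI aborted this transaction as the conflicting one; retry from scratch.
            conn.rollback()
            time.sleep(0.01 * (2 ** attempt))
    raise RuntimeError("could not commit serializably after retries")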

MySQL (InnoDB)

Read Committed:

  • Uses MVCC
  • Snapshot: Per statement
  • Performance: Good

Repeatable Read (Default):

  • Uses MVCC with transaction snapshot
  • Gap locks: Prevents phantoms (stricter than standard!)
  • Performance: Good

Serializable:

  • Uses locking (not MVCC)
  • Locks: All reads acquire shared locks
  • Performance: Poor (much slower)

Key Difference: Both engines are stricter than the SQL standard’s Repeatable Read, but by different mechanisms: PostgreSQL’s snapshot hides phantom rows from the transaction, while MySQL’s gap locks block the inserts that would create them.


Practice 1: Keep Transactions Short

Rule: Minimize transaction duration to reduce lock contention.

Bad:

# BAD: Long transaction
def process_order(order):
    with transaction():
        validate_order(order)  # External API call (slow!)
        reserve_inventory(order)
        charge_payment(order)
        # Transaction held open during the slow API call

Good:

# GOOD: Short transaction
def process_order(order):
    # Do slow work outside the transaction
    validation_result = validate_order(order)
    # Short transaction
    with transaction():
        reserve_inventory(order)
        charge_payment(order)

Benefit: Reduces lock time → fewer deadlocks, better throughput


Practice 2: Use Appropriate Isolation Per Operation

Section titled “Practice 2: Use Appropriate Isolation Per Operation”

Pattern: Different isolation levels for different operations.

Example:

class OrderService:
    def get_order(self, order_id):
        # Read Committed - simple read
        return self.db.read(order_id, isolation='READ_COMMITTED')

    def validate_inventory(self, product_id):
        # Repeatable Read - need consistent reads
        return self.db.read(product_id, isolation='REPEATABLE_READ')

    def transfer_money(self, from_id, to_id, amount):
        # Serializable - critical operation
        return self.db.transfer(from_id, to_id, amount, isolation='SERIALIZABLE')

Benefit: Optimize each operation for its needs


Practice 3: Monitor Isolation-Level Metrics

Section titled “Practice 3: Monitor Isolation-Level Metrics”

What to Monitor:

  • Deadlock rate: Should be less than 0.1%
  • Lock wait time: P95 should be less than 100ms
  • Transaction duration: P95 should be less than 1s
  • Rollback rate: High rate indicates problems

Production Monitoring:

class IsolationMonitor:
    def track_transaction(self, isolation_level, duration, deadlocked):
        self.metrics.increment(f'transaction.{isolation_level}.total')
        self.metrics.histogram(f'transaction.{isolation_level}.duration', duration)
        if deadlocked:
            self.metrics.increment(f'transaction.{isolation_level}.deadlock')
            self.alert_on_deadlock(isolation_level)


Now that you understand isolation levels, let’s explore how to scale databases to handle more load:

Next up: Scaling Databases — Learn about read replicas, connection pooling, and scaling strategies.