What Are SQL Isolation Levels (Read Committed, Repeatable Read, Serializable), and What Anomalies Do They Prevent?

SQL isolation levels are predefined settings (such as Read Committed, Repeatable Read, and Serializable) that determine how strictly transactions are kept separate from each other, with each level preventing certain concurrency anomalies like dirty reads, non-repeatable reads, and phantom reads.

What Are Transaction Isolation Levels?

In database systems, an isolation level refers to the degree to which one transaction is isolated from the effects of other concurrent transactions.

It defines how and when the changes made by one transaction become visible to others, ensuring data consistency and integrity.

Isolation is the “I” in the ACID properties of transactions (Atomicity, Consistency, Isolation, Durability) and is crucial for maintaining correctness in concurrent environments.

However, there is a trade-off between isolation and performance. Lower isolation levels allow more concurrent access (higher throughput) but risk more anomalies (inconsistent reads), while higher isolation levels reduce such anomalies at the cost of more blocking or waiting between transactions.

In simple terms, relaxed isolation = higher concurrency + potential inconsistencies, whereas strict isolation = fewer anomalies + potential slowdown.

Choosing the right level is about balancing accuracy vs. performance for your application’s needs.

Standard SQL Isolation Levels: The ANSI SQL standard defines four isolation levels in increasing order of strictness: Read Uncommitted, Read Committed, Repeatable Read, and Serializable.

(Many database systems provide these or similar levels. For instance, PostgreSQL’s default is Read Committed, while MySQL’s InnoDB engine defaults to Repeatable Read.)
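For reference, the level is typically chosen per transaction or per session. A minimal sketch using standard SQL syntax (plus two common vendor variants):

```sql
-- Standard SQL: set the level for the next transaction.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- PostgreSQL also accepts the level when opening the transaction:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- MySQL: change the default for the current session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```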

The higher the level, the more types of inconsistent outcomes (“anomalies”) are prevented.


Below, we explain the major isolation levels (Read Committed, Repeatable Read, and Serializable) and the anomalies each one addresses.

Common Data Anomalies in Concurrent Transactions

When transactions run concurrently, several types of anomalies (unexpected or incorrect behaviors) can occur if isolation is insufficient.

The main anomalies that isolation levels deal with include:

  • Dirty Read: This occurs when a transaction reads data that has been written by another transaction that has not yet committed. In other words, a transaction sees uncommitted changes from a concurrent transaction. If that other transaction rolls back, the first transaction would have read data that “never officially existed,” leading to inconsistencies.

  • Non-Repeatable Read: This happens when a transaction reads the same row twice and gets different values each time because another transaction modified that data and committed in between the two reads. The data “did not repeat”. What was true a moment ago has changed when read again within the same transaction.

  • Phantom Read: This occurs when a transaction re-executes a query returning a set of rows (e.g. “SELECT * FROM … WHERE condition”) and finds that new rows satisfying the condition have appeared (or some have disappeared) due to another transaction’s commit. The second read sees a “phantom” row that wasn’t there before, because another transaction inserted or deleted data in the interim.
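As a sketch, the dirty-read anomaly can be reproduced with two interleaved sessions on an engine that actually honors Read Uncommitted (e.g., SQL Server or MySQL; the `accounts` table and its columns are illustrative):

```sql
-- Session 1                       -- Session 2 (READ UNCOMMITTED)
BEGIN;
UPDATE accounts
  SET balance = 150
  WHERE id = 1;
-- not yet committed
                                   SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
                                   BEGIN;
                                   SELECT balance FROM accounts
                                     WHERE id = 1;
                                   -- sees 150: a dirty read
ROLLBACK;
-- the value 150 "never officially existed"
```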

Each isolation level places certain restrictions on concurrent reads/writes to prevent some of these anomalies.

Below we describe the levels in detail and note which anomalies are prevented at each level.

Read Committed Isolation Level

Read Committed is one of the most commonly used isolation levels (the default in many systems, like Oracle and PostgreSQL) because it strikes a balance between consistency and performance.

At this level, a transaction sees only data that has been committed by other transactions at the moment it is read.

In practical terms, dirty reads are prevented: you cannot read uncommitted (dirty) data from another concurrent transaction. Each query in a transaction will get the most recently committed data as of the start of that query.

However, Read Committed does not guarantee repeatable reads within the same transaction.

If another transaction commits a change after you’ve read some data, a subsequent read of that same data in your transaction can return a new value. This means non-repeatable reads are possible at Read Committed.

Phantom reads are also possible, since the set of rows matching a query can change between two executions of that query.

Anomalies prevented: Dirty reads (you will never read inconsistent, uncommitted data).
Anomalies possible: Non-repeatable reads and phantom reads may occur.

Example Scenario

Transaction T1 is reading a bank account balance (which is $100). Meanwhile, T2 updates that account balance to $150 and commits.

Under Read Committed, if T1 now reads the balance again, it will see the new committed value $150. T1 was protected from seeing any uncommitted intermediate value (avoiding a dirty read), but it did see a change between reads (a non-repeatable read occurred because another transaction’s commit made the data different on the second read).

In many applications (e.g. reading latest inventory or balance), this level of consistency is acceptable, as it shows only committed data but allows data to change during the transaction.
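The bank-balance scenario above can be sketched as two interleaved sessions (table and column names are illustrative):

```sql
-- T1 (READ COMMITTED)             -- T2
BEGIN;
SELECT balance FROM accounts
  WHERE id = 1;
-- returns 100
                                   BEGIN;
                                   UPDATE accounts
                                     SET balance = 150
                                     WHERE id = 1;
                                   COMMIT;
SELECT balance FROM accounts
  WHERE id = 1;
-- returns 150: a non-repeatable read
COMMIT;
```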

Repeatable Read Isolation Level

Repeatable Read is a stricter isolation level that ensures that if a transaction reads data twice, it will see the same values each time, assuming it hasn’t modified that data itself.

In other words, once a transaction has read some data, it is guaranteed to see the same values if it reads that data again before it completes, guaranteeing repeatable reads (no inconsistent re-reads).

This is often implemented by keeping read locks on rows or by using a snapshot of the data.

As a result, dirty reads and non-repeatable reads are prevented under Repeatable Read.

However, phantom reads can still occur in the standard definition of Repeatable Read.

Another transaction might insert or delete rows that satisfy a query’s WHERE condition.

Since Repeatable Read (in the SQL standard) does not necessarily lock the range of possible rows (no range-locking), a re-executed query could see new “phantom” rows.

(Note: Some database engines do address some phantoms even at Repeatable Read. For example, InnoDB in MySQL uses next-key locks to prevent phantoms in many cases, and PostgreSQL’s implementation of Repeatable Read (MVCC snapshot) will not show phantoms for new inserts because the snapshot is fixed at the start of the transaction. But under the formal ANSI definitions, only Serializable guarantees no phantoms in all cases.)

Anomalies prevented: Dirty reads and non-repeatable reads are not possible (the data you read will not change out from under you within the transaction).
Anomalies possible: Phantom reads are still possible: another transaction could add or remove rows that affect a query’s results.

Example Scenario

Suppose T1 at Repeatable Read level reads a product list where category = 'Electronics' and finds 5 products.

Next, T2 inserts a new “Electronics” product and commits.

If T1 now repeats the query, under Repeatable Read it will not see the new product; it sees the category list as it was at the start of T1 (5 items), because T1’s reads are repeatable (no changes from other transactions are visible). This prevents the phantom in this particular case on systems that use snapshot isolation.

However, in a locking-based system without range locks, T1 might see a phantom new product, since Repeatable Read doesn’t mandate blocking inserts.
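The product-list scenario can be sketched as follows (the `products` table is illustrative):

```sql
-- T1 (REPEATABLE READ)            -- T2
BEGIN;
SELECT COUNT(*) FROM products
  WHERE category = 'Electronics';
-- returns 5
                                   BEGIN;
                                   INSERT INTO products (name, category)
                                   VALUES ('New Gadget', 'Electronics');
                                   COMMIT;
SELECT COUNT(*) FROM products
  WHERE category = 'Electronics';
-- Snapshot-based engines (PostgreSQL, InnoDB) still return 5;
-- a lock-based engine without range locks could return 6 (a phantom).
COMMIT;
```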

To truly prevent phantoms in all cases, we need the next level, Serializable.

Serializable Isolation Level

Serializable is the highest (strictest) isolation level.

A serializable execution of transactions is one in which the outcome is as if the transactions ran one by one in some serial order, rather than interleaved.

In effect, Serializable isolation prevents all the anomalies we described: no dirty reads, no non-repeatable reads, and no phantom reads occur.

Serializable provides the strongest guarantee of consistency, and it is the closest to truly “ACID” isolation.

Under Serializable, the database’s concurrency control (through locking or multiversioning with validation) will ensure that any sequence of concurrent transactions yields the same result as some sequential execution.

If the system detects a potential conflict that could result in an anomaly (e.g. a phantom), it will typically block one transaction or roll it back with a serialization error to maintain consistency. This level therefore sacrifices a lot of concurrency; transactions might have to wait or retry, but integrity is preserved as if transactions executed one at a time.

Anomalies prevented: Dirty reads, non-repeatable reads, and phantom reads are all prevented; no inconsistent reads or phantoms occur at Serializable. All transactions appear fully isolated.
Anomalies possible: None of the standard read anomalies are allowed; effectively, Serializable = no anomalies (it even prevents more subtle anomalies like write skew, depending on the implementation). If a schedule of transactions can’t be serialized, the DBMS will not let it complete.

Example Scenario

Consider two concurrent transactions, T1 and T2, both transferring money between accounts in a banking system.

Under Serializable isolation, if there is any interleaving of operations that would lead to an inconsistent outcome (e.g., one transaction sees partial effects of the other), the database will ensure one transaction waits or aborts.

For instance, if T1 queries total balance across multiple accounts while T2 is moving money between those accounts, Serializable isolation might force T1 to wait or might abort T1 or T2 if a conflict is detected.

This ensures that you never see a phantom total or partially applied transfer. The final result is as if T1 and T2 happened one after the other, not concurrently.

Although this might reduce throughput (transactions might retry), it guarantees maximum consistency.
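The report-versus-transfer conflict can be sketched as two interleaved sessions (the `accounts` and `balance_reports` tables are illustrative; the error code is PostgreSQL’s):

```sql
-- T1 (SERIALIZABLE)               -- T2 (SERIALIZABLE)
BEGIN;
SELECT SUM(balance) FROM accounts;
-- reads the total
                                   BEGIN;
                                   UPDATE accounts
                                     SET balance = balance - 50
                                     WHERE id = 1;
                                   UPDATE accounts
                                     SET balance = balance + 50
                                     WHERE id = 2;
                                   COMMIT;
INSERT INTO balance_reports (total)
  SELECT SUM(balance) FROM accounts;
COMMIT;
-- One of the two transactions may fail with a serialization error
-- (SQLSTATE 40001 in PostgreSQL); the application should retry it.
```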

Isolation Levels vs. Anomalies Summary

The following table summarizes the ANSI SQL transaction isolation levels and shows which concurrency anomalies (dirty read, non-repeatable read, phantom read) each level prevents and which remain possible.

| Isolation Level | Anomalies Prevented | Still Possible | Explanation (Simple Terms) |
|---|---|---|---|
| Read Uncommitted | None | Dirty reads, non-repeatable reads, phantom reads | Transactions can read uncommitted (dirty) data from others. Fastest but least safe. |
| Read Committed | Dirty reads | Non-repeatable reads, phantom reads | A transaction only reads committed data. The default in many databases (e.g., SQL Server, Oracle). |
| Repeatable Read | Dirty reads, non-repeatable reads | Phantom reads | Reading the same row twice in a transaction returns the same data both times, but new rows may appear. |
| Serializable | Dirty reads, non-repeatable reads, phantom reads | None | Highest isolation; transactions execute as if serialized one after another. Safest but slowest. |

Each higher isolation level includes the guarantees of the levels below it and adds protection against additional anomalies.

For example, Repeatable Read does everything Read Committed does and also ensures repeatable reads; Serializable does everything Repeatable Read does and also prevents phantoms.

It’s important to choose an isolation level based on application needs: use the lowest level that safely handles your use-case to avoid unnecessary performance costs.

For many applications, Read Committed is sufficient, providing a good balance of consistency (no dirty reads) and concurrency. In high-integrity systems (financial, etc.), Serializable might be warranted despite the performance hit, to ensure absolutely correct results.
