What Is ClickHouse and How Does It Compare to PostgreSQL and TimescaleDB for Time Series?

Over the past year, one database we keep hearing about is ClickHouse, a column-oriented OLAP database initially built and open-sourced by Yandex.

In this detailed post, which is the culmination of three months of research and analysis, we answer the most common questions we hear, including:

  • What is ClickHouse (including a deep dive into its architecture)
  • How does ClickHouse compare to PostgreSQL
  • How does ClickHouse compare to TimescaleDB
  • How does ClickHouse perform for time-series data vs. TimescaleDB

At Timescale, we take our benchmarks very seriously. We find that, in our industry, there is far too much vendor-biased “benchmarketing” and not enough honest “benchmarking.” We believe developers deserve better. So we take great pains to really understand the technologies we are comparing against—and also to point out places where the other technology shines (and where TimescaleDB may fall short).

You can see this in our other detailed benchmarks vs. AWS Timestream (29-minute read), MongoDB (19-minute read), and InfluxDB (26-minute read).

We’re also database nerds at heart who really enjoy learning about and digging into other systems. (Which are a few reasons why these posts—including this one—are so long!)

So, to better understand the strengths and weaknesses of ClickHouse, we spent the last three months and hundreds of hours benchmarking, testing, reading documentation, and working with contributors.

Shout-out to Timescale engineers Alexander Kuzmenkov, who was most recently a core developer on ClickHouse, and Aleksander Alekseev, who is also a PostgreSQL contributor, who helped check our work and keep us honest with this post.

How ClickHouse Fared in Our Tests

ClickHouse is a very impressive piece of technology. In some tests, ClickHouse proved to be a blazing-fast database, able to ingest data faster than anything else we’ve tested so far (including TimescaleDB). In some complex queries, particularly those that do complex grouping aggregations, ClickHouse is hard to beat.

But nothing in databases comes for free. ClickHouse achieves these results because its developers have made specific architectural decisions. These architectural decisions also introduce limitations, especially when compared to PostgreSQL and TimescaleDB.

ClickHouse’s limitations/weaknesses include:

  • Worse query performance than TimescaleDB at nearly all queries in the Time-Series Benchmark Suite, except for complex aggregations.
  • Poor insert performance and much higher disk usage (e.g., 2.7x higher disk usage than TimescaleDB) at small batch sizes (e.g., 100-300 rows/batch).
  • Non-standard SQL-like query language with several limitations (e.g., joins are discouraged, syntax is at times non-standard).
  • Lack of other features one would expect in a robust SQL database (e.g., PostgreSQL or TimescaleDB): no transactions, no correlated sub-queries, no stored procedures, no user-defined functions, no index management beyond primary and secondary indexes, no triggers.
  • Inability to modify or delete data at a high rate and low latency; instead, deletes and updates must be issued as batch operations.
  • Batch deletes and updates happen asynchronously.
  • Because data modification is asynchronous, ensuring consistent backups is difficult: the only way to ensure a consistent backup is to stop all writes to the database.
  • Lack of transactions and lack of data consistency also affect other features like materialized views because the server can't atomically update multiple tables at once. If something breaks during a multi-part insert to a table with materialized views, the end result is an inconsistent state of your data.

We list these shortcomings not because we think ClickHouse is a bad database. We actually think it’s a great database—well, to be more precise, a great database for certain workloads. And as a developer, you have to choose the right tool for your workload.

Why does ClickHouse fare well in certain cases but worse in others?

The answer is the underlying architecture.

Generally, in databases, there are two types of fundamental architectures, each with strengths and weaknesses: OnLine Transactional Processing (OLTP) and OnLine Analytical Processing (OLAP).

OLTP | OLAP
Large and small datasets | Large datasets focused on reporting/analysis
Transactional data (the raw, individual records matter) | Pre-aggregated or transformed data to foster better reporting
Many users performing varied queries and updates on data across the system | Fewer users performing deep data analysis with few updates
SQL is the primary language for interaction | Often, but not always, utilizes a particular query language other than SQL

ClickHouse, PostgreSQL, and TimescaleDB architectures

At a high level, ClickHouse is an excellent OLAP database designed for systems of analysis.

PostgreSQL, by comparison, is a general-purpose database designed to be a versatile and reliable OLTP database for systems of record with high user engagement.

TimescaleDB is a relational database for time-series: purpose-built on PostgreSQL for time-series workloads. It combines the best of PostgreSQL plus new capabilities that increase performance, reduce cost, and provide an overall better developer experience for time series.

So, if you find yourself needing to perform fast analytical queries on mostly immutable large datasets with few users, i.e., OLAP, ClickHouse may be the better choice.

Instead, if you find yourself needing something more versatile that works well for powering applications with many users and likely frequent updates/deletes, i.e., OLTP, PostgreSQL may be the better choice.

And if your applications have time-series data—and especially if you also want the versatility of PostgreSQL—TimescaleDB is likely the best choice.

Time-Series Benchmark Suite results summary (TimescaleDB vs. ClickHouse)

We can see the impact of these architectural decisions on how TimescaleDB and ClickHouse fare with time-series workloads.

We spent hundreds of hours working with ClickHouse and TimescaleDB during this benchmark research. We tested insert loads from 100 million rows (1 billion metrics) to 1 billion rows (10 billion metrics), cardinalities from 100 to 10 million, and numerous combinations in between. We really wanted to understand how each database works across various datasets.

Overall, we find that ClickHouse outperforms TimescaleDB on inserts with large batch sizes but underperforms with smaller batch sizes. For queries, we find that ClickHouse underperforms on most queries in the benchmark suite, except for complex aggregates.

Insert performance

When rows are batched between 5,000 and 15,000 rows per insert, speeds are fast for both databases, with ClickHouse performing noticeably better:

Insert comparison between ClickHouse and TimescaleDB at cardinalities between 100 and 1 million hosts
Performance comparison: ClickHouse outperforms TimescaleDB at all cardinalities when batch sizes are 5,000 rows or greater

However, when the batch size is smaller, the results are reversed in two ways: insert speed and disk consumption. With larger batches of 5,000 rows/batch, ClickHouse consumed ~16GB of disk during the test, while TimescaleDB consumed ~19GB (both before compression).

With smaller batch sizes, not only does TimescaleDB maintain steady insert speeds that are faster than ClickHouse between 100-300 rows/batch, but disk usage is 2.7x higher with ClickHouse. This difference should be expected because of the architectural design choices of each database, but it's still interesting to see.

Performance comparison with smaller batch sizes: TimescaleDB outperforms ClickHouse and uses 2.7x less disk space

Query performance

For testing query performance, we used a "standard" dataset that queries data for 4,000 hosts over a three-day period, with a total of 100 million rows. In our experience running benchmarks in the past, we found that this cardinality and row count works well as a representative dataset for benchmarking because it allows us to run many ingest and query cycles across each database in a few hours.

Based on ClickHouse’s reputation as a fast OLAP database, we expected ClickHouse to outperform TimescaleDB for nearly all queries in the benchmark.

When we ran TimescaleDB without compression, ClickHouse did outperform.

However, when we enabled TimescaleDB compression—which is the recommended approach—we found the opposite, with TimescaleDB outperforming nearly across the board:

Results of query benchmarking between TimescaleDB and ClickHouse: TimescaleDB outperforms in almost every query category

(For those that want to replicate our findings or better understand why ClickHouse and TimescaleDB perform the way they do under different circumstances, please read the entire article for the full details.)

Cars vs. bulldozers

Today, we live in the golden age of databases: there are so many databases that all these lines (OLTP/OLAP/time-series/etc.) are blurring. Yet every database is architected differently and, as a result, has different advantages and disadvantages. As a developer, you should choose the right tool for the job.

After spending lots of time with ClickHouse, reading their docs, and working through weeks of benchmarks, we found ourselves repeating this simple analogy:

ClickHouse is like a bulldozer - very efficient and performant for a specific use case. PostgreSQL (and TimescaleDB) is like a car: versatile, reliable, and useful in most situations you will face in your life.

Most of the time, a “car” will satisfy your needs. But if you find yourself doing a lot of “construction”, by all means, get a “bulldozer.”

We aren’t the only ones who feel this way. Here is a similar opinion shared on HackerNews by stingraycharles (whom we don’t know, but stingraycharles if you are reading this—we love your username):

"TimescaleDB has a great time-series story and an average data warehousing story; Clickhouse has a great data warehousing story, an average time-series story, and a bit meh clustering story (YMMV)."

In the rest of this article, we do a deep dive into the ClickHouse architecture and then highlight some of the advantages and disadvantages of ClickHouse, PostgreSQL, and TimescaleDB that result from the architectural decisions that each database's developers (including us) have made.

We conclude with a more detailed time-series benchmark analysis. We also have a detailed description of our testing environment to replicate these tests yourself and verify our results.

Yes, we’re the makers of TimescaleDB, so you may not trust our analysis. If so, we ask you to hold your skepticism for the next few minutes and give the rest of this article a read.

As you (hopefully) will see, we spent a lot of time understanding ClickHouse for this comparison: first, to make sure we were conducting the benchmark the right way so that we were fair to ClickHouse, but also because we are database nerds at heart and were genuinely curious to learn how ClickHouse was built.

Next steps

Are you curious about TimescaleDB? The easiest way to get started is by creating a free Timescale Cloud account, which will give you access to a fully managed TimescaleDB instance (100% free for 30 days).

If you want to host TimescaleDB yourself, you can do it completely for free: visit our GitHub to learn more about options, get installation instructions, and more (⭐️ are very much appreciated! 🙏)

One last thing: Join our Community Slack to ask questions, get advice, and connect with other developers (we are 7,000+ and counting!). We, the authors of this post, are very active on all channels, as are our engineers, members of Team Timescale, and many passionate users.

What Is ClickHouse?

ClickHouse, short for “Clickstream Data Warehouse”, is a columnar OLAP database that was initially built for web analytics in Yandex Metrica. Generally, ClickHouse is known for its high insert rates, fast analytical queries, and SQL-like dialect.

Timeline of ClickHouse development from 2008 to 2020

We are fans of ClickHouse. It is a very good database built around certain architectural decisions that make it a good option for OLAP-style analytical queries. In particular, in our benchmarking with the Time Series Benchmark Suite (TSBS), ClickHouse performed better for data ingestion than any time-series database we've tested so far (TimescaleDB included) at an average of more than 600k rows/second on a single instance when rows are batched appropriately.

But nothing in databases comes for free - and as we’ll show below, this architecture also creates significant limitations for ClickHouse, making it slower for many types of time-series queries and some insert workloads.

If your application doesn't fit within the architectural boundaries of ClickHouse (or TimescaleDB, for that matter), you'll probably end up with a frustrating development experience, redoing a lot of work down the road.

The ClickHouse Architecture

ClickHouse was designed for OLAP workloads, which have specific characteristics. From the ClickHouse documentation, here are some of the requirements for this type of workload:

  • The vast majority of requests are for read access.
  • Data is inserted in fairly large batches (> 1000 rows), not by single rows, or it is not updated at all.
  • Data is added to the DB but is not modified.
  • For reads, quite a large number of rows are processed from the DB, but only a small subset of columns.
  • Tables are “wide,” meaning they contain a large number of columns.
  • Queries are relatively rare (usually hundreds of queries per server per second or less).
  • For simple queries, latencies of around 50 ms are allowed.
  • Column values are fairly small: numbers and short strings (for example, 60 bytes per URL).
  • Requires high throughput when processing a single query (up to billions of rows per second per server).
  • Transactions are not necessary.
  • Low requirements for data consistency.
  • There is one large table per query. All tables are small, except for one.
  • A query result is significantly smaller than the source data. In other words, data is filtered or aggregated so the result fits in a single server’s RAM.

How is ClickHouse designed for these workloads? Here are some of the key aspects of their architecture:

  • Compressed, column-oriented storage
  • Table Engines
  • Indexes
  • Vector Computation Engine

Compressed, column-oriented storage

First, ClickHouse (like nearly all OLAP databases) is column-oriented (or columnar), meaning that data for the same table column is stored together. (In contrast, in row-oriented storage, used by nearly all OLTP databases, data for the same table row is stored together.)

Column-oriented storage has a few advantages:

  • If your query only needs to read a few columns, then reading that data is much faster (you don’t need to read entire rows, just the columns)
  • Storing columns of the same data type together leads to greater compressibility (although, as we have shown, it is possible to build columnar compression into row-oriented storage).

Table engines

To improve the storage and processing of data in ClickHouse, columnar data storage is implemented using a collection of table "engines". The table engine determines the type of table and the features that will be available for processing the data stored inside.

ClickHouse primarily uses the MergeTree table engine as the basis for how data is written and combined. Nearly all other table engines derive from MergeTree and allow additional functionality to be performed automatically as the data is (later) processed for long-term storage.

(Quick clarification: From this point forward, whenever we mention MergeTree, we're referring to the overall MergeTree architecture design and all table types that derive from it unless we specify a specific MergeTree type.)

At a high level, MergeTree allows data to be written and stored very quickly to multiple immutable files (called "parts" by ClickHouse). These files are later processed in the background at some point in the future and merged into a larger part with the goal of reducing the total number of parts on disk (fewer files = more efficient data reads later). This is one of the key reasons behind ClickHouse’s astonishingly high insert performance on large batches.

All columns in a table are stored in separate parts (files), and all values in each column are stored in the order of the primary key. This column separation and sorting implementation makes future data retrieval more efficient, particularly when computing aggregates on large ranges of contiguous data.
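
To make this concrete, here is a minimal sketch of what a MergeTree table definition might look like for the TSBS-style cpu data used later in this post. The column names and partitioning expression are illustrative, not the exact benchmark schema:

CREATE TABLE cpu
(
    created_at DateTime,
    tags_id    UInt32,
    cpu_user   Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(created_at)  -- each partition is built from many immutable parts
ORDER BY (tags_id, created_at);      -- data within each part is sorted by this (primary) key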

Indexes

Once the data is stored and merged into the most efficient set of parts for each column, queries need to know how to efficiently find the data. For this, ClickHouse relies on two types of indexes: the primary index and, additionally, a secondary (data-skipping) index.

Unlike a traditional OLTP B-tree index, which knows how to locate any row in a table, the ClickHouse primary index is sparse in nature, meaning that it does not have a pointer to the location of every value for the primary index. Instead, because all data is stored in primary key order, the primary index stores the value of the primary key for every N-th row (called index_granularity, 8192 by default). This is done with the specific design goal of fitting the primary index into memory for extremely fast processing.

When your query patterns fit with this index style, the sparse nature can help improve query speed significantly. The one limitation is that you cannot create other indexes on specific columns to help improve a different query pattern. We'll discuss this more later.
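
For illustration, assuming a table like the cpu sketch above (whose sparse primary index comes from its ORDER BY clause, with one mark per index_granularity rows), a secondary data-skipping index could be added on another column like this:

-- add a min/max data-skipping index on cpu_user (hypothetical example)
ALTER TABLE cpu ADD INDEX cpu_user_minmax cpu_user TYPE minmax GRANULARITY 4;
-- build the index for parts that already exist on disk
ALTER TABLE cpu MATERIALIZE INDEX cpu_user_minmax;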

Vector computation engine

ClickHouse was designed with the desire to have "online" query processing in a way that other OLAP databases hadn't been able to achieve. Even with compression and columnar data storage, most other OLAP databases still rely on incremental processing to pre-compute aggregated data. It has generally been the pre-aggregated data that's provided the speed and reporting capabilities.

To overcome these limitations, ClickHouse implemented a series of vector algorithms for working with large arrays of data on a column-by-column basis. With vectorized computation, ClickHouse can specifically work with data in blocks of tens of thousands of rows (per column) for many computations. Vectorized computing also provides an opportunity to write more efficient code that utilizes modern SIMD processors and keeps code and data closer together for better memory access patterns, too.

In total, this is a great feature for working with large data sets and writing complex queries on a limited set of columns and something TimescaleDB could benefit from as we explore more opportunities to utilize columnar data.

That said, as you'll see from the benchmark results, enabling compression in TimescaleDB (which converts data into compressed columnar storage) improves the query performance of many aggregate queries in ways that are even better than ClickHouse.

ClickHouse architecture's disadvantages (a.k.a. nothing comes for free)

Nothing comes for free in database architectures. Clearly, ClickHouse is designed with a very specific workload in mind; by the same token, it is not designed for other types of workloads.

We can see an initial set of disadvantages from the ClickHouse docs:

  • No full-fledged transactions.
  • Lack of ability to modify or delete already inserted data with a high rate and low latency. There are batch deletes and updates available to clean up or modify data, for example, to comply with GDPR, but not for regular workloads.
  • The sparse index makes ClickHouse not so efficient for point queries retrieving single rows by their keys.

There are a few disadvantages that are worth going into detail:

  • Data can’t be directly modified in a table
  • Some “synchronous” actions aren’t really synchronous
  • SQL-like, but not quite SQL
  • No data consistency in backups

MergeTree limitation: Data can’t be directly modified in a table

All tables in ClickHouse are immutable. There is no way to directly update or delete a value that's already been stored. Instead, any operations that UPDATE or DELETE data can only be accomplished through an `ALTER TABLE` statement that applies a filter and actually re-writes the entire table (part by part) in the background to update or delete the data in question. Essentially, it's just another merge operation with some filters applied.
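
For example, a delete or update against the hypothetical cpu table sketched earlier would be expressed as a mutation, which ClickHouse applies asynchronously by rewriting the affected parts:

-- both statements return quickly; the actual rewrite happens in the background
ALTER TABLE cpu DELETE WHERE created_at < '2021-01-01 00:00:00';
ALTER TABLE cpu UPDATE cpu_user = 0 WHERE tags_id = 42;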

As a result, several MergeTree table engines exist to address this deficiency, covering common scenarios where frequent data modifications would otherwise be necessary. Yet this can lead to unexpected behavior and non-standard queries.

For example, if you need to store only the most recent reading of a value, creating a CollapsingMergeTree table type is your best option. With this table type, an additional column (called `Sign`) is added to the table, which indicates which row is the current state of an item when all other field values match. ClickHouse will then asynchronously delete rows with a `Sign` that cancel each other out (a value of 1 vs -1), leaving the most recent state in the database.
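
A minimal sketch of how such a table might be declared, matching the SensorLastReading example that follows (column types are assumptions):

CREATE TABLE SensorLastReading
(
    SensorID UInt32,
    Temp     Float64,
    Cpu      Float64,
    Sign     Int8                     -- 1 = current state row, -1 = cancel row
)
ENGINE = CollapsingMergeTree(Sign)
ORDER BY SensorID;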

For example, consider a common database design pattern where the most recent values of a sensor are stored alongside the long-term time-series table for fast lookup. We'll call this table SensorLastReading. In ClickHouse, this table would require the following pattern to store the most recent value every time new information is stored in the database.

SensorLastReading

SensorID | Temp | Cpu | Sign
1 | 55 | 78 | 1

When new data is received, you need to add 2 more rows to the table, one to negate the old value and one to replace it.

SensorID | Temp | Cpu | Sign
1 | 55 | 78 | 1
1 | 55 | 78 | -1
1 | 40 | 35 | 1

At some point after this insert, ClickHouse will merge the changes, removing the two rows that cancel each other out on Sign, leaving the table with just this row:

SensorID | Temp | Cpu | Sign
1 | 40 | 35 | 1

But remember, MergeTree operations are asynchronous, so queries can occur on data before something like the collapse operation has been performed. Therefore, the queries to get data out of a CollapsingMergeTree table require additional work, like multiplying rows by their `Sign`, to make sure you get the correct value any time the table is in a state that still contains duplicate data.

Here is one solution that the ClickHouse documentation provides, modified for our sample data. Notice that with numeric values, you can get the "correct" answer by multiplying all values by the Sign column and adding a HAVING clause.

SELECT
    SensorID,
    sum(Temp * Sign) AS Temp,
    sum(Cpu * Sign) AS Cpu
FROM SensorLastReading
GROUP BY SensorID
HAVING sum(Sign) > 0;

Again, the value here is that MergeTree tables provide really fast ingestion of data at the expense of transactions and simple concepts like UPDATE and DELETE in the way traditional applications would try to use a table like this. With ClickHouse, it's just more work to manage this kind of data workflow.

Because ClickHouse isn't an ACID database, these background modifications (or really any data manipulations) have no guarantees of ever being completed. Because there is no such thing as transaction isolation, any SELECT query that touches data in the middle of an UPDATE or DELETE modification (or a Collapse modification, as we noted above) will get whatever data is currently in each part. If the delete process, for instance, has only modified 50% of the parts for a column, queries would return outdated data from the remaining parts that have not yet been processed.

More importantly, this holds true for all data that is stored in ClickHouse, not just the large, analytical-focused tables that store something like time-series data, but also the related metadata.

While it's understandable that time-series data, for example, is often insert-only (and rarely updated), business-centric metadata tables almost always have modifications and updates as time passes.

Regardless, the related business data that you may store in ClickHouse to do complex joins and deeper analysis is still in a MergeTree table (or variation of a MergeTree), and therefore, updates or deletes would still require an entire rewrite (through the use of `ALTER TABLE`) any time there are modifications.

Distributed MergeTree tables

Distributed tables are another example of where asynchronous modifications might cause you to change how you query data. If your application writes data directly to the distributed table (rather than to the individual cluster nodes, which is possible for advanced users), the data is first written to the "initiator" node, which in turn copies the data to the shards in the background as quickly as possible. Because there are no transactions to verify that the data was moved as part of something like a two-phase commit (available in PostgreSQL), your data might not actually be where you think it is.
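
As a rough sketch (the cluster and database names here are hypothetical), a distributed table is just a proxy over per-shard local tables:

-- reads and writes go through cpu_distributed; data is forwarded to the shards asynchronously
CREATE TABLE cpu_distributed AS cpu
ENGINE = Distributed(my_cluster, default, cpu, rand());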

There is at least one other problem with how distributed data is handled. Because ClickHouse does not support transactions and data is in a constant state of being moved, there is no guarantee of consistency in the state of the cluster nodes. Saving 100,000 rows of data to a distributed table doesn't guarantee that backups of all nodes will be consistent with one another (we'll discuss reliability in a bit). Some of that data might have been moved, and some of it might still be in transit.

Again, this is by design, so there's nothing specifically wrong with what's happening in ClickHouse! It's just something to be aware of when comparing ClickHouse to something like PostgreSQL and TimescaleDB.

Some “synchronous” actions aren’t really synchronous

Most actions in ClickHouse are not synchronous. But we found that even some of the ones labeled “synchronous” weren’t really synchronous either.

One particular example that caught us by surprise during our benchmarking was how `TRUNCATE` worked. We ran many test cycles against ClickHouse and TimescaleDB to identify how changes in row batch size, workers, and even cardinality impacted the performance of each database. At the end of each cycle, we would `TRUNCATE` the database in each server, expecting the disk space to be released quickly so that we could start the next test. In PostgreSQL (and other OLTP databases), this is an atomic action. As soon as the truncate is complete, the space is freed up on disk.

Dashboard graph of disk usage: TRUNCATE is an atomic action in TimescaleDB/PostgreSQL and frees disk space almost immediately

We expected the same thing with ClickHouse because the documentation mentions that this is a synchronous action (and most things are not synchronous in ClickHouse). It turns out, however, that the files only get marked for deletion and the disk space is freed up at a later, unspecified time in the background. There's no specific guarantee for when that might happen.

Dashboard graph of disk usage: TRUNCATE is effectively asynchronous in ClickHouse, freeing disk space at some later, unspecified time

For our tests it was a minor inconvenience. We had to add a 10-minute sleep into the testing cycle to ensure that ClickHouse had released the disk space fully. In real-world situations, like ETL processing that utilizes staging tables, a `TRUNCATE` wouldn't actually free the staging table data immediately - which could cause you to modify your current processes.

We point out a few of these scenarios simply to highlight that ClickHouse isn't a drop-in replacement for many of the things a system of record (OLTP database) is generally used for in modern applications. Asynchronous data modification means it can take a lot more effort to work with your data effectively.

SQL-like, but not quite SQL

In many ways, ClickHouse was ahead of its time by choosing SQL as the language of choice.

ClickHouse chose early in its development to utilize SQL as the primary language for managing and querying data. Given the focus on data analytics, this was a smart and obvious choice, given that SQL was already widely adopted and understood for querying data.

In ClickHouse, the SQL isn't something that was added after the fact to satisfy a portion of the user community. That said, what ClickHouse provides is a SQL-like language that doesn't comply with any actual standard.

The challenges of a SQL-like query language are many. For example, users who will be accessing the database (or writing applications that access it) need to be retrained. Another challenge is the lack of ecosystem: connectors and tools that speak SQL won't just work out of the box; they will require some modification (and, again, knowledge on the user's part) to work.

Overall, ClickHouse handles basic SQL queries well.

However, because the data is stored and processed in a different way from most SQL databases, there are a number of commands and functions you may expect to use from a SQL database (e.g., PostgreSQL, TimescaleDB), but which ClickHouse doesn't support or has limited support for:

  • Not optimized for JOINs
  • No index management beyond the primary and secondary indexes
  • No recursive CTEs
  • No correlated subqueries or LATERAL joins
  • No stored procedures
  • No user-defined functions
  • No triggers

One example that stands out about ClickHouse is that JOINs, by nature, are generally discouraged because the query engine lacks any ability to optimize the join of two or more tables.

Instead, users are encouraged to query table data with separate sub-select statements and then use something like an `ANY INNER JOIN`, which strictly looks for unique pairs on both sides of the join (avoiding a cartesian product that can occur with standard JOIN types). There's also no caching support for the product of a JOIN, so if a table is joined multiple times, the query on that table is executed multiple times, further slowing down the query.

For example, all of the "double-groupby" queries in the TSBS group by multiple columns and then join to the tag table to get the `hostname` for the final output. Here is how that query is written for each database.

TimescaleDB:

WITH cpu_avg AS (
     SELECT time_bucket('1 hour', time) AS hour,
            tags_id,
            AVG(cpu_user) AS mean_cpu_user
     FROM cpu
     WHERE time >= '2021-01-01T12:30:00Z'
       AND time < '2021-01-02T12:30:00Z'
     GROUP BY 1, 2
)
SELECT hour, hostname, mean_cpu_user
FROM cpu_avg
JOIN tags ON cpu_avg.tags_id = tags.id
ORDER BY hour, hostname;

ClickHouse:

SELECT
    hour,
    id,
    mean_cpu_user
FROM
(
    SELECT
        toStartOfHour(created_at) AS hour,
        tags_id AS id,
        AVG(cpu_user) as mean_cpu_user
    FROM cpu
    WHERE (created_at >= '2021-01-01T12:30:00Z') 
        AND (created_at < '2021-01-02T12:30:00Z')
    GROUP BY
        hour,
        id
) AS cpu_avg
ANY INNER JOIN tags USING (id)
ORDER BY
    hour ASC,
    id;

Reliability: no data consistency in backups

One last aspect to consider as part of the ClickHouse architecture and its lack of support for transactions is that there is no data consistency in backups. As we've already shown, all data modification (even sharding across a cluster) is asynchronous. Therefore, the only way to ensure a consistent backup would be to stop all writes to the database and then make a backup. Data recovery struggles with the same limitation.

The lack of transactions and data consistency also affects other features like materialized views because the server can't atomically update multiple tables at once. If something breaks during a multi-part insert to a table with materialized views, the end result is an inconsistent state of your data.

ClickHouse is aware of these shortcomings and is certainly working on or planning updates for future releases. Some form of transaction support has been under discussion for some time, and backup support is in progress and has been merged into the main branch of code, although it's not yet recommended for production use. Even then, it will only provide limited support for transactions.

ClickHouse vs. PostgreSQL

(A proper ClickHouse vs. PostgreSQL comparison would probably take another 8,000 words. To avoid making this post even longer, we opted to provide a short comparison of the two databases—but if anyone wants to provide a more detailed comparison, we would love to read it.)

As we can see above, ClickHouse is a well-architected database for OLAP workloads. Conversely, PostgreSQL is a well-architected database for OLTP workloads.

Also, PostgreSQL isn’t just an OLTP database: it’s the fastest-growing and most loved OLTP database (DB-Engines, StackOverflow 2021 Developer Survey).

As a result, we won’t compare the performance of ClickHouse vs. PostgreSQL because—to continue our analogy from before—it would be like comparing the performance of a bulldozer vs. a car. These are two different things designed for two different purposes.

We’ve already established why ClickHouse is excellent for analytical workloads. Let’s now understand why PostgreSQL is so loved for transactional workloads: versatility, extensibility, and reliability.

PostgreSQL versatility and extensibility

Versatility is one of the distinguishing strengths of PostgreSQL. It's one of the main reasons for the recent resurgence of PostgreSQL in the wider technical community.

PostgreSQL supports a variety of data types, including arrays, JSON, and more. It supports various index types—not just the common B-tree but also GIST, GIN, and more. Full-text search? Check. Role-based access control? Check. And, of course, full SQL.

Also, through the use of extensions, PostgreSQL can retain the things it's good at while adding specific functionality to enhance the ROI of your development efforts.

Does your application need geospatial data? Add the PostGIS extension. What about features that benefit time-series data workloads? Add TimescaleDB. Could your application benefit from the ability to search using trigrams? Add pg_trgm.
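
Each of these is a one-line addition to an existing PostgreSQL database (assuming the extension packages are installed on the server; TimescaleDB also needs to be listed in shared_preload_libraries):

CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS pg_trgm;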

With all these capabilities, PostgreSQL is quite flexible, meaning it is essentially future-proof. As your application or workload changes, you will know that you can still adapt PostgreSQL to your needs.

(For one specific example of the powerful extensibility of PostgreSQL, please read how our engineering team built functional programming into PostgreSQL using custom operators.)

PostgreSQL reliability

As developers, we’re resigned to the fact that programs crash, servers encounter hardware or power failures, and disks fail or experience corruption. You can mitigate this risk (e.g., robust software engineering practices, uninterruptible power supplies, disk RAID, etc.), but you can't eliminate it completely; it’s a fact of life for systems.

In response, databases are built with various mechanisms to further reduce such risk, including streaming replication to replicas, full-snapshot backup and recovery, streaming backups, robust data export tools, etc.

PostgreSQL has the benefit of 20+ years of development and usage, which has resulted in not just a reliable database but also a broad spectrum of rigorously tested tools: streaming replication for high availability and read-only replicas, pg_dump and pg_restore for full database snapshots, pg_basebackup and log shipping/streaming for incremental backups and arbitrary point-in-time recovery, pgBackRest or WAL-E for continuous archiving to cloud storage, and robust COPY FROM and COPY TO tools for quickly importing/exporting data in a variety of formats.

This enables PostgreSQL to offer a greater “peace of mind”—because all of the skeletons in the closet have already been found (and addressed).

ClickHouse vs. TimescaleDB

TimescaleDB is the leading relational database for time series, built on PostgreSQL. It offers everything PostgreSQL has to offer, plus a full time-series database.

As a result, all of the advantages of PostgreSQL also apply to TimescaleDB, including versatility and reliability.

But TimescaleDB adds some critical capabilities that allow it to outperform PostgreSQL for time-series data, including hypertables (automatic time-based partitioning into chunks), native columnar compression, and continuous aggregates.

ClickHouse vs. TimescaleDB Performance for Time-Series Data

Time-series data has exploded in popularity because the value of tracking and analyzing how things change over time has become evident in every industry: DevOps and IT monitoring, industrial manufacturing, financial trading and risk management, sensor data, ad tech, application eventing, smart home systems, autonomous vehicles, professional sports, and more.

Time-series data differs from more traditional business-type (OLTP) data in at least two primary ways: it is primarily insert-heavy, and the scale of the data grows at an unceasing rate. This impacts both data collection and storage, as well as how we analyze the values themselves. Traditional OLTP databases often can't handle millions of transactions per second or provide effective means of storing and maintaining the data.

Time-series data also differs from general analytical (OLAP) data in that queries generally have a time component, and queries rarely touch every row in the database.

Over the last few years, however, the lines between the capabilities of OLTP and OLAP databases have started to blur. For the last decade, the storage challenge was mitigated by numerous NoSQL architectures, which nevertheless failed to effectively deal with the querying and analytics that time-series data requires.

As a result, many applications try to find the right balance between the transactional capabilities of OLTP databases and the large-scale analytics provided by OLAP databases. It makes sense, therefore, that many applications would try to use ClickHouse, which offers fast ingest and analytical query capabilities for time-series data.

So, let's see how both ClickHouse and TimescaleDB compare for time-series workloads using our standard TSBS benchmarks.

Performance Benchmarks

Let me start by saying that this wasn't a test we completed in a few hours and then moved on from. In fact, just yesterday, while finalizing this blog post, we installed the latest version of ClickHouse (released three days ago) and ran all of the tests again to ensure we had the best numbers possible! (benchmarking, not benchmarketing)

In preparation for the final set of tests, we ran benchmarks on both TimescaleDB and ClickHouse dozens of times each—at least. We tried different cardinalities, different lengths of time for the generated data, and various settings for things that we had easy control over—like "chunk_time_interval" with TimescaleDB. We wanted to really understand how each database would perform with typical cloud hardware and the specs that we often see in the wild.

We also acknowledge that most real-world applications don't work like the benchmark does: ingesting data first and querying it second. Separating each operation allows us to understand which settings impacted each database during different phases, allowing us to tweak benchmark settings for each database along the way to get the best performance.

Finally, we always view these benchmarking tests as an academic and self-reflective experience. That is, spending a few hundred hours working with both databases often causes us to consider ways we might improve TimescaleDB (in particular), and thoughtfully consider when we can—and should—say that another database solution is a good option for specific workloads.

Machine Configuration

For this benchmark, we consciously decided to use cloud-based hardware configurations that were reasonable for a medium-sized workload typical of startups and growing businesses. In previous benchmarks, we've used bigger machines with specialized RAID storage, a very typical setup for a production database environment.

But, as time has marched on and we see more developers use Kubernetes and modular infrastructure setups without lots of specialized storage and memory optimizations, it felt more genuine to benchmark each database on instances that more closely matched what we tend to see in the wild. Sure, we can always throw more hardware and resources to help spike numbers, but that often doesn't help convey what most real-world applications can expect.

To that end, for comparing both insert and read latency performance, we used the following setup in AWS:

  • Versions: TimescaleDB version 2.4.0, community edition, with PostgreSQL 13; ClickHouse version 21.6.5 (the latest non-beta releases for both databases at the time of testing).
  • 1 remote client machine running TSBS, 1 database server, both in the same cloud datacenter
  • Instance size: Both client and database server ran on Amazon EC2 virtual machines (m5.4xlarge) with 16 vCPU and 64GB Memory each.
  • OS: Both server and client machines ran Ubuntu 20.04.3
  • Disk Size: 1TB of EBS GP2 storage
  • Deployment method: Installed via apt-get using official sources

Database configuration

ClickHouse: No configuration modifications were made to ClickHouse; we simply installed it per the documentation. There is currently no tool like timescaledb-tune for ClickHouse.

TimescaleDB: For TimescaleDB, we followed the recommendations in the Timescale documentation. Specifically, we ran timescaledb-tune and accepted the configuration suggestions, which are based on the specifications of the EC2 instance. We also set synchronous_commit=off in postgresql.conf. This is a common performance configuration for write-heavy workloads while still maintaining transactional, logged integrity.
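
For reference, the same synchronous_commit setting can also be applied without editing postgresql.conf by hand; a minimal sketch:

-- synchronous_commit can be changed at runtime; a configuration reload is enough
ALTER SYSTEM SET synchronous_commit = 'off';
SELECT pg_reload_conf();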

Insert performance

For insert performance, we used the following datasets and configurations. The datasets were created using the Time-Series Benchmark Suite (TSBS) with the cpu-only use case.

  • Dataset: 100-1,000,000 simulated devices generated 10 CPU metrics every 10 seconds for ~100 million reading intervals.
  • Intervals used for each configuration are as follows: 31 days for 100 devices; 3 days for 4,000 devices; 3 hours for 100,000 devices; 30 minutes for 1,000,000 devices
  • Batch size: Inserts were made using a batch size of 5,000 rows for both ClickHouse and TimescaleDB. We tried multiple batch sizes and found that, in most cases, there was little difference in overall insert efficiency between 5,000 and 15,000 rows per batch for either database.
  • TimescaleDB chunk size: We set the chunk time depending on the data volume, aiming for 7-16 chunks in total for each configuration (more on chunks in the TimescaleDB docs); see the sketch just after this list.
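
As a minimal sketch of how the chunk interval is set (the interval below is illustrative, not the exact value used for every configuration):

-- when creating the hypertable
SELECT create_hypertable('cpu', 'time', chunk_time_interval => INTERVAL '4 days');
-- or adjusted later, affecting chunks created from that point on
SELECT set_chunk_time_interval('cpu', INTERVAL '4 days');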

In the end, these were the performance numbers for ingesting pre-generated time-series data from the TSBS client machine into each database using a batch size of 5,000 rows.

Insert performance comparison between ClickHouse and TimescaleDB with 5,000-row batches

To be honest, this didn't surprise us. We've seen numerous recent blog posts about ClickHouse ingest performance, and since ClickHouse uses a different storage architecture and mechanism that doesn't include transaction support or ACID compliance, we generally expected it to be faster.

The story does change a bit, however, when you consider that ClickHouse is designed to save every "transaction" of ingested rows as separate files (to be merged later using the MergeTree architecture). It turns out that when you ingest data in much smaller batches, ClickHouse is significantly slower and consumes much more disk space than TimescaleDB.

(Ingesting 100 million rows, 4,000 hosts, 3 days of data - 22GB of raw data)

Insert performance comparison between ClickHouse and TimescaleDB using smaller batch sizes: TimescaleDB's insert performance and disk usage stay steady, while ClickHouse's are significantly impacted

Do you notice something in the numbers above?

Regardless of batch size, TimescaleDB consistently consumed ~19GB of disk space with each data ingest benchmark before compression. This is a result of the chunk_time_interval which determines how many chunks will get created for a given range of time-series data. Although ingest speeds may decrease with smaller batches, the same chunks are created for the same data, resulting in consistent disk usage patterns. Before compression, it's easy to see that TimescaleDB continually consumes the same amount of disk space regardless of the batch size.

By comparison, ClickHouse storage needs are correlated to how many files need to be written (which is partially dictated by the size of the row batches being saved). It can actually take significantly more storage to save data to ClickHouse before it can be merged into larger files. Even at 500-row batches, ClickHouse consumed 1.75x more disk space than TimescaleDB for a source data file that was 22GB in size.

Read latency

For benchmarking read latency, we used the following setup for each database (the machine configuration is the same as the one used in the Insert comparison):

  • Dataset: 4,000/10,000 simulated devices generated 10 CPU metrics every 10 seconds for 3 full days (100M+ reading intervals, 1B+ metrics)
  • We also enabled native compression on TimescaleDB. We compressed everything but the most recent chunk of data, leaving it uncompressed. This configuration is a commonly recommended one where raw, uncompressed data is kept for recent time periods and older data is compressed, enabling greater query efficiency (see our compression docs for more). The parameters we used to enable compression are as follows: we segmented by the tags_id column and ordered by time descending and the usage_user column (a sketch of these settings follows this list).
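
A minimal sketch of those compression settings; the one-day policy interval is illustrative, since in the benchmark everything except the most recent chunk was compressed:

ALTER TABLE cpu SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'tags_id',
    timescaledb.compress_orderby   = 'time DESC, usage_user'
);
-- compress chunks older than one day, leaving the most recent chunk uncompressed
SELECT add_compression_policy('cpu', INTERVAL '1 day');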

On read (i.e., query) latency, the results are more complex. Unlike inserts, which primarily vary on cardinality size (and perhaps batch size), the universe of possible queries is essentially infinite, especially with a language as powerful as SQL. Often, the best way to benchmark read latency is to do it with the actual queries you plan to execute. For this case, we use a broad set of queries to mimic the most common query patterns.

The results shown below are the median from 1000 queries for each query type. Latencies in this chart are all shown as milliseconds, with an additional column showing the relative performance of TimescaleDB compared to ClickHouse (highlighted in green when TimescaleDB is faster, in blue when ClickHouse is faster).

Results of benchmarking query performance of 4,000 hosts with 100 million rows of data: TimescaleDB outperforms in almost every query category

Results of benchmarking query performance of 10,000 hosts with 100 million rows of data: TimescaleDB outperforms in almost every query category

Simple rollups

For simple rollups (i.e., single-groupby), when aggregating one metric across a single host for 1 or 12 hours, or multiple metrics across one or multiple hosts (either for 1 hour or 12 hours), TimescaleDB generally outperforms ClickHouse at both low and high cardinality. In particular, TimescaleDB exhibited up to 1058% of the performance of ClickHouse on configurations with 4,000 and 10,000 devices, with 10 unique metrics being generated every read interval.

Aggregates

When calculating a simple aggregate for a single device, TimescaleDB consistently outperforms ClickHouse at any dataset cardinality. In our benchmark, TimescaleDB demonstrates 156% of the performance of ClickHouse when aggregating 8 metrics across 4,000 devices and 164% when aggregating 8 metrics across 10,000 devices, with the advantage growing at higher cardinality.

Double rollups

The one set of queries where ClickHouse consistently bested TimescaleDB in query latency was the double rollup queries that aggregate metrics by time and another dimension (e.g., GROUP BY time, deviceId). We'll go into a bit more detail below on why this might be, but this also wasn't completely unexpected.

Thresholds

When selecting rows based on a threshold, TimescaleDB demonstrates between 249-357% of the performance of ClickHouse when computing thresholds for a single device, but only 58-130% of ClickHouse's performance when computing thresholds for all devices over a random time window.

Complex queries

For complex queries that go beyond rollups or thresholds, the comparison is a bit more nuanced, particularly when looking at TimescaleDB. The difference is that TimescaleDB gives you control over which chunks are compressed. In most time-series applications, especially things like IoT, there's a constant need to find the most recent value of an item or a list of the top X things by some aggregation. This is what the lastpoint and groupby-orderby-limit queries benchmark.

As we've shown previously with other databases (InfluxDB and MongoDB), and as the ClickHouse documentation itself notes, retrieving individual ordered values for items is not a strong use case for a MergeTree-like/OLAP database, generally because there is no ordered index you can define over a time, key, and value. This means asking for the most recent value of an item still causes a more intensive scan of data in OLAP databases.

We see that expressed in our results. TimescaleDB was around 3486% faster than ClickHouse when searching for the most recent values (lastpoint) for each item in the database. This is because the most recent uncompressed chunk will often hold the majority of those values as data is ingested, and it's a great example of why this flexibility with compression can have a significant impact on the performance of your application.
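
For reference, a lastpoint-style query in PostgreSQL/TimescaleDB can lean on DISTINCT ON and a (device, time) ordering. This is a simplified sketch using the TSBS-style cpu schema, not the exact benchmark query:

-- latest reading per device
SELECT DISTINCT ON (tags_id) tags_id, time, usage_user
FROM cpu
ORDER BY tags_id, time DESC;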

We fully admit, however, that compression doesn't always return favorable results for every query form. In the last complex query, groupby-orderby-limit, ClickHouse bests TimescaleDB by a significant amount, almost 15x faster. What our results didn't show is that queries that read from an uncompressed chunk (the most recent chunk) are 17x faster than ClickHouse, averaging 64ms per query. The query looks like this in TimescaleDB:

SELECT time_bucket('60 seconds', time) AS minute, max(usage_user)
FROM cpu
WHERE time < '2021-01-03 15:17:45.311177 +0000'
GROUP BY minute
ORDER BY minute DESC
LIMIT 5;

As you might guess, when the chunk is uncompressed, PostgreSQL indexes can be used to order the data by time quickly. When the chunk is compressed, the data matching the predicate (`WHERE time < '2021-01-03 15:17:45.311177 +0000'` in the example above) must first be decompressed before it is ordered and searched.

When the data for a `lastpoint` query falls within an uncompressed chunk (which is often the case with near-term queries that have a predicate like `WHERE time > now() - INTERVAL '6 hours'`), the results are startling.

(uncompressed chunk query, 4k hosts)

Query latency when the lastpoint and groupby-orderby-limit queries hit an uncompressed chunk in TimescaleDB: querying uncompressed data has a significant positive impact

One of the key takeaways from this last set of queries is that the features provided by a database can have a material impact on the performance of your application. Sometimes, it just works, while other times, having the ability to fine-tune how data is stored can be a game-changer.

Read latency performance summary

  • For simple queries, TimescaleDB outperforms ClickHouse, regardless of whether native compression is used.
  • For typical aggregates, even across many values and items, TimescaleDB outperforms ClickHouse.
  • Doing more complex double rollups, ClickHouse outperforms TimescaleDB every time. To some extent, we were surprised by the gap and will continue to investigate how we can better accommodate queries like this on raw time-series data. One solution to this disparity in a real application would be to use a continuous aggregate to pre-aggregate the data (see the sketch just after this list).
  • When selecting rows based on a threshold, TimescaleDB outperforms ClickHouse and is up to 250% faster.
  • For some complex queries, particularly a standard query like "lastpoint", TimescaleDB vastly outperforms ClickHouse
  • Finally, depending on the time range being queried, TimescaleDB can be significantly faster (up to 1760%) than ClickHouse for grouped and ordered queries. When these kinds of queries reach further back into compressed chunks, ClickHouse outperforms TimescaleDB because more data must be decompressed to find the appropriate max() values to order by.
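
As referenced in the double-rollup point above, a continuous aggregate that pre-computes an hourly rollup might look like this; a minimal sketch assuming the cpu hypertable used throughout this benchmark:

CREATE MATERIALIZED VIEW cpu_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS hour,
       tags_id,
       avg(usage_user) AS mean_usage_user
FROM cpu
GROUP BY hour, tags_id;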

Conclusion

You made it to the end! Thank you for taking the time to read our detailed report.

Understanding ClickHouse and then comparing it with PostgreSQL and TimescaleDB made us appreciate that there is a lot of choice in today’s database market—but often, there is still only one right tool for the job.

Before deciding which to use for your application, we recommend taking a step back and analyzing your stack, your team's skills, and your needs, now and in the future. Choosing the best technology for your situation now can make all the difference down the road. You want to pick an architecture that evolves and grows with you, not one that forces you to start all over when the data starts flowing from production applications.

We’re always interested in feedback, and we’ll continue to share our insights with the greater community.

Want to learn more about TimescaleDB?

Create a free account to get started with a fully managed TimescaleDB instance (100% free for 30 days).

Want to host TimescaleDB yourself? Visit our GitHub to learn more about options, get installation instructions, and more (and, as always, ⭐️ are appreciated!)

Join our Slack community to ask questions, get advice, and connect with other developers (the authors of this post, as well as our co-founders, engineers, and passionate community members, are active on all channels).
