
$40 Million to Help Developers Measure Everything That Matters

We’re excited to announce a $40 million Series B investment led by Redpoint Ventures with participation from all existing investors: Benchmark, New Enterprise Associates, Icon Ventures, and Two Sigma Ventures. Redpoint is no stranger to developer-centric businesses, having also backed Snowflake, Stripe, Twilio, Heroku, and many others, and we are thrilled to have them join the team. Satish Dharmaraj, Redpoint Managing Director, joins Timescale’s board. Combined with earlier rounds (2018, 2019), we have now raised over $70 million.

Every company today is either a software company, becoming a software company, or getting replaced by a software company. This trend is undeniable and often described in different ways: “digital transformation”, “software eating the world,” etc.

Developers are the vanguard of this transformation. And as computing continues to get more powerful and storage even cheaper, developers are able to collect data at higher fidelities than before, measuring everything that matters to them, perpetually building radically better product experiences.

At Timescale, we are dedicated to serving developers worldwide, enabling them to build exceptional data-driven products that measure everything that matters: software applications, industrial equipment, financial markets, blockchain activity, consumer behavior, machine learning models, climate change, and more. Analyzing this data across the time dimension ("time-series data") enables developers to understand what is happening right now, how that is changing, and why that is changing.

This might be measuring the temperature and humidity of soil to help farmers combat climate change. Or measuring flight data to predict landing and arrival times for airlines and travelers. Or tracking every action that a user takes in an application and the performance of the infrastructure underlying that application to help resolve support issues and increase customer happiness. But these are just a few of the thousands of different ways developers are building on top of Timescale’s products today.

TimescaleDB (a.k.a. “Postgres for time series”) is our core product: 100% free, open source (“open core,” to be precise), petabyte-scale, and built on Postgres. If you would like us to manage it for you, our cloud service is available on AWS.

Time-series data is everywhere, and a whole new category of databases, called “time-series databases,” has emerged to store it. All of these are non-standard, non-relational databases, and nearly all of them primarily store metrics (e.g., floats).

Except for TimescaleDB. Ever since it first launched four years ago, we’ve seen massive growth, with a vibrant community running over 2 million monthly active databases today. This is because TimescaleDB is not just a time-series database. It’s also a relational database, specifically, a relational database for time series. Developers who use TimescaleDB get the benefit of a purpose-built time-series database plus a classic relational (Postgres) database, all in one, with full SQL (not “SQL-like”) support.

In fact, our growth accelerated over the past year when we made petabyte-scale TimescaleDB databases free for everyone and made it easier for developers to modify and use TimescaleDB.

With this new financing, we add an invaluable partner who is committed to our mission. We will use the new infusion of capital to keep investing in our suite of products, continuing to make them easier, faster, more reliable, and lower cost.

With today’s announcement, we are also kicking off an ambitious (and maybe slightly foolish 😉) effort to execute 10+ launches throughout the month of May (#AlwaysBeLaunching).

Upcoming launches include: various user-facing and infrastructure aspects of our cloud service; ways we are making some queries in Postgres 8,000x faster; capabilities to more easily manage petabyte-scale deployments; tools to make SQL even more powerful for time-series analytics; and an observability platform for developers trying to measure everything in a cloud-first world.

We’ve come a long way at Timescale, but we’re just getting started.

To all our users, we thank you for your support and feedback and for building alongside us.

To everyone who is not yet a user, we invite you to try Timescale for free today.

Once you are using TimescaleDB, please join the TimescaleDB community and ask any questions you may have about time-series data, databases, and more.

And, for those who share our mission and want to join our fully remote, global team: learn about our open positions here. We are hiring broadly across many roles.

For more on who is using Timescale today, how we got started 4 years ago, our progress since then, and what’s still yet to come, please read on.

A vibrant community with over 2 million monthly active databases

Today, Timescale users are pushing the envelope across every industry, including companies like: Akamai, Bosch, Cisco, Comcast, Credit Suisse, DigitalOcean, Electronic Arts, HPE, IBM, Microsoft, Nutanix, NYSE, OpenAI, Rackspace, Schneider Electric, Samsung, Siemens, Uber, Walmart, Warner Music, and many more. The vibrant Timescale community now runs over two million active databases every month.

Timescale logo wall, including Siemens, Comcast, Samsung, IBM, Electronic Arts, and more.

That also includes innovative startups: battery-less sensor company Everactive, who ingest data from thousands of sensors into TimescaleDB, then surface it to their customers through dashboards, charts, and automated alerts; gaming company Latitude, who store billions of rows of event data from their 1.5 million monthly active gamers; Blue Sky Analytics, who have built India’s most comprehensive and state-of-the-art air quality dataset to help monitor climate risk; and Inflowmatix, who help water utilities all over the world understand and optimize their networks in light of today’s water challenges.

  • “Our intelligent edge platform surrounds everything, from the enterprise to the cloud, and thousands of customers and their businesses rely on Akamai to keep their systems fast, smart, and secure. We demand the same of our technology stack. For our data warehousing needs, we evaluated several scaled Postgres options—but TimescaleDB was the ultimate winner: Postgres with built-in features and semantics for time series. We get the reliability of a relational database, plus compression, continuous aggregates, and rapid ingest, fast queries, and unlimited scale.”
    - Paul Mattal, Director of Network Systems at Akamai
  • “We ingest data from thousands of sensors into TimescaleDB, then surface it to our customers through dashboards, charts, and automated alerts. Other time-series databases would force us to either bundle metrics into JSON blobs (making it hard to work with in-database) or to store every metric separately (forcing heavy, slow joins for most queries of interest). TimescaleDB was an easy choice because it lets us double-down on Postgres, which we already loved using for metadata about our packet streams.”
    - Clayton Yochum, Senior Staff Engineer, Cloud Platform, Everactive
  • “We develop immersive, interactive AI-powered games – and with over 1.5M active gamers on AI Dungeon each month, we generate billions of rows of event data. Three major factors made TimescaleDB our “winning” database: ease of use, flexibility, and performance, even at extremely high volumes. TimescaleDB gives us the fast, real-time analytics, submillisecond query results, and scale we need, now and as our user base continues to grow.”
    - Alan Walton, CTO, Latitude
  • "We empower water utilities around the world to understand and optimize the performance of their networks, and we constantly gather an immense amount of data that goes back years, with an estimated 3TB+ in the next year alone. We love TimescaleDB for 2 main reasons: the PostgreSQL element makes it user-friendly and seamless to operate, and it helps us optimize costs as we amass an ever-growing amount of metrics from organizations around the world."
    - Nikolay Tsvetinov, Backend Platform & DevOps Lead at Inflowmatix Limited

What is time-series data?

As you can see from the above examples, time-series data is everywhere. But what exactly is time-series data?

Real-life sources of time-series data include automotive performance, stock ticker data, smart homes, customer order history and package delivery logistics, vehicle tracking, and more.

Simply put, time series is the measurement of something across time. But, to dig a little deeper, time-series data is the measurement of how something changes.

Here is a simple example:

If I send you $10, then a traditional bank database would atomically debit my account and credit your account. Then, if you send me $10, the same process happens in reverse. At the end of this process, our bank balances would look the same, so the bank might think, “Oh, nothing happened.” And that’s what a traditional database would show you.

But, with a time-series database, the bank could see, “Hey, these two people keep sending each other $10—maybe they’re friends, maybe they’re roommates, maybe there’s something else going on.” That level of granularity, the measurement of how something changes, is what time series enables.

In database terms, time-series datasets track changes to the overall system as INSERTs, not UPDATEs in place, to capture more information about what is happening.
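
To make that concrete, here is a minimal SQL sketch (the table and column names are hypothetical, not from any real banking schema) contrasting update-in-place with the append-only, time-series approach:

```sql
-- Update-in-place: after the two $10 transfers, the balances look
-- unchanged and the back-and-forth history is gone.
UPDATE accounts SET balance = balance - 10 WHERE owner = 'alice';
UPDATE accounts SET balance = balance + 10 WHERE owner = 'bob';

-- Append-only (time-series) approach: every transfer is a new row,
-- so the pattern of repeated transfers remains visible.
CREATE TABLE transfers (
    time         TIMESTAMPTZ NOT NULL,
    from_account TEXT        NOT NULL,
    to_account   TEXT        NOT NULL,
    amount       NUMERIC     NOT NULL
);

INSERT INTO transfers VALUES (now(), 'alice', 'bob', 10);
INSERT INTO transfers VALUES (now(), 'bob', 'alice', 10);

-- How often do these two accounts transact with each other per day?
SELECT date_trunc('day', time) AS day, count(*)
FROM transfers
WHERE from_account IN ('alice', 'bob')
  AND to_account   IN ('alice', 'bob')
GROUP BY day
ORDER BY day;
```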

Time series used to be niche, isolated to industries like finance, process manufacturing (e.g., oil and gas, chemicals, plastics), or power and utilities. But in the last few years, time-series workloads have exploded. This is partly due to the growth in IT monitoring and IoT, but there are also many other new sources of time-series data: cryptocurrencies, gaming, machine learning, and more.

What is happening is that everyone wants to make better data-driven decisions faster, which means collecting data at the highest fidelity possible. Time series is the highest fidelity of data one can capture because it tells you exactly how things are changing over time. While traditional datasets give you static snapshots, time-series data provides the dynamic movie of what’s happening across your system: e.g., your software, your physical power plant, your game, and your customers inside your application.

Time series is no longer some niche workload. Time-series data is everywhere.

In fact, all data is time-series data—if you are able to store it at that fidelity. Of course, that’s the problem with collecting time-series data: it’s relentless. By performing all these inserts, as opposed to updates, you end up with a lot more data, at higher volumes and velocities than ever before. You quickly get to tables in the billions or even trillions of rows. For a traditional database, this creates challenges around performance and scalability.

That’s where TimescaleDB comes in.

How TimescaleDB got started

A new category of databases called “time-series databases” has emerged to store time-series data. All of these databases are non-standard, non-relational databases, and nearly all of them primarily store metrics (e.g., floats).

We faced this world of metric stores when we were building our previous company, an IoT platform. At the time, we had our time-series data in a non-relational database (InfluxDB) and our application data in a relational database (Postgres). And development was slow, difficult, and brittle. I remember one time when we needed to update our dashboard to sort device uptime by device type. Because our data was in two different databases, this minor feature required a whole engineering sprint and glue code. I remember thinking at the time, “Shouldn’t this just be a SQL JOIN?”
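
In a single Postgres-based database, that kind of dashboard change does reduce to one query. A hypothetical sketch (the devices and device_uptime tables are illustrative, not our actual schema at the time):

```sql
-- Relational metadata (device type, owner, etc.) lives in an ordinary table,
-- and time-series measurements live right alongside it, so "sort uptime by
-- device type" is just a JOIN plus GROUP BY.
SELECT d.device_type,
       avg(u.uptime_seconds) AS avg_uptime
FROM device_uptime u
JOIN devices d ON d.device_id = u.device_id
WHERE u.time > now() - INTERVAL '7 days'
GROUP BY d.device_type
ORDER BY avg_uptime DESC;
```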

Out of that pain, we built the first version of TimescaleDB, just for our own internal needs. We built it on Postgres, because we loved Postgres and SQL, and found that it was possible to achieve massive performance (exceeding leading non-relational databases) while retaining all of the goodness of a relational database and full SQL. We spoke with our customers and developer friends about it, and they all said, “Tell me more about this new database you built; it sounds like something we could use, too!” So we listened to our users, pivoted our company, and re-launched as Timescale in April 2017.

When we launched TimescaleDB, we met a fair amount of skepticism. Some didn’t believe one could get the necessary performance and scalability out of Postgres. Others didn’t believe that SQL was a language conducive to time-series analysis. At the time, the top-voted Hacker News comment called us “a rather bad idea.”

Not just a time-series database… but a relational database for time series

But, we struck a nerve with developers, and since launching 4 years ago, we’ve seen explosive growth. This is because TimescaleDB is not just a time-series database. It’s also a relational database: specifically, a relational database for time series.

Developers who use TimescaleDB get the benefit of a purpose-built time-series database plus a classic relational (Postgres) database, all in one, with full SQL (not “SQL-like”) support.

In fact, our vision and fundamental architectural decisions have proven to be correct: time-series data is now ubiquitous, Postgres is the fastest-growing database (period), and SQL is once again the universal language for data analysis.

And since our decision, others are following our lead and building SQL interfaces to time-series data, albeit on non-relational systems, some even with names seemingly “inspired” by ours. (But they still can’t copy our performance.)

TimescaleDB is “Postgres for time series”

TimescaleDB is PostgreSQL with time-series superpowers

TimescaleDB is still “Postgres for time series” at its core. But, thanks to years of dedicated effort focused on listening to our users, serving developers, and constantly pursuing product excellence, TimescaleDB gives developers even better scale and performance, at an even lower cost, along with an even easier developer experience, and a whole lot more:

  • Massive scale (hundreds of billions of rows and millions of inserts per second on a single server, or petabyte scale and 10+ million inserts per second across multiple servers)
  • 95%+ native compression via best-in-class compression algorithms
  • Better query performance than PostgreSQL, MongoDB, and InfluxDB; 6,000x higher insert rates, faster queries, and 150-220x lower cost than Amazon Timestream
  • Even faster queries via continuous aggregates and real-time aggregation: continuously and incrementally refreshed materialized views that can be seamlessly combined with the most recent, not-yet-aggregated raw data (see the sketch below)
  • Advanced functions for time-series manipulation in SQL, such as graphing tools for downsampling and smoothing, and statistical approximations like HyperLogLog, T-Digest, and UddSketch
  • Geospatial support via PostGIS, and other domain-specific capabilities from the breadth of Postgres extensions
  • All of the flexibility and power of SQL (yes, full SQL, not “SQL-like” or “SQLish”)
  • All of the Postgres-compatible language and ORM connectors (Python, JavaScript, Ruby, Go, R, Django, Node.js, and so many more)
  • All of the management and administration tools already available for Postgres (e.g., backup/restore, high-availability, physical replication)
  • All of the SQL-compatible visualization tools and connectors like Tableau, Looker, Grafana, PowerBI, Metabase, and more

All while maintaining the reliability, ease of use, and overall goodness of PostgreSQL. And all for free for self-managed deployments or via a fully managed service on AWS.
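
As a rough illustration of how a few of these pieces fit together, here is a minimal sketch using TimescaleDB’s SQL API (TimescaleDB 2.x; the conditions table and its columns are hypothetical):

```sql
-- An ordinary Postgres table, turned into a hypertable for automatic
-- partitioning by time.
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   TEXT             NOT NULL,
    temperature DOUBLE PRECISION
);
SELECT create_hypertable('conditions', 'time');

-- Native compression, applied to chunks older than a week.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('conditions', INTERVAL '7 days');

-- A continuous aggregate: an incrementally refreshed materialized view
-- that keeps hourly rollups up to date as new data arrives.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;
```

Queries against conditions_hourly then return pre-aggregated results, and with real-time aggregation they are automatically combined with the newest raw rows that have not yet been materialized.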

100% free, with open source at our core

TimescaleDB is an open source (open core, to be specific) and 100% free database. The core of TimescaleDB is licensed under Apache 2, although some advanced features are licensed under the source-available (but also free) Timescale License. All of our software source code is out in the open on GitHub. We have also made many of our other products and tools open source under the Apache 2 or MIT License: e.g., our Time Series Benchmark Suite.

(✨ Fun fact: We have never relicensed any of our software. In fact, last year, we made our Timescale License more open, adding the “right to repair” and “right to improve” and making all enterprise features free. TimescaleDB has no closed-source or paid-only features.)

There is a healthy debate these days about open-source licensing in the cloud era, with some opting for the OSI-compliant AGPL in response. Unlike the AGPL, the Timescale License does not force developers to redistribute the source of any hosted or derivative work. It does, however, prevent others from offering TimescaleDB-as-a-service in the cloud. As a result, for most developers, the Timescale License grants free access to software with almost all of the traditional open-source rights. But, for those who must use an OSI-compliant version, we also make it easy to compile an Apache2-only version of TimescaleDB.

For these reasons, we believe we are not just innovating at the database layer but also pushing the envelope on how to best commercialize open-source software in today’s cloud era. (Read more about how we're building an open-source business in the cloud era.)

We also give back to open-source communities. We employ contributors to Postgres, Prometheus, and Grafana. We run community-centric market research projects, such as our recent “State of Postgres” survey, and make the raw anonymized data available to the rest of the Postgres community and the general public. We’ve detected security vulnerabilities in Postgres. We’ve written tutorials and recorded videos to help developers learn various open-source projects like Postgres and Prometheus. And, of course, we’ve sponsored industry conferences.

We believe in giving back more than we’ve received, and we are proud of what we’ve been able to do so far.

Launch month: Our ambitious plan for May

Overall, we have found that the best way for us to serve developers is to constantly ship improvements, bug fixes, new features, and new products. Internally, our mantra is “Always Be Launching.” (Recognizing of course that different products have different natural cadences: e.g., database features vs. cloud features vs. bug fixes.)

In constant pursuit of this mission, we’ve increased our cadence of releasing new products and features. Recent launches include:

  • Fully managed service available on AWS
  • New analytical functions that extend SQL to perform time-series analytics, including monotonic counters, tools for graphing, statistical sketching, and pipelining (Jan 2021)
  • A more liberal software license that made all of our enterprise features free and added the right to repair and the right to improve (Sep 2020)
  • Continuous improvements to existing features, and bug fixes, via numerous software releases (GitHub)

But we’re just getting started.

With that in mind, today we’re not just announcing our Series B, we’re also kicking off something more ambitious (and possibly more foolish) than anything we’ve done before: a month with 10+ launches of new features to our database, managed service, and observability products.

So for the month of May, expect even more from us. More to come!

Launch month

  1. May 4th - Redesigned and reorganized documentation (Announcement on Twitter)
  2. May 5th - $40 million to help developers measure what matters (this post)
  3. May 6th - How we made some queries 8,000x faster in Postgres
  4. May 7th - Announcing Explorer: A better way to understand your cloud database
  5. May 12th - 2021 State of Postgres Survey Results (See full results here)
  6. May 13th - New Timescale storage plans that scale up to 10TB (Announcement on Twitter)
  7. May 18th - Securing your time-series data with VPC Peering for Timescale
  8. May 20th - New Timescale compute plans that scale up to 32 CPU / 128 GB RAM (Announcement on Twitter)
  9. May 25th - How I learned to stop worrying and love PostgreSQL on Kubernetes: continuous backup/restore validation on Timescale
  10. May 26th - TimescaleDB 2.3: Improving columnar compression for time-series on PostgreSQL (Announcement on Twitter)
  11. May 27th - Announcing automated disk management: Safely managing your cloud database

We’ll keep this list updated with each launch. You can also follow along in our blog (we’ll tag all posts “Always Be Launching”).

Come join us!

To all our users, we thank you again for your support and feedback, and for building alongside us. We are grateful for your trust in us, and will always strive to make your lives more productive and easier.

To everyone who is not yet a user, we invite you to try Timescale for free today. And once you are using TimescaleDB, please join the TimescaleDB community and ask any questions you may have about time-series data, databases, and more.

And, for those who share our mission and want to join our fully remote, global team: learn about our open positions here. We are hiring broadly across many roles.

To the stars! 🐯🚀
