This is part two in a small series about measuring software performance. A lot of what follows is common sense, but I feel it is worth shedding some light on it anyway.
If you haven’t, check out part one.
Say you want to find out what’s behind the buzz of all these new #nosql databases. There’s a large number to choose from today, all in varying degrees of maturity and with different characteristics, so it’d be nice to know which one solves your problem best. A non-exhaustive list of these databases or storage systems includes Memcache[DB], Tokyo Cabinet / Tyrant, Project Voldemort, Scalaris, Dynamite, Redis, Persevere, MongoDB, Solr or my favourite, CouchDB. And these are just some of the open source ones.
This article is not a comprehensive comparison of any of the mentioned systems. Instead, it tries to give you an idea of what to look for when evaluating a storage system, and how to put into perspective the evaluations and benchmarks others have done.
We’ll look at some of the technical aspects of data storage systems: Applying common sense when reading benchmarks; b-trees and hashing; speed vs. concurrency; networked systems and their problems; low level data storage (disks’n stuff); and data reliability on single-nodes and multi-node systems.
There are plenty of non-technical reasons to decide for or against a project, but things like commercial support or a healthy open source community are not part of this article.
From time to time you see some crazy numbers posted to the reddits of the internets that claim fantastic performance.
The (imaginary) SuperfastDB can store 450,000 items per second!
Wow.
No word on where the items are stored (in memory? on a harddrive? Spindles? Solid state?), what an item is exactly and how big it is, what the rest of the hardware looked like, or how to reproduce the result.
But boy, 450,000 a second!
My shoes can do 650,000 a second, but you’ve got to figure out what.
Context is as important as reproducibility. The last article here established that finding out that my system and your system come up with different numbers is not much of a help. Any sort of serious test must come with a set of scripts or programs and comprehensive instructions on how the tests were run.
Everything “cool” in computer science has been around for 25+ years. Actual innovation is rare. Advancements in hardware and new combinations of existing solutions make for new stuff coming out each day (that’s a good thing), but the fundamental rules are the same for all. We’re all running von Neumann machines, quicksort is still pretty quick and hashes and b-trees rule the storage world.
Let’s recap.
Hashing revolves around the idea of O(1) lookups: allocate a number of buckets, create a function that gives you a bucket number for any data item you might want to store, and make sure no two data items hit the same bucket (or work around it when they do). The runtime characteristics follow from that: you only need to ask your function where to look for or store your data. The catch is the allocation of your set of buckets: if you need to store more items than you have buckets, some more work is required (rehashing everything into a bigger set of buckets), which gives you O(N) operations that you can’t ignore in practice.
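To make the bucket idea concrete, here is a minimal sketch in Python. The bucket count, the key names and the list-per-bucket collision handling are all made up for illustration; this is not how any particular database implements its hash store.

```python
# A minimal, illustrative hash table: a fixed number of buckets and a
# function that maps every key to exactly one of them.
NUM_BUCKETS = 8  # assumed; a real system picks and grows this carefully
buckets = [[] for _ in range(NUM_BUCKETS)]

def bucket_for(key):
    # hash() gives us a number for any key; modulo picks the bucket.
    return hash(key) % NUM_BUCKETS

def put(key, value):
    # Collisions simply pile up in the bucket's list ("work around that").
    buckets[bucket_for(key)].append((key, value))

def get(key):
    for k, v in buckets[bucket_for(key)]:
        if k == key:
            return v
    return None

put("phone:jan", "555-0101")
print(get("phone:jan"))  # -> 555-0101
```

Growing beyond eight buckets means rehashing every stored item into a new, bigger set of buckets — that is the O(N) part you can’t ignore.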
The other elephant in the room is b-trees. The fundamental idea here is to get to your data in a minimal number of steps traversing a tree because making a step is expensive, but reading your data is very fast comparatively. Steps are expensive because they translate to a head seek (that is the time your spinning hard drive needs to position the reading arm to find the spot to read your data from), but reading from a harddrive once the reading head is in place is fast.
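To get a feeling for why so few steps are needed, a bit of back-of-the-envelope arithmetic helps. The fanout of 100 keys per node is an assumption for illustration, not a property of any particular database:

```python
import math

# With a fanout of 100 keys per b-tree node, the tree stays very shallow:
fanout = 100              # assumed keys per node, purely for illustration
items = 1_000_000_000     # a billion stored items

steps = math.ceil(math.log(items, fanout))
print(steps)  # -> 5: at most a handful of seeks to find one item among a billion
```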
There are a bunch of more interesting lookup structures like R-Trees for spatial queries, but they are mostly used for secondary indexes on top of a regular data set that lives in a hash or b-tree.
Concurrency is hard. The devil lies in the details and when briefly looking at things, the details are often overlooked. Suits the devil.
Creating storage systems that assume only one access occurs at a time is relatively easy. If resources are shared concurrently, things become tricky. The two larger schools of thought (and practice) are locking and no-locking (heh).
Locking means that the database has to maintain information about everybody who wants to write to a part of the database, and which part that is.
No locking, optimistic locking or MVCC (multi-version concurrency control) moves that burden to the person who is trying to write to the database: she must prove that she won’t be overwriting any existing data.
The trade-off here is leaner request handling on the server, which works well with remote & concurrent clients, at the expense of more complexity on the client (the person who wants to store something in our database).
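A toy sketch of the optimistic idea, loosely modelled on revision checking (think CouchDB’s _rev, though the names and structure below are invented for illustration):

```python
# Optimistic concurrency, sketched: every stored value carries a revision,
# and a writer must present the revision it read.
store = {}  # key -> (revision, value)

class Conflict(Exception):
    pass

def read(key):
    return store.get(key, (0, None))

def write(key, value, expected_rev):
    current_rev, _ = store.get(key, (0, None))
    if current_rev != expected_rev:
        # Somebody else wrote in the meantime; the client has to re-read,
        # merge, and retry -- the server never blocks on a lock.
        raise Conflict(f"expected rev {expected_rev}, store has {current_rev}")
    store[key] = (current_rev + 1, value)

rev, _ = read("phone:jan")
write("phone:jan", "555-0101", rev)     # succeeds
# write("phone:jan", "555-0202", rev)   # would raise Conflict: rev is now stale
```

The server never holds a lock for anybody; a stale writer simply gets a conflict and has to re-read and retry.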
Hybrid approaches are possible too: While MVCC is used internally, the database’s clients can rely on database-side locking (e.g. PostgreSQL or InnoDB).
Just a quick note: We already talk about client and server here. There is a strong case for embedded databases like SQLite that don’t expose a concurrent user model to the outside. The program that needs an embedded database just includes it.
Another approach is having a dedicated computer running a database system and sharing it over the network with any number of clients. Such a database server can often be “a bunch of servers”, or a cluster. More on that later.
A separate database server (networked or not) will need to spend some time to deal with connections, network failures, unspecified client behaviour and so on. The upside is a piece of infrastructure that can be maintained separately. An embedded database will thus be faster but probably won’t solve all of your problems and it will always be tied to your application.
When people tell me “SuperfastDB does 450,000 a second!” I ask “How many fsync()s is that?”. Let me explain:
A database system uses operating system services to talk to any hardware. The operating system exposes a harddrive through a filesystem. The database system talks to the filesystem and asks it to store or retrieve data on its behalf. The filesystem then goes ahead and tries to satisfy the database’s requests.
(I’ll not talk about databases that can use raw block devices to store data. They exist, but they are not as common as those that use the filesystem.)
The filesystem also tries to be clever – for good reasons. When the database requests a piece of data, the filesystem will not only find that piece and return it, it will also store it in a cache to avoid having to actually talk to the harddrive the next time this piece of data gets requested. When the data changes, the filesystem either removes it from the cache or updates both the cache and the harddrive. It might even go further and only store the new data that comes in with a write request in the cache, relying on a periodic task to write all of the cache back to the drive. Writing a bunch of pieces at once is more efficient than storing each one on its own.
More efficient equals faster, and faster is good, right? Well, it depends: if all goes well, this approach is a nice one. But you know computers, things will not go well 100% of the time. The failure scenarios are endless, but they boil down to the question: “What happens when your machine dies and you have data that has only been written to memory?” — The answer isn’t too hard: that data is lost. If there is a delay between a write request finishing and the data being written (or “flushed”) to disk, any data that has been “written” during the delay period is subject to loss.
There are cases where this is not a problem; in other cases it is. A developer should have the chance to decide. (Note that even your hardware could be lying to you about having stored data, but I’ll punt on this one, get proper hardware).
So, flushing to disk needs to happen before you can rest assured your data has been stored. Your operating system has an API call that forces the filesystem to write its cache to disk. It is called fsync() (on UNIX systems) and it is an expensive operation. You can only do so many fsync()s in a second, and it is not a great many.
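To get a feel for the difference, here is a small, hedged measurement sketch in Python. The file paths, record sizes and counts are arbitrary, and the absolute numbers depend entirely on your hardware; the point is the ratio between the two runs.

```python
import os
import time

def write_records(path, count, fsync_each):
    """Write `count` small records, optionally fsync()ing after every write."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, b"x" * 100)
        if fsync_each:
            os.fsync(fd)  # force the data out of the OS cache onto the disk
    elapsed = time.perf_counter() - start
    os.close(fd)
    return count / elapsed

print("no fsync:", int(write_records("/tmp/bench-nofsync.db", 10_000, False)), "writes/s")
print("fsync   :", int(write_records("/tmp/bench-fsync.db", 1_000, True)), "writes/s")
```

On a spinning disk the fsync()ed number typically comes out in the tens to low hundreds per second — nowhere near 450,000.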
The 450,000 items were most likely just written to memory and not to disk.
When writing the files that represent what lives in a database to disk (at the end of the day, your data ends up in one file or another on the filesystem), there are multiple options for handling updates.
An update is a change to your data item, for example, a new phone number. The intuitive way to handle this is to go and find the old phone number in the file, and overwrite it with the new number. Easy.
There are several problems with this approach: What to do if the new phone number is longer than the old one (say you added an international calling prefix)? The new number needs to be written to a different place and the change in location must be recorded. Not too big of an issue.
Back to failure scenarios: again, the reasons can be manifold, but what happens when we’ve (over-)written the first 4 digits of the old number with the new one and then the server dies, the power goes out or the database server crashes? The next time you want to read the phone number you get a mix of the old and the new one (if you are lucky), and you don’t exactly know that this is the case or which parts are missing. Your database file is inconsistent and you need to run an integrity check to find missing bits and correct half-written bytes. In the worst case that means scanning your entire database file a few times before you have resolved all inconsistencies. If you have a lot of data, that can take days.
To solve this, you always write the new phone number to a new place in the database file, and only when it has been fsync()ed to disk do you update the location of the phone number (and then flush that update to disk as well). You will never end up with a database file in an inconsistent state, and after a crash you are back online without an integrity check.
The trade-off is write speed (remember, fsync()s are expensive) in exchange for consistency and no lengthy consistency check after a failure.
A nice bonus is that if the “new place in the database” is the end of the file, you keep your disk-drive head busy with writing data to disk instead of seeking all over the place (remember: seeks are expensive).
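Here is a stripped-down sketch of that append-only idea in Python. The file path, record format and the in-memory index are invented for illustration; a real database also persists the pointer update (the second flush mentioned above) and is far more involved.

```python
import os

# Append-only updates, sketched: new values always go to the end of the
# file, and only after they are safely on disk do we update the pointer
# that says where the current value lives.
DATA = "/tmp/toy.db"
index = {}  # key -> byte offset of the latest value (kept only in memory here)

def put(key, value):
    with open(DATA, "ab") as f:
        offset = f.tell()
        f.write(f"{key}={value}\n".encode())
        f.flush()
        os.fsync(f.fileno())   # value is on disk before we repoint anything
    index[key] = offset        # old value stays intact if we crash before this line

def get(key):
    with open(DATA, "rb") as f:
        f.seek(index[key])
        line = f.readline().decode().rstrip("\n")
    return line.split("=", 1)[1]

put("phone:jan", "555-0101")
put("phone:jan", "+49 555-0101")  # an update is appended, never overwritten
print(get("phone:jan"))
```

If the process dies before the pointer is updated, the file still ends with a complete, fsync()ed record and the old pointer still points at valid data.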
So far, we’ve been looking at scenarios that involve a single database. We learned a great deal (I hope), but in reality we often deal with more than one database. The simplest reason to have two databases is redundancy. Failures can bring down your database temporarily or even permanently. If it is a temporary issue, waiting a bit (or a bit longer) to get up and running again might be an option, but often an application or service should be available at all times. In the case of a fatal failure, where a database server is lost beyond repair, your data is gone if you haven’t stored it in a second place.
“I’ll just make two copies, easy!”. Yup easy, until you look at the details (that damn devil again!).
It’s all about failures again. Consider a single read request. A client connects to a server and asks for a data item. The server looks it up and returns the data to the client. All is well. At any point things can go wrong. The network connection can drop (or slow down so much that client or server assume it dropped), the client can disappear (because of a network failure or crash), as can the server. Clients, servers and the protocols they speak need to be built around the assumption that any of these things (and many more) can go wrong. If any part is not designed to handle error cases, your system will do funny things, but it won’t reliably store and manage your data.
Add complexity: With each write target (store in two places) the possibility of error and the need for proper error handling grows quadratically. When evaluating a distributed storage system, looking at how errors are handled is vital.
Another reason to distribute data among multiple servers is capacity. The three metrics of interest here are read requests, write requests and data. If you have more requests or data than a single machine can handle, you need to move to multiple machines. Each metric calls for different strategies, but they often go along with each other. The need for fault tolerance that I discussed above needs to be considered alongside.
Growing read capacity is relatively easy once you have covered the base case: the source for reading data might not be the same as the target for writing data, so there can be a mismatch (cf. eventual consistency).
Distributing writes and data works by designating two machines to each handle 50% of the operations. A clever intermediary, a proxy server for example, decides which request goes where and all is well: we can store twice as much and we can store at twice the speed. When we need to grow bigger yet, we add another server and tell the proxy to distribute the load equally among them. Note that adding a proxy for distribution introduces a single point of failure, and you don’t want those; there’s added complexity with this approach.
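The simplest version of that proxy decision is hashing the key modulo the number of nodes. A sketch (node names invented; zlib.crc32 used only to get a stable hash):

```python
import zlib

# Naive routing: hash the key, take it modulo the node count.
def node_for(key, nodes):
    return nodes[zlib.crc32(key.encode()) % len(nodes)]

nodes = ["node-a", "node-b"]
print(node_for("phone:jan", nodes))                # lands on one of the two nodes
print(node_for("phone:jan", nodes + ["node-c"]))   # adding a node changes the modulus,
                                                   # so many keys suddenly "live" elsewhere
```

That last line is exactly the resharding problem described next.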
There is another step needed that wasn’t included in the above description: the new node needs to get a copy of all data items that are assigned to it and currently live on the two existing nodes. The process of moving data items to new nodes is called resharding, and it needs to happen every time a new node is added.
Resharding can be an expensive operation if you have a lot of data. Techniques like consistent hashing help with minimising the number of items that need to move. If you are looking at a sharding database, you want to understand how the sharding is performed and whether you like the trade-offs.
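Consistent hashing, sketched minimally in Python (no virtual nodes, no replication; node and key names invented): nodes and keys are hashed onto the same ring, a key belongs to the first node clockwise from it, and a new node only claims the keys between itself and its predecessor.

```python
import bisect
import zlib

def h(s):
    # A stable hash so the example is reproducible.
    return zlib.crc32(s.encode())

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)  # positions on the ring

    def node_for(self, key):
        hashes = [pos for pos, _ in self.ring]
        i = bisect.bisect(hashes, h(key)) % len(self.ring)  # wrap around the ring
        return self.ring[i][1]

    def add(self, node):
        bisect.insort(self.ring, (h(node), node))

keys = [f"item-{i}" for i in range(1000)]
ring = Ring(["node-a", "node-b", "node-c"])
before = {k: ring.node_for(k) for k in keys}
ring.add("node-d")
moved = sum(1 for k in keys if ring.node_for(k) != before[k])
print(f"{moved} of {len(keys)} keys moved")  # roughly the new node's share, not all of them
```

Only roughly the new node’s share of the keys moves, instead of nearly all of them as with the modulo scheme above.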
The CAP Theorem states that out of consistency, availability and partition tolerance, a system can choose to support two at any given moment, but never three.
Consistency guarantees that all clients that talk to a cluster of nodes will always read the same data; write operations are atomic across all nodes.
Availability guarantees that in any (reasonable) failure scenario, clients are still able to access their data.
Partition tolerance guarantees that when nodes in the cluster lose their network connection and two or more completely separated sub-clusters emerge, the system will still be able to store and retrieve data.
If you are aiming for a comparative benchmark of two or more systems, you should run your procedure by the authors. I’ve found that developers are happy to help out with benchmarks by clearing up misconceptions or sharing tricks to speed things up (which you can choose to ignore if you are looking for an out-of-the-box comparison, but that is rarely useful).
That’s a great writeup. Many developers will already know most of it, but it still helps to refresh our knowledge in such a consistent and well-written overview. It should also be a great introduction for newbies.
I am one of the main authors of Project Voldemort (one of the crazy key-value databases mentioned making unrealistic claims :-)).
This is a remarkably sane blog post, I wish more people would read it. There are a few new ideas in storage systems these days, but many of them are bad ideas, and many things that were good in relational databases have been lost.
There is some value to the in-memory test, so I would not ignore it. It does place an upper bound on the performance of the system, and many systems are so bad they can be eliminated purely from this, but it is not representative of real performance. We have tried to flag these kinds of results for what they are. Real disk intensive tests tend to be difficult to reproduce since you need a comparable disk subsystem.
One thing I would mention is that flushing to disk is not such a great guarantee since the disk is the most failure prone part of the system. Written but not flushed on multiple machines is much, much faster and at least as safe as flushed on one disk. This is a big performance win, and though the guarantees are different (they are in terms of availability not durability), they are not really worse.
"Written but not flushed on multiple machines is much, much faster and at least as safe as flushed on one disk."
Good point! I hadn’t quite thought of it that way before, but that makes a lot of sense!
I have the impression that CouchDB is written with an eye toward being a great option for a node within a distributed system, but isn’t (by itself) a complete solution for a distributed database.
That of course is in contrast to the dynamo family of key-value stores (dynomite, project voldemort, cassandra and probably others as well).
It sounds like the ideal scenario is that a number of possible ways of distributing data across couchdb nodes emerge as separate projects, so that you can pick the one that suits you.
If couchdb is going to be a good general purpose node for dropping into a variety of distributed data storage clusters it would be a good idea, as this post seems to suggest, to make the tradeoff of speed (memory) vs durability (fsyncs to disk) configurable by the user. This is similar in concept to the way a dynamo user can configure the number of nodes each piece of data will be stored on, and of course these two knobs could both be exposed.
Maybe this is already the case? I’m curious! Are there any examples of distributed storage systems already using large clusters of CouchDB nodes together with an extra software layer that handles the distributed aspects (say, organizing them into a dynamo-like ring)?
Hey Charlie,
you are exactly right. CouchDB is not a turnkey solution for a distributed database but brings all the primitives to build your distributed database.
The couchdb-lounge project is a thin layer on top of CouchDB that adds sharding and fault tolerance to CouchDB nodes. It is used in production at meebo.com.
CouchDB will eventually have an easy to use multi-node setup, but you’ll always be able to combine it with other layers like Dynamite or Scalaris.
Great article! A minor correction: Metcalfe’s Law is quadratic, not exponential. The "possibility of error and the need for proper error handling" probably doesn’t grow exponentially, but quadratically or slower, depending on the system and what exactly is meant by errors and error handling.
Fixed, thanks!
That’s cool. I think the biggest drawback to such a setup though was mentioned by the Cassandra engineer at the recent #nosql gathering when he said (paraphrasing) "you should have direct control of how the data is laid out on disk, so that you can stream it directly from one node to another without a lot of switches between kernel land and user land"
In other words, you’re trading some raw speed when you separate the concerns of "disk storage node" and "distributed node manager". How much this matters obviously depends on the performance demands of your application, but it definitely seems to matter to facebook.
"Database" is the data
"Database system" or "DBMS" is the software.
Thanks.
Hi anon,
I’m more of a descriptive linguist, you’re prescriptive, that’s fine. In general usage, database and database management systems are used synonymously.
Are you ever going to do any work and stop writing this crap? Seriously, you’ve talked more about the proper way to evaluate and benchmark these damn systems. Why don’t you use some of that time evaluating them and posting all the parameters of your environment, the code used, and let people judge for themselves?
I’d stop wasting your time on long tirades like this one and do some work. After we have those tests we can iterate on the problems or build out more thorough tests around them.