Sharding is a double-edged sword: on one hand it allows you to scale your application as your customer base grows, but on the other hand it is very hard for junior developers to grasp. I try to abstract and encapsulate as much of the complexity as I can in the framework.
Typically people do one of the following:
1. Functional sharding/Vertical sharding: For huge datasets this will only buy you time. When I joined the company everything was stored in Berkeley DB, Lucene, or in files on the filesystem. I tried moving everything to MySQL in one project but it became a monster, so instead I started carving pieces of the application out and moving them to MySQL in production one at a time. This also boosted the team's confidence in MySQL: as more features went live on it, they saw that MySQL was reliable and scalable. I started with smaller pieces that didn't have more than 10-20M rows but needed MySQL for its ACID properties. Anticipating growth, we decided to create one schema per functionality and avoided joins between two schemas by joining them in the application when required (see the application-side join sketch after this list). This way, if one feature became hot and started hammering MySQL, we could easily move that schema to its own server or add more slaves for that schema.
Pros:
- Easier for developers to understand.
- Local development can happen on one MySQL server while production deploys on a different topology.
- You can keep adding slaves if the app is not write intensive, or keep upgrading to newer hardware with more RAM.
Cons:
- For bigger datasets that can't fit in one server, this only buys you more time before you have to move to horizontal sharding.
2. Horizontal sharding, in one of two flavors:
- By customer or user: When a customer registers you assign him a shard, and all his data lives on that MySQL shard. Joins are easy in this approach, but hotspots become an issue and you will need to redistribute the shards across hosts.
- By hash on key: In this approach you distribute a customer's data across many shards; each record is assigned its own shard. There is rarely a need to redistribute shards here, but joins are a pain as you have to do them in the application.
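To make the "join in the application" point from (1) concrete, here is a minimal sketch; the schema split, table names, and columns are made up for illustration and are not our actual schemas:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class AppSideJoin {
    // Functional sharding forbids cross-schema joins at the db level, so we
    // stitch the two result sets together in the application instead.
    public Map<Long, String> ownerNameByOrder(Connection ordersDb, Connection usersDb)
            throws SQLException {
        // 1) read order -> owner from the orders schema
        Map<Long, Long> orderToUser = new HashMap<>();
        try (Statement st = ordersDb.createStatement();
             ResultSet rs = st.executeQuery("SELECT order_id, user_id FROM orders")) {
            while (rs.next()) {
                orderToUser.put(rs.getLong("order_id"), rs.getLong("user_id"));
            }
        }
        // 2) resolve each owner against the users schema
        //    (in practice you would batch these with an IN (...) clause)
        Map<Long, String> result = new HashMap<>();
        try (PreparedStatement ps =
                 usersDb.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            for (Map.Entry<Long, Long> e : orderToUser.entrySet()) {
                ps.setLong(1, e.getValue());
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        result.put(e.getKey(), rs.getString("name"));
                    }
                }
            }
        }
        return result;
    }
}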
Sharding by customer or user: In our metadata db for files/folders we assign a shard to a customer on registration. We had to do this because most of our metadata queries require joins, and I could not find an easy way to spread those across multiple servers in the allotted time. We find the four least used shards, randomly pick one of them, and assign the customer to it. We store the customer-to-shard mapping in a global db (a.k.a. the dns or cluster metadata db). Each incoming request looks up the shard id by customer, picks the matching connection pool out of the pool of connections, and executes the request there (a routing sketch follows the example mapping below). I have described the datasource wiring here: http://neopatel.blogspot.com/2012/04/java-spring-shard-datasource-with.html
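A minimal sketch of that assignment step, assuming a hypothetical GlobalDb client for the cluster metadata db (the names here are illustrative, not our actual classes):

import java.util.List;
import java.util.Random;

// Hypothetical client for the global/cluster metadata db; names are illustrative.
interface GlobalDb {
    List<Integer> findLeastUsedShards(int n);  // the n shards with the fewest customers
    void saveCustomerShardMapping(String customerId, int shardId);
}

public class ShardAssigner {
    private final GlobalDb globalDb;
    private final Random random = new Random();

    public ShardAssigner(GlobalDb globalDb) {
        this.globalDb = globalDb;
    }

    // On registration: pick one of the four least used shards at random
    // and persist the customer-to-shard mapping in the global db.
    public int assignShard(String customerId) {
        List<Integer> leastUsed = globalDb.findLeastUsedShards(4);
        int shardId = leastUsed.get(random.nextInt(leastUsed.size()));
        globalDb.saveCustomerShardMapping(customerId, shardId);
        return shardId;
    }
}

Picking randomly among the four least used shards, rather than always the single least used one, avoids piling every new registration onto the same shard between usage refreshes.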
We divide our servers into clusters so we can update them when a non-backward-compatible change is introduced, and so that we can manage migrations and do rolling updates (the topic of the next blog post). Several clusters can live on the same host, we can do one cluster per host, or we can split one cluster across many hosts.
Customer acme lives on shard 64, which lives on physical shard 5.
Physical shard 5 lives on host1, schema2.
host1 belongs to cluster C1.
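The per-request routing then reduces to two lookups; a rough sketch, assuming the customer-to-shard mapping is cached from the global db and one connection pool per shard is built at startup (the Spring wiring is in the post linked above):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

public class ShardRouter {
    private final Map<String, Integer> customerToShard;  // cached from the global db
    private final Map<Integer, DataSource> shardToPool;  // one connection pool per shard

    public ShardRouter(Map<String, Integer> customerToShard,
                       Map<Integer, DataSource> shardToPool) {
        this.customerToShard = customerToShard;
        this.shardToPool = shardToPool;
    }

    // Resolve the connection pool that holds this customer's data.
    public Connection connectionFor(String customerId) throws SQLException {
        Integer shardId = customerToShard.get(customerId);
        if (shardId == null) {
            throw new IllegalStateException("No shard mapping for customer " + customerId);
        }
        return shardToPool.get(shardId).getConnection();
    }
}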
Sharding by hash on key: In our S3-clone database (instead of using S3 for storage we use filers for storage and MySQL for object metadata) we use the other approach to sharding, where each object is assigned a shard id based on the least used shard, and the shard id is embedded in the object key itself. We could spread this across multiple servers because all queries are by object id and do not require any joins at all. We maintain a logical-shard-to-physical-shard mapping here in case we need to move a shard to some other host or redistribute it. Whenever an object needs to be looked up, we parse the object id to get the logical shard id, from that we look up the physical shard id, get a connection pool, and run the query there.
ObjectId = 32.XXXXXXX maps to logical shard 32.
Logical shard 32 lives on physical shard 4.
Physical shard 4 lives on host2, schema5.
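That lookup path can be sketched as follows; the 32.XXXXXXX key format comes from the example above, while the two mapping tables are assumed to be loaded from the cluster metadata db:

import java.util.Map;

public class ObjectShardResolver {
    // logical shard id -> physical shard id, reloaded when shards are moved
    private final Map<Integer, Integer> logicalToPhysical;
    // physical shard id -> "host/schema" location, e.g. 4 -> "host2/schema5"
    private final Map<Integer, String> physicalToLocation;

    public ObjectShardResolver(Map<Integer, Integer> logicalToPhysical,
                               Map<Integer, String> physicalToLocation) {
        this.logicalToPhysical = logicalToPhysical;
        this.physicalToLocation = physicalToLocation;
    }

    // The logical shard id is embedded in the key itself: "32.XXXXXXX" -> 32.
    public int logicalShard(String objectId) {
        return Integer.parseInt(objectId.substring(0, objectId.indexOf('.')));
    }

    public String locate(String objectId) {
        int logical = logicalShard(objectId);
        int physical = logicalToPhysical.get(logical);
        return physicalToLocation.get(physical);  // e.g. "host2/schema5"
    }
}

With the example mappings above, locate("32.XXXXXXX") parses out logical shard 32, resolves it to physical shard 4, and returns host2/schema5, where the query runs.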
In the next parts of this series I will cover more details on how the logical-to-physical shard mapping is done and the reasons behind the cluster approach.
Part1 of series
Part2 of series
Part3 of series
Part4 of series