
Mysql Sharding at our company - Part 1

Sharding is a double-edged sword: on one hand it allows you to scale your application as your customer base grows, but on the other hand it is very hard for junior developers to grasp. I try to abstract and encapsulate as much of that complexity as I can in the framework.

Typically people do one of the following:
1. Functional sharding/vertical sharding: For huge datasets this will only buy you time. When I joined the company everything was stored in Berkeley DB, Lucene, or files on the filesystem. I tried moving everything to Mysql in one project, but it became a monster, so instead I started moving pieces of the application into Mysql and shipping them to production one at a time. This also boosted the team's confidence in Mysql; as more features went live on it, they saw that Mysql was reliable and scalable. I started with smaller pieces that didn't have more than 10-20M rows but needed Mysql for its ACID properties. Anticipating growth, we decided to create one schema per functionality and avoided joins between two schemas by joining them in the application when required. This way, if one feature became hot and started using Mysql like crazy, we could easily move that schema to its own server or add more slaves for that schema (see the sketch after this list).


Pros:
  1. Easier for developers to understand.
  2. Local development can happen on one mysql server while deployment can use a different topology.
  3. You can keep adding more slaves if the app is not very write-intensive, or you can keep upgrading to newer hardware with more RAM.
Cons:
  1. For bigger datasets that can't fit on one server, this will only buy you some more time before you have to move to horizontal sharding.
2. Horizontal sharding: In this approach you distribute your data across many mysql servers. There are various ways to do this as well:
  1. By customer or user: When a customer registers you assign him a shard, and all his data lives on that mysql shard. Joins are easy in this approach, but hotspots become an issue and you will need to redistribute shards onto different hosts.
  2. By hash on a key: In this approach you distribute the data for a customer across many shards; each record is assigned its own shard. There is rarely a need to redistribute shards here, but joins are a pain because you have to do them in the application.
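Going back to the functional/vertical approach, here is a minimal sketch of what one-schema-per-functionality routing could look like. The class name, feature names, hosts, and credentials are hypothetical, and any cross-feature "join" would be done in application code against two separate connections.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class FunctionalShardRouter {

    // One JDBC URL per functional schema; a hot feature's schema can later be
    // moved to its own dedicated server just by changing its URL here.
    private final Map<String, String> schemaUrls = new HashMap<>();

    public FunctionalShardRouter() {
        schemaUrls.put("billing",  "jdbc:mysql://db1:3306/billing");
        schemaUrls.put("metadata", "jdbc:mysql://db1:3306/metadata");
        schemaUrls.put("audit",    "jdbc:mysql://db2:3306/audit");
    }

    // Open a connection to the schema that owns the given feature.
    public Connection connectionFor(String feature) throws SQLException {
        String url = schemaUrls.get(feature);
        if (url == null) {
            throw new IllegalArgumentException("Unknown feature: " + feature);
        }
        return DriverManager.getConnection(url, "app", "secret");
    }
}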
We use both sharding approaches.
Sharding by customer or user: In our metadata db for files/folders we assign a shard to a customer on registration. We had to do this because most of our metadata queries require joins and I could not find an easy way to spread them across multiple servers in the allotted time. We find the four least used shards, randomly pick one of them, and assign the customer to it. We store the customer-to-shard mapping in a global db (a.k.a. dns or cluster metadata db). Each incoming request looks up the shard id by customer, picks the matching connection pool out of the pool of connections, and executes the request. I have described it here: http://neopatel.blogspot.com/2012/04/java-spring-shard-datasource-with.html
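Here is a minimal sketch of that registration-time assignment (find the four least used shards, pick one at random, record the customer-to-shard mapping). The in-memory maps stand in for tables in the global metadata db, and all names are hypothetical.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class CustomerShardAssigner {

    private final Map<Integer, Integer> customersPerShard = new HashMap<>(); // shardId -> customer count
    private final Map<String, Integer> customerToShard = new HashMap<>();    // customerId -> shardId
    private final Random random = new Random();

    public CustomerShardAssigner(List<Integer> shardIds) {
        for (int shardId : shardIds) {
            customersPerShard.put(shardId, 0);
        }
    }

    // Assign the new customer to one of the four least used shards, chosen at random.
    public synchronized int assignShard(String customerId) {
        List<Integer> shardIds = new ArrayList<>(customersPerShard.keySet());
        shardIds.sort(Comparator.comparing(customersPerShard::get));
        List<Integer> leastUsed = shardIds.subList(0, Math.min(4, shardIds.size()));
        int shardId = leastUsed.get(random.nextInt(leastUsed.size()));

        customerToShard.put(customerId, shardId);          // persist mapping in the global db
        customersPerShard.merge(shardId, 1, Integer::sum); // update usage counter
        return shardId;
    }

    public Integer shardFor(String customerId) {
        return customerToShard.get(customerId);
    }
}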

We divide our servers into clusters so we can update them when a non-backward-compatible change is introduced, and also so that we can manage migrations or do rolling updates (the topic of the next blog post). Several clusters can live on the same host, we can do one cluster per host, or we can split one cluster across many hosts.


Customer acme lives on shard 64, which lives on physical shard 5.
Physical shard 5 lives on host1, schema2.
host1 belongs to cluster C1.
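A minimal sketch of how a request could be routed through this mapping chain (customer -> logical shard -> physical shard -> host/schema/cluster). In reality these lookups hit the global metadata db and end in a connection pool; the class and field names here are hypothetical.

import java.util.HashMap;
import java.util.Map;

public class ShardResolver {

    // A physical shard is a schema on a host, owned by a cluster.
    static class PhysicalShard {
        final String host;
        final String schema;
        final String cluster;
        PhysicalShard(String host, String schema, String cluster) {
            this.host = host; this.schema = schema; this.cluster = cluster;
        }
    }

    private final Map<String, Integer> customerToLogicalShard = new HashMap<>();
    private final Map<Integer, Integer> logicalToPhysicalShard = new HashMap<>();
    private final Map<Integer, PhysicalShard> physicalShards = new HashMap<>();

    public ShardResolver() {
        // Mirrors the example above: acme -> shard 64 -> physical shard 5 -> host1/schema2 in cluster C1.
        customerToLogicalShard.put("acme", 64);
        logicalToPhysicalShard.put(64, 5);
        physicalShards.put(5, new PhysicalShard("host1", "schema2", "C1"));
    }

    // Resolve a customer to the host/schema whose connection pool should serve the request.
    public PhysicalShard resolve(String customerId) {
        int logicalShard = customerToLogicalShard.get(customerId);
        int physicalShard = logicalToPhysicalShard.get(logicalShard);
        return physicalShards.get(physicalShard);
    }
}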

Sharding by hash on key: In our S3 clone database (instead of using S3 for storage we use filers for storage and Mysql for object metadata) we use the other sharding approach, where each object is assigned a shard id based on the least used shard and the shard id is embedded in the object key itself. We could spread this across multiple servers because all queries are by objectId and do not require any joins at all. We maintain a logical-shard-to-physical-shard mapping here in case we need to move a shard to some other host or redistribute it. Whenever an object needs to be looked up, we parse the objectId to get the logical shard id, from that we look up the physical shard id, get a connection pool, and run the query there (see the sketch after the example below).

ObjectId = 32.XXXXXXX maps to logical shard 32.
logical shard 32 lives on physical shard 4
physical shard 4 lives on host2, schema5
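A minimal sketch of that lookup path, using the example mapping above. Parsing the shard id out of the key prefix and the class/method names are my assumptions, not the exact production code.

import java.util.HashMap;
import java.util.Map;

public class ObjectShardRouter {

    private final Map<Integer, Integer> logicalToPhysical = new HashMap<>();
    private final Map<Integer, String> physicalToHostSchema = new HashMap<>();

    public ObjectShardRouter() {
        logicalToPhysical.put(32, 4);                 // logical shard 32 lives on physical shard 4
        physicalToHostSchema.put(4, "host2/schema5"); // physical shard 4 lives on host2, schema5
    }

    // The logical shard id is the prefix of the object key, before the first dot.
    static int logicalShardOf(String objectId) {
        return Integer.parseInt(objectId.substring(0, objectId.indexOf('.')));
    }

    // Resolve an object key to the host/schema whose connection pool should run the query.
    public String hostSchemaFor(String objectId) {
        int logical = logicalShardOf(objectId);
        int physical = logicalToPhysical.get(logical);
        return physicalToHostSchema.get(physical);
    }

    public static void main(String[] args) {
        ObjectShardRouter router = new ObjectShardRouter();
        System.out.println(router.hostSchemaFor("32.a1b2c3")); // prints host2/schema5
    }
}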


In the next parts of the series I will cover more details on how the logical-to-physical shard mapping is done and the reasons behind the cluster approach.

Part1 of series
Part2 of series
Part3 of series
Part4 of series 
