
MySQL Sharding at our company - Part 1

Sharding is a double-edged sword: on one hand it allows you to scale your applications as your customer base grows, and on the other hand it is very hard for junior developers to grasp the concept. I try to abstract and encapsulate as much of that complexity as I can in the framework.

Typically people do one of the following:
1. Functional sharding/vertical sharding: For huge datasets this will only buy you time. When I joined the company everything was stored in Berkeley DB, Lucene, or files on the filesystem. I tried moving everything to MySQL in one project but it became a monster, so instead I started moving pieces of the applications into MySQL one at a time and pushing them to production. This also boosted the team's confidence in MySQL: as more features went live on it, they saw that MySQL was reliable and scalable. I started with smaller pieces that didn't have >10-20M rows but needed MySQL for its ACID properties. Anticipating growth, we decided to create one schema per functionality and avoided joins between two schemas by joining them in the application if required. This way, if one feature became hot and started using MySQL like crazy, we could easily move that schema to its own server or add more slaves for that schema (a rough sketch of this approach follows the list below).


Pros:
  1. Easier for developers to understand.
  2. Local development can happen on one MySQL server while deployment can happen in a different topology.
  3. You can keep adding more slaves if the app is less write intensive, or keep upgrading to newer hardware with more RAM.
Cons:
  1. For bigger datasets that can't fit on one server, this only buys you some more time before you have to move to horizontal sharding.
2. Horizontal sharding: In this approach you distribute your data across many MySQL servers. There are various ways to do this as well:
  1. By customer or user: When a customer registers you assign him a shard, and all his data lives on that MySQL shard. Joins are easy in this approach, but hotspots become an issue and you will need to redistribute the shards across different hosts.
  2. By hash on key: In this approach you distribute a customer's data across many shards; each record is assigned its own shard. There is rarely a need to redistribute shards here, but joins are a pain because you have to do them in the application.
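
To make the functional/vertical approach concrete, here is a minimal sketch of the idea, assuming one connection pool per functional schema kept in a plain map. The schema names, URLs and the use of a commons-dbcp pool are illustrative, not our actual configuration.

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class FunctionalDataSources {
    private final Map<String, DataSource> poolsBySchema = new HashMap<String, DataSource>();

    public FunctionalDataSources() {
        // In local development both URLs can point at the same MySQL server;
        // in production a hot schema can be moved to its own host by changing
        // only its URL here (or in the config that builds this map).
        poolsBySchema.put("files_metadata", createPool("jdbc:mysql://db1:3306/files_metadata"));
        poolsBySchema.put("audit_log", createPool("jdbc:mysql://db2:3306/audit_log"));
    }

    private DataSource createPool(String url) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl(url);
        ds.setUsername("app");
        ds.setPassword("secret");
        return ds;
    }

    // Callers always ask for the pool of the feature they are serving; any
    // "join" across two schemas is done in application code on two result sets.
    public DataSource forSchema(String schemaName) {
        return poolsBySchema.get(schemaName);
    }
}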
We use both sharding approaches.
Sharding by customer or user: In our metadata db for files/folders we assign a shard to a customer on registration. We had to do this because most of our metadata queries require joins and I could not find an easy way to spread them across multiple servers in the allotted time. We find the four least-used shards, randomly pick one of them, and assign the customer to it. We store the customer-to-shard mapping in a global db (a.k.a. the DNS or cluster metadata db). Each incoming request looks up the shard id for the customer, picks the corresponding connection pool out of the pool of connections, and executes the request; a rough sketch follows the example below. I have described the datasource wiring here: http://neopatel.blogspot.com/2012/04/java-spring-shard-datasource-with.html

We divide our servers into clusters so we can update them when a non-backward-compatible change is introduced, and also so that we can manage migrations and do rolling updates (the topic of the next blog post). Several clusters can live on the same host, we can run one cluster per host, or we can split one cluster across many hosts.


Customer acme lives on shard 64, which lives on physical shard 5.
Physical shard 5 lives on host1, schema2.
host1 belongs to cluster C1.
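
A minimal sketch of the per-request routing for this approach follows. The class and method names are hypothetical and the real wiring with Spring is described in the linked post; the chain simply mirrors the example above: customer -> shard id (from the global/cluster metadata db) -> physical shard -> connection pool for that host/schema.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

public class CustomerShardRouter {
    private final Map<String, Integer> customerToShard;     // e.g. "acme" -> 64, cached from the global db
    private final Map<Integer, Integer> shardToPhysical;    // e.g. 64 -> 5
    private final Map<Integer, DataSource> physicalToPool;  // e.g. 5 -> pool for host1/schema2

    public CustomerShardRouter(Map<String, Integer> customerToShard,
                               Map<Integer, Integer> shardToPhysical,
                               Map<Integer, DataSource> physicalToPool) {
        this.customerToShard = customerToShard;
        this.shardToPhysical = shardToPhysical;
        this.physicalToPool = physicalToPool;
    }

    public Connection connectionFor(String customerId) throws SQLException {
        int shardId = customerToShard.get(customerId);             // assigned once at registration
        int physicalShard = shardToPhysical.get(shardId);          // shard -> physical shard mapping
        return physicalToPool.get(physicalShard).getConnection();  // caller executes the query on this connection
    }
}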

Sharding by hash on key: In our S3 clone database (instead of using S3 for storage we use filers for storage and MySQL for the object metadata) we use the other sharding approach, where each object is assigned a shard id based on the least-used shard and the shard id is embedded in the object key itself. We could spread this across multiple servers because all queries are by objectId and do not require any joins at all. We maintain a logical-to-physical shard mapping here in case we need to move a shard to some other host or redistribute it. Whenever an object needs to be looked up, we parse the objectId to get the logical shard id, from that we look up the physical shard id, get a connection pool, and run the query there (a rough sketch follows the example below).

ObjectId = 32.XXXXXXX maps to logical shard 32.
Logical shard 32 lives on physical shard 4.
Physical shard 4 lives on host2, schema5.
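
Here is a minimal sketch of that lookup, assuming object ids of the form "<logicalShard>.<rest>" as in the example above; the names are hypothetical. The point is that only the logical-to-physical mapping has to change when a shard is moved or redistributed, the object keys never do.

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import javax.sql.DataSource;

public class ObjectShardResolver {
    private final Map<Integer, Integer> logicalToPhysical;  // e.g. 32 -> 4
    private final Map<Integer, DataSource> physicalToPool;  // e.g. 4 -> pool for host2/schema5

    public ObjectShardResolver(Map<Integer, Integer> logicalToPhysical,
                               Map<Integer, DataSource> physicalToPool) {
        this.logicalToPhysical = logicalToPhysical;
        this.physicalToPool = physicalToPool;
    }

    public Connection connectionFor(String objectId) throws SQLException {
        // The shard id is embedded in the key itself, so no per-customer lookup is needed.
        int logicalShard = Integer.parseInt(objectId.substring(0, objectId.indexOf('.')));
        int physicalShard = logicalToPhysical.get(logicalShard);
        return physicalToPool.get(physicalShard).getConnection();
    }
}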


In the next parts of the series I will cover more details on how the logical and physical shard mapping is done and the reasons behind the cluster approach.

Part1 of series
Part2 of series
Part3 of series
Part4 of series 
