
Memcached stale keys on new code deployment and evictions

Scalability problems are interesting. This weekend we deployed a new release and suddenly started getting LDAP alerts from one datacentre. All fingers pointed to something new in the code making more LDAP queries, but I didn't think we had made any LDAP code changes; in fact, we were moving away from LDAP to MySQL. We use 15G of memcached, and looking into the memcached stats I found that we were seeing evictions. We had bumped memory from 12G to 15G in the other datacentre during the same release, and that datacentre was not having issues. So this problem was interesting.
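Eyeballing evictions out of the raw stats output is painful, so here is a minimal sketch of checking the counter programmatically, assuming the Java client is spymemcached (the host name is a placeholder; adapt to your pool). A non-zero and growing "evictions" counter means memcached is throwing out live items to make room for new ones:

    import java.net.InetSocketAddress;
    import java.net.SocketAddress;
    import java.util.Map;

    import net.spy.memcached.MemcachedClient;

    public class EvictionCheck {
        public static void main(String[] args) throws Exception {
            // Connect to the node we want to inspect (placeholder host/port).
            MemcachedClient client = new MemcachedClient(
                    new InetSocketAddress("memcache-host", 11211));

            // getStats() returns the raw "stats" output per server; the
            // "evictions" counter counts items pushed out before expiry.
            for (Map.Entry<SocketAddress, Map<String, String>> entry
                    : client.getStats().entrySet()) {
                System.out.println(entry.getKey()
                        + " evictions=" + entry.getValue().get("evictions"));
            }
            client.shutdown();
        }
    }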

At last we found out that our new release had made 90% of the memcached data stale. We store our objects as JSON in memcached, and removing deprecated fields from those objects made the JSON already sitting in cache unusable. The code bug was that those old JSONs were still sitting in cache consuming memory. As new JSONs were pumped into memcached, they fought the old ones for memory, and since memcached's LRU is slab based, when it evicts within a slab it throws out good data too. This in turn caused a thundering herd on LDAP.
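The read path should have evicted the unusable JSON itself instead of letting it rot. Roughly, the idea looks like the sketch below; it assumes Jackson for JSON parsing, and CachedUser and the wrapper class are illustrative names, not our actual code. The point is the catch block: when a cached value no longer parses into the current object shape, delete the key right away so it stops consuming memory:

    import java.io.IOException;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import net.spy.memcached.MemcachedClient;

    public class UserCache {
        // Illustrative shape of a cached object after deprecated fields were dropped.
        public static class CachedUser {
            public String id;
            public String name;
        }

        private final MemcachedClient memcached;
        private final ObjectMapper mapper = new ObjectMapper();

        public UserCache(MemcachedClient memcached) {
            this.memcached = memcached;
        }

        public CachedUser get(String key) {
            String json = (String) memcached.get(key);
            if (json == null) {
                return null; // ordinary miss, caller goes to the backing store
            }
            try {
                // Jackson rejects unknown properties by default, so an old
                // JSON still carrying deprecated fields fails to parse.
                return mapper.readValue(json, CachedUser.class);
            } catch (IOException stale) {
                // Stale entry: delete it immediately so the memory is freed
                // now, instead of the entry hanging around until the slab
                // LRU evicts it along with good data.
                memcached.delete(key);
                return null;
            }
        }
    }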

LDAP is nice, but when it comes to scaling past 2-3 million objects it sucks: internally OpenLDAP uses a BDB backend, and after a point both reads and writes start taking a long time no matter what you do. The good news is that in the coming month we are moving from LDAP to MySQL, which will make us scalable again. I will write a blog post on how to migrate millions of customers from LDAP to MySQL without them knowing about it. The analogy is changing the tires of race cars mid-race and letting them back on the track without disrupting the ongoing race.

