
What a rocky start to Labor Day weekend

I was woken up by an earthquake at 7:00 AM and couldn't get back to sleep. I took a bath, made my tea and started checking emails, and saw that after last night's deployment three storage nodes out of hundreds were running into Full GCs. What was special about the three nodes was that each one was in a different data centre, but they all had the same name, app02. This got me curious, so I asked for the nodes to be taken out of rotation and took a heap dump. A new release had gone out last night in which I had upgraded the spymemcached library version, since New Relic now natively supports instrumentation for it, so it was the prime suspect. The hunch was a bullseye: the heap dump clearly showed the New Relic instrumentation holding on to 1.3G, and Full GCs were taking 6 seconds but not reclaiming anything.
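
For reference, a heap dump can be taken with jmap against the pid, or programmatically from inside the JVM. Below is only a minimal sketch of the in-process route via the HotSpot diagnostic MXBean; the output path is an assumption for illustration, not our actual tooling.

  import java.io.IOException;
  import java.lang.management.ManagementFactory;

  import com.sun.management.HotSpotDiagnosticMXBean;

  public class HeapDumper {

    // Writes a heap dump of the current JVM to the given .hprof file.
    public static void dumpHeap(String outputFile) throws IOException {
      HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
          ManagementFactory.getPlatformMBeanServer(),
          "com.sun.management:type=HotSpotDiagnostic",
          HotSpotDiagnosticMXBean.class);
      // live=true dumps only reachable objects, which is what matters when
      // you want to see what is still retained after full GCs.
      bean.dumpHeap(outputFile, true);
    }

    public static void main(String[] args) throws IOException {
      dumpHeap("/tmp/app02-heap.hprof"); // assumed path
    }
  }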



I have a Quartz job in each JVM that takes a thread dump every 5 minutes and saves the last 300 of them. Checking a few of them quickly showed a common thread across all three data centres: a long-running job that was replicating pending storage objects as per the replication policy, with a backlog of 50M or so objects to catch up on. It seems New Relic was holding on to all that per-call data for the thread instead of just collecting aggregate stats, and since the job had been running for hours, memory kept growing with each replicated file.
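
The thread-dump job itself is nothing fancy. Here is a minimal sketch of the idea; the dump directory, file naming and retention count are assumptions for illustration, not our exact implementation.

  import java.io.File;
  import java.io.IOException;
  import java.lang.management.ManagementFactory;
  import java.lang.management.ThreadInfo;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.util.Arrays;
  import java.util.Comparator;

  import org.quartz.Job;
  import org.quartz.JobExecutionContext;
  import org.quartz.JobExecutionException;

  public class ThreadDumpJob implements Job {

    private static final File DUMP_DIR = new File("/var/log/app/thread-dumps"); // assumed path
    private static final int MAX_DUMPS = 300;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
      try {
        DUMP_DIR.mkdirs();
        // Capture every live thread with its full stack trace and lock info.
        ThreadInfo[] threads = ManagementFactory.getThreadMXBean().dumpAllThreads(true, true);
        StringBuilder dump = new StringBuilder();
        for (ThreadInfo info : threads) {
          dump.append('"').append(info.getThreadName()).append("\" ")
              .append(info.getThreadState()).append('\n');
          for (StackTraceElement frame : info.getStackTrace()) {
            dump.append("    at ").append(frame).append('\n');
          }
          dump.append('\n');
        }
        File out = new File(DUMP_DIR, "threads-" + System.currentTimeMillis() + ".txt");
        Files.write(out.toPath(), dump.toString().getBytes(StandardCharsets.UTF_8));
        pruneOldDumps();
      } catch (IOException e) {
        throw new JobExecutionException("Failed to write thread dump", e);
      }
    }

    // Keep only the newest MAX_DUMPS files so the directory does not grow forever.
    private void pruneOldDumps() {
      File[] files = DUMP_DIR.listFiles();
      if (files == null || files.length <= MAX_DUMPS) {
        return;
      }
      Arrays.sort(files, Comparator.comparingLong(File::lastModified));
      for (int i = 0; i < files.length - MAX_DUMPS; i++) {
        files[i].delete();
      }
    }
  }

Schedule it with a Quartz trigger every 5 minutes and you get a rolling window of recent thread activity to dig through when something like this happens.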

Now I had two options:
  1. Roll back the library
  2. Suppress the New Relic instrumentation
I went with option #2 as it wouldn't require a new build, just a puppet run and a restart of all nodes. Boy, was I wrong: New Relic has negligible documentation for the yml file. I filed a support ticket, as they have no phone support even for emergencies, even though we pay thousands of dollars a month. Since the IPO their support quality has also gone down. The support rep told me, hey, go and disable the class; I'm like, it's natively supported by you, so give me some hints on how to do it. She told me an OOM ticket had been filed two versions ago, so I could just wait for that to be fixed, wtf. I waited 20 minutes for her to tell me how to disable it and got no reply; it seems even she didn't know how to do it and was buying time, leaving me hanging out to dry. I started mining the support forum and Google and tried 5-10 settings with our devops engineer; the New Relic agent happily accepts even wrong settings and still shows them in the UI, so I ended up with a settings nightmare.


Finally I tried one setting that was weird but worked. I had updated the library to spymemcached-2.12.1, but the setting to turn it off in New Relic was:

  class_transformer:
    com.newrelic.instrumentation.spymemcached-2.12.0:
      enabled: false

Notice the setting is called 2.12.0 even though my library is 2.12.1, wtf (presumably the instrumentation module is named after the lowest library version it supports, not the exact version you run). It was late at night for the devops engineer and I had been at my desk for almost 5 hours, so we just patched those 3 special nodes and called it a day. The rest of the nodes would be patched by puppet automation before Monday.

Today I also completed 7 years at the company :).

