Woken up by an earthquake at 7:00 AM, I couldn't get back to sleep. I took a bath, made my tea and started checking emails, and saw that after last night's deployment three storage nodes out of hundreds were running into Full GC. What was special about the three nodes was that each one was in a different data centre, yet all of them were named app02. This got me curious, so I asked for the nodes to be taken out of rotation and took a heap dump. A new release had gone out the previous night, and I had upgraded the spymemcached library version since New Relic now natively supports instrumentation for it, so it was a suspect. The hunch was a bullseye: the heap dump clearly showed it taking 1.3 GB, and full GCs were taking 6 seconds but not reclaiming anything.
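As an aside, the usual way to grab a heap dump is jmap against the node's PID, but it can also be triggered from inside the running JVM via the HotSpotDiagnostic MXBean. Below is a generic sketch with a hypothetical helper name and dump path, not the exact tooling we used:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.io.IOException;
    import java.lang.management.ManagementFactory;

    public final class HeapDumps {
        // Call this from inside the application (e.g. an admin hook) to dump
        // the current JVM's heap; equivalent to `jmap -dump:live,format=b,...`.
        public static void dump(String path) throws IOException {
            HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live objects, which forces a GC first
            diagnostics.dumpHeap(path, true);
        }

        private HeapDumps() {}
    }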
I have a Quartz job in each JVM that takes a thread dump every 5 minutes and keeps the last 300 of them. Quickly checking a few of them showed a common thread across all 3 data centres: a long-running job that was replicating pending storage objects as per the replication policy, with a backlog of 50M or so objects to catch up on. It seems New Relic was holding on to all the data for that thread instead of just collecting aggregate stats, and since the job had been running for hours, memory kept increasing with each replicated file.
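For the curious, here is a minimal sketch of what such a thread-dump job can look like, assuming a Quartz scheduler is already wired up with a trigger that fires it every 5 minutes; the class name, dump directory and file naming are made up for illustration, and note that ThreadInfo.toString() truncates very deep stacks:

    import java.io.IOException;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Comparator;
    import java.util.stream.Stream;
    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;

    public class ThreadDumpJob implements Job {
        private static final Path DUMP_DIR = Paths.get("/var/log/app/thread-dumps");
        private static final int MAX_DUMPS = 300;

        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            try {
                Files.createDirectories(DUMP_DIR);
                // Capture stack traces plus lock/monitor info for every live thread
                StringBuilder dump = new StringBuilder();
                for (ThreadInfo info : ManagementFactory.getThreadMXBean()
                        .dumpAllThreads(true, true)) {
                    dump.append(info.toString());
                }
                Path file = DUMP_DIR.resolve("threads-" + System.currentTimeMillis() + ".txt");
                Files.write(file, dump.toString().getBytes(StandardCharsets.UTF_8));
                pruneOldDumps();
            } catch (IOException e) {
                throw new JobExecutionException(e);
            }
        }

        // Keep only the newest MAX_DUMPS files so the directory doesn't grow unbounded
        private void pruneOldDumps() throws IOException {
            try (Stream<Path> files = Files.list(DUMP_DIR)) {
                files.sorted(Comparator.comparing(Path::getFileName).reversed())
                     .skip(MAX_DUMPS)
                     .forEach(p -> p.toFile().delete());
            }
        }
    }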
Now I had two options:
- Roll back the library
- Suppress the New Relic instrumentation
In the end I tried one setting that was weird but worked. I had updated the library to spymemcached-2.12.1, but the setting to turn the instrumentation off in the New Relic agent config (newrelic.yml) was:
    class_transformer:
      com.newrelic.instrumentation.spymemcached-2.12.0:
        enabled: false
Notice the setting says 2.12.0 even though my library is 2.12.1, wtf. It was late in the night for the devops engineer and I had been at the desk for almost 5 hours, so we just patched those 3 special nodes and called it a day. The remaining nodes would be patched by Puppet automation before Monday.
Today I also completed 7 years at the company :).