
From NoSQL to MySQL (BDB to MySQL) - Part5

This weekend was painful. The hardware arrived, but 3 out of the 12 MySQL servers had hardware issues, so only 8 were available (4 master-slave pairs). I didn't want to take the risk of bringing a cluster up with a master and no slave, so I intentionally left the ninth server out.

We started migration in the first DC, and after 4 hours the DC guys took the inbound network to the DC down for some maintenance; it took them 3 hours to restore it, so 3 hours were lost. We were up until 6:00 AM on Saturday morning and were only able to finish 8-10 nodes.

We worked Saturday night as well, and as usual Berkeley DB gave us last-minute jitters: one node was so badly fragmented that it took 10 hours to migrate, and another node got stuck in the middle of migration, so we had to stop the migration, run db_recover, and then resume it. In all, 12-15 nodes were migrated that day.
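For anyone curious what that recovery step looks like, here is a minimal Python sketch of the stop/recover/resume sequence. The environment path and the migration script name and flag are illustrative placeholders, not our actual tooling; db_recover itself is the standard Berkeley DB utility, and -h points it at the environment home.

import subprocess

# Assumed Berkeley DB environment directory (placeholder path).
BDB_ENV_DIR = "/data/bdb/env"

# With the stuck migration stopped, run recovery on the environment.
# db_recover replays the logs and brings the env back to a consistent state.
subprocess.run(["db_recover", "-h", BDB_ENV_DIR], check=True)

# Resume the migration from where it left off (script name and --resume
# flag are hypothetical stand-ins for our internal migration tooling).
subprocess.run(["python", "migrate_bdb_to_mysql.py", "--resume"], check=True)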

After the last 3 weeks, we are now about 60% done across all DCs.

But all this pain is worth it for a calm Monday with no system alerts and no Sales/Operations team hovering around you to diagnose scalability issues. That in turn translates into better customer conversion and satisfaction.

It's the first Monday in the last few months where not a single Nagios alert was raised for an app node spiking. Hurray!


Part1
Part2
Part3
Part4
Part5

