
AWS and the rise of DevOps

I always used to wonder how Snapchat, Pinterest and Instagram were able to scale to millions of users with just 10-15 engineers. I am a Java architect, but when it comes to networking, operations and other infrastructure work I am a noob beyond the basic skills. Recently our ops team did some subnet changes, changed some IPs and added a 10G network between some services. All of this is a grey area to me, and my reaction was "you really need to hire operations people for this", so how did those other startups do it without so many people? One of my friends had been after me for months to help him move his Jenkins server from Ukraine to EC2, since Ukraine is in turmoil. I have no ops expertise, so this was tricky, but here is how I got it done over two weekends, as Dallas was freezing due to a cold front and I don't have a driver's license thanks to an immigration fiasco at USCIS. So my friend really benefited from me having nothing else to do on the weekend.

  1. I took a vanilla CentOS AMI and launched an instance in EC2. When launching, it asked me whether I wanted to launch into EC2-Classic or EC2-VPC. I was curious, so I read up and learned that EC2-VPC lets you isolate your instances and gives you better security by using security groups to block outside traffic to internal servers.
  2. Creating a VPC was a piece of cake as I followed the wizard and read the docs. I really wanted to use http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html but went with http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html since the former requires an extra NAT instance (a rough CLI sketch of what the wizard does is after this list).
  3. Finally I launched an EC2 instance with CentOS and installed Jenkins using sudo yum install jenkins (see the install/restore sketch after this list).
  4. One thing I wanted was a banner when I ssh into the instance, so I googled and found that all you need to do is go to http://patorjk.com/software/taag/#p=display&h=0&f=Big&t=Jenkins%0A, generate a text banner and put it in /etc/motd, and you are done. Now when you log in to the instance it prints "Jenkins".
  5. Now, how do I move Jenkins? His old instance had been up for 2 years, and I read that moving Jenkins from one box to another just requires copying the home directory across: http://stackoverflow.com/questions/8724939/how-to-move-jenkins-from-one-pc-to-another. But when I ran du -hsm I got 40G. I was like, no way am I moving all of that.
  6. Instead, I installed the Jenkins thinBackup plugin https://wiki.jenkins-ci.org/display/JENKINS/thinBackup on both servers. On the old one I took a thinBackup, zipped it, copied it over and unpacked it over the JENKINS_HOME directory /var/lib/jenkins on the new server (see the sketch after this list).
  7. Then I ran "chown -R jenkins:jenkins /var/lib/jenkins" so the Jenkins user owned everything.
  8.  I asked him to switch DNS to new server.
  9. I took a dump of his svn repo and imported it into a new https://www.assembla.com account, so I didn't need to set up an svn server separately (the dump command is in the sketch after this list).
  10. We restarted Jenkins using "service jenkins restart" and the new Jenkins came up with all the configs from the old server. I changed all the svn repo paths to point to the Assembla credentials/URLs and we were done.
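For the curious, here is roughly what the Scenario 1 wizard (a VPC with a single public subnet) does under the hood, sketched with the AWS CLI. The CIDR blocks and the vpc-/subnet-/igw-/rtb- IDs below are placeholders, not the ones I actually used:

# create the VPC and a public subnet (example CIDRs)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.0.0/24
# give the VPC a route to the internet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-22222222 --vpc-id vpc-11111111
aws ec2 create-route-table --vpc-id vpc-11111111
aws ec2 create-route --route-table-id rtb-33333333 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-22222222
aws ec2 associate-route-table --route-table-id rtb-33333333 --subnet-id subnet-44444444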
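The install and restore on the new box looked roughly like this. The yum repo URL is the standard one from the Jenkins docs, and backup.zip is just a placeholder name for the thinBackup archive I copied over, so treat this as a sketch rather than an exact transcript:

# add the Jenkins yum repo and install Jenkins on CentOS
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install -y jenkins
# unpack the thinBackup archive over JENKINS_HOME, fix ownership, restart
sudo unzip -o /tmp/backup.zip -d /var/lib/jenkins
sudo chown -R jenkins:jenkins /var/lib/jenkins
sudo service jenkins restart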
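The svn move was just a standard dump on the old server, something like the line below (the repo path is a placeholder), which I then fed to Assembla's import:

svnadmin dump /var/svn/myrepo | gzip > myrepo.dump.gz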
Now comes the hard part. After all this was done, there was still the box he was already running in EC2 for Selenium tests, and that agent was down. No matter what I did, it wouldn't connect. I checked the Jenkins config page and it was using port 15001. I edited the EC2 security group and allowed port 15001, and it still wouldn't connect.
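For reference, opening that port in the security group from the console amounts to a CLI call like this (the group name and CIDR are placeholders):

aws ec2 authorize-security-group-ingress --group-name jenkins-sg --protocol tcp --port 15001 --cidr 0.0.0.0/0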

Then I thought maybe I needed to run this box in the same VPC. Me being a noob, this was a bad idea that derailed me: in EC2 you can't migrate an instance from EC2-Classic to EC2-VPC. The only way is to create an AMI from the old instance and launch a new instance from it. I did that, and it took 4 hours including reading the docs to create the AMI, but even then it wouldn't connect. I deleted that AMI/snapshot and the new instance and fired the old instance back up.

Finally I switched to debugging with raw telnet and saw that from the Jenkins instance I could telnet to localhost 15001, but I couldn't from his laptop or from the Selenium box. That's how I figured out that the CentOS AMI I had picked shipped with its own firewall, with only ports 80 and 443 open.
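The debugging session boiled down to something like this (the hostname is a placeholder):

# from the Jenkins instance itself: connects fine
telnet localhost 15001
# from his laptop or the Selenium box: hangs
telnet jenkins.example.com 15001
# on the Jenkins instance: list the firewall rules; only 80 and 443 were open
sudo iptables -L -n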

We added
sudo iptables -I INPUT -p tcp --dport 15001 -j ACCEPT

and finally all was done.
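One caveat: a rule added with iptables -I lives only in the running firewall, so to survive a reboot you also want to persist it, roughly like this (assuming the AMI uses the stock CentOS iptables service):

sudo service iptables save   # writes the running rules to /etc/sysconfig/iptables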

In short, EC2 has put the job of the bare-bones operations person in jeopardy; they need to move to DevOps. AWS is innovating like crazy, and today he sent me links on AWS Lambda http://aws.amazon.com/lambda/ and AWS Aurora https://aws.amazon.com/blogs/aws/highly-scalable-mysql-compat-rds-db-engine/. It made me wonder: if startups have all this, do they even need to hire MySQL DBAs before the site gains serious momentum?

Finally, after doing all this, I understood how Pinterest, Snapchat and Instagram were able to run a large infrastructure with so few employees, even fewer than my employer had at a similar scale. For storage you use S3, for the database you use Aurora, for load balancing you use ELB, and you automate build/deploy with Jenkins/Puppet and, nowadays, Docker. I remember 10-14 years ago when I started working for startups: they had to hire an army of people just to get a prototype out the door, and that meant VCs needed to write a Series A check. Today people at YC seem to be doing it with 3-4 people on only seed funding, so the seed round has become the old Series A. Who needs an army of people to install and manage hundreds of servers when automation tools combined with the power of AWS and GCS can do the job for you?


Of course, all this comes at a cost: when the site becomes really big, the AWS bills get higher. Recently at my employer, one of our GCS performance-testing environments ran up a bill of $10K in one month that went poof, because you don't own the hardware, you lease it; it's like renting vs. owning a house. But we were able to finish perf testing quickly because bringing up a new environment was faster than being stuck in the hardware pipeline. Also, Adrian Cockcroft once said that if you are leasing SSDs it's even better, because SSD wear-out is not your problem (http://observationdeck.io9.com/not-my-circus-not-my-monkeys-457554833). Startups also face an opportunity cost: if they can get the product out with fewer developers in less time, then later, when they get big, they can hire specialists. Each developer in the Bay Area nowadays has a fully loaded cost of $150-200K+, so for startups strapped for cash, AWS and GCS can be a boon, and they can solve the high-bill problem when they actually reach that stage. Isn't it great to get an AWS bill of $100K? Because that means you must be making 10-30 times that in $$$$.
