
Rate limiting based on load on nodes

We are a cloud-based file storage company and we expose the cloud through several access points. One of them is a WebDAV API, so people can use any WebDAV client to reach their files. Some WebDAV clients, especially on Mac OS, are really abusive. The user isn't doing anything abusive himself, but the client does aggressive caching: even when you are only navigating a top-level directory it issues PROPFINDs five or six levels deep to make the experience feel as seamless as a local drive. This makes life miserable on the server, because some clients send more than 1000 requests in a minute. If 5-10 clients are doing WebDAV activity at the same time, that adds up to 100 or more PROPFINDs per second. Luckily the server is able to process these, but it hurts other activities, so we needed to rate limit this. Since the user is not actually doing anything abusive, it would be bad to slow down or penalize him under normal circumstances; however, if the server is under load then it is worthwhile to throttle the user for some time.

Luckily, JDK 1.6 has a lightweight API to get the system load average.

 import java.lang.management.ManagementFactory;
 import java.lang.management.OperatingSystemMXBean;

 // Returns the system load average for the last minute,
 // or a negative value if it is not available on this platform.
 public double getSystemLoadAverage() {
  OperatingSystemMXBean osStats = ManagementFactory.getOperatingSystemMXBean();
  return osStats.getSystemLoadAverage();
 }


And now I can write code like this:

  if (throttlePropFinds(req, user)) {
   logger.info("Throttling due to excessive webdav requests from user {}", user.getId());
   if (throttlingHelper.isBlockUsers()) {
    double loadAverage = SpringBeanLocator.getJvmStatsLogger().getSystemLoadAverage();
    if (loadAverage > 4) {
     logger.info("Sending 503 as loadAverage>4 and excessive webdav requests from user {}", user.getId());
     resp.sendError(WebdavStatus.SC_SERVICE_UNAVAILABLE);
     return;
    }
   }
  }


The logic I used for PROPFIND rate limiting is simple: I keep track of all users who made more than 1000 requests in the last 5 minutes, and only if the load average is above 4 do I send a 503. For brevity I am not including the code that counts the number of requests each user made in the last 5 minutes, but I use memcache to maintain the counters in one-minute buckets.
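To give a rough idea of the bucket counting, here is a minimal sketch of that idea, not the production code. It keeps per-user counters keyed by user id and minute, and sums the last five buckets to check the 1000-request threshold. The in-memory map stands in for the memcache increments used in the real setup (where a short TTL on each bucket key makes old buckets expire on their own), and the class and method names are made up for illustration.

 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.atomic.AtomicLong;

 // Hypothetical stand-in for the memcache-backed request counters described above.
 public class RequestRateTracker {

  private static final int WINDOW_MINUTES = 5;
  private static final long REQUEST_THRESHOLD = 1000;

  // In the real implementation these counters live in memcache with a TTL;
  // here an in-memory map is used just to show the bucket arithmetic.
  private final ConcurrentMap<String, AtomicLong> buckets =
    new ConcurrentHashMap<String, AtomicLong>();

  // Record one request for this user in the current one-minute bucket.
  public void recordRequest(String userId) {
   String key = bucketKey(userId, currentMinute());
   AtomicLong counter = buckets.get(key);
   if (counter == null) {
    AtomicLong fresh = new AtomicLong();
    counter = buckets.putIfAbsent(key, fresh);
    if (counter == null) {
     counter = fresh;
    }
   }
   counter.incrementAndGet();
  }

  // True if the user made more than 1000 requests across the last 5 minutes.
  public boolean isOverLimit(String userId) {
   long nowMinute = currentMinute();
   long total = 0;
   for (int i = 0; i < WINDOW_MINUTES; i++) {
    AtomicLong counter = buckets.get(bucketKey(userId, nowMinute - i));
    if (counter != null) {
     total += counter.get();
    }
   }
   return total > REQUEST_THRESHOLD;
  }

  private long currentMinute() {
   return System.currentTimeMillis() / 60000L;
  }

  private String bucketKey(String userId, long minute) {
   // With memcache, this key would be incremented atomically and expired after a few minutes.
   return "propfind:" + userId + ":" + minute;
  }
 }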
