
Hierarchical locking using a MySQL database

We are a cloud file server company, and one of our requirements is to be able to lock paths hierarchically. For a particular customer, if one user is moving a path like /Shared/Marketing/Dallas, then no one should be allowed to add or delete files/folders in Dallas or its children, and no one should be able to edit the Shared or Marketing folders either. But users should still be able to add files to any sibling hierarchy like /Shared/Engineering/test, /private/kpatel or /Shared/QA.
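To make that conflict rule concrete, here is a tiny illustration (my own helper, not part of our product code) of which paths a lock on /Shared/Marketing/Dallas would conflict with:

// A lock on lockedPath conflicts with candidatePath when candidatePath is the
// locked path itself, one of its descendants, or one of its ancestors.
// Sibling hierarchies never conflict.
public class PathConflictDemo {

    static boolean conflicts(String lockedPath, String candidatePath) {
        return lockedPath.equals(candidatePath)
            || candidatePath.startsWith(lockedPath + "/")   // inside the locked subtree
            || lockedPath.startsWith(candidatePath + "/");  // an ancestor of the locked path
    }

    public static void main(String[] args) {
        String locked = "/Shared/Marketing/Dallas";
        System.out.println(conflicts(locked, "/Shared/Marketing/Dallas/plan.doc")); // true  (child)
        System.out.println(conflicts(locked, "/Shared/Marketing"));                 // true  (ancestor)
        System.out.println(conflicts(locked, "/Shared/Engineering/test"));          // false (sibling)
        System.out.println(conflicts(locked, "/private/kpatel"));                   // false (sibling)
    }
}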

So the requirement is to have an API where I can do:

try {
    lockManager.lockPaths("/Shared/source", "/Shared/sbc", "/Shared/...");
    // Do some long running task (maybe 1 msec to 10 minutes).
} finally {
    // release the acquired path locks
}
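The interface I have in mind looks roughly like this (just a sketch; apart from lockPaths and LockTimeoutException, which appear in this post, the names are placeholders):

public interface LockManager {

    /**
     * Blocks until hierarchical locks on all the given paths are held by the
     * calling thread, or throws LockTimeoutException once the configured wait
     * time is exceeded.
     */
    void lockPaths(String... paths) throws LockTimeoutException;

    /** Releases every path lock held by the calling thread (the finally block above). */
    void unlockPaths();

    class LockTimeoutException extends Exception {
        public LockTimeoutException(String message) { super(message); }
    }
}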
I evaluated some alternatives, so here is the summary:
1) Apache Commons has a locking API with the class org.apache.commons.transaction.locking.DefaultHierarchicalLockManager, and it has a method called lockInHierarchy that matches our requirement 100%. The only downside is that it uses in-memory locks, and since we are moving our servers to HA, this is not a solution.

2) Another alternative is to start a separate JVM, run DefaultHierarchicalLockManager inside it, and expose APIs to lock/unlock paths. But I have been badly bitten by SPOFs, and this is a SPOF: if that Tomcat goes down, all your running JVMs are left in limbo.

3) Another solution is to use the Terracotta distributed locking API, but I would have to roll my own hierarchical locking on top of Terracotta's distributed locks.

4) The same problem exists with Hazelcast distributed locks. Also, ops need to be familiar with it: who will train ops, and if there are bugs it is a whole new learning curve.

5) The last alternative was to design our own locking API on top of a MySQL database table. From our Graphite dashboard I can see that for a particular customer the concurrency is not more than 8-10 requests per second, so I think a database-backed lock will work. The only issue is how, when you release a lock, you notify the other waiting threads in a distributed manner. I could have used RabbitMQ or some JMS solution for this, but that sounds like overkill considering that most of the time a customer is working on different paths. So I decided to start with a polling-based approach plus reverse exponential backoff for now, and introduce JMS later if required, which I don't think we will.

So I created these tables

CREATE TABLE path_locks (
    customer_id   BIGINT NOT NULL,
    path          VARCHAR(4000) NOT NULL,
    creation_time DATETIME NOT NULL,
    thread_id     VARCHAR(64) NOT NULL,
    node_id       VARCHAR(64) NOT NULL
);

CREATE TABLE customer_locks (
    customer_id BIGINT NOT NULL
);

and the logic is:

1) We delete all locks for a node's node_id when that node starts up and when it shuts down.
2) A Quartz job deletes all expired locks (older than the lock timeout plus 1 minute).
3) Now, to get a lock for /Shared/kpatel/Software/Releases (a JDBC sketch of one pass through this loop follows the list):
  while the lock is not acquired
    Lock the sentinel row in the customer_locks table for that customer_id to guard against race conditions between the query and acquisition steps.
    Fire this query to detect lock conflicts:

    select * from path_locks where thread_id != 'Thread1' and (systime - creation_time) < lockTimeout
          and (path like '/Shared/kpatel/Software/Releases/%' or path in ('/Shared', '/Shared/kpatel', '/Shared/kpatel/Software', '/Shared/kpatel/Software/Releases'))

        If you don't find any conflict then
            Insert the lock row.
            Release the sentinel lock.
        If you find a conflict then
            Check whether you have timed out waiting for the lock; if so, throw LockTimeoutException.
            Release the sentinel lock.
            Wait for max(10, 1024/((retryCount % 15) + 1)) msec (reverse exponential backoff to reduce thread starvation; the +1 avoids dividing by zero when retryCount is a multiple of 15). The %15 wrap is there to back off and not pound the DB continuously, because a conflict that persists this long is probably a dead node issue or a long-running lock holder doing its job, and we just have to wait.
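Here is a minimal JDBC sketch of one pass through that loop, using the example path above. It assumes a sentinel row for the customer already exists in customer_locks, a 30-second lock timeout, and class/method names that are mine, not the real code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PathLockSketch {

    private static final int LOCK_TIMEOUT_SEC = 30;  // illustrative lock validity window

    /** Returns true if the lock row was inserted, false if a conflict was found. */
    static boolean tryLock(Connection con, long customerId, String threadId, String nodeId)
            throws Exception {
        String path = "/Shared/kpatel/Software/Releases";
        con.setAutoCommit(false);
        try {
            // 1) Take the per-customer sentinel row to serialize the check-then-insert
            //    (assumes a row for this customer already exists in customer_locks).
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT customer_id FROM customer_locks WHERE customer_id = ? FOR UPDATE")) {
                ps.setLong(1, customerId);
                ps.executeQuery();
            }
            // 2) Look for a non-expired lock held by another thread on this path,
            //    inside this subtree, or on one of its ancestors.
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT 1 FROM path_locks"
                  + " WHERE customer_id = ? AND thread_id <> ?"
                  + " AND TIMESTAMPDIFF(SECOND, creation_time, NOW()) < ?"
                  + " AND (path LIKE '/Shared/kpatel/Software/Releases/%'"
                  + "      OR path IN ('/Shared', '/Shared/kpatel',"
                  + "                  '/Shared/kpatel/Software', '/Shared/kpatel/Software/Releases'))")) {
                ps.setLong(1, customerId);
                ps.setString(2, threadId);
                ps.setInt(3, LOCK_TIMEOUT_SEC);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        con.rollback();  // conflict: release the sentinel and let the caller back off
                        return false;
                    }
                }
            }
            // 3) No conflict: insert our lock row; the commit releases the sentinel.
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO path_locks (customer_id, path, creation_time, thread_id, node_id)"
                  + " VALUES (?, ?, NOW(), ?, ?)")) {
                ps.setLong(1, customerId);
                ps.setString(2, path);
                ps.setString(3, threadId);
                ps.setString(4, nodeId);
                ps.executeUpdate();
            }
            con.commit();
            return true;
        } catch (Exception e) {
            con.rollback();
            throw e;
        }
    }
}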

The real trick in this solution is the reverse exponential backoff: if 10 threads come to acquire locks, we neither bombard the table nor starve any thread, because the thread that came first (and has therefore retried more times) queries more aggressively than a thread that came late.
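Building on the tryLock sketch above, the caller's wait loop could look roughly like this (again just a sketch with placeholder names; the +1 in the divisor is the same divide-by-zero guard as in step 3):

// Reverse exponential backoff: the longer a thread has already been retrying,
// the shorter its wait (it polls more aggressively), and the %15 wrap
// periodically resets it to a long wait so a dead node or a long-running lock
// holder does not get the table pounded continuously.
static void lockWithBackoff(Connection con, long customerId, String threadId,
                            String nodeId, long waitTimeoutMs) throws Exception {
    long start = System.currentTimeMillis();
    int retryCount = 1;
    while (!PathLockSketch.tryLock(con, customerId, threadId, nodeId)) {
        if (System.currentTimeMillis() - start > waitTimeoutMs) {
            throw new Exception("Timed out waiting for the path lock"); // LockTimeoutException in the real API
        }
        // retryCount 1..14 -> waits of 512, 341, 256, ..., 68 msec; at 15 it wraps back to 1024 msec
        long waitMs = Math.max(10, 1024 / ((retryCount % 15) + 1));
        Thread.sleep(waitMs);
        retryCount++;
    }
}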

This is going to be live soon, and I am confident it will be able to handle the load.

