We don't run our nodes in HA yet: when a customer registers, they are assigned to a node and stay there. The problem is that if that node dies, the customer suffers downtime, and we also have to over-allocate hardware to prepare for the worst-case scenario. For the past six months I have been working on making the code stateless so that we can run HA and cut our node count by a factor of four.

We used to run Quartz with the in-memory scheduler, but for HA I need to run Quartz in a cluster, so we chose org.quartz.impl.jdbcjobstore.JobStoreTX. As soon as I tried it, I ran into issues: I had been injecting Spring beans into our Quartz jobs via the JobDataMap, and JobStoreTX tries to serialize the job data into a database table, but our Spring beans are not serializable. There were two options:

1) Load the entire ApplicationContext in each job and read the bean from there.
2) Use the schedulerContextAsMap.

After evaluating both, I found the scheduler context to be the best option.
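To make the approach concrete, here is a minimal sketch of the scheduler-context pattern. The bean and class names (`ReportService`, `ReportJob`, the `"reportService"` key) are hypothetical, not from our actual codebase; the Quartz and Spring APIs used (`SchedulerContext`, `SchedulerFactoryBean.setSchedulerContextAsMap`) are the real ones.

```java
import java.util.Map;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

public class ReportJob implements Job {

    @Override
    public void execute(JobExecutionContext ctx) throws JobExecutionException {
        try {
            // The SchedulerContext lives in memory on each node; unlike the
            // JobDataMap, nothing here is persisted to the QRTZ_* tables,
            // so the beans do not need to be Serializable.
            ReportService service = (ReportService)
                    ctx.getScheduler().getContext().get("reportService");
            service.run();
        } catch (SchedulerException e) {
            throw new JobExecutionException(e);
        }
    }
}

// Wiring side (e.g. in a Spring @Configuration class): instead of putting the
// bean into the job's JobDataMap, expose it through the scheduler context.
class SchedulerConfig {
    SchedulerFactoryBean quartzScheduler(ReportService reportService) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setSchedulerContextAsMap(Map.of("reportService", reportService));
        return factory;
    }
}
```

For the clustering itself, the relevant (standard) Quartz properties are along the lines of `org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX`, `org.quartz.jobStore.isClustered = true`, and `org.quartz.scheduler.instanceId = AUTO`, so that each node checks in against the shared database and can pick up triggers from a dead node.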