We don't run our nodes in HA yet: once a customer registers, they are assigned a node and stay there. The problem is that if the node dies we incur downtime for that customer, and we also have to over-allocate hardware to prepare for the worst-case scenario. For the past six months I have been working on making the code stateless so that we can do HA and reduce our node count by a factor of four.
We used to run Quartz with the in-memory scheduler, but for HA I needed to run Quartz in a cluster. We chose org.quartz.impl.jdbcjobstore.JobStoreTX for this.
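The clustering itself is driven by quartz.properties (referenced from the scheduler bean below). As a rough sketch, assuming standard Quartz clustering settings (the table prefix, check-in interval and data source name here are illustrative, not our exact values):

# quartz.properties (illustrative values)
org.quartz.scheduler.instanceName = haQuartzScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000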
The problem was that as soon as I tried it I ran into issues: I was injecting Spring beans into our Quartz jobs through the JobDataMap, and JobStoreTX tries to serialize the job data into a table, but our Spring beans are not serializable. There were two options:
1) Load the entire applicationContext in each job and read the bean from there.
2) Use the schedulerContextAsMap.
After evaluating the options I found the scheduler context to be the best one. The way to do this is:
<bean id="haQuartzScheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="configLocation" value="classpath:quartz.properties"/>
<property name="triggers">
<list>
<ref bean="CommitFileJobTrigger" />
</list>
</property>
<property name="schedulerContextAsMap">
<map>
<entry key="commitFileJobHelper" value-ref="commitFileJobHelper" />
<entry key="numThreads" value="2" />
</map>
</property>
</bean>
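For context, the CommitFileJobTrigger referenced above also needs a job detail and trigger definition. A rough sketch using Spring's standard factory beans might look like the following (the job class, cron expression and the commitFileJobDetail bean id are illustrative, not our real configuration):

<bean id="commitFileJobDetail" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
    <property name="jobClass" value="com.example.jobs.CommitFileJob"/>
    <!-- Durable so the job definition survives in the JDBC job store -->
    <property name="durability" value="true"/>
</bean>

<bean id="CommitFileJobTrigger" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
    <property name="jobDetail" ref="commitFileJobDetail"/>
    <!-- Illustrative schedule: every five minutes -->
    <property name="cronExpression" value="0 0/5 * * * ?"/>
</bean>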
In the Quartz job I wrote a helper method:

@SuppressWarnings("unchecked")
protected <T> T getSchedulerContextBean(JobExecutionContext context, String beanName) throws JobExecutionException {
    SchedulerContext schedulerContext;
    try {
        // The scheduler context lives in memory on each node and is never
        // serialized into the job store tables, unlike the JobDataMap.
        schedulerContext = context.getScheduler().getContext();
    } catch (SchedulerException e) {
        throw new JobExecutionException(e);
    }
    return (T) schedulerContext.get(beanName);
}
All jobs can then look up their beans with this method:
CommitFileJobHelper commitFileJobHelper = getSchedulerContextBean(context, "commitFileJobHelper");
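For completeness, here is a rough sketch of how a job might use this from its execute method. The class and method names other than getSchedulerContextBean are illustrative assumptions, not our actual code; the base class is assumed to implement org.quartz.Job and hold the helper above.

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Illustrative job class; AbstractSchedulerContextJob and commitPendingFiles are
// placeholder names, not our real ones.
public class CommitFileJob extends AbstractSchedulerContextJob {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Look the bean up from the scheduler context instead of the JobDataMap,
        // so nothing non-serializable has to go into the JobStoreTX tables.
        CommitFileJobHelper commitFileJobHelper = getSchedulerContextBean(context, "commitFileJobHelper");
        commitFileJobHelper.commitPendingFiles();
    }
}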