
nodejs push java redis

Disclaimer: The code below is just a general guideline and I am not pasting the full code, so you would have to fill in the blanks if you want to use it.

Continuing on, I was finally able to write the nodejs app that listens to events from redis and pushes messages to the browser. The architecture looks like this:

1) Your browser uses the SockJS javascript library to connect to the nodejs server and then listens for events. The html code looks like this:

<script src=""></script>
<script type="text/javascript">
    var isOpen = false;
    var newConn = function(){
        var sockjs_url = '';
        var sockjs = new SockJS(sockjs_url);
        sockjs.onopen = function(){
            sockjs.send(JSON.stringify({"userName": "", "sessionId": "e1930358-87d2-4d2b-86b6-2b2373acfaf1"}));
            isOpen = true;
        };
        sockjs.onmessage = function(e){
            console.log("Got message:" + e.data);
            var data = JSON.parse(e.data);
            if(data.eventType == 'fileSystemEvent'){   // field name is illustrative
                // handle the event, e.g. refresh the file listing
            }
        };
        sockjs.onclose = function(){
            isOpen = false;
            // try to reconnect every 15 seconds until a connection is open again
            var recInterval = setInterval(function(){
                if(isOpen) clearInterval(recInterval);
                else newConn();
            }, 15000);
        };
    };
    newConn();
</script>

2) Now on the server the first thing you need is a c10K server like nginx or haproxy in front.
I had to install nginx 1.4.3 because the default that came with sudo apt-get didn't have websocket support.
I used this to install nginx 1.4.3:
tar xvzf nginx-1.4.3.tar.gz
cd nginx-1.4.3

./configure \
--user=nginx                          \
--group=nginx                         \
--prefix=/etc/nginx                   \
--sbin-path=/usr/sbin/nginx           \
--conf-path=/etc/nginx/nginx.conf     \
--pid-path=/var/run/         \
--lock-path=/var/run/nginx.lock       \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module        \
--with-http_stub_status_module        \
--with-http_ssl_module                \
--with-pcre                           \
--with-file-aio                       \
--with-http_realip_module             \
--without-http_scgi_module            \
--without-http_uwsgi_module

make
sudo make install

3) Then I had to add the upgrade headers below to make nginx upgrade the http connection to a websocket:

    location /push {
        proxy_pass              http://localhost:8090;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto https;
        proxy_set_header        Host $http_host;
        # WebSocket support (nginx 1.4)
        proxy_http_version      1.1;
        proxy_set_header        Upgrade $http_upgrade;
        proxy_set_header        Connection "upgrade";
    }
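
For context, that location block lives inside a server block in nginx.conf. A minimal sketch of the surrounding config (the listen port, server_name, and the tomcat fallback location are my assumptions, not from the original setup):

```nginx
server {
    listen      80;
    server_name example.com;   # assumption — your host name here

    # SockJS/WebSocket traffic goes to the nodejs push server
    location /push {
        # ... the proxy/upgrade directives above go here ...
    }

    # everything else can be proxied to tomcat on 8080
    location / {
        proxy_pass http://localhost:8080;
    }
}
```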


4) I started the nodejs server on port 8090 and tomcat was on 8080. Now bare-bones nodejs won't give you everything you need, so you need several packages. I installed all of them using:

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
sudo npm install -g redis sockjs winston

a) winston for logging, the way log4j is used in java
b) redis package to talk to redis
c) sockjs for the sockjs connection code

I just installed the modules globally as this is a scratch app.

5) Inside the nodejs app, when a connection comes in the client passes a username and sessionid. nodejs needs to validate that sessionid with tomcat and then add the connection to a local map. When it receives a redis event it just writes it to the socket associated with the username, or drops it if there is no open connection for that user.

var sockjs  = require('sockjs');
var http    = require('http');
var redis   = require('redis');

//    Store all connected sessions by user name
var sessionMap = {};

//    Create redis client and subscribe to the push channel
var redisClient = redis.createClient(6379, '');
redisClient.subscribe('');

//    Create sockjs server
var sockjsServer = sockjs.createServer();

// Sockjs server
sockjsServer.on('connection', function(conn) {
    conn.on('data', function(message){
        var data = JSON.parse(message || "{}");
        //    validate the session against tomcat before trusting the user
        validateSession(data.userName, data.sessionId, function(response){
  "Got response" + response);
            if(response.success === true){
                //    remember the connection so redis events can reach this user
                sessionMap[data.userName] = conn;
                conn.write(JSON.stringify({success:true, statusCode:response.statusCode}));
            } else {
      "Got error: " + response.message);
                conn.write(JSON.stringify({success:false, statusCode:response.statusCode}));
            }
        });
    });
    conn.on('close', function(){
        //    remove connection from the map if connection is closed
        removeConnection(conn, '');
    });
});

//    Push incoming messages to the connected users
redisClient.on("message", function(channel, message){
    var data = JSON.parse(message || "{}");
    //    figure out which connections to write to based on the users in the message
});

//    Create http server
var server = http.createServer();
//    Hook sockjs into the http server
sockjsServer.installHandlers(server, {prefix:'/push'});
server.listen(8090);

6) But coming from the java world, nodejs seemed fragile because it kills itself on every uncaught exception. So when I added logging with winston I also set "exitOnError: false":

var winston = require('winston');

var logger = new (winston.Logger)({
  transports: [
    new winston.transports.File({ filename: __dirname + '/logs/push.log', json: false })
  ],
  exceptionHandlers: [
    new winston.transports.File({ filename: __dirname + '/logs/push.log', json: false })
  ],
  exitOnError: false
});

7) Also, nodejs doesn't come with the usual startup/stop scripts like tomcat has, so you need to cook up your own, something like:

export NODE_PATH=/usr/lib/node_modules
nohup node push_server.js 1>>"nohup.out" 2>&1 &
echo $! > ""    # pid file path goes here, so a stop script can kill the process later

In short, I am still excited about nodejs, because if your startup has JS skills then your JS developers can quickly pitch in and write server code too.

Also, being event-driven it scales pretty well, similar to the way c10k servers like nginx and haproxy scale. For me the bigger learning was event-based programming, where you just write what happens on each event and let the framework take care of the rest. The nice part is that the browser has a live connection: I can publish a message to redis when a file is added and instantly see the event in the browser console for all users :).
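
To try that live connection end to end, something this small can publish a test event to redis. The channel name, the `users`/`file` fields, and the redis host are my assumptions — they just need to match whatever the push server subscribes to and whatever the browser code checks for (the createClient/publish callback style matches the older node_redis API used in this post):

```javascript
// the event a publisher (java or anything else) would send when a file is added
var event = {
    eventType: 'fileSystemEvent',   // assumption — must match the browser-side check
    users: ['alice'],               // assumption — who should receive it
    file: '/uploads/report.pdf'     // assumption — the file that was added
};

// lazily require redis so this sketch can be read without the package installed
function publishEvent(done){
    var redis = require('redis');
    var client = redis.createClient(6379, 'localhost');   // assumption — your redis host
    client.publish('fileSystemEvents', JSON.stringify(event), function(){
        client.quit();
        if(done) done();
    });
}

// publishEvent();   // uncomment with a redis server running
```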

