The stock SyslogAppender in log4j uses UDP to transmit logs to the syslog server, so it can silently lose data. Our ops guy was looking for a TCP-based appender, found this one http://www.rsyslog.com/tcp-syslog-rfc5424-log4j-appender/ and plugged it in. Then things got weird: one app node kept running out of file handles and another ran into an issue where all of its threads were stuck. The implementation at that link is buggy, so I rewrote it and am publishing it here in case anyone is interested.
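For context, the stock UDP appender is typically wired up in log4j.properties along these lines (a rough sketch; the appender name SYSLOG, the host, and the facility are placeholders):

    log4j.rootLogger=INFO, SYSLOG
    log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
    log4j.appender.SYSLOG.syslogHost=syslog.example.com
    log4j.appender.SYSLOG.facility=LOCAL1
    log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
    log4j.appender.SYSLOG.layout.ConversionPattern=%-5p [%t] %c: %m%n

Every log line goes out as a UDP datagram, so a dropped packet is simply gone; that is the data-loss risk the TCP appender below is meant to address.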
// Imports reconstructed for completeness; adjust the Syslog4jAppender import to
// match the syslog4j distribution you are using.
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.TimeZone;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.log4j.spi.LoggingEvent;
import org.productivity.java.syslog4j.impl.log4j.Syslog4jAppender;

/**
 * TCP appender to syslog. This class uses a blocking queue with a 10K message capacity;
 * any messages beyond that are rejected. The append method, called from all caller threads,
 * inserts the message into the blocking queue, and a single background thread writes to the
 * syslog socket. This queueing is introduced to release the caller thread as soon as possible.
 */
public class Syslog4jTCPAppender extends Syslog4jAppender {
    private static final long serialVersionUID = 1L;

    // SimpleDateFormat is not thread-safe, so each thread gets its own UTC formatter.
    private static ThreadLocal<SimpleDateFormat> dateFormat = new ThreadLocal<SimpleDateFormat>() {
        @Override
        protected SimpleDateFormat initialValue() {
            SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.S'Z'");
            df.setTimeZone(TimeZone.getTimeZone("UTC"));
            return df;
        }
    };

    private Map<String, Integer> facilitiesMap = new HashMap<String, Integer>();
    private String localHost;
    // 10K capacity as described in the class comment; offer() rejects messages once the queue is full.
    private BlockingQueue<String> blockingQueue = new LinkedBlockingQueue<String>(10000);

    @Override
    public void activateOptions() {
        super.activateOptions();
        String[] facilities = { "KERN", "USER", "MAIL", "DAEMON", "AUTH", "SYSLOG", "LPR", "NEWS",
                "UUCP", "CRON", "AUTHPRIV", "FTP", "LOCAL0", "LOCAL1", "LOCAL2", "LOCAL3",
                "LOCAL4", "LOCAL5", "LOCAL6", "LOCAL7" };
        int[] facIntArray = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 16, 17, 18, 19, 20, 21, 22, 23 };
        for (int i = 0; i < facilities.length; i++) {
            facilitiesMap.put(facilities[i], facIntArray[i]);
        }
        try {
            localHost = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            System.out.println("UnknownHostException " + e.getMessage());
            localHost = "Unknown";
        }
        SocketLoggerThread socketLoggerThread = new SocketLoggerThread();
        socketLoggerThread.setDaemon(true);
        socketLoggerThread.start();
    }

    @Override
    protected void append(LoggingEvent event) {
        int priority = calcPriority(event);
        String trace = super.layout.format(event);
        String newLineChar = "\n";
        String msg = trace.replaceAll(newLineChar, " ");
        Date dt = new Date();
        String dateString = dateFormat.get().format(dt);
        String message = "<" + priority + ">" + dateString + " " + localHost + " "
                + super.getIdent() + ": " + msg + "\n";
        // Non-blocking insert: if the queue is full the message is dropped.
        blockingQueue.offer(message);
    }

    private int calcPriority(LoggingEvent event) {
        String facility = super.getFacility().toUpperCase();
        Integer facPriority = facilitiesMap.get(facility);
        if (facPriority == null) {
            facPriority = 1;
        }
        int level = event.getLevel().getSyslogEquivalent();
        // Standard syslog PRI value: facility * 8 + severity.
        int priority = facPriority * 8 + level;
        return priority;
    }

    private class SocketLoggerThread extends Thread {
        private Socket socket;
        private DataOutputStream os;
        private int counter;

        @Override
        public void run() {
            reinit();
            while (true) {
                try {
                    consume(blockingQueue.take());
                } catch (InterruptedException e) {
                    System.out.println("Syslog socket logger interrupted while waiting " + e.getMessage());
                } catch (Exception e) {
                    System.out.println("Unknown exception " + e.getMessage());
                }
            }
        }

        private void close() {
            try {
                if (os != null) {
                    os.close();
                }
            } catch (IOException e) {
                System.out.println("IOException closing os " + e.getMessage());
            }
            try {
                if (socket != null) {
                    socket.close();
                }
            } catch (IOException e) {
                System.out.println("IOException closing socket " + e.getMessage());
            }
        }

        private void reinit() {
            close();
            try {
                socket = new Socket(Syslog4jTCPAppender.super.getSyslogHost(),
                        Integer.parseInt(Syslog4jTCPAppender.super.getPort()));
                os = new DataOutputStream(socket.getOutputStream());
            } catch (IOException e) {
                System.out.println("IOException opening socket " + e.getMessage());
            }
        }

        private void consume(String message) {
            try {
                counter++;
                // Note: writeUTF prepends a 2-byte length before the modified-UTF-8 bytes.
                os.writeUTF(message);
                // Reopen the socket every 5000 messages.
                if (counter % 5000 == 0) {
                    System.out.println("Reiniting syslog socket");
                    reinit();
                    counter = 0;
                }
            } catch (IOException e) {
                System.out.println("IOException writing message " + e.getMessage());
                reinit();
            }
        }
    }
}
Thanks for the rewrite! Would you mind if we include this on the rsyslog side (and/or in the tarball)?
Rainer, please go ahead. I will put it on GitHub soon when I get time, but for now please go ahead and include it on the rsyslog side.
FYI, the code is live on 100+ nodes and so far I am not seeing any issues.
Kalpesh, did this code get onto GitHub?
No, I didn't.