Programmatically extracting the quoted reply from an email

When files are uploaded to our cloud file server, we send a notification email per file, each with its own unique email address. (How to support so many unique email addresses without creating a mail-server user for each file, and how to scale that solution out, will be the subject of a later blog post.) People can just hit reply on the generated notification email to comment on the uploaded file. When the reply reaches the server, we want to strip out the quoted reply that the mail client appended and add only the clean comment to the file. Seems like an easy problem, doesn't it? Unfortunately there is no easy way to detect the quoted reply in an incoming email, because different mail clients quote replies in different ways, and HTML emails quote replies differently than plain-text ones:
  1. Angle brackets: "> xxx zzz"
  2. "---Original Message---"
  3. "On such-and-such day, so-and-so wrote:"
  4. Thunderbird wraps HTML replies in blockquote tags.
  5. Yahoo/Hotmail wrap the reply in certain div tags.
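For case 4, one rough approach (this is a sketch of my own, not code from the solution below, and a real HTML email really deserves a proper parser) is to simply drop everything from the first blockquote tag onward:

```java
import java.util.regex.Pattern;

public class HtmlQuoteStripper {

    // Match from the first <blockquote ...> to the end of the message.
    private static final Pattern BLOCKQUOTE = Pattern.compile(
            "<blockquote\\b.*", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    public static String strip(String html) {
        // Treat everything after the first <blockquote> as quoted history.
        return BLOCKQUOTE.matcher(html).replaceFirst("").trim();
    }

    public static void main(String[] args) {
        String html = "<p>Nice file!</p>\n"
                + "<blockquote cite=\"mid:123\">original notification</blockquote>";
        System.out.println(strip(html)); // <p>Nice file!</p>
    }
}
```

Anything after the quoted block (for example a signature) is thrown away too, which is usually acceptable for a comment thread.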
Then someone gave me a brilliant idea: add a marker line in the outbound notification email, so that when the reply comes back we can strip everything after that marker. It turns out other sites already do this; Redmine and issueburner, for example, add a marker line to their outbound emails like the one below.

##### Please do not write below this line #####
Hi kalpesh,

The issue has been updated.

Updated by:     Kris Katta
Comment added:     this is a test comment
Kris Katta's Reply..

This is a test reply

To track the status of your request and set up a profile for yourself, follow the link below:





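The outbound side is then just string building: start the notification body with the marker line so a reply typed above it can be recovered later. A minimal sketch (the method and parameter names here are illustrative, not from our actual code):

```java
public class NotificationMail {

    public static final String REPLY_MARKER =
            "--- Please reply ABOVE THIS LINE to comment on this file ---";

    // fileName/uploader are hypothetical parameters for this example.
    public static String buildBody(String fileName, String uploader) {
        // The marker goes first; everything below it is ours, everything
        // the user types above it is their comment.
        return REPLY_MARKER + "\n\n"
                + uploader + " uploaded \"" + fileName + "\".\n"
                + "Reply to this email to add a comment on the file.";
    }

    public static void main(String[] args) {
        System.out.println(buildBody("design.pdf", "Kris Katta"));
    }
}
```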
Now all that is left is to strip the mail client's own quote header, which a couple of regexes can handle. I have handled Thunderbird and Outlook so far and will add Yahoo/Hotmail soon. Below is some sample code.


import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * @author kpatel
 */
public class QuotedReplyExtractor {

    public static final String REPLY_MARKER =
            "--- Please reply ABOVE THIS LINE to comment on this file ---";

    private static final List<Pattern> patterns = new ArrayList<Pattern>();
    static {
        // Thunderbird-style "On <date>, <person> wrote:" attribution lines.
        patterns.add(Pattern.compile(".*on.*?wrote:", Pattern.CASE_INSENSITIVE));
        // Outlook-style "-----Original Message-----" separators.
        patterns.add(Pattern.compile("-+original\\s+message-+\\s*",
                Pattern.CASE_INSENSITIVE));
    }

    public String stripQuotedReply(String comment) {
        // Everything after the marker is the quoted notification we sent out.
        int startIndex = comment.indexOf(REPLY_MARKER);
        if (startIndex >= 0) {
            comment = comment.substring(0, startIndex);
        }
        // Also cut at any client-specific quote header that survived.
        for (Pattern pattern : patterns) {
            Matcher matcher = pattern.matcher(comment);
            if (matcher.find()) {
                comment = comment.substring(0, matcher.start());
            }
        }
        return comment;
    }
}
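As a quick sanity check, here is the "On ... wrote:" pattern cutting a typical attribution line (the sample reply text is mine, not captured from a real client):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuotePatternDemo {

    // Same pattern as in QuotedReplyExtractor above.
    static String cutAtAttribution(String reply) {
        Pattern onWrote = Pattern.compile(".*on.*?wrote:", Pattern.CASE_INSENSITIVE);
        Matcher m = onWrote.matcher(reply);
        // Keep only the text before the attribution line, if one is found.
        return m.find() ? reply.substring(0, m.start()).trim() : reply;
    }

    public static void main(String[] args) {
        String reply = "Looks good to me!\n\n"
                + "On Tue, Mar 6, 2012 at 9:15 AM, Kris Katta wrote:\n"
                + "> original notification text";
        System.out.println(cutAtAttribution(reply)); // Looks good to me!
    }
}
```

One caveat with such a loose pattern: a comment that itself contains "on ... wrote:" in its body will get truncated, which is why the marker line remains the primary defense and the regexes are only a fallback.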
