New Relic really shines at discovering these data-driven performance issues. Earlier we would find them late, or they would stay buried, but now they seem so obvious if the engineer is paying attention. I was casually browsing New Relic and sorted all apps by average time per API call, and one of our core applications in one DC was taking twice the average time per call compared to all other DCs. I immediately compared that DC with the others, and I saw a graph like the one below in DC1
and I saw this in DC2
Clearly DC1 is spending an abnormal amount of time in the database. So I went to the database view and saw this in DC1
and I saw this in DC2
Clearly something is weird in DC1 even though it's the same codebase. 309K queries per minute seems abnormal. Within 5 minutes I found out it's an N+1 query problem. Apparently one customer has 4000 users and has created 3000 groups, and the group_member table has 40K rows for this customer. Normally our customers create 10-50 groups, and there is code that iterates over each group and calls get members.
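To make the N+1 pattern concrete, here is a minimal sketch in Python with an in-memory SQLite database. The schema and names (`groups`, `group_member`, `get_members`) are assumptions for illustration, not the actual application code; the point is the shape of the problem: one query to list the groups, then one more query per group.

```python
import sqlite3

# Hypothetical schema mirroring the post's description (assumed names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE group_member (group_id INTEGER, user_id INTEGER);
""")
# The outlier customer: 3000 groups, ~13 members each (~40K rows).
conn.executemany("INSERT INTO groups VALUES (?, 1)",
                 [(g,) for g in range(3000)])
conn.executemany("INSERT INTO group_member VALUES (?, ?)",
                 [(g, u) for g in range(3000) for u in range(13)])

def get_members(group_id):
    """One query per group -- this is the N+1 pattern."""
    return conn.execute(
        "SELECT user_id FROM group_member WHERE group_id = ?",
        (group_id,)).fetchall()

# The problematic loop: 1 query to list groups, then N more.
group_ids = [row[0] for row in conn.execute(
    "SELECT id FROM groups WHERE customer_id = 1")]
queries_issued = 1
members = {}
for gid in group_ids:            # 3000 iterations for this customer
    members[gid] = get_members(gid)
    queries_issued += 1

print(queries_issued)  # 3001 queries for a single API call
```

For a customer with 10 groups this loop is barely noticeable; with 3000 groups the same code issues 3001 queries per call, which is how the query count scales with the customer's data rather than with the code path.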
For a normal customer, 100 calls per minute to this API would cause 100*10, or 1K, queries per minute, but in this DC it causes 100*3000, or 300K, queries. As we are close to the weekend release, for now I replaced the N+1 query with a bulk query; in the next release we will optimize this code further, or work with the customer to see if his data modeling has flaws and the same result can be achieved a different way.
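A bulk-query replacement for the loop above could look like the following sketch, again using an assumed SQLite schema rather than the real application code. A single JOIN fetches every member of every group the customer owns, and the rows are bucketed by group in memory, so the query count no longer depends on how many groups the customer created.

```python
import sqlite3
from collections import defaultdict

# Hypothetical schema and seed data mirroring the post (assumed names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE group_member (group_id INTEGER, user_id INTEGER);
""")
conn.executemany("INSERT INTO groups VALUES (?, 1)",
                 [(g,) for g in range(3000)])
conn.executemany("INSERT INTO group_member VALUES (?, ?)",
                 [(g, u) for g in range(3000) for u in range(13)])

# Bulk version: one query for all members of all of this customer's
# groups, then an in-memory group-by instead of a query-per-group.
members = defaultdict(list)
for group_id, user_id in conn.execute("""
        SELECT gm.group_id, gm.user_id
        FROM group_member gm
        JOIN groups g ON g.id = gm.group_id
        WHERE g.customer_id = ?""", (1,)):
    members[group_id].append(user_id)

# One query instead of 3001, whether the customer has 10 groups or 3000.
```

The trade-off is memory: the bulk fetch pulls all 40K rows at once, which is fine here but would need paging for much larger datasets.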