An internet shop’s business depends directly on its availability: it runs 24/7, orders are easy to place, and employees promptly contact clients, work out the details, and deliver what was promised.
But what do you do if, instead of the usual 100 orders per day, the shop receives 10,000? All of them seem to have a legitimate phone number, address, and name, but a phone call reveals that the order was, in fact, never placed.
It’s a total disaster! And filtering out the 9,900 bot-generated orders from the 10,000 is an extremely tough task.
This is precisely the problem one internet shop owner brought to me. He had asked programmers for help and tried to deal with the crisis himself, all to no avail. Distinguishing smart bots from real customers using run-of-the-mill methods is virtually impossible.
Smart spam bots are to customers what zombies are to real people: you can’t tell the difference unless you look closely.
Why the programmers’ attempts did not work:
Poor solution #1. A programmer added a ReCaptcha element to the order form.
A Latin-script CAPTCHA is considered a good solution, but in this case it only made things worse: regular order volume dropped (people aren’t fond of CAPTCHAs), while the spammers had no trouble with it.
Poor solution #2. A programmer assumed that spammers can’t count and replaced the CAPTCHA with a ‘solve 2+2’ exercise. However, spammers who can get past a tough CAPTCHA can perform simple arithmetic just as well.
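To see why this is such a weak defense, here is a minimal sketch of how a bot might solve an arithmetic challenge. The question format and operators are assumptions for illustration, not the actual challenge the shop used:

```python
import re

def solve_arithmetic_captcha(question: str) -> int:
    """Solve a 'what is 2+2?'-style challenge -- a few lines of code for any bot."""
    # Find the first "number operator number" pattern in the question text.
    m = re.search(r"(\d+)\s*([+\-*])\s*(\d+)", question)
    if m is None:
        raise ValueError("no arithmetic expression found")
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return {"+": a + b, "-": a - b, "*": a * b}[op]
```

A bot author writes this once, and the ‘human test’ is defeated forever.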
So how do you deal with spam-bots ravaging an internet-shop?
These particular bots, however, were quite capable. It was bound to happen eventually: bot-writing technology based on real browser engines had finally become sophisticated enough.
Geolocation filtering did not help: all the bots came from Russian IP addresses.
So I turned to behavioural factors: after all, a bot is not quite human, and its behaviour follows patterns that can be used to detect and filter it out.
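The idea can be sketched roughly as follows. The signals, field names, weights, and threshold below are all hypothetical, chosen to illustrate the approach rather than reproduce the actual model I used:

```python
def behaviour_score(order_event: dict) -> float:
    """Return a bot-likelihood score in [0, 1] from simple behavioural signals.

    All signal names and weights are illustrative assumptions.
    """
    score = 0.0
    # Humans take time to fill in a form; bots often submit within seconds.
    if order_event.get("seconds_on_form", 0) < 3:
        score += 0.4
    # A hidden "honeypot" field is invisible to humans and should stay empty.
    if order_event.get("honeypot_field"):
        score += 0.4
    # Zero mouse/touch activity is suspicious for a real desktop browser.
    if order_event.get("pointer_events", 0) == 0:
        score += 0.2
    return min(score, 1.0)

def is_probably_bot(order_event: dict, threshold: float = 0.6) -> bool:
    """Flag an order when its combined behavioural score crosses the threshold."""
    return behaviour_score(order_event) >= threshold
```

In practice such scores would be tuned against real traffic; the point is that several weak signals, combined, separate patterned bot behaviour from messy human behaviour.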
After a thorough examination of the bots’ origins, I learned that they all shared the same User-Agent.
In other words, the botnet was made not of simple scripts but of real browsers; to be more precise, a dated and vulnerable Firefox build.
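Such a check is almost trivial to implement. The User-Agent string below is a plausible example of a dated Firefox build; the actual string from the attack is not given in the story:

```python
# Hypothetical blocklist -- the real User-Agent from the attack is unknown.
SUSPICIOUS_USER_AGENTS = {
    "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0",
}

def is_flagged_user_agent(user_agent: str) -> bool:
    """Flag requests whose User-Agent exactly matches a known-bad browser build."""
    return user_agent in SUSPICIOUS_USER_AGENTS
```

The weakness is equally obvious: the User-Agent header is fully controlled by the client, so the attacker can change it at any moment, which is exactly what happened next.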
A User-Agent check on top of behavioural filtering let me spot 100% of the bots. However, it’s an obvious parameter, and soon the User-Agent was randomized. So I focused primarily on fine-tuning my behavioural models; by the time the User-Agent became arbitrary, I was fully prepared.
After I enabled full behaviour-based filtering, the bots were stopped. The shop continued to process orders just as before. My client was happy, his buyers were happy; only the spammers were left sour.
What conclusions can we draw?
To avoid becoming part of a botnet, keep your browser up to date. Everyone knows this; not everyone follows through. Google Chrome updates itself automatically, without needless questions, which makes it convenient.
Out-of-the-box solutions perform poorly. With modern technology, one can paralyze any internet shop. Security is a process: a constant struggle between defense and offense, hackers and security specialists.
Even the smartest botnets can be tackled with high probability. The damage to a business can be kept minimal, and an attack that does no harm to a business is worthless to the attacker.
Once an attack has been thwarted, watch out for the next one. If you’ve beaten spam, be ready for DDoS.