2014 was the first year bots outnumbered human users online. The share of bot traffic has only grown since then, and is virtually guaranteed to keep growing.
In previous articles, we’ve talked about how Imperva differentiates good bots from bad bots, and what kinds of strategies are effective against various kinds of bad bots. But all of these processes rely on a single, all-important first step – distinguishing between bots and legitimate users.
Bots are simply software applications that run scripts on the Internet. Simple bots are easy to detect: Most human users cannot type more than 80 words per minute, or navigate dozens of web pages per second.
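The simple rate checks described above can be sketched in a few lines. This is an illustrative example, not Imperva's detection logic; the threshold value is an assumption and would need tuning per application.

```python
import time
from collections import deque

# Assumed threshold: more than 5 requests per second from one
# session is faster than most humans browse.
MAX_REQUESTS_PER_SECOND = 5

class SessionMonitor:
    """Flag a session as automated when its request rate exceeds
    what a human could plausibly produce."""

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.timestamps = deque()

    def record_request(self, now=None):
        """Record a request; return True if the session looks automated."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_REQUESTS_PER_SECOND * self.window
```

A sliding window is used rather than a fixed per-minute counter so that short bursts are caught even when they straddle a minute boundary.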
But the malicious bots developed by today’s well-funded cyber criminal enterprises are anything but simple. Detecting automated behavior requires some of the tech world’s most sophisticated technologies alongside clever engineering.
For example, some companies offer access to vast proxy networks that can defeat simple bot mitigation strategies, while others sell APIs designed to solve CAPTCHAs. Cybercriminals know this, and are increasingly adopting these tools in their illicit workflows.
Advanced persistent bot (APB) is the technical term for the complex bots behind web scraping schemes, credential stuffing scams, and denial-of-service attacks. These bots are programmed to act like human users online, and can be difficult to detect using old methods.
For example, it used to be the case that a security engineer could identify automated behavior by scanning the IP addresses sending traffic to a website. If a single IP address sent 1,000 requests to a website, the engineer would block that address.
A modern APB can bypass this IP-oriented security tactic by spreading its traffic across 1,000 different IP addresses and making just one request from each. Catching automated behavior requires a more sophisticated approach.
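The evasion described above is easy to demonstrate. Below is a minimal sketch of the legacy per-IP threshold approach; the cutoff value and IP ranges are illustrative assumptions, not a real configuration.

```python
from collections import Counter

# Assumed cutoff: block any address exceeding this request count.
REQUEST_THRESHOLD = 1000

def blocked_ips(request_log):
    """request_log: iterable of source IP strings, one per request.
    Returns the set of addresses that hit the threshold."""
    counts = Counter(request_log)
    return {ip for ip, n in counts.items() if n >= REQUEST_THRESHOLD}

# A single noisy bot is caught...
single_source = ["203.0.113.7"] * 1000

# ...but the same 1,000 requests spread across 1,000 distinct
# proxy addresses never trip the per-IP counter.
distributed = [f"10.{i // 256}.{i % 256}.1" for i in range(1000)]
```

Running `blocked_ips` over each log shows why per-IP counting alone fails: the distributed bot generates identical total load yet produces an empty block list.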
There are several techniques available today that can detect bot behavior far better than simple IP logging. The OWASP Automated Threats Handbook describes a number of them, including device fingerprinting and reputation monitoring.
Imperva uses a variety of these measures and empowers them with machine learning technology. The data sets generated by fingerprinting and reputation monitoring tend to be large and diverse, making them perfect for machine learning-oriented analytics.
Imperva uses state-of-the-art machine learning tools and biometric validation to analyze user behavior and identify bots. Our high-definition fingerprinting tools analyze more than 200 device attributes and assign a unique reputation score to each user. This approach ensures that our bot detection and mitigation strategies remain up-to-date and effective against the latest threats.
Taking the big data approach offers security vendors like Imperva the ability to correlate important data points and understand today's dynamic bot landscape. Bots, like their creators, share many characteristics that sufficiently advanced analysis can uncover.
Every year, we comb through this data and release our findings to the greater security community in our annual Bad Bot Report; the latest edition is the Bad Bot Report 2020: Bad Bots Strike Back.
Bot operators and cybersecurity vendors like Imperva are locked in a constant arms race against one another. Investing in the latest technologies and continually improving our approach is critical to the success of our bot detection and mitigation initiatives. Our bot mitigation strategies will continue to improve as we gather more data and hone our response to one of the web’s most dynamic threats.
The Imperva Community is a great place to learn more about how to use Imperva cybersecurity technologies like On-Prem WAF, Cloud WAF and more to establish efficient, secure processes for enterprise networks. Rely on the expertise of Imperva partners, customers and technical experts.