Bad Bots: What to Do When They Come for You

The vast quantity of false data in web traffic and digital advertising can make accurate analysis challenging for businesses. Over half of all web traffic is attributable to bots, or web robots: applications designed to automate actions such as following links.1 In addition to generating fake website visits, bots also target digital advertising campaigns, producing fraudulent results that cost advertisers billions of dollars every year. Experts estimate that 11 percent of display ads and 23 percent of video ads are viewed by bots rather than by the intended human audiences.2 Fortunately, there are simple steps that can mitigate the issues caused by bad bots.

Good Bots vs. Bad Bots

Not all bots are bad. Good bots are typically crawlers sent by search engines to index content for search results (Googlebot and Bingbot are prime examples). These ‘good bots’ are usually filtered out of web analytics reports automatically and do not affect website traffic numbers. Good bots represent roughly half of all bot traffic.

Bad bots, on the other hand, often impersonate human user behavior, which makes them prone to being counted in web analytics reports (Figure 1). Twenty-two percent of all web traffic has been attributed to this ‘impersonator’ type of bot.

Figure 1: Share of total website traffic by bad bot type. Source: https://www.incapsula.com/blog/bot-traffic-report-2014.html

Identifying If You Have a Bad Bot Problem

Taking a deeper look at web analytics data can uncover clues that point to bad bot visitors. A common trait of bad bot visits is a sudden increase in traffic from an unidentifiable source with an uncharacteristically high or low bounce rate. When driving traffic to your website from digital advertising campaigns, examine the share of new versus returning visitors. If the campaign is not a retargeting campaign (which specifically targets previous visitors) and the share of returning visitors is very high, that may indicate a problem. Additionally, if new traffic shows abnormally consistent metrics over time (the same time on page, pages per visit, etc.) or comes from a single server, it may be a sign of a bad bot; a rough scan along these lines is sketched below.
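
By way of illustration, the following Python sketch applies those heuristics to a hypothetical CSV export of daily traffic by source. The file name, column names, and thresholds are assumptions, not part of any particular analytics tool, and would need to be adapted to whatever your own software exports.

import pandas as pd

# Hypothetical export: one row per traffic source per day.
# Expected columns: date, source, sessions, bounce_rate, pct_returning, avg_time_on_page
df = pd.read_csv("traffic_by_source.csv")

by_source = df.groupby("source").agg(
    sessions=("sessions", "sum"),
    bounce_rate=("bounce_rate", "mean"),
    pct_returning=("pct_returning", "mean"),
    time_std=("avg_time_on_page", "std"),  # near-zero variance = suspiciously consistent behavior
)

# Illustrative thresholds only -- tune them against your own baseline traffic.
suspects = by_source[
    ((by_source["sessions"] > 1000)                                    # sizable volume...
     & ((by_source["bounce_rate"] > 0.95) | (by_source["bounce_rate"] < 0.05)))  # ...with extreme bounce rates
    | (by_source["pct_returning"] > 0.90)   # high returning share outside a retargeting campaign
    | (by_source["time_std"] < 1.0)         # metrics that barely vary over time
]
print(suspects.sort_values("sessions", ascending=False))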

Anticipate and Mitigate

Whether or not you have a bot problem, it is wise to use the built-in features available in most web analytics software to weed out known bot traffic. In 2014, Google Analytics introduced bot and spam filtering that can be activated with a simple checkbox. For more information, see this.
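
For views managed programmatically, the same setting can also be toggled through the Google Analytics Management API (v3), where it appears as the botFilteringEnabled property on a view (profile). The sketch below is a minimal example, assuming a service account with edit rights; the account, property, and view IDs are placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Authenticate with a service account that has edit rights on the Analytics account.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.edit"],
)
analytics = build("analytics", "v3", credentials=creds)

# Placeholder IDs -- substitute your own account, property, and view (profile) IDs.
analytics.management().profiles().patch(
    accountId="12345678",
    webPropertyId="UA-12345678-1",
    profileId="98765432",
    body={"botFilteringEnabled": True},  # the API-side equivalent of ticking the checkbox
).execute()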

Brand USA has found that this solution successfully filters suspected bot traffic visiting DiscoverAmerica.com. Bot traffic is a particular problem in China, and if conventional methods fail to filter it out, using segmentation to exclude the suspicious traffic from reporting is a good secondary option. Brand USA uses source data to exclude certain traffic sources in China from its reporting; a query-level version of that approach is sketched below.
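
One way to approximate that kind of source-based exclusion at the API level is a dimension filter on ga:source in a Google Analytics Reporting API (v4) request. The sketch below is illustrative only; the view ID and source names are hypothetical placeholders, not Brand USA's actual exclusion list.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Sources to exclude from reporting; these names are placeholders, not a real list.
SUSPECT_SOURCES = ["suspicious-referrer.example", "unknown-ad-network.example"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
reporting = build("analyticsreporting", "v4", credentials=creds)

report = reporting.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "98765432",  # placeholder view ID
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:source"}],
        "dimensionFilterClauses": [{
            "filters": [{
                "dimensionName": "ga:source",
                "not": True,              # exclude (rather than include) matching sources
                "operator": "IN_LIST",
                "expressions": SUSPECT_SOURCES,
            }]
        }],
    }]
}).execute()
print(report["reports"][0]["data"].get("rows", []))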

Addressing bot activity in advertisement-serving environments is a trickier undertaking. For those placing digital advertising, it is helpful to discuss click-fraud policy with media buyers or publishers and ask what steps they take to address the problem. Some agencies employ ad verification software to better ensure that placements are seen by real people. It is particularly important to scrutinize ads purchased on a cost-per-impression or cost-per-click basis. Digital media in China is often purchased in time increments (e.g., one quarter of the homepage display rotation per week) to avoid paying for fraudulent impressions and clicks.

Bad bots are going to be around for the foreseeable future, but with regular reviews of web data, much of their activity can be addressed with a few corrective steps.
