Bot activity at websites is skewing marketing analytics and costing businesses millions annually, according to a report released Tuesday by a bot detection and mitigation firm.
Netacea maintained in its report that the skewed analytics problem is as costly to businesses as click fraud.
Ad fraud costs businesses US$42 billion annually, or about four percent of their revenue, roughly the same percentage lost each year to skewed analytics, the report noted. Its findings are based on a survey of 440 businesses across the travel, entertainment, e-commerce, financial services, and telecom sectors in the U.S. and the U.K.
Of the businesses surveyed, 73 percent said they had been affected by click fraud, which cost them an average of four percent of revenue annually, while 68 percent acknowledged they had been affected by skewed analytics, with an average of 4.07 percent of revenue lost.
The report explained that bots are used by hackers to buy goods before other customers, hack accounts using stolen passwords, check the validity of stolen card details, and steal content or prices by bulk scraping.
But even when they do no direct damage, it continued, bots can skew data in ways that lead marketing teams to make bad decisions. Analytics skewed by bots can hide what real customers are doing, making it impossible to target genuine audiences.
“Bots can skew all statistics because you’re not getting a feel for the real market,” said Rosemary Coates, president of Blue Silk Consulting, a business advisory firm in Los Gatos, Calif.
“It’s not true to the reality of what’s happening in the marketplace,” she told the E-Commerce Times.
Bad Data, Bad Decisions
That can be problematic for marketers who don’t monitor their campaigns on the fly. “They’re going to walk away from a campaign having spent a whole lot of money and getting zero returns,” observed Liz Miller, vice president and a principal analyst at Constellation Research, a technology research and advisory firm in Cupertino, Calif.
“Someone is running up a tab that the brand has to pay for,” she told the E-Commerce Times.
Skewed analytics can lead to bad marketing decisions, the report noted. Its survey found that more than half of the businesses had run special promotions (54 percent), ordered new stock (55 percent), or “burned through” a marketing budget (55 percent) because of incorrect data caused by bots.
“With bots often accounting for up to half of web traffic, losses from bad business decisions made due to skewed analytics can be significant, ranging from millions to a few billion dollars,” explained Brian Uffelman, vice president and security evangelist at PerimeterX, a web security service provider in San Mateo, Calif.
“Bots skew many KPIs and metrics, including user tracking and engagement, session duration, bounce rates, ad clicks, look-to-book ratios, campaign data, and conversion funnel,” he told the E-Commerce Times.
“For e-commerce, travel and media sites, unauthorized scraping bots mimic humans by dynamically checking listings, pricing, and content, resulting in skewed data,” he added.
Undermining Data Confidence
The report also found that most businesses base at least a quarter of their marketing and other business decisions on analytics that are vulnerable to being skewed by bots.
That threat of skewed data may be steering marketers away from analytics. “What we think is happening is people aren’t trusting their data because when they make decisions based on data, it’s not coming out well for them, probably because their data is rubbish,” maintained Matthew Gracey-McMinn, head of threat research for Netacea.
“They’re getting bad data because of the bots,” he told the E-Commerce Times.
Uffelman added that many marketing professionals are under the misconception that Google Analytics is filtering out bot traffic.
“Google Analytics is good at filtering spam and some crawlers, but today’s bots are far more sophisticated and, as a result, are not reliably handled by Google’s built-in capabilities,” he said.
“Filtering out sessions within Google Analytics is a complex and time-consuming operation that can sometimes exclude good user traffic,” he continued. “Most companies do not recognize the problem and continue making decisions using polluted data.”
Misplaced Faith
Many companies also believe web application firewalls (WAFs) and DDoS prevention systems can protect their data from being poisoned by bots, with 71 percent expressing faith in DDoS prevention systems and 73 percent in WAFs.
“When it comes to bot traffic in particular, WAFs just aren’t sufficient,” Uffelman maintained. “The sophisticated attack techniques of bad bots have far outpaced any incremental improvements in WAF bot management technology.”
Gracey-McMinn explained that WAFs are designed to stop traditional cyberattacks, while DDoS prevention looks for mass attacks.
“Bots are very clever, though, so they’ll test how many requests can be made at a website before DDoS prevention kicks in and stay under that number,” he said.
“Bots exploit business logic vulnerabilities, rather than things like capacity limits and SQL injection that WAFs and DDoS prevention are designed to stop,” he added.
WAFs aren’t totally ineffectual against bots, countered James McQuiggan, a security awareness advocate at KnowBe4 in Clearwater, Fla.
“Some filters can be implemented on the logs to filter out the bots and misrepresented data,” he told the E-Commerce Times.
Filters can include screening by traffic source. “If there is an increase in direct source connections, that can point to a bot,” he said.
Session length can be another valuable filter. “A number of short sessions can also point to bot activity,” he explained.
Geolocation of IP addresses can be another valuable filter. “If you see a lot of traffic from China, North Korea, or Russia for a U.S.-based ad in English, it is a safe bet that it is a bot,” he maintained.
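McQuiggan’s three screens are simple enough to run against raw analytics logs. The sketch below is a minimal illustration in Python; the session fields, country list, and thresholds are assumptions for demonstration, not figures from the report or from KnowBe4.

from dataclasses import dataclass

# Illustrative values only; tune them against your own traffic baseline.
SUSPECT_COUNTRIES = {"CN", "KP", "RU"}  # e.g., for a U.S.-based ad in English
MIN_HUMAN_SESSION_SECONDS = 3.0         # very short sessions suggest automation

@dataclass
class Session:
    source: str      # "direct", "referral", "organic", ...
    duration: float  # session length in seconds
    country: str     # ISO country code from IP geolocation

def is_suspect(s: Session) -> bool:
    """Flag a session matching the short-session or geolocation screens."""
    return s.duration < MIN_HUMAN_SESSION_SECONDS or s.country in SUSPECT_COUNTRIES

def direct_share(sessions: list[Session]) -> float:
    """Share of direct-source sessions; a jump over your historical
    baseline is the traffic-source screen pointing to possible bots."""
    return sum(s.source == "direct" for s in sessions) / len(sessions) if sessions else 0.0

# Usage: drop flagged sessions before computing marketing KPIs.
log = [Session("direct", 1.2, "RU"), Session("organic", 240.0, "US")]
clean = [s for s in log if not is_suspect(s)]
print(f"kept {len(clean)} of {len(log)}; direct share {direct_share(log):.0%}")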
Better Cooperation
A contributing factor to the successful pollution of data by bots is the lack of communication between security teams and marketing. “Quite often, security teams aren’t aware what’s going on,” Gracey-McMinn said.
“We need communication across business functions in order to facilitate proper responses,” he noted.
“What we have to start doing is having the CISO and the CMO looking at cyberattacks and fraud together,” added Miller.
“If security discovers anomalous behavior on the network,” she continued, “it has to let marketing know and ask, is this anomalous behavior, or do we just have a great promotion going on?”
Best Practice Recommendations
To help identify potential issues, Netacea included in its report these questions to ask if there’s reason to suspect that bots are distorting marketing analytics (a brief sketch of how the checks might be automated follows the list):
Has the number of new sessions on your site spiked? An abnormally large number of new sessions alongside a high bounce rate and low session duration is an indicator of automated traffic activity.
Is your average session duration below three seconds? A recurring low session duration may not be due to the speed of your website but to crawlers scraping your site for images and content.
Is your average bounce rate high? Whether it’s site-wide or on a selection of pages, a bounce rate of between 95 and 100 percent implies the presence of bot traffic.
Has your conversion rate dropped? A spike in new sessions without an increase in conversions will reduce your overall conversion rate.
Has direct and referral traffic increased? These two channels are common sources of bot traffic and where you are likely to see the highest spikes in traffic.
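As a rough illustration, these checks can be automated against daily analytics aggregates. In the Python sketch below, the field names and the spike baselines are assumptions for demonstration; only the three-second and 95-percent thresholds come from the report’s checklist.

def bot_warning_signs(today: dict, baseline: dict) -> list[str]:
    """Return which of the five indicators fire for today's numbers.

    Both dicts hold: new_sessions, avg_duration (seconds),
    bounce_rate (0-1), conversions, direct_referral_share (0-1).
    """
    signs = []
    if today["new_sessions"] > 2 * baseline["new_sessions"]:  # assumed 2x spike threshold
        signs.append("spike in new sessions")
    if today["avg_duration"] < 3:  # report's three-second indicator
        signs.append("average session duration under three seconds")
    if today["bounce_rate"] >= 0.95:  # report's 95-100 percent indicator
        signs.append("bounce rate between 95 and 100 percent")
    if (today["new_sessions"] > baseline["new_sessions"]
            and today["conversions"] <= baseline["conversions"]):
        signs.append("more sessions without more conversions")
    if today["direct_referral_share"] > 1.5 * baseline["direct_referral_share"]:  # assumed 1.5x
        signs.append("direct and referral traffic well above baseline")
    return signs

# Usage with made-up numbers:
today = {"new_sessions": 5200, "avg_duration": 2.1, "bounce_rate": 0.97,
         "conversions": 40, "direct_referral_share": 0.6}
baseline = {"new_sessions": 1800, "avg_duration": 95.0, "bounce_rate": 0.55,
            "conversions": 42, "direct_referral_share": 0.25}
print(bot_warning_signs(today, baseline))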