The Ultimate Guide To Web Scraping And Easily Bypassing Website Blocks

A practical guide that will help you understand the most common data crawling blocks and how to best overcome them.
Aviv Besinsky | Product Manager
12-Nov-2020

The goal: Using web scraping to collect business-critical data sets.

The obstacle: Data crawling and web scraping are often blocked by target sites.

The pain point: Companies are collecting inaccurate and incomplete data sets which impairs their ability to make data-driven decisions.

In this guide we will cover:

  • What web scraping is and how it works
  • What you can use web scraping for
  • How websites block web scraping attempts and how to overcome each block
  • Web scraping best practices to prevent being blocked
  • Why a proxy service is essential for web scraping and how to manage your proxy IP pool

What Is Web Scraping?

Also known as web data extraction, web scraping is the process of retrieving data from a specific website. Web scraper software spares you the painstaking work of extracting data manually; scrapers use automation to extract millions of data points from websites. This helps companies make decisions based on real user data, enhancing their operations, customer experience, cybersecurity, and more.

How does web scraping work?

Web scraping consists of two parts, the web scraper itself and the web crawler. While some people use the terms interchangeably, they fulfill two different functions.

The crawler – This software browses the internet searching for content according to a set of keywords. The crawler then indexes the information it finds.

The scraper – This software tool extracts data from web pages, pulling actionable information out of them. The scraper then stores this data in databases.
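To make the split concrete, here is a minimal sketch of a crawler/scraper pair in Python, assuming the requests and beautifulsoup4 packages are installed; the start URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(start_url):
    """Crawler: fetch a page and discover the links it contains."""
    response = requests.get(start_url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    return [urljoin(start_url, a["href"]) for a in soup.find_all("a", href=True)]

def scrape(url):
    """Scraper: pull a specific data point (here, the page title) and return it as a record."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    return {"url": url, "title": soup.title.string if soup.title else None}

if __name__ == "__main__":
    for link in crawl("https://example.com"):
        print(scrape(link))  # in practice you would store these records in a database
```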

What can you use web scraping for?

Here are some of the leading use cases:

Price Monitoring: You can monitor competitors’ prices and product trends, then apply this information to your pricing strategy and revenue optimization efforts.

Finance: Extract insights for investors from U.S. Securities and Exchange Commission (SEC) filings, company reports, and news monitoring.

Consumer sentiment analysis: Understand the constantly changing opinions, preferences, and buying tendencies of your target audience regarding your brand, and support ad verification and brand protection efforts.

Market research: Analyze micro and macro industry trends in order to make fact-based decisions.

Real Estate: Gather information about listing prices, property values, vacancy rates, and rental yield estimates.

How Can Websites Block Your Web Scraping Attempts?

Although web scraping is a legitimate business practice, websites do not always allow data extraction. The most common reason is the fear that a high volume of requests will inundate a website’s servers and, in extreme cases, cause the site to crash. Other sites block scraping over geolocation concerns, for example, content copyrights that are limited to specific countries. Whatever the reason for being blocked, it is important to understand which blocks exist and how to overcome them. Here are some of the most common website blocks and their solutions:

Block: IP Detection

Sometimes websites will block you based on your IP address’s location. This type of geolocation block is common on websites that adapt their available content based on customer location.

Other times, websites want to reduce the amount of non-human traffic (for example, crawlers), so a site may block your access based on the type of IP you are using.

Solution

Use an international proxy network with a wide selection of IPs of different types in different countries. This lets you appear to be a real user in your desired location so that you can access the data you need.
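As a minimal sketch of what this looks like in practice, assuming the requests package is installed; the proxy host, port, and credentials below are placeholders you would replace with your provider's details:

```python
import requests

# Placeholder proxy endpoint and credentials; substitute your provider's values.
proxies = {
    "http": "http://USERNAME:PASSWORD@proxy.example.com:8080",
    "https": "http://USERNAME:PASSWORD@proxy.example.com:8080",
}

# The target site sees the proxy's IP (and therefore its country), not yours.
response = requests.get("https://example.com/products", proxies=proxies, timeout=10)
print(response.status_code)
```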

Block: IP Rate Limitations

This type of block limits your access based on the number of requests sent from a single IP address in a given time frame. This can mean 300 requests a day or ten requests per minute, depending on the target site. Once you pass the limit, you’ll get an error message or a CAPTCHA designed to check whether you are a human or a machine.

Solution

There are two main ways to bypass rate limiting. First, you can cap the number of requests your crawler sends per second. This slows the crawling process but keeps you under the limit. Second, you can use a proxy that rotates IP addresses so that no single IP reaches the target site’s rate limit.
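Here is a minimal sketch combining both approaches, assuming the requests package is installed; the proxy URLs and target pages are placeholders:

```python
import time
from itertools import cycle
import requests

# Placeholder proxy endpoints; rotating through a pool spreads requests across IPs.
PROXIES = cycle([
    "http://proxy-us.example.com:8080",
    "http://proxy-de.example.com:8080",
    "http://proxy-jp.example.com:8080",
])
MAX_REQUESTS_PER_SECOND = 1  # cap the request rate to stay under the site's limit

urls = [f"https://example.com/page/{i}" for i in range(1, 6)]
for url in urls:
    proxy = next(PROXIES)  # switch to the next IP before every request
    response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
    print(url, response.status_code)
    time.sleep(1 / MAX_REQUESTS_PER_SECOND)
```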

Block: User-Agent Detection

Some websites use the user-agent HTTP header to identify specific devices and block access.

Solution

Rotate your user agents to overcome this type of block.
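A minimal sketch of user-agent rotation, assuming the requests package is installed; the user-agent strings are examples of real browser headers and the URL is a placeholder:

```python
import random
import requests

# A small pool of real browser user-agent strings to rotate between requests.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

headers = {"User-Agent": random.choice(USER_AGENTS)}  # pick a different UA for each request
response = requests.get("https://example.com", headers=headers, timeout=10)
print(response.request.headers["User-Agent"])
```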

Block: Honeypot Traps

Honeypots are a security measure that aims to divert a potential attacker’s attention away from crucial data sets and resources. What works against attackers can also intercept data crawlers. In this scenario, a website lures a crawler with hidden decoy links; when the scraper follows those links, there is no real data at the end, but the honeypot identifies the crawler and blocks further requests from it.

Solution

Look for specific CSS properties in the links, like “display: none” or “visibility: hidden”. This is an indication that the link doesn’t hold real data and is a trap.
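Here is a minimal sketch of filtering out such links, assuming the requests and beautifulsoup4 packages are installed. It only checks inline styles; links hidden via external CSS are more reliably detected with a headless browser:

```python
import requests
from bs4 import BeautifulSoup

HIDDEN_MARKERS = ("display: none", "display:none", "visibility: hidden", "visibility:hidden")

html = requests.get("https://example.com", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

safe_links = []
for a in soup.find_all("a", href=True):
    style = (a.get("style") or "").lower()
    if any(marker in style for marker in HIDDEN_MARKERS) or a.get("hidden") is not None:
        continue  # invisible to a human visitor, so it is likely a honeypot trap
    safe_links.append(a["href"])

print(safe_links)
```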

Block: Scrape behind login

Sometimes the only way to access a website’s data is to log in first – social media pages, for example.

Solution

Some scrapers mimic human browsing behavior and let you enter usernames and passwords as part of the scraping process. Do note that collecting data behind a password or login can be illegal in many regions, including the US, Canada, and Europe, so make sure you are authorized to access the data before scraping it.
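For cases where you are authorized to do so, here is a minimal sketch using a cookie-aware session, assuming the requests package is installed; the login URL and form field names are hypothetical and differ from site to site:

```python
import requests

with requests.Session() as session:
    # The session stores cookies, so the login persists across later requests.
    session.post(
        "https://example.com/login",  # hypothetical login endpoint
        data={"username": "YOUR_USERNAME", "password": "YOUR_PASSWORD"},
        timeout=10,
    )
    response = session.get("https://example.com/members-only/data", timeout=10)
    print(response.status_code)
```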

Block: JavaScript encryption

Some sites use JavaScript-based encryption techniques to protect data from being scraped.

Solution

Some scrapers get around this by including a built-in (headless) browser that loads the target website, executes its JavaScript, and reads the rendered data directly.
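A minimal sketch of this approach with a headless browser, assuming Selenium 4+ and a local Chrome installation; the URL is a placeholder:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window ("--headless" on older Chrome)

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/js-rendered-page")
    html = driver.page_source  # the HTML after the page's JavaScript has executed
    print(len(html))
finally:
    driver.quit()
```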

Web Scraping Best Practices to Prevent Being Blocked

Here are a number of best practices you should follow in order to avoid being blocked while scraping:

#1: Respect the Site Rules

Crawlers should follow the robots.txt file of a given website. This file, which you can find in the site’s root directory, spells out what the website does and does not allow in terms of scraping: for instance, how frequently you can scrape, which pages you can scrape, and which are off-limits (a robots.txt check is sketched after the list below). Anti-scraping tools look for markers that you are a robot/scraper:

  • You scrape more pages than a human possibly could
  • You follow the same routine every time you scrape (humans are not that predictable)
  • You send too many requests from the same IP address in a short period of time
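Here is a minimal sketch of that robots.txt check, using Python’s built-in urllib.robotparser; the URLs and bot name are placeholders:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/products/page-1"
if rp.can_fetch("my-scraper-bot", url):
    print("Allowed to crawl:", url)
else:
    print("Disallowed by robots.txt:", url)

# Some sites also declare a Crawl-delay; respect it if one is present.
print("Requested crawl delay:", rp.crawl_delay("my-scraper-bot"))
```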

#2: Slow Down the Crawl

As we mentioned before, web scrapers collect data much faster than humans can. The problem is that if a website gets too many requests too fast, it can crash. By slowing your crawl and adding a delay of 10-20 seconds between clicks, you can avoid overloading a target website. In addition, avoid giving your scraper away by following the same pattern over and over; add some random clicks and actions that make the crawler look more human.
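A minimal sketch of randomized delays between requests, assuming the requests package is installed; the URLs are placeholders:

```python
import random
import time
import requests

urls = [f"https://example.com/category/{i}" for i in range(1, 4)]
for url in urls:
    response = requests.get(url, timeout=10)
    print(url, response.status_code)
    # A random 10-20 second pause makes the traffic pattern less machine-like.
    time.sleep(random.uniform(10, 20))
```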

#3: Rotate User Agents

A user-agent is an HTTP header that tells the server which browser (and operating system) you’re using; without one, many websites won’t serve you content. Every request a browser makes carries a user-agent header, so sending the same user-agent every time you scrape raises a red flag that this is a bot. There are ways to work around this: for instance, you can vary the user agent, creating combinations for multiple browsers and rotating the headers between requests, as in the sketch shown earlier.

#4: Use a Real User-Agent

Faking a user agent can create issues of its own, for example if the website doesn’t recognize the invented string. To avoid getting blacklisted, set up real user-agents: choose from a list of genuine browser user agents that suit your needs, or use a Googlebot user-agent. Using an existing user agent is an extremely effective way to prevent data collection blocks and blacklisting.

#5: Use Headless Browsers

A headless browser is a browser that runs without a graphical user interface. Using one lets you scrape websites faster, since you don’t need to open and drive a visible user interface. Beyond scraping, headless browsers can be used for automated testing of web applications or for mapping user journeys across websites.

#6: Use a Proxy

Proxy networks are a great solution for individuals or businesses that need to carry out mid- to large-scale data collection on a regular basis. Proxy networks typically offer servers on different continents and two kinds of IPs: datacenter IPs for easier, lower-threshold data collection and real residential IPs for harder, higher-threshold target sites. Proxy networks also help you handle headless browsers, sophisticated blocks, browser fingerprints, and geolocation-based restrictions. Many proxy solutions provide tools to manage IP rotation and request routing so that your collection is more cost-effective and has higher success rates.

Why a Proxy Service is Essential for Web Scraping

Using a proxy severely reduces the chances that a website’s anti-scraping mechanisms detect your crawler and blacklist it. How effective your proxy is depends on several factors, among them how frequently you send requests, how you manage your proxies, and the type of proxies you are using. Let’s explore the different types of proxy networks you can use:

Data Center – These are the most common type of proxy and correspond to the IPs of servers residing in data centers. They are usually the most affordable to buy, though they are meant for easier target sites.

Residential – These correspond to private residences; actual people let you use their residential network as a server to route traffic through. Since these are real people who opt in and are compensated for participating in the network, residential IPs are usually more expensive but also much more effective.

Mobile – These are the IPs of mobile devices. This is the most expensive type of network you can use but also the most effective. This network is typically used for the hardest target sites, with the capability of targeting specific cellular carriers and specific 3G or 4G devices. This network can be especially useful for user experience testing on mobile applications, mobile ad verification, and any other use case which is exclusively mobile-based.

The types of proxies can also vary according to ownership. They can be shared or dedicated.

Dedicated proxies mean you pay for access to a private pool of IPs used exclusively by you. This is a safer, more effective option than a shared pool because you have full control over which crawling activities are and are not carried out with those IPs – many proxy providers offer this as a built-in option in their packages.

How To Manage Your Proxy IP Pool

We recommend using a range of IPs which is commonly known as an ‘IP pool’. Why? If you only use one proxy for scraping, the chances you raise red flags among target sites are high. The best option is to own a group, or pool, of IPs and rotate them periodically. Let’s explore this more.

If you don’t rotate your IPs, you are giving target websites time to locate and identify them. That’s why you need to manage them properly: changing the configuration, adding random delays, and managing user agents. There are two main ways to manage your IP pool:

Do it Yourself (DIY) – This means buying or leasing a pool of proxies and managing them yourself. While this is the cheapest option, it is very time-consuming.

Use a proxy management solution – In this case, your proxy provider takes care of the entire proxy management process: rotation, blacklists, session management, and so on.
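To make the DIY option above concrete, here is a minimal sketch of a tiny pool manager that rotates IPs and retires ones that get blocked, assuming the requests package is installed; the proxy URLs are placeholders:

```python
import random
import requests

class ProxyPool:
    """A minimal DIY proxy pool: hand out random IPs and drop blocked ones."""

    def __init__(self, proxies):
        self.proxies = list(proxies)

    def get(self):
        return random.choice(self.proxies)

    def remove(self, proxy):
        # Retire a proxy that the target site has started blocking.
        if proxy in self.proxies:
            self.proxies.remove(proxy)

pool = ProxyPool([
    "http://proxy-1.example.com:8080",
    "http://proxy-2.example.com:8080",
    "http://proxy-3.example.com:8080",
])

proxy = pool.get()
try:
    response = requests.get("https://example.com", proxies={"http": proxy, "https": proxy}, timeout=10)
    if response.status_code in (403, 429):  # blocked or rate-limited
        pool.remove(proxy)
except requests.RequestException:
    pool.remove(proxy)
```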

Choosing between them will depend on your budget, how frequently you need to scrape data, your technical skills, and the time you have available to manage a proxy pool.

Summing it up

In this post, we gave you a primer on how to conduct web scraping safely and deal with anti-scraping measures. Following these best practices can help keep you from being blacklisted or banned:

  • Respect target site rules
  • Crawl at a pace that is optimized for target site limitations
  • Use real User-Agents
  • Properly rotate User-Agents
  • Use headless browsers
  • Use a proxy service, a pool of IPs, and IP rotation

There is no doubt that using a proxy service can help you overcome the anti-scraping measures put in place by target sites. We also presented alternatives for unblocking sites, crawling, and managing IPs independently. Ultimately, the choice is yours and will depend on your web scraping needs, budget, and technical requirements.

Aviv Besinsky | Product Manager

Aviv is a lead product manager at Bright Data. He has been a driving force in taking data collection technology to the next level - developing technological solutions in the realms of data unblocking, static proxy networks, and more. Sharing his data crawling know-how is one of his many passions.
