User-Agents For Web Scraping 101

Using the correct user agent when performing data scraping tasks is crucial to collecting your target data without being blocked. This is the only guide you will need to get started.

What is a user agent?

The term refers to any piece of software that facilitates end-user interaction with web content. A user agent (UA) string is a line of text that the client software sends with each request.

The user agent string helps the destination server identify which browser, type of device, and operating system is being used. For example, the string tells the server you are using the Chrome browser and Windows 10 on your computer. The server can then use this information to adjust the response for the type of device, OS, and browser.
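
For example, a Chrome browser on Windows 10 typically identifies itself with a string along these lines:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36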

Most browsers send a user agent header in the following format, though there’s not much consistency in how user agents are composed:

User-Agent: Mozilla/5.0 (<system-information>) <platform> (<platform-details>) <extensions>

Every browser adds its own comment components, such as platform or RV (release version). Mozilla offers examples of strings to be used for crawlers:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

You can learn more about the different strings you can use on Mozilla’s developer site.

Below you can find examples from Chrome’s developer site of how the UA string format looks for different devices and browsers:

Chrome for Android

Phone UA:

Mozilla/5.0 (Linux; <Android Version>; <Build Tag etc.>) AppleWebKit/<WebKit Rev>
(KHTML, like Gecko) Chrome/<Chrome Rev> Mobile Safari/<WebKit Rev>

Tablet UA:

Mozilla/5.0 (Linux; <Android Version>; <Build Tag etc.>) AppleWebKit/<WebKit Rev>
(KHTML, like Gecko) Chrome/<Chrome Rev> Safari/<WebKit Rev>

Why should you use a user agent?

When you are web scraping, you will sometimes find that the web server blocks certain user agents. This is usually because it identifies the origin as a bot, and many websites don’t allow bot crawlers or scrapers. More sophisticated websites do this the other way around: they only allow user agents they consider valid to perform crawling jobs. The most sophisticated ones check that the browser’s actual behavior matches the user agent you claim.

You may think the correct solution would be to not include a user agent in your requests at all. However, that just causes your tool to fall back on its default UA, which in many cases the destination web server has blacklisted and will block.
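
To see this in practice, here is a minimal sketch using the Python requests library, with httpbin.org as a test endpoint: it prints the default UA the library advertises, then overrides it with a real browser string.

import requests

# The default UA requests advertises, e.g. "python-requests/2.31.0" --
# exactly the kind of string many servers blacklist outright.
print(requests.utils.default_user_agent())

# Override it with a real browser UA so the request looks like normal traffic.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
}
response = requests.get("https://httpbin.org/headers", headers=headers)
print(response.json())  # httpbin echoes back the headers it received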

So how do you ensure your user agent doesn’t get banned?

Tips to avoid getting your UA banned when scraping:

#1: Use a real user agent

If your user agent doesn’t belong to a major browser, some websites will block its requests. Many bot-based web scrapers skip the step of defining a UA, and as a result they are detected and banned for sending a missing or default one.

You can avoid this problem by setting a widely used UA for your web crawler. Large lists of popular user agents are freely available online; you can compile your own list of popular strings and rotate through them as you make requests, whether with cURL or a scraping library. Nevertheless, we recommend starting from your own browser’s user agent, because your scraper’s behavior is more likely to match what servers expect from that UA if you don’t change it too much.
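
One possible way to source real-browser strings in Python is the fake_useragent library (the same library used in the Selenium example later in this post), which ships a database of real browser UA strings:

from fake_useragent import UserAgent

ua = UserAgent()
print(ua.chrome)   # a real Chrome UA string from the database
print(ua.firefox)  # a real Firefox UA string
print(ua.random)   # a random UA -- handy for rotation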

#2: Rotate user agents

When you make numerous requests while web scraping, you should randomize the user agent you send with each one. This will minimize the chance of the web server identifying and blocking your UAs.

How do you randomize requests?

One solution is to change the request IP address using rotating proxies. Combined with UA rotation, you send a different set of headers every single time, so on the web server’s end the requests look like they come from different computers and different browsers.

Pro tip: The user agent is only one header among many. You can’t simply send random headers; you need to make sure every header you send is consistent with the user agent you claim.
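
As a minimal sketch (assuming a hypothetical rotating-proxy gateway at proxy.example.com:8000 with placeholder credentials), routing Python requests through rotating proxies could look like this:

import requests

# Hypothetical rotating-proxy gateway: each request routed through it
# can exit from a different IP address.
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}

for _ in range(3):
    # httpbin.org/ip echoes back the IP address the server saw
    response = requests.get("https://httpbin.org/ip", proxies=proxies)
    print(response.text)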

How to check and rotate user agents

First, you need to collect a list of user agent strings; we recommend using strings taken from real browsers, which can be found in public user agent lists. The next step is to add the strings to a Python list. Finally, define each request to pick a random string from that list.
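
Here is a minimal sketch of those three steps, assuming the requests library and httpbin.org as a stand-in target:

import random
import requests

# Steps 1 and 2: collect real-browser UA strings into a Python list
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/119.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

# Step 3: pick a random string from the list for every request
for _ in range(3):
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    response = requests.get("https://httpbin.org/user-agent", headers=headers)
    print(response.json())  # httpbin echoes back the UA it received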

You can see an example of how to rotate user agents using Python 3 and Selenium 4 in this Stack Overflow discussion. The code example looks like this:

# You will need to install selenium and fake_useragent using pip or similar
from time import sleep

from fake_useragent import UserAgent
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Start Chrome with a random real-browser user agent
options = Options()
options.add_argument(f'user-agent={UserAgent().random}')
driver = webdriver.Chrome(options=options)
driver.get("http://www.whatsmyua.info/")

sleep(5)

# Swap in a new random UA mid-session via the Chrome DevTools Protocol
driver.execute_cdp_cmd("Network.enable", {})
driver.execute_cdp_cmd("Network.setExtraHTTPHeaders", {"headers": {"User-Agent": UserAgent().random}})
driver.get("http://www.whatsmyua.info/")

Whichever program or method you choose to use to rotate your UA headers, you should follow these techniques to avoid getting detected and blocked:

  • #1: Rotate a full set of headers that are associated with each UA
  • #2: Send headers in the order a real browser typically would
  • #3: Set the ‘Referer’ header to the previous page you visited

Pro tip: Make sure the IP address and cookies don’t change between the referring page and the current request. Ideally, you’d actually visit the previous page first, so there is a record of that visit on your target server.
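
Putting those three rules together, a minimal sketch might look like the following (the target URLs are placeholders, and the header values approximate what a current Chrome sends):

import requests

# A header set that is internally consistent with a Chrome-on-Windows UA.
# (Values approximate what current Chrome sends; perfectly matching a real
# browser's header *order* may require a lower-level HTTP client.)
session = requests.Session()
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
})

# Actually visit the listing page first, then present it as the Referer for
# the detail page -- same session, so cookies and IP stay consistent.
listing = session.get("https://example.com/products")
detail = session.get(
    "https://example.com/products/1",
    headers={"Referer": "https://example.com/products"},
)
print(detail.status_code)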

Rotate user agents using a proxy

You can avoid the headache and hassle of manually compiling UA lists and rotating IPs by using a rotating proxy network. These proxies can set up automatic IP rotation and UA string rotation, so your requests look like they originated from a variety of web browsers. This significantly reduces blocks and increases success rates, since requests appear to come from real web users. Keep in mind that only proxies that employ Data Unlocking technology can properly manage and rotate your user agents.

List of User-Agents for scraping

There is a wide variety of browser-, phone-, device-, bot-, search engine-, and developer tool-based User-Agents that can be used to emulate various browsers with tools such as wget and cURL. These include:

  • Lynx: Lynx/2.8.8pre.4 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/2.12.23
  • Wget: Wget/1.15 (linux-gnu)
  • Curl: curl/7.35.0
  • HTC: Mozilla/5.0 (Linux; Android 7.0; HTC 10 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.83 Mobile Safari/537.36
  • Google Nexus: Mozilla/5.0 (Linux; U; Android-4.0.3; en-us; Galaxy Nexus Build/IML74K) AppleWebKit/535.7 (KHTML, like Gecko) CrMo/16.0.912.75 Mobile Safari/535.7
  • Samsung Galaxy Note 4: Mozilla/5.0 (Linux; Android 6.0.1; SAMSUNG SM-N910F Build/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/4.0 Chrome/44.0.2403.133 Mobile Safari/537.36
  • Samsung Galaxy Note 3: Mozilla/5.0 (Linux; Android 5.0; SAMSUNG SM-N900 Build/LRX21V) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.1 Chrome/34.0.1847.76 Mobile Safari/537.36
  • Samsung Phone: Mozilla/5.0 (Linux; Android 6.0.1; SAMSUNG SM-G570Y Build/MMB29K) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/4.0 Chrome/44.0.2403.133 Mobile Safari/537.36
  • Bing’s Search Engine Bot: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
  • Google’s Search Engine Bot: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
  • Apple iPhone: Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1
  • Apple iPad: Mozilla/5.0 (iPad; CPU OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4
  • Microsoft Internet Explorer 11 / IE 11: Mozilla/5.0 (compatible, MSIE 11, Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
  • Microsoft Internet Explorer 10 / IE 10: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0; MDDCJS)
  • Microsoft Internet Explorer 9 / IE 9: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0; Trident/5.0)
  • Microsoft Internet Explorer 8 / IE 8: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
  • Microsoft Internet Explorer 7 / IE 7: Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)
  • Microsoft Internet Explorer 6 / IE 6: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
  • Microsoft Edge: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393
  • Mozilla Firefox: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:53.0) Gecko/20100101 Firefox/53.0
  • Google Chrome: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36

The Bottom Line

Since many websites block requests that are missing a valid or recognizable browser user agent, learning how to properly rotate UAs is key to avoiding site blocks. Using the correct user agent tells your target website that your request came from a valid origin, enabling you to freely collect data from your desired target sites.

Bright Data has developed a fully automated Data Unlocking solution that saves teams time and resources by using machine-learning algorithms to generate site-specific browser user agents and bypass bot-detection systems.