How Web Unlocker is enabling better fingerprinting, auto-unlocking, and CAPTCHA-solving

From customized Transport Layer Security (TLS) handshakes at the network level and User-Agent generation at the protocol level, to complete cookie management and browser fingerprint emulation at the browser level, Web Unlocker takes unblocking to the next level.
Aviv Besinsky | Product Manager

Web Unlocker was formerly called Web Unblocker. For businesses wondering what improvements were made to the product in the process, this article will focus on Web Unlocker’s:

Better auto-unlocking capabilities 

Web Unlocker’s environment emulation capabilities include: 

  • At the network level, Web Unlocker successfully handles everything from IP type selection to IP rotation (when necessary).
  • At the protocol level, Web Unlocker carries out HTTP header management more effectively, both when decoding received requests and when encoding sent responses. It also has better User-Agent generation capabilities, meaning it can handle unique browser-fingerprint properties while letting you restrict the generated User-Agents to match target-site requirements. Lastly, Web Unlocker now supports HTTP/2, so it can more effectively handle HTTP header field compression as well as server push.
  • As far as hardware (HW) and operating systems (OS) are concerned, Web Unlocker can emulate all devices attached to a given system along with their corresponding drivers (i.e., full device enumeration imitation), as well as mouse movements, screen resolution, and various other device properties.
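To make the protocol-level idea concrete, here is a minimal sketch (not Web Unlocker’s actual API; the pool and function names are illustrative) of generating User-Agents while restricting them to a target site’s requirements:

```python
import random

# Hypothetical pool of realistic User-Agent strings; a production system
# would derive these from real browser release data.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

def pick_user_agent(required_substrings=()):
    """Pick a User-Agent at random, restricted to those matching the
    target site's requirements (e.g. the site only serves Chrome)."""
    candidates = [ua for ua in USER_AGENTS
                  if all(s in ua for s in required_substrings)]
    if not candidates:
        raise ValueError("no User-Agent satisfies the target's requirements")
    return random.choice(candidates)

# A site that only accepts Chrome on Windows:
ua = pick_user_agent(required_substrings=("Chrome", "Windows"))
```

The restriction step is the important part: sending a User-Agent the target site never serves is itself a fingerprinting signal.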

Improved CAPTCHA recognition and solving

Web Unlocker has better request management capabilities. It can now adapt in real time to new blockades set up by target sites, using Machine Learning (ML), improved CAPTCHA solving, and retry logic to find the most efficient data-retrieval path in terms of both time and resources.

The algorithm that powers Web Unlocker is trained to use ideal fingerprint configurations when attempting to retrieve desired responses and information from target sites. The website may respond, for example, by:

  • Categorizing activity as ‘suspicious’ based on the number of requests from a specific IP (i.e. ‘rate limitations’) 
  • Detecting ‘unacceptable’ User-Agents (such as ‘HTTP’)
  • Or blocking requests based on geolocation 

Web Unlocker analyzes this information in real time and recalibrates its settings on a per-domain basis, using specific, customized configurations that give users the highest success rates.
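A simplified sketch of per-domain recalibration might look like this (the settings store, signal names, and adjustments are hypothetical, chosen to mirror the block signals listed above):

```python
# Hypothetical per-domain settings store: the unlocking layer reacts to
# block signals by updating only the offending domain's configuration.
DEFAULTS = {"ip_type": "datacenter", "country": None, "rate_per_min": 60}

domain_settings = {}

def settings_for(domain):
    """Return (and lazily create) the settings for one domain."""
    return domain_settings.setdefault(domain, dict(DEFAULTS))

def recalibrate(domain, block_signal):
    """Adjust a single domain's settings based on how it blocked us."""
    cfg = settings_for(domain)
    if block_signal == "rate_limited":
        cfg["rate_per_min"] = max(1, cfg["rate_per_min"] // 2)  # slow down
        cfg["ip_type"] = "residential"                          # switch IP pool
    elif block_signal == "geo_blocked":
        cfg["country"] = "us"   # assumed allowed region, for illustration
    return cfg

recalibrate("example.com", "rate_limited")
```

The key property is isolation: tightening the configuration for one hostile domain leaves every other domain running at its defaults.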

Improved fingerprinting 

‘Device fingerprints’ are unique identifiers for devices browsing the web. A fingerprint contains information about the configuration of a user’s browser as well as its software/hardware environment. Getting fingerprinting right is probably the most important aspect of unblocking, as it solves the root of the issue rather than a derivative ‘symptom’ such as a reCAPTCHA.

In that instance, you can ‘easily’, albeit manually, change your IP and start a new session, or alternatively trigger the ‘HTML body element’ rule.

Web Unlocker completely automates all of this including fingerprinting adaptation to target sites. At the browser level it performs browser fingerprint emulation including:

  • Plugins and fonts
  • Canvas/WebGL fingerprints
  • Headers
  • Zombie and sync cookies
  • WebRTC
  • Web Audio API fingerprints
  • Cookie management
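To make ‘fingerprint emulation’ concrete, the sketch below (a hypothetical structure, not Web Unlocker’s internals) bundles several of the properties listed above into one profile and checks that they tell a consistent story, since a mismatch between, say, the User-Agent and the claimed platform is exactly what anti-bot systems look for:

```python
from dataclasses import dataclass, field

@dataclass
class BrowserFingerprint:
    """One coherent browser identity: every property must agree with
    the others, or anti-bot systems will flag the mismatch."""
    user_agent: str
    platform: str                      # value scripts see in navigator.platform
    fonts: list = field(default_factory=list)
    canvas_hash: str = ""
    webgl_vendor: str = ""
    webrtc_local_ip: str = ""

def is_consistent(fp):
    """Minimal sanity check: the OS claimed by navigator.platform must
    also appear in the User-Agent string."""
    platform_to_os = {"Win32": "Windows", "MacIntel": "Macintosh",
                      "Linux x86_64": "Linux"}
    return platform_to_os.get(fp.platform, "") in fp.user_agent

fp = BrowserFingerprint(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
               "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    platform="Win32",
    fonts=["Arial", "Segoe UI"],
    webgl_vendor="Google Inc. (NVIDIA)",
)
```

A real emulation layer checks far more cross-property invariants (fonts vs. OS, WebGL vendor vs. GPU, timezone vs. IP geolocation); the single check here just illustrates the principle.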

Web Unlocker also handles:

  • Disabling target-device fingerprinting
  • Real-time debugging

The bottom line 

‘Unblocker’ has been renamed ‘Web Unlocker’, utilizing the same technology and algorithmic approach. Yes, Web Unlocker is the same product as ‘Web Unblocker’, but it has been upgraded in both the resolution and the span of its environment emulation on all levels: network, protocol, hardware/operating system, and browser. Its retry logic, CAPTCHA solving, and fingerprinting capabilities have also been taken one step further, putting it in a league of its own.

Aviv Besinsky | Product Manager

Aviv is a lead product manager at Bright Data. He has been a driving force in taking data collection technology to the next level - developing technological solutions in the realms of data unblocking, static proxy networks, and more. Sharing his data crawling know-how is one of his many passions.
