How The ‘Parler Incident’ Highlighted The Social And Democratic Benefits Of Web Data Collection

In another victory for democracy and social order, the rioters who illegally broke into and vandalized the Capitol may be brought to justice thanks to public data collected from the insurrection's collusion phase, gathered before the platform was taken down by Google, Apple, and Amazon.
Nadav Roiter | Data Collection Expert

In this article we will discuss:

  • What happened at the Capitol [recap]
  • How collecting public web data became the final frontier of law and order
  • Summing it up

What happened at the Capitol [recap]

Watching the news lately, many of us saw images of American demonstrators illegally breaking into the Capitol in Washington. But for people concerned with using web scraping as a tool for social justice, that was not the most interesting part of the story.

How collecting public web data became the final frontier of law and order

Many of the rioters actually colluded on this premeditated breach weeks in advance on 'Parler', a platform many people had never even heard of. It became the platform of choice after some social media networks began flagging some of President Trump's posts. Loyalists were looking for a platform with a less stringent content moderation policy, and that is exactly what they found.

But right before tech giants such as Google, Apple, and Amazon took the site down over 'incitement', a 'cyber activist' who goes by the handle @donk_enby on Twitter scraped millions of data points from the platform. These included material published in the weeks leading up to the riots:

  • Photos
  • Videos
  • Posts
[Screenshot: @donk_enby thanking Jared for helping her collect data on Parler]

Image source: Twitter
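Mechanically, an archiving effort like this amounts to walking a paginated public endpoint until the content runs out. The sketch below is a generic, hypothetical paginator in Python (the function names, page structure, and stub data are illustrative assumptions, not Parler's actual API):

```python
from typing import Callable, Iterator

def collect_public_posts(
    fetch_page: Callable[[int], list],
    max_pages: int = 1000,
) -> Iterator[dict]:
    """Walk a paginated public endpoint page by page, stopping at the
    first empty page. fetch_page is whatever performs the HTTP call."""
    for page in range(1, max_pages + 1):
        posts = fetch_page(page)
        if not posts:  # an empty page signals the end of the archive
            return
        yield from posts

# Usage with a stubbed fetcher standing in for the real HTTP call:
fake_pages = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}], 3: []}
posts = list(collect_public_posts(lambda p: fake_pages.get(p, [])))
print(len(posts))  # → 3
```

Separating the pagination loop from the HTTP call keeps the logic testable: the stubbed `fetch_page` above could be swapped for a real request function without touching the loop.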

But why did she feel a need to do this?

Based on her own statements and news reports from across the internet, this cyber activist did not act for personal gain but rather out of a social agenda. She was determined to collect publicly available information about those responsible for the disorderly conduct, which included, among other things, 'metadata'.

Quick definition: Metadata is information associated with a file, such as the type of device used to publish a post, the timestamp of its creation, or the GPS coordinates of where a photo was taken.

Metadata, which is usually stripped out by most web services, was left intact by Parler. This means that law enforcement officials will now be able to more easily place certain suspects at the scene of the crime, and even identify individuals who were masked while committing the offense.
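To make the investigative value concrete: EXIF metadata stores GPS coordinates as degrees, minutes, and seconds plus a hemisphere reference, while mapping tools expect decimal degrees. A minimal, illustrative conversion (not tied to Parler's actual data or any specific forensic tool):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds GPS metadata to
    decimal degrees; 'S' and 'W' hemispheres are negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# The Capitol sits at roughly 38°53'23" N, 77°00'33" W:
lat = dms_to_decimal(38, 53, 23, "N")
lon = dms_to_decimal(77, 0, 33, "W")
print(round(lat, 4), round(lon, 4))  # → 38.8897 -77.0092
```

A single intact GPS tag like this can place a device within meters of a location, which is exactly why most platforms strip it on upload.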

For those concerned about their personal information being compromised, @donk_enby was quick to clarify that only publicly available data was indeed collected from the social network.
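One way a collection pipeline can enforce a public-data-only guarantee is to redact anything that looks like private contact or payment details before a record is archived. The sketch below uses simplified, hypothetical patterns (not what @donk_enby actually used, and far looser than production-grade PII detection):

```python
import re

# Simplified, hypothetical patterns; production PII scrubbing needs
# far more robust rules. The card pattern runs before the phone
# pattern so long digit runs aren't swallowed by the looser rule.
PII_PATTERNS = [
    ("email", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("card", re.compile(r"\b\d(?:[ -]?\d){12,15}\b")),
    ("phone", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def redact_pii(text: str) -> str:
    """Replace email addresses, card-like digit runs, and phone
    numbers with labeled placeholders before a record is archived."""
    for label, pattern in PII_PATTERNS:
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact_pii("Reach me at jane@example.com or 202-555-0147."))
# → Reach me at [email removed] or [phone removed].
```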

[Screenshot: Crash Override clarifying that only publicly available data was collected, and that nothing that wasn't already public was taken]

She reaffirmed that none of the following were scraped:

  • Email addresses
  • Phone numbers
  • Credit card numbers

This further suggests that her efforts were driven not by self-interest but by social justice. Modern culture is actually fairly familiar with this phenomenon and has even coined a term to define it.


The keyword here is ‘accountability’. This is not an isolated incident. Many cases in recent history point to a generation that is conscious of what is going on around them and demands accountability for others’ actions, be they politicians, bankers, or protesters.

Summing it up

‘Parler’ was merely one recent development in a long list of instances in which socially conscious men and women decided to use web data collection as their preferred instrument for achieving internet transparency and equality. As we begin a new decade, I foresee web scraping gaining popularity among individuals with strong value systems who want to fight for social justice. This will ultimately benefit our society at large in the form of driving both economic and political accountability.


Nadav Roiter is a data collection expert at Bright Data. Formerly the Marketing Manager at Subivi eCommerce CRM and Head of Digital Content at Novarize audience intelligence, he now dedicates his time to bringing businesses closer to their goals through the collection of big data.
