Sidestepping supply chain challenges using public web data

We sat down with David Amézquita, product manager at Container xChange – a rapidly growing company in the supply chain space – to gain a bit of perspective on how his organization uses public web data to help companies be more successful in navigating the supply chain and minimizing the impact disruptions have on operations.
Zachary Keyser | Global Communications Manager

Perennial Disruption

Supply chains are, once again, capturing headlines across the world. Notably, within the past few years, global supply chains have fallen into a steady state of uncertainty, stemming from a mix of rising trade tensions, coronavirus lockdowns, and sweeping border closures that at one point nearly brought movement between countries to a standstill.

While clearly an unfortunate set of events, the pandemic provided a much-needed jolt to the systemic makeup of the supply chain, which for so long had relied on outdated, manual processes that made it nearly impossible to foresee, let alone avoid, difficulties on the horizon until it was too late.

This long-standing “wait and see” approach, little changed from shipping practices of centuries past, put a serious strain on the procurement of both essential and non-essential items in the early months of the coronavirus pandemic, leaving the private and public sectors helpless and unprepared to take the challenges of the “new normal” head-on.

Learning from past mistakes, however, companies are now using public web data strategies to monitor the lead-up to, predict, and, when applied properly, even sidestep impending supply chain challenges, securing a market advantage over the competition even amid the deepest points of confusion.

Where Are We Now?

Now, although the most recent health crisis opened the world’s eyes to the pitfalls of an old-fashioned, cross-border-dependent supply chain, disruptions within this delicate network are nothing new.

According to McKinsey research, supply chain disruptions lasting a month or longer can be expected every 3.7 years on average, triggered by climate-related events, geopolitical tensions such as wars or trade disputes, recessions, viral outbreaks, and the like.

And although we are still feeling the residual effects of the ongoing health crisis, the war in Ukraine has put new strain on the supply of raw materials such as metals, natural gas, and agricultural products, driving up food prices, energy costs, and shipping rates in the immediate aftermath of the Russian invasion.

So, as we have clearly seen, supply chain disruptions arrive rapidly and with little to no warning. But that does not mean preparing for them is a lost cause.

Amézquita, a product manager at Container xChange — one of the top ten global logistics tech companies as well as the world’s fastest-growing neutral marketplace for container leasing and trading — shared his perspectives on how companies can use web data to ensure they stay ahead of the supply chain and identify disruptions before they even happen.

But let’s first run through the basics.

Public Web Data Logistics

There is a swath of publicly available web data that companies can use to assess the health of their own supply chains.

Some of the more traditional sources for accomplishing this include:

  • Satellite imagery data: Used to track production at key manufacturing plants, the progression of climate change, troop movements, new construction, or the flow of containers entering and leaving major ports around the world. 
  • Geo-location data: Used to track anything from the movement of container cargo ships to the spread of the novel coronavirus. This form of data allows companies to better predict logistical delays or follow imminent disruptions that could be caused by a slow-down or outbreak in a specific region of the world.
  • Competitor data: By tracking the availability of competitor stock, organizations can identify current supply chain shortages, find inventory gaps to exploit, spot opportunities to raise prices on certain products, or identify other avenues to secure stock.
  • Relationship data: Used to identify where there are supply chain dependencies, helping businesses better diversify their supplier portfolios as well as allowing investors to identify which companies are dependent on receiving products from a specific region of the world.
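To make the competitor-data idea above concrete, here is a minimal Python sketch: it classifies scraped product-page HTML by common availability phrases and flags the products a rival cannot currently supply. The function names, marker phrases, SKUs, and HTML snippets are illustrative assumptions, not part of any real platform.

```python
def stock_status(product_html: str) -> str:
    """Classify a scraped product page as 'in_stock' or 'out_of_stock'
    by scanning for common availability phrases (illustrative list)."""
    text = product_html.lower()
    out_markers = ("out of stock", "currently unavailable", "sold out")
    return "out_of_stock" if any(m in text for m in out_markers) else "in_stock"


def find_supply_gaps(pages: dict) -> list:
    """Return the SKUs a competitor cannot currently supply --
    candidate inventory gaps to exploit."""
    return [sku for sku, html in pages.items()
            if stock_status(html) == "out_of_stock"]


# Hypothetical product pages keyed by SKU, as a scraper might return them
pages = {
    "40ft-hc": "<p>Sold out</p>",
    "20ft-dv": "<p>In stock: 12 units</p>",
}
```

In practice the HTML would come from a scraping pipeline; the classification step stays the same regardless of how the pages are collected.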

21st Century Shipping

While the logistical pieces that hold the global economy together can be unpredictable, companies like Container xChange are helping customers navigate the intricacies of the supply chain, giving the shipping industry a much-needed update by simplifying many of the outdated processes that held it back for so long and making them easily accessible on a single online platform.

Up until recently, the shipping industry could be described as a traditional and manual marketplace. Only within the past few years have carriers begun introducing online or electronic Bills of Lading, invoices, tracking, scheduling, etc. to consolidate and automate many of the administrative tasks involved in the import and export business.

“xChange is a tech company that is disrupting an old industry,” said Amézquita. “Our 1,000+ customers used to do everything offline, and with xChange they can buy, lease and sell containers, access their documentation and all their processes in one place.”


“There was an estimation that to move one container, you needed to send around 180 emails,” Amézquita added. “We are automating, and speeding up that process by having a platform where they can do everything online.”

The company is further revolutionizing the shipping sector by using public web data to gather insights on container moves, showing us that every single industry — even the more “traditional” ones — can be transformed using data.

The platform itself collects public web data to identify all the points of interaction for client containers: What they’re being used for, where they’ve been, where they’re going and where they will eventually end up. 

This allows customers to identify the balance of containers in a specific location, which is particularly useful to determine if there are any lulls or slow-downs in shipping within a certain region of the world.
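The article does not describe xChange’s internals, but the balance idea can be sketched with simple arithmetic: treat each recorded move as a departure from one location and an arrival at another, then sum per location. A persistently negative total flags a region draining containers. All records below are invented.

```python
from collections import Counter


def container_balance(moves):
    """Net container balance per location from a stream of
    (container_id, origin, destination) move records:
    each departure counts -1 at the origin, each arrival +1
    at the destination."""
    balance = Counter()
    for _cid, origin, destination in moves:
        balance[origin] -= 1
        balance[destination] += 1
    return balance


# Hypothetical move records
moves = [
    ("C1", "Shanghai", "Hamburg"),
    ("C2", "Shanghai", "Rotterdam"),
    ("C3", "Hamburg", "Shanghai"),
]
```

Here Shanghai nets out negative (two departures against one arrival), which is exactly the kind of imbalance a customer would want to see before it turns into an equipment shortage.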

Furthermore, businesses would be uniquely positioned to change, or diversify, stock or suppliers before disruptions begin to affect sales and operations, while exporters could suspend orders or reroute their ships before discovering that their cargo cannot be unloaded at port on time.

Supply Chain Data, Data, Everywhere

With so much of the industry now moving online, new web data points are created every day within the shipping sector, providing more real-time visibility into the movements of the supply chain.

Platforms like Container xChange use public web data to provide market transparency and to help their 1,000+ customers, including leasing companies, shipping companies, freight forwarders, traders, and NVOs, avoid demurrage and detention charges and enhance operational efficiency and flexibility.

“xChange wants to be the single source of truth for container logistics, and this is impossible without public web data,” said Amézquita. “We want to collect more and more data to be able to provide our customers with the most reliable data possible.”

Public web data also allows xChange customers to decide which products are most beneficial to ship at the moment and what prices to pay for containers or shipping. It lets them trace the delivery of their goods or containers from point A to point B, and the same methodology can be applied to their competitors.

And by integrating historical data into its analysis, using data on container moves collected over the last few years, xChange is able to predict, in real time, how the supply chain might appear or behave in specific locations around the world within the coming months, weeks, or even days.
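xChange’s actual forecasting method is not disclosed, so as a deliberately naive baseline one could project next period’s container moves from a trailing average of historical counts. All figures here are invented for illustration.

```python
def moving_average_forecast(weekly_moves, window=4):
    """Forecast next week's container moves as the mean of the
    last `window` observations -- a simple baseline, not a model
    any real platform is known to use."""
    recent = weekly_moves[-window:]
    return sum(recent) / len(recent)


# Hypothetical weekly move counts for one location
history = [100, 120, 110, 130, 140]
```

A production system would replace this with something that captures trend and seasonality, but even a baseline like this makes week-over-week anomalies visible.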

“From the beginning, we have been using web data, but we wanted to scale,” said Amézquita. “We wanted to get more information from the web, and as we started to search for providers, we found Bright Data, which has been the company that has supported us to be more and more successful, in terms of the data quality that we are getting.

“Bright Data allows us to focus on what is important. First, exploring new data sources, and second, focusing on developing new tools and insights for our customers, and exploring new opportunities that we didn’t have.”


Zachary Keyser is the Global Communications Manager for Bright Data. Previously a journalist and editor for a leading international newspaper, his topics of coverage centered around hi tech, med-tech, fintech and AI, as well as emerging technologies focused on business innovation. He now dedicates his time working to bring transparency to the Internet and helping companies leverage the use of public web data to drive their business goals home.
