- API-based scraper: use our interface to build your API request
- Automation at scale: build your own scheduler to control the frequency
- Delivery: deliver the data to your preferred storage or download it
LinkedIn Scraper
Scrape LinkedIn profiles, posts, companies, and jobs. Collect data such as ID, name, city, position, about, posts, current company, experience, company size, industry, employee profiles, corporate activities, and much more.
- Scrape on demand via API or no-code scraper
- Bulk request handling for up to 5K URLs
- Retrieve results in multiple formats (JSON/CSV)
What is the LinkedIn Scraper API and how does it work?
The LinkedIn Scraper API is a powerful tool designed to automate data extraction from LinkedIn. It allows developers to efficiently gather profiles, company information, job listings, and posts at scale without worrying about blocks, CAPTCHAs, or infrastructure maintenance.
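In practice, a client posts a JSON array of target URLs to a trigger endpoint and receives a job reference back, with results delivered later (webhook, storage, or download). A minimal Python sketch of that flow, using the `requests` library and the people-profiles dataset ID from the examples below; `build_payload` and `trigger_scrape` are illustrative helper names, not part of the API:

```python
import requests

API_BASE = "https://api.brightdata.com/datasets/v3"
PROFILE_DATASET_ID = "gd_l1viktl72bvl7bjuj0"  # people-profiles dataset ID from the examples below

def build_payload(urls):
    """The trigger endpoint expects a JSON array of {"url": ...} objects."""
    return [{"url": u} for u in urls]

def trigger_scrape(api_token, urls, dataset_id=PROFILE_DATASET_ID):
    """Submit a collection run; the JSON response references the job.
    Results arrive separately, depending on the delivery settings."""
    resp = requests.post(
        f"{API_BASE}/trigger",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        params={"dataset_id": dataset_id, "format": "json"},
        json=build_payload(urls),
    )
    resp.raise_for_status()
    return resp.json()
```

Swapping in a different `dataset_id` targets the posts, company, or job scrapers shown later on this page.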
How do I scrape LinkedIn profiles?
Authentication
Authorization: Bearer YOUR_API_TOKEN
Request Parameters
| Parameter | Type | Required | Description | Default |
|---|---|---|---|---|
| url | string | Required | LinkedIn URL to scrape | - |
Example Request
[
{"url": "https://www.linkedin.com/in/elad-moshe-05a90413/"},
{"url": "https://www.linkedin.com/in/jonathan-myrvik-3baa01109"},
{"url": "https://www.linkedin.com/in/aviv-tal-75b81/"},
{"url": "https://www.linkedin.com/in/bulentakar/"},
{"url": "https://www.linkedin.com/in/nnikolaev/"}
]
Request Example
curl -H "Authorization: Bearer API_TOKEN" -H "Content-Type: application/json" -d '[{"url":"https://www.linkedin.com/in/elad-moshe-05a90413/"},{"url":"https://www.linkedin.com/in/jonathan-myrvik-3baa01109"},{"url":"https://www.linkedin.com/in/aviv-tal-75b81/"},{"url":"https://www.linkedin.com/in/bulentakar/"},{"url":"https://www.linkedin.com/in/nnikolaev/"}]' "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json&uncompressed_webhook=true"
Sample JSON Response
[
{
"db_source": "1768849012114",
"timestamp": "2026-01-19",
"id": "ben***-he***ngs*********9",
"name": "Bennet H******s",
"city": "Hamburg, Hamburg, Germany",
"country_code": "DE",
"position": "Studium an der Hochschule Universität Hamburg",
"about": "B.S***Wir***haf*********urw******"
},
{
"db_source": "1768849012114",
"timestamp": "2026-01-19",
"id": "dr-***ihu***aza*********5b",
"name": "Dr w*****n N***r",
"city": "Delhi, India",
"country_code": "IN",
"position": "--",
"about": null
},
{
"db_source": "1768849012114",
"timestamp": "2026-01-19",
"id": "jas***n-p***l-8*********",
"name": "jashmin p***l",
"city": "Ahmedabad, Gujarat, India",
"country_code": "IN",
"position": "Team lead project at Meridian Infotech Ltd",
"about": null
},
{
"db_source": "1768849012114",
"timestamp": "2026-01-19",
"id": "hei***gon***ez-*********a41******",
"name": "Heidy G******z C***z",
"city": "Chihuahua, Mexico",
"country_code": "MX",
"position": "Docente en Universidad Tecmilenio campus Chihuahua",
"about": null
},
{
"db_source": "1768849012114",
"timestamp": "2026-01-19",
"id": "jim***ito***462*********",
"name": "Jimoh I***a",
"city": "Lagos, Lagos State, Nigeria",
"country_code": "NG",
"position": "--",
"about": null
}
]
Sample Data Fields
| Field | Type | Description | Example |
|---|---|---|---|
| db_source | string | Data source ID | 1768849012114 |
| timestamp | string | Scrape timestamp | 2026-01-19 |
| id | string | LinkedIn profile ID | bennet-herrings-9 |
| name | string | Full name | Bennet Herrings |
| city | string | Location | Hamburg, Germany |
| country_code | string | ISO country code | DE |
| position | string | Job title | Student at University |
| about | string | Profile bio | B.S. Wirtschaft... |
| posts | integer | Posts count | 42 |
| current_company | string | Employer | University Hamburg |
| experience | array | Work history | [{...}] |
| education | array | Education | [{...}] |
| skills | array | Skills list | ["Python", ...] |
| connections | integer | Connections | 500+ |
# Trigger the LinkedIn people-profiles scraper
import requests
API_TOKEN = "your_api_token_here"
url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json"
}
payload = [
{"url": "https://www.linkedin.com/in/elad-moshe-05a90413/"},
{"url": "https://www.linkedin.com/in/jonathan-myrvik-3baa01109"},
{"url": "https://www.linkedin.com/in/aviv-tal-75b81/"}
]
response = requests.post(url, headers=headers, json=payload)
print(response.json())
const API_TOKEN = "your_api_token_here";
const url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json";
const payload = [
{ url: "https://www.linkedin.com/in/elad-moshe-05a90413/" },
{ url: "https://www.linkedin.com/in/jonathan-myrvik-3baa01109" },
{ url: "https://www.linkedin.com/in/aviv-tal-75b81/" }
];
fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_TOKEN}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
})
.then(res => res.json())
.then(data => console.log(data));
curl -H "Authorization: Bearer API_TOKEN" \
-H "Content-Type: application/json" \
-d '[{"url":"https://www.linkedin.com/in/elad-moshe-05a90413/"},{"url":"https://www.linkedin.com/in/jonathan-myrvik-3baa01109"},{"url":"https://www.linkedin.com/in/aviv-tal-75b81/"},{"url":"https://www.linkedin.com/in/bulentakar/"},{"url":"https://www.linkedin.com/in/nnikolaev/"}]' \
"https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json&uncompressed_webhook=true"
# Trigger the LinkedIn posts scraper
import requests
API_TOKEN = "your_api_token_here"
url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lyy3tktm25m4avu764&format=json"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json"
}
payload = [
{"url": "https://www.linkedin.com/pulse/ab-test-optimisation-earlier-decisions-new-readout-de-b%C3%A9naz%C3%A9"},
{"url": "https://www.linkedin.com/posts/orlenchner_scrapecon-activity-7180537307521769472-oSYN"},
{"url": "https://www.linkedin.com/posts/karin-dodis_web-data-collection-for-businesses-bright-activity-7176601589682434049-Aakz"}
]
response = requests.post(url, headers=headers, json=payload)
print(response.json())
const API_TOKEN = "your_api_token_here";
const url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lyy3tktm25m4avu764&format=json";
const payload = [
{ url: "https://www.linkedin.com/pulse/ab-test-optimisation-earlier-decisions-new-readout-de-b%C3%A9naz%C3%A9" },
{ url: "https://www.linkedin.com/posts/orlenchner_scrapecon-activity-7180537307521769472-oSYN" },
{ url: "https://www.linkedin.com/posts/karin-dodis_web-data-collection-for-businesses-bright-activity-7176601589682434049-Aakz" }
];
fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_TOKEN}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
})
.then(res => res.json())
.then(data => console.log(data));
curl -H "Authorization: Bearer API_TOKEN" \
-H "Content-Type: application/json" \
-d '[{"url":"https://www.linkedin.com/pulse/ab-test-optimisation-earlier-decisions-new-readout-de-b%C3%A9naz%C3%A9?trk=public_profile_article_view"},{"url":"https://www.linkedin.com/posts/orlenchner_scrapecon-activity-7180537307521769472-oSYN?trk=public_profile"},{"url":"https://www.linkedin.com/posts/karin-dodis_web-data-collection-for-businesses-bright-activity-7176601589682434049-Aakz?trk=public_profile"},{"url":"https://www.linkedin.com/pulse/getting-value-out-sunburst-guillaume-de-b%C3%A9naz%C3%A9?trk=public_profile_article_view"}]' \
"https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lyy3tktm25m4avu764&format=json&uncompressed_webhook=true"
# Trigger the LinkedIn company-information scraper
import requests
API_TOKEN = "your_api_token_here"
url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1vikfnt1wgvvqz95w&format=json"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json"
}
payload = [
{"url": "https://il.linkedin.com/company/ibm"},
{"url": "https://www.linkedin.com/company/figueroa-real-estate/"},
{"url": "https://il.linkedin.com/company/bright-data"}
]
response = requests.post(url, headers=headers, json=payload)
print(response.json())
const API_TOKEN = "your_api_token_here";
const url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1vikfnt1wgvvqz95w&format=json";
const payload = [
{ url: "https://il.linkedin.com/company/ibm" },
{ url: "https://www.linkedin.com/company/figueroa-real-estate/" },
{ url: "https://il.linkedin.com/company/bright-data" }
];
fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_TOKEN}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
})
.then(res => res.json())
.then(data => console.log(data));
curl -H "Authorization: Bearer API_TOKEN" \
-H "Content-Type: application/json" \
-d '[{"url":"https://il.linkedin.com/company/ibm"},{"url":"https://www.linkedin.com/company/figueroa-real-estate/"},{"url":"https://www.linkedin.com/organization-guest/company/the-kraft-heinz-company"},{"url":"https://il.linkedin.com/company/bright-data"}]' \
"https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1vikfnt1wgvvqz95w&format=json&uncompressed_webhook=true"
# Trigger the LinkedIn job-listings scraper
import requests
API_TOKEN = "your_api_token_here"
url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lpfll7v5hcqtkxl6l&format=json"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json"
}
payload = [
{"url": "https://www.linkedin.com/jobs/view/software-engineer-at-epic-3986111804"},
{"url": "https://www.linkedin.com/jobs/view/software-engineer-at-pave-4310512612/"}
]
response = requests.post(url, headers=headers, json=payload)
print(response.json())
const API_TOKEN = "your_api_token_here";
const url = "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lpfll7v5hcqtkxl6l&format=json";
const payload = [
{ url: "https://www.linkedin.com/jobs/view/software-engineer-at-epic-3986111804" },
{ url: "https://www.linkedin.com/jobs/view/software-engineer-at-pave-4310512612/" }
];
fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_TOKEN}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
})
.then(res => res.json())
.then(data => console.log(data));
curl -H "Authorization: Bearer API_TOKEN" \
-H "Content-Type: application/json" \
-d '[{"url":"https://www.linkedin.com/jobs/view/software-engineer-at-epic-3986111804?_l=en"},{"url":"https://www.linkedin.com/jobs/view/software-engineer-at-pave-4310512612/"}]' \
"https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lpfll7v5hcqtkxl6l&format=json&uncompressed_webhook=true"
Benchmark
Proven at 100,000 parallel requests
220+ fields per request
Reliability
- 99.99% uptime
- 24/7 Support
Scale
- 150M+ IPs
- Bulk requests of up to 5K URLs
Integration
- Delivery: S3, Snowflake, Azure, Webhook
- Results in JSON, CSV, or NDJSON
Ready to see these results in your environment?
Trusted by 20,000+ customers worldwide
Available LinkedIn scrapers
LinkedIn people profiles
LinkedIn company information
LinkedIn job listings information
LinkedIn job listings information - Discover new jobs by keyword
LinkedIn job listings information - Discover jobs by company URL
LinkedIn posts
LinkedIn posts - Discover a user's articles by URL
LinkedIn posts - Discover posts by profile URL
LinkedIn posts - Discover new posts by company URL
Just want data? Skip scraping.
Purchase a LinkedIn dataset
Get structured LinkedIn data delivered to your database or AI tool. No proxy management. No parsing.

API for Seamless Web Data Access
Comprehensive, Scalable, and Compliant Web Data Extraction
Tailored to your workflow
Get structured data in JSON, NDJSON, or CSV files through Webhook or API delivery.
Built-in infrastructure and unblocking
Get maximum control and flexibility without maintaining proxy and unblocking infrastructure. Easily scrape data from any geo-location while avoiding CAPTCHAs and blocks.
Battle-proven infrastructure
Bright Data’s platform powers 20,000+ companies worldwide, offering peace of mind with 99.99% uptime and access to 150M+ real user IPs covering 195 countries.
Industry leading compliance
Our privacy practices comply with data protection laws, including the EU data protection regulatory framework, GDPR, and CCPA.
Effortlessly scrape LinkedIn data
Why 20,000+ Customers Choose Bright Data
100% Compliant
24/7 Global Support
Complete Data Coverage
Unmatched Data Quality
Powerful Infrastructure
Custom Solutions
Bright Data is used by the world's top brands
LinkedIn Scraper use cases
Effective competitive analysis
Market trends & growth
B2B company data
LinkedIn Scraper FAQs
What is the LinkedIn Scraper API?
The LinkedIn Scraper API is a powerful tool designed to automate data extraction from the LinkedIn website, allowing users to efficiently gather and process large volumes of data for various use cases.
How does the LinkedIn Scraper API work?
The LinkedIn Scraper API works by sending automated requests to the LinkedIn website, extracting the necessary data points, and delivering them in a structured format. This process ensures accurate and quick data collection.
What data points can be collected with the LinkedIn Scraper API?
The data points that can be collected with the LinkedIn Scraper API include user profiles, job listings, company information, connections, and other relevant data.
Is the LinkedIn Scraper API compliant with data protection regulations?
Yes, the LinkedIn Scraper API is designed to comply with data protection regulations, including GDPR and CCPA. It ensures that all data collection activities are performed ethically and legally.
How can I integrate the LinkedIn Scraper API with my existing systems?
The LinkedIn Scraper API integrates seamlessly with various platforms and tools. You can plug it into your existing data pipelines, CRM systems, or analytics tools to improve your data processing capabilities.
What are the usage limits for the LinkedIn Scraper API?
There are no specific usage limits for the LinkedIn Scraper API, offering you the flexibility to scale as needed. Prices start from $0.001 per record, ensuring cost-effective scalability for your web scraping projects.
Do you provide support for the LinkedIn Scraper API?
Yes, we offer dedicated support for the LinkedIn Scraper API. Our support team is available 24/7 to assist you with any questions or issues you may encounter while using the API.
What delivery methods are available?
Amazon S3, Google Cloud Storage, Google PubSub, Microsoft Azure Storage, Snowflake, and SFTP.
What file formats are available?
JSON, NDJSON, JSON lines, CSV, and .gz files (compressed).
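NDJSON (JSON lines) delivery writes one JSON record per line, and .gz delivery gzip-compresses that stream. A minimal sketch of parsing such a result file locally, using only the standard library (the filename and helper name are hypothetical):

```python
import gzip
import json

def read_ndjson_gz(path):
    """Yield one parsed record per line from a gzip-compressed NDJSON file."""
    with gzip.open(path, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # tolerate blank lines between records
                yield json.loads(line)

# Example usage (hypothetical file):
# records = list(read_ndjson_gz("linkedin_profiles.ndjson.gz"))
```

Streaming line by line keeps memory flat even for large bulk results, which is the main reason to prefer NDJSON over a single JSON array.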
Can I automate LinkedIn crawling with your LinkedIn scraper?
Yes, you can automate LinkedIn crawling using our LinkedIn crawler. The LinkedIn Scraper API lets you schedule recurring LinkedIn crawls, so you can automatically collect the latest profile, company, post, or job data at intervals you choose. This approach makes LinkedIn crawling efficient for lead generation, business intelligence, and ongoing data monitoring, all without manual effort.
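A client-side version of that scheduling can be sketched as a loop that re-triggers the same collection run on a fixed interval. The interval, dataset ID, and helper names below are illustrative assumptions; recurring crawls can also be scheduled on the platform side:

```python
import time

import requests

API_TOKEN = "your_api_token_here"
TRIGGER_URL = "https://api.brightdata.com/datasets/v3/trigger"
CRAWL_INTERVAL_SECONDS = 24 * 60 * 60  # once a day; adjust to your monitoring needs

def run_crawl(urls, dataset_id="gd_l1viktl72bvl7bjuj0"):
    """Trigger one collection run for the given LinkedIn URLs."""
    resp = requests.post(
        TRIGGER_URL,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        params={"dataset_id": dataset_id, "format": "json"},
        json=[{"url": u} for u in urls],
    )
    resp.raise_for_status()
    return resp.json()

def crawl_forever(urls):
    """Re-trigger the same crawl on a fixed interval."""
    while True:
        run_crawl(urls)
        time.sleep(CRAWL_INTERVAL_SECONDS)
```

In production you would typically replace the sleep loop with a cron job or workflow scheduler, keeping `run_crawl` as the single entry point.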