
How to Scrape Bing Search Results using Python (2025 Tutorial)

Aadi
Founding Engineer
Bing search results contain valuable data including product listings, articles, Copilot AI answers, and competitor insights. Scraping Bing SERPs lets you analyze top-ranking pages, extract SEO insights, and make data-driven decisions for your business.
With Microsoft retiring its official Bing Search API on August 11, 2025, developers need reliable alternatives to extract this data programmatically.
Does Bing Allow Scraping?
Scraping publicly available data is generally legal when done responsibly. We recommend consulting legal counsel before engaging in any scraping activities. Bing search results are publicly available and can generally be scraped, provided you comply with applicable terms and conditions.
Difficulties with Scraping Bing
Bing is particularly effective at detecting automated requests, leading to IP bans and CAPTCHA challenges. The platform uses:
- Advanced bot detection that identifies scraping patterns
- Dynamic JavaScript content requiring full browser rendering
- Rate limiting that blocks excessive requests
- Complex HTML structures that change frequently
These challenges make consistent Bing scraping extremely difficult without proper tools.
Setting Up the Project Environment
We'll use Python for this tutorial. If you haven't installed Python, download it from the official website. Install the required dependencies:
pip install requests pandas
This installs the requests library for API interactions and pandas for data processing.
Olostep API Query Parameters
Olostep's Bing Search API operates with simple parameters that handle all the complexity of Bing scraping automatically.
Required Parameters
For scraping Bing search results, you need three essential parameters:
payload = {
    "url_to_scrape": "https://www.bing.com/search?q=your+search+query",
    "parser": {"id": "@olostep/bing-search"},
    "formats": ["json"]
}
Optional Parameters
Enhance your scraping with additional parameters:
payload = {
    "url_to_scrape": "https://www.bing.com/search?q=your+search+query",
    "parser": {"id": "@olostep/bing-search"},
    "formats": ["json"],
    "wait_before_scraping": 4000,  # Wait for Copilot answers to load
    "country": "US"  # Geographic targeting
}
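When building url_to_scrape values yourself, it is safer to let Python's standard library encode the query than to substitute characters by hand; special characters such as # or & would otherwise break the URL. A minimal sketch (the helper name is our own, not part of any API):

```python
from urllib.parse import urlencode

def bing_search_url(query: str) -> str:
    """Build a Bing search URL with a safely encoded query string."""
    return "https://www.bing.com/search?" + urlencode({"q": query})

print(bing_search_url("web scraping tools"))
# https://www.bing.com/search?q=web+scraping+tools
```

urlencode handles spaces, reserved characters, and non-ASCII text in one place, so the same helper works for any keyword you feed it.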
Scraping Bing SERPs for Any Keyword
Let's write a Python script to extract structured data from Bing search results.
1. Load Necessary Python Packages
import requests
import pandas as pd
import json
import time  # used later for delays between requests
2. Set Up API Configuration
API_URL = "https://api.olostep.com/v1/scrapes"
API_KEY = "your-api-key-here" # Get free API key at olostep.com
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}
3. Prepare the Payload
payload = {
    "url_to_scrape": "https://www.bing.com/search?q=web+scraping+tools",
    "parser": {"id": "@olostep/bing-search"},
    "formats": ["json"],
    "wait_before_scraping": 4000
}
4. Send Request to Olostep API
response = requests.post(API_URL, headers=headers, json=payload)
print(f"Status Code: {response.status_code}")
If the request is successful, you'll see status code 200. Other codes indicate errors; check your API key and payload format.
5. Extract and Process the Data
if response.status_code == 200:
    data = response.json()

    # Extract different components
    search_results = data.get("search_results", [])
    copilot_answer = data.get("copilot_answer", {})
    people_also_ask = data.get("people_also_ask", [])
    related_searches = data.get("related_searches", [])

    print(f"Found {len(search_results)} search results")
    print(f"Copilot answer: {'Yes' if copilot_answer else 'No'}")
    print(f"People Also Ask questions: {len(people_also_ask)}")
else:
    print(f"Error: {response.status_code}")
    print(response.text)
6. Save Results to JSON File
# Save complete results
with open("bing_search_results.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2, ensure_ascii=False)

# Save organic results to CSV
if search_results:
    df = pd.DataFrame(search_results)
    df.to_csv("bing_organic_results.csv", index=False)
    print("Results saved to bing_organic_results.csv")
Understanding the Response Structure
Olostep's Bing parser returns comprehensive structured data:
{
  "search_parameters": {
    "q": "web scraping tools",
    "device": "desktop",
    "engine": "bing"
  },
  "copilot_answer": {
    "answer": "Web scraping tools are software applications...",
    "sources": [
      {
        "title": "Best Web Scraping Tools",
        "link": "https://example.com/tools",
        "source_name": "TechGuide"
      }
    ]
  },
  "search_results": [
    {
      "title": "Top 10 Web Scraping Tools for 2025",
      "link": "https://example.com/top-tools",
      "snippet": "Comprehensive guide to the best web scraping tools...",
      "position": 1,
      "type": "organic"
    }
  ],
  "people_also_ask": [
    {
      "question": "What are the best web scraping tools?",
      "answer": "Popular web scraping tools include...",
      "source": {
        "title": "Web Scraping Guide",
        "link": "https://example.com/guide"
      }
    }
  ],
  "related_searches": [
    {
      "query": "python web scraping",
      "link": "https://www.bing.com/search?q=python+web+scraping"
    }
  ]
}
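Once you know the shape of the response, a small helper can reduce a parsed response dict to headline numbers. The field names below come from the sample structure above, so verify them against your actual responses; the helper itself is our own sketch:

```python
def summarize_serp(data: dict) -> dict:
    """Condense a parsed Bing SERP response into headline counts."""
    return {
        "query": data.get("search_parameters", {}).get("q"),
        "organic_count": sum(
            1 for r in data.get("search_results", [])
            if r.get("type") == "organic"
        ),
        "has_copilot": bool(data.get("copilot_answer")),
        "paa_count": len(data.get("people_also_ask", [])),
        "related": [r.get("query") for r in data.get("related_searches", [])],
    }
```

Using .get with defaults everywhere means the summary still works when a response is missing a section, such as a query with no Copilot answer.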
Advanced Scraping: Multiple Keywords
To scrape multiple keywords efficiently:
from urllib.parse import quote_plus

def scrape_bing_keyword(keyword):
    """Scrape Bing results for a single keyword"""
    # quote_plus safely encodes spaces and special characters in the query
    url = f"https://www.bing.com/search?q={quote_plus(keyword)}"
    payload = {
        "url_to_scrape": url,
        "parser": {"id": "@olostep/bing-search"},
        "formats": ["json"],
        "wait_before_scraping": 4000
    }
    response = requests.post(API_URL, headers=headers, json=payload)
    if response.status_code == 200:
        return response.json()
    else:
        print(f"Error scraping {keyword}: {response.status_code}")
        return None

# Scrape multiple keywords
keywords = ["seo tools", "keyword research", "content marketing"]
all_results = []

for keyword in keywords:
    print(f"Scraping: {keyword}")
    result = scrape_bing_keyword(keyword)
    if result:
        result["keyword"] = keyword
        all_results.append(result)
    # Add delay between requests
    time.sleep(2)

# Save all results
with open("multiple_keywords_results.json", "w") as f:
    json.dump(all_results, f, indent=2)
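To compare keywords side by side, the collected responses can be flattened into one row per organic result. The field names match the response structure shown earlier; the helper is our own sketch, and the rows it returns drop straight into pandas.DataFrame for the same CSV export used above:

```python
def results_to_rows(all_results, top_n=3):
    """Flatten per-keyword SERP responses into one row per organic result."""
    rows = []
    for result in all_results:
        organic = [r for r in result.get("search_results", [])
                   if r.get("type") == "organic"]
        for entry in organic[:top_n]:
            rows.append({
                "keyword": result.get("keyword"),
                "position": entry.get("position"),
                "title": entry.get("title"),
                "link": entry.get("link"),
            })
    return rows

# e.g. pd.DataFrame(results_to_rows(all_results)).to_csv("keyword_comparison.csv", index=False)
```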
Batch Processing for Scale
For large-scale scraping, use Olostep's batch processing endpoint:
# Prepare multiple URLs
urls = [
"https://www.bing.com/search?q=web+scraping",
"https://www.bing.com/search?q=data+extraction",
"https://www.bing.com/search?q=api+tools"
]
batch_payload = {
"urls": urls,
"parser": {"id": "@olostep/bing-search"},
"formats": ["json"]
}
# Send batch request
batch_response = requests.post(
"https://api.olostep.com/v1/batches",
headers=headers,
json=batch_payload
)
print(f"Batch job submitted: {batch_response.status_code}")
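Batch jobs complete asynchronously, so in practice you poll for status before fetching the results. The endpoint path and the "status"/"completed" field values below are assumptions for illustration only; confirm the exact names in Olostep's batch documentation before relying on them:

```python
import time
import requests

# NOTE: assumed endpoint shape for illustration; verify against Olostep's docs.
def batch_status_url(batch_id):
    return f"https://api.olostep.com/v1/batches/{batch_id}"

def wait_for_batch(batch_id, headers, poll_interval=5, timeout=300):
    """Poll a batch job until it reports completion or the timeout passes."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(batch_status_url(batch_id), headers=headers)
        # "status" == "completed" is an assumed response field
        if resp.status_code == 200 and resp.json().get("status") == "completed":
            return resp.json()
        time.sleep(poll_interval)
    raise TimeoutError(f"Batch {batch_id} did not complete within {timeout}s")
```

Exponential backoff (doubling poll_interval on each pass) is a reasonable refinement for long-running batches.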
Extracting Specific Data Types
Extract Only Organic Results
def extract_organic_results(data):
    """Extract only organic search results"""
    organic_results = []
    for result in data.get("search_results", []):
        if result.get("type") == "organic":
            organic_results.append({
                "title": result.get("title"),
                "url": result.get("link"),
                "snippet": result.get("snippet"),
                "position": result.get("position")
            })
    return organic_results
Extract Copilot Insights
def extract_copilot_data(data):
    """Extract Copilot AI-generated content"""
    copilot = data.get("copilot_answer", {})
    if copilot:
        return {
            "answer": copilot.get("answer"),
            "source_count": len(copilot.get("sources", [])),
            "sources": [s.get("source_name") for s in copilot.get("sources", [])]
        }
    return None
Error Handling and Best Practices
Robust Error Handling
def scrape_with_retry(url, max_retries=3):
    """Scrape with automatic retry logic"""
    for attempt in range(max_retries):
        try:
            payload = {
                "url_to_scrape": url,
                "parser": {"id": "@olostep/bing-search"},
                "formats": ["json"]
            }
            response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:  # Rate limited
                wait_time = 2 ** attempt
                print(f"Rate limited. Waiting {wait_time} seconds...")
                time.sleep(wait_time)
            else:
                print(f"Error {response.status_code}: {response.text}")
                return None
        except requests.exceptions.RequestException as e:
            print(f"Request failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
    return None
Rate Limiting
import time

class RateLimiter:
    def __init__(self, requests_per_minute=30):
        self.requests_per_minute = requests_per_minute
        self.min_interval = 60.0 / requests_per_minute
        self.last_request_time = 0

    def wait_if_needed(self):
        current_time = time.time()
        time_since_last = current_time - self.last_request_time
        if time_since_last < self.min_interval:
            sleep_time = self.min_interval - time_since_last
            time.sleep(sleep_time)
        self.last_request_time = time.time()

# Usage
limiter = RateLimiter(requests_per_minute=30)
for keyword in keywords:
    limiter.wait_if_needed()
    result = scrape_bing_keyword(keyword)
Comparing Olostep vs Traditional Scraping
| Feature | Traditional Scraping | Olostep API |
|---|---|---|
| Setup Time | Days/weeks | Minutes |
| Maintenance | Constant updates needed | Zero maintenance |
| Success Rate | 60-80% | 99%+ |
| CAPTCHA Handling | Manual intervention | Automatic |
| Copilot Extraction | Nearly impossible | Built-in |
| Structured Output | Custom parsing required | Ready-to-use JSON |
| Cost | High development + infrastructure | $0.0005 per request |
Conclusion
Scraping Bing search results provides valuable insights for SEO, competitive analysis, and market research. While traditional scraping methods face significant challenges with Bing's anti-bot systems, Olostep's specialized Bing Search API offers a reliable, cost-effective solution.
The API extracts complete SERP data including Copilot answers, People Also Ask questions, and all search result types in clean, structured JSON format. With 99%+ success rates and pricing starting at just $0.0005 per request, it's the most efficient way to access Bing search data.
Ready to start scraping Bing? Get your free API key and extract your first 1,000 search results at no cost.
If you encounter any issues or have questions, contact our 24/7 support team at info@olostep.com.