Send custom headers to proxies and receive proxy response headers in Scrapy.
When making HTTPS requests through a proxy, stock Scrapy cannot send custom headers to the proxy itself. HTTPS requests travel through an encrypted tunnel (established with HTTP CONNECT), so any headers you add to `request.headers` are encrypted and visible only to the destination server, never to the proxy.
```
┌──────────┐      CONNECT      ┌───────┐     Encrypted      ┌────────────┐
│  Scrapy  │ ────────────────► │ Proxy │ ═════════════════► │ Target URL │
└──────────┘   (unencrypted)   └───────┘      (tunnel)      └────────────┘
      │                                            │
Proxy headers                              request.headers
   go HERE                                go here (encrypted)
```
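To make the failure mode concrete, here is a minimal sketch (the proxy URL is a placeholder) of the naive approach that does not work: the header below travels inside the encrypted tunnel, so the proxy never sees it.

```python
import scrapy

class NaiveSpider(scrapy.Spider):
    name = "naive"

    def start_requests(self):
        # With stock Scrapy, this header is sent inside the TLS tunnel:
        # only the destination server can read it; the proxy cannot.
        yield scrapy.Request(
            url="https://example.com",
            meta={"proxy": "http://your-proxy:port"},
            headers={"X-ProxyMesh-Country": "US"},  # never reaches the proxy
        )
```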
This extension solves the problem by:
- Sending custom headers to the proxy during the CONNECT handshake
- Capturing response headers from the proxy's CONNECT response
- Making those headers available in your spider
Install with pip:

```bash
pip install scrapy-proxy-headers
```

Then enable the download handler in your Scrapy `settings.py`:
```python
DOWNLOAD_HANDLERS = {
    "https": "scrapy_proxy_headers.HTTP11ProxyDownloadHandler"
}
```

Or in your spider's `custom_settings`:
```python
class MySpider(scrapy.Spider):
    custom_settings = {
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_proxy_headers.HTTP11ProxyDownloadHandler"
        }
    }
```

Use `request.meta["proxy_headers"]` to send headers to the proxy:
```python
import scrapy

class MySpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            url="https://api.ipify.org?format=json",
            meta={
                "proxy": "http://your-proxy:port",
                "proxy_headers": {"X-ProxyMesh-Country": "US"}
            }
        )

    def parse(self, response):
        # Proxy response headers are available in response.headers
        proxy_ip = response.headers.get("X-ProxyMesh-IP")
        self.logger.info(f"Proxy IP: {proxy_ip}")
```

Headers from the proxy's CONNECT response are automatically merged into `response.headers`:
```python
def parse(self, response):
    # Access headers sent by the proxy
    proxy_ip = response.headers.get(b"X-ProxyMesh-IP")
    if proxy_ip:
        print(f"Request made through IP: {proxy_ip.decode()}")
```

Putting it all together:
```python
import scrapy

class ProxyHeadersSpider(scrapy.Spider):
    name = "proxy_headers_demo"
    custom_settings = {
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_proxy_headers.HTTP11ProxyDownloadHandler"
        }
    }

    def start_requests(self):
        yield scrapy.Request(
            url="https://api.ipify.org?format=json",
            meta={
                "proxy": "http://us.proxymesh.com:31280",
                "proxy_headers": {"X-ProxyMesh-Country": "US"}
            },
            callback=self.parse_ip
        )

    def parse_ip(self, response):
        data = response.json()
        proxy_ip = response.headers.get(b"X-ProxyMesh-IP")
        self.logger.info(f"Public IP: {data['ip']}")
        if proxy_ip:
            self.logger.info(f"Proxy IP: {proxy_ip.decode()}")
        yield {
            "public_ip": data["ip"],
            "proxy_ip": proxy_ip.decode() if proxy_ip else None
        }
```

The extension is built from four components:

- `HTTP11ProxyDownloadHandler` - Custom download handler that manages proxy header caching
- `ScrapyProxyHeadersAgent` - Agent that reads `proxy_headers` from request meta
- `TunnelingHeadersAgent` - Sends custom headers in the CONNECT request
- `TunnelingHeadersTCP4ClientEndpoint` - Captures proxy response headers from the CONNECT response
The handler also caches proxy response headers by proxy URL. This ensures headers remain available even when Scrapy reuses existing tunnel connections for subsequent requests.
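As a sketch of what that caching means in practice (placeholder proxy URL; `X-ProxyMesh-IP` is the example response header used throughout this README), the second request below may reuse the established tunnel and skip the CONNECT handshake, yet still sees the proxy's headers:

```python
import scrapy

PROXY = "http://your-proxy:port"  # placeholder

class ReusedTunnelSpider(scrapy.Spider):
    name = "reused_tunnel"
    custom_settings = {
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_proxy_headers.HTTP11ProxyDownloadHandler"
        }
    }

    def start_requests(self):
        yield scrapy.Request(
            "https://api.ipify.org?format=json",
            meta={"proxy": PROXY},
        )

    def parse(self, response):
        self.logger.info("First response, proxy IP: %s",
                         response.headers.get(b"X-ProxyMesh-IP"))
        # Same host again: Scrapy may reuse the existing tunnel, so no new
        # CONNECT handshake happens for this request.
        yield scrapy.Request(
            "https://api.ipify.org?format=json",
            meta={"proxy": PROXY},
            callback=self.parse_reused,
            dont_filter=True,
        )

    def parse_reused(self, response):
        # The cached CONNECT headers are still merged into response.headers.
        self.logger.info("Reused tunnel, proxy IP: %s",
                         response.headers.get(b"X-ProxyMesh-IP"))
```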
A test harness is included to verify proxy header functionality:
```bash
# Basic test
PROXY_URL=http://your-proxy:port TEST_URL=https://api.ipify.org python test_proxy_headers.py

# With a custom proxy header
PROXY_URL=http://your-proxy:port \
PROXY_HEADER=X-ProxyMesh-IP \
SEND_PROXY_HEADER=X-ProxyMesh-Country \
SEND_PROXY_VALUE=US \
python test_proxy_headers.py

# Verbose output
python test_proxy_headers.py -v
```

| Variable | Description | Default |
|---|---|---|
| `PROXY_URL` | Proxy URL (also checks `HTTPS_PROXY`) | Required |
| `TEST_URL` | URL to request | `https://api.ipify.org?format=json` |
| `PROXY_HEADER` | Response header to check for | `X-ProxyMesh-IP` |
| `SEND_PROXY_HEADER` | Header name to send to the proxy | Optional |
| `SEND_PROXY_VALUE` | Value for the header above | Optional |
Full documentation is available at [scrapy-proxy-headers.readthedocs.io](https://scrapy-proxy-headers.readthedocs.io/).
- Geographic targeting: Send `X-ProxyMesh-Country` to route requests through a specific country
- Session consistency: Request the same IP across multiple requests (see the sketch below)
- Debugging: Capture proxy response headers to see which IP was assigned
- Load balancing: Use proxy headers to control request distribution
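As one illustration of the session-consistency use case, here is a sketch that echoes the proxy-assigned IP back on the next request. The `X-ProxyMesh-Prefer-IP` request header name is an assumption for illustration; check your proxy provider's documentation for the header it actually honors.

```python
import scrapy

class StickySessionSpider(scrapy.Spider):
    name = "sticky_session"
    custom_settings = {
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_proxy_headers.HTTP11ProxyDownloadHandler"
        }
    }
    proxy = "http://your-proxy:port"  # placeholder

    def start_requests(self):
        yield scrapy.Request(
            "https://api.ipify.org?format=json",
            meta={"proxy": self.proxy},
        )

    def parse(self, response):
        assigned_ip = response.headers.get(b"X-ProxyMesh-IP")
        if assigned_ip:
            # Ask the proxy for the same IP on the next request.
            # NOTE: header name is hypothetical; confirm with your provider.
            yield scrapy.Request(
                "https://api.ipify.org?format=json",
                meta={
                    "proxy": self.proxy,
                    "proxy_headers": {"X-ProxyMesh-Prefer-IP": assigned_ip.decode()},
                },
                callback=self.parse_pinned,
                dont_filter=True,
            )

    def parse_pinned(self, response):
        self.logger.info("Pinned IP: %s", response.json()["ip"])
```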
- Python 3.8+
- Scrapy 2.0+
BSD License - see LICENSE for details.