ChipaPocketOptionData is a Python library for collecting high-volume market data from PocketOption by running multiple demo accounts in parallel via multiprocessing, with optional per-process proxy support. Built on top of BinaryOptionsToolsV2.
- Multi-Process Architecture: Collect data using multiple demo accounts simultaneously
- High Throughput: Leverage multiprocessing to maximize data collection speed
- Proxy Support: Each process can use its own proxy server for distributed data collection
- Multiple Data Collection Methods:
  - Real-time symbol subscriptions
  - Time-based chunked candles
  - Count-based aggregated candles
  - Historical candle data
- Fault Tolerant: Automatic reconnection on errors
- Comprehensive Logging: Built-in logging system for debugging and monitoring
- Simple API: Easy-to-use interface inspired by BinaryOptionsToolsV2
This library addresses the common need for high-volume market data from PocketOption:
- Multiple Demo Accounts: Create several demo accounts to bypass rate limits
- Proxy Distribution: Use different proxies for each account to avoid IP-based restrictions
- Parallel Collection: Collect data from multiple sources simultaneously
- No Rate Limiting Worries: Distribute your data collection across multiple connections
Install from PyPI:

```bash
pip install ChipaPocketOptionData
```

Or install from source:

```bash
git clone https://github.com/ChipaDevTeam/ChipaPocketOptionData.git
cd ChipaPocketOptionData
pip install -e .
```

Quick start:

```python
from ChipaPocketOptionData import subscribe_symbol_timed
from datetime import timedelta

# Your demo account SSIDs
ssids = [
    "your_demo_ssid_1",
    "your_demo_ssid_2",
    "your_demo_ssid_3",
]

# Start collecting 5-second candles
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=timedelta(seconds=5),
    ssids=ssids,
    proxy_support=False
)

# Iterate over incoming data
for candle in collector:
    if 'error' in candle:
        print(f"Error: {candle['error']}")
        continue
    print(f"Candle from {candle['ssid']}: "
          f"Open={candle['open']}, Close={candle['close']}")
```

With proxy support, each worker process can route through its own proxy:

```python
from ChipaPocketOptionData import subscribe_symbol_timed, ProxyConfig
ssids = ["ssid1", "ssid2", "ssid3"]
# Configure proxy servers
proxies = [
    ProxyConfig(host="proxy1.com", port=8080, username="user1", password="pass1"),
    ProxyConfig(host="proxy2.com", port=8080, username="user2", password="pass2"),
    ProxyConfig(host="proxy3.com", port=1080, protocol="socks5"),
]

# Start collecting with proxies
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,  # Can use int for seconds
    ssids=ssids,
    proxies=proxies,
    proxy_support=True
)

for candle in collector:
    print(f"Received: {candle}")
```

`subscribe_symbol` subscribes to real-time symbol updates (1-second candles):

```python
from ChipaPocketOptionData import subscribe_symbol
collector = subscribe_symbol(
    asset="EURUSD_otc",
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)
```

`subscribe_symbol_timed(asset, time_delta, ssids, proxies=None, proxy_support=False, **config_kwargs)`
Subscribe to time-chunked symbol updates.
```python
from ChipaPocketOptionData import subscribe_symbol_timed
from datetime import timedelta

collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=timedelta(seconds=5),  # or just: time_delta=5
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)  # 5-second aggregated candles
```

`subscribe_symbol_chunked(asset, chunk_size, ssids, proxies=None, proxy_support=False, **config_kwargs)`
Subscribe to chunk-aggregated symbol updates.
```python
from ChipaPocketOptionData import subscribe_symbol_chunked

collector = subscribe_symbol_chunked(
    asset="EURUSD_otc",
    chunk_size=15,  # Aggregate every 15 candles
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)  # Aggregated from 15 candles
```

`get_candles` returns historical candles (non-streaming):

```python
from ChipaPocketOptionData import get_candles
candles = get_candles(
    asset="EURUSD_otc",
    period=60,  # 1-minute candles
    time=3600,  # Last hour
    ssids=["ssid1", "ssid2"]
)

print(f"Collected {len(candles)} candles")
```

Collector behavior can be configured through `DataCollectorConfig`:

```python
from ChipaPocketOptionData import DataCollectorConfig, ProxyConfig
config = DataCollectorConfig(
    ssids=["ssid1", "ssid2"],
    proxies=[ProxyConfig(host="proxy.com", port=8080)],
    proxy_support=True,
    max_workers=2,           # Defaults to len(ssids)
    reconnect_on_error=True,
    error_retry_delay=5,     # seconds
    log_level="INFO",
    log_path="./logs"
)
```

Proxy servers are described with `ProxyConfig`:

```python
from ChipaPocketOptionData import ProxyConfig
# HTTP proxy with auth
proxy = ProxyConfig(
    host="proxy.example.com",
    port=8080,
    username="user",
    password="pass",
    protocol="http"
)
# SOCKS5 proxy without auth
proxy = ProxyConfig(
    host="proxy.example.com",
    port=1080,
    protocol="socks5"
)
```

Check out the `examples/` directory for more detailed examples:
- `basic_usage.py`: Simple data collection without proxies
- `with_proxy_support.py`: Using multiple proxies
- `save_to_database.py`: Storing data in SQLite (a minimal sketch follows this list)
- `multiple_assets.py`: Collecting from multiple assets simultaneously
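To give a sense of the storage pattern, here is a minimal SQLite sketch; the `candles.db` filename and table schema are illustrative, not taken from `save_to_database.py`:

```python
import sqlite3

from ChipaPocketOptionData import subscribe_symbol_timed

# Illustrative schema: keep only the candle fields used in the examples above
conn = sqlite3.connect("candles.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS candles (ssid TEXT, open REAL, close REAL)"
)

collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,
    ssids=["ssid1", "ssid2"],
)

for candle in collector:
    if 'error' in candle:
        continue  # skip error records (see the error-handling example below)
    conn.execute(
        "INSERT INTO candles (ssid, open, close) VALUES (?, ?, ?)",
        (candle['ssid'], candle['open'], candle['close']),
    )
    conn.commit()
```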
Collectors can also be used as context managers, so worker processes are cleaned up automatically:

```python
from ChipaPocketOptionData import subscribe_symbol_timed
ssids = ["ssid1", "ssid2"]
with subscribe_symbol_timed("EURUSD_otc", 5, ssids=ssids) as collector:
    for i, candle in enumerate(collector):
        print(candle)
        if i >= 100:
            break
# Automatically cleaned up
```

Errors are reported in the data stream, and failed connections are retried automatically:

```python
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,
    ssids=["ssid1", "ssid2"],
    reconnect_on_error=True,
    error_retry_delay=10
)
for candle in collector:
    if 'error' in candle:
        print(f"Error from {candle['ssid']}: {candle['error']}")
        # Error is logged, connection will be retried
        continue

    # Process valid candle
    process_candle(candle)
```

Logging is controlled with `log_level` and `log_path`:

```python
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,
    ssids=["ssid1", "ssid2"],
    log_level="DEBUG",  # DEBUG, INFO, WARN, ERROR
    log_path="./logs"   # Log directory
)
```

Architecture:

```
+-------------------------------------------------+
|                  Main Process                   |
|  +-------------------------------------------+  |
|  |         MultiProcessDataCollector         |  |
|  |  - Manages worker processes               |  |
|  |  - Aggregates data from queue              |  |
|  |  - Handles graceful shutdown               |  |
|  +-------------------------------------------+  |
+------------------------+------------------------+
                         |
         +---------------+---------------+
         |               |               |
         v               v               v
   +-----------+   +-----------+   +-----------+
   | Worker 1  |   | Worker 2  |   | Worker N  |
   | SSID 1    |   | SSID 2    |   | SSID N    |
   | Proxy 1   |   | Proxy 2   |   | Proxy N   |
   |           |   |           |   |           |
   | BO2 API   |   | BO2 API   |   | BO2 API   |
   +-----+-----+   +-----+-----+   +-----+-----+
         |               |               |
         +---------------+---------------+
                         |
                         v
                +-----------------+
                |  Shared Queue   |
                |  (Thread-safe)  |
                +-----------------+
```
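Purely as an illustration of the general pattern shown above (not the library's actual internals), a stripped-down version of the worker/shared-queue design using the standard multiprocessing module looks like this:

```python
import multiprocessing as mp


def worker(ssid: str, queue) -> None:
    """One process per demo account: connect, collect, push candles to the shared queue."""
    # In the real library this is where a BinaryOptionsToolsV2 client would
    # connect (optionally through a proxy) and stream candles.
    for i in range(3):  # placeholder for a live data stream
        queue.put({"ssid": ssid, "open": 1.0, "close": 1.0 + i})


if __name__ == "__main__":
    queue = mp.Queue()  # process-safe queue shared by all workers
    ssids = ["ssid1", "ssid2", "ssid3"]

    workers = [mp.Process(target=worker, args=(ssid, queue)) for ssid in ssids]
    for p in workers:
        p.start()

    # The main process aggregates candles coming from every worker
    for _ in range(len(ssids) * 3):
        print(queue.get())

    for p in workers:
        p.join()
```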
Requirements:

- Python 3.8+
- BinaryOptionsToolsV2 >= 1.0.0
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built on top of BinaryOptionsToolsV2
- Inspired by the need for high-volume data collection from PocketOption
If you have any questions or issues, please open an issue on the GitHub repository.
This library is for educational and research purposes only. Use at your own risk. Make sure to comply with PocketOption's Terms of Service.
Made with ❤️ by the ChipaDev Team
Local development with Docker Compose:

```bash
# Start all services
docker-compose up
# Build and start in detached mode
docker-compose up -d --build
# View logs
docker-compose logs -f
# Stop services
docker-compose down
```

Create a `.env` file from the template:

```bash
cp .env.example .env
# Edit .env with your configuration
```

Prerequisites:

- Docker (for local builds)
- gcloud CLI
- GCP project with billing enabled

Deploy with the provided script:

```bash
# Deploy using local Docker build (faster for small projects)
./deploy.sh
# Or use Cloud Build (better for complex builds)
USE_LOCAL_BUILD=false ./deploy.sh
# Override GCP project
GCP_PROJECT_ID=my-project ./deploy.sh
```

The deploy script will:
- Read configuration from `project.config.json`
- Enable required GCP APIs
- Build the Docker image (locally or via Cloud Build)
- Push it to Google Container Registry
- Deploy to Cloud Run
- Provide the service URL and testing commands
Manual deployment:

```bash
# Set your GCP project
gcloud config set project YOUR_PROJECT_ID
# Build and push manually
docker build -t gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest .
docker push gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest
# Deploy to Cloud Run
gcloud run deploy SERVICE_NAME \
  --image gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest \
  --region us-central1 \
  --platform managed
```

Project structure:

```
.
├── project.config.json   # Central project configuration
├── .env.example          # Environment variable template
├── .env                  # Your environment variables (git-ignored)
├── setup.sh              # Interactive project setup script
├── deploy.sh             # Automated deployment script
├── Dockerfile            # Container build configuration
├── docker-compose.yml    # Local development services
├── cloudbuild.yaml       # GCP Cloud Build configuration
├── .dockerignore         # Files to exclude from Docker build
├── .gcloudignore         # Files to exclude from Cloud Build
├── docs/                 # Documentation
└── tests/                # Test files
```
Edit `project.config.json` to customize:

```json
{
  "docker": {
    "imageName": "my-app-api",
    "containerName": "my-app-api",
    "port": 8080,
    "appDirectory": "/app",
    "tempDirectory": "/tmp/myapp"
  }
}
```

Configure deployment settings:

```json
{
  "gcp": {
    "cloudRun": {
      "memory": "2Gi",
      "cpu": "2",
      "timeout": "600",
      "maxInstances": "10",
      "concurrency": "80"
    }
  }
}
```

Define different configurations per environment:

```json
{
  "environment": {
    "production": {
      "LOG_LEVEL": "info",
      "API_BASE_URL": "https://api.example.com"
    },
    "development": {
      "LOG_LEVEL": "debug",
      "API_BASE_URL": "http://localhost:8080"
    }
  }
}
```

Node.js:

```dockerfile
# In Dockerfile, update builder stage:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Runtime stage:
FROM node:20-alpine
COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/node_modules /app/node_modules
CMD ["node", "dist/index.js"]# Builder stage:
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Runtime:
FROM python:3.11-slim
COPY --from=builder /app /app
CMD ["python", "main.py"]# Builder:
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN go build -o server
# Runtime:
FROM alpine:latest
COPY --from=builder /app/server /app/server
CMD ["/app/server"]Edit docker-compose.yml to add databases, caches, etc:
services:
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
```

Best practices:

- Use a non-root user in containers
- Keep secrets in environment variables
- Use `.env` for local development (never commit it)
- Use Google Secret Manager for production secrets
- Enable Cloud Run authentication if the service is not public
- Use multi-stage builds to minimize image size
- Optimize layer caching in the Dockerfile
- Set appropriate memory and CPU limits
- Configure auto-scaling (min/max instances)
- Enable Cloud CDN for static assets

Monitoring and logs:

```bash
# View Cloud Run logs
gcloud run services logs read SERVICE_NAME --limit=100
# Stream logs in real-time
gcloud run services logs tail SERVICE_NAME
# Monitor metrics
gcloud run services describe SERVICE_NAME --region REGION
```

Troubleshooting Docker builds:

```bash
# Check Docker daemon is running
docker ps
# Clean Docker cache
docker system prune -a
# View build logs
docker build --progress=plain -t test .
```

Troubleshooting GCP deployment:

```bash
# Check GCP authentication
gcloud auth list
# Verify project access
gcloud projects describe PROJECT_ID
# Check Cloud Run service status
gcloud run services list --region REGION
```

Common errors:

- "permission denied": ensure the scripts are executable (`chmod +x setup.sh deploy.sh`)
- "project not found": set the correct GCP project (`gcloud config set project YOUR_PROJECT_ID`)
- "insufficient permissions": enable the required APIs (`gcloud services enable run.googleapis.com cloudbuild.googleapis.com`)

To use this template for a new project:
- Clone or download this repository
- Run `./setup.sh` to initialize configuration
- Customize the `Dockerfile` for your tech stack
- Add your application code
- Test locally with `docker-compose up`
- Deploy with `./deploy.sh`
- Docker Documentation
- Google Cloud Run Documentation
- Cloud Build Documentation
- Docker Compose Documentation
See LICENSE file for details.
This is a template repository. Feel free to customize it for your organization's needs.
Need Help? Check the documentation in the docs/ folder or review the configuration files for inline comments and examples.