ChipaPocketOptionData

Python 3.8+ Β· License: MIT

ChipaPocketOptionData is a powerful Python library for collecting high-volume market data from PocketOption: it runs multiple demo accounts in parallel via multiprocessing, each with optional proxy support. Built on top of BinaryOptionsToolsV2.

✨ Key Features

  • πŸš€ Multi-Process Architecture: Collect data using multiple demo accounts simultaneously
  • πŸ”„ High Throughput: Leverage multiprocessing to maximize data collection speed
  • 🌐 Proxy Support: Each process can use its own proxy server for distributed data collection
  • πŸ“Š Multiple Data Collection Methods:
    • Real-time symbol subscriptions
    • Time-based chunked candles
    • Count-based aggregated candles
    • Historical candle data
  • πŸ›‘οΈ Fault Tolerant: Automatic reconnection on errors
  • πŸ“ Comprehensive Logging: Built-in logging system for debugging and monitoring
  • 🎯 Simple API: Easy-to-use interface inspired by BinaryOptionsToolsV2

🎯 Why ChipaPocketOptionData?

This library solves a common problem: collecting high-volume market data from PocketOption without being throttled. It does this through:

  1. Multiple Demo Accounts: Create several demo accounts to bypass rate limits
  2. Proxy Distribution: Use different proxies for each account to avoid IP-based restrictions
  3. Parallel Collection: Collect data from multiple sources simultaneously
  4. No Rate Limiting Worries: Distribute your data collection across multiple connections

πŸ“¦ Installation

Using pip (recommended)

pip install ChipaPocketOptionData

From source

git clone https://github.com/ChipaDevTeam/ChipaPocketOptionData.git
cd ChipaPocketOptionData
pip install -e .

πŸš€ Quick Start

Basic Usage (No Proxies)

from ChipaPocketOptionData import subscribe_symbol_timed
from datetime import timedelta

# Your demo account SSIDs
ssids = [
    "your_demo_ssid_1",
    "your_demo_ssid_2",
    "your_demo_ssid_3",
]

# Start collecting 5-second candles
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=timedelta(seconds=5),
    ssids=ssids,
    proxy_support=False
)

# Iterate over incoming data
for candle in collector:
    if 'error' in candle:
        print(f"Error: {candle['error']}")
        continue
    
    print(f"Candle from {candle['ssid']}: "
          f"Open={candle['open']}, Close={candle['close']}")

With Proxy Support

from ChipaPocketOptionData import subscribe_symbol_timed, ProxyConfig

ssids = ["ssid1", "ssid2", "ssid3"]

# Configure proxy servers
proxies = [
    ProxyConfig(host="proxy1.com", port=8080, username="user1", password="pass1"),
    ProxyConfig(host="proxy2.com", port=8080, username="user2", password="pass2"),
    ProxyConfig(host="proxy3.com", port=1080, protocol="socks5"),
]

# Start collecting with proxies
collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,  # Can use int for seconds
    ssids=ssids,
    proxies=proxies,
    proxy_support=True
)

for candle in collector:
    print(f"Received: {candle}")

πŸ“š Documentation

Main Functions

subscribe_symbol(asset, ssids, proxies=None, proxy_support=False, **config_kwargs)

Subscribe to real-time symbol updates (1-second candles).

from ChipaPocketOptionData import subscribe_symbol

collector = subscribe_symbol(
    asset="EURUSD_otc",
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)

subscribe_symbol_timed(asset, time_delta, ssids, proxies=None, proxy_support=False, **config_kwargs)

Subscribe to time-chunked symbol updates.

from ChipaPocketOptionData import subscribe_symbol_timed
from datetime import timedelta

collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=timedelta(seconds=5),  # or just: time_delta=5
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)  # 5-second aggregated candles

subscribe_symbol_chunked(asset, chunk_size, ssids, proxies=None, proxy_support=False, **config_kwargs)

Subscribe to chunk-aggregated symbol updates.

from ChipaPocketOptionData import subscribe_symbol_chunked

collector = subscribe_symbol_chunked(
    asset="EURUSD_otc",
    chunk_size=15,  # Aggregate every 15 candles
    ssids=["ssid1", "ssid2"],
    proxy_support=False
)

for candle in collector:
    print(candle)  # Aggregated from 15 candles

get_candles(asset, period, time, ssids, proxies=None, proxy_support=False, **config_kwargs)

Get historical candles (non-streaming).

from ChipaPocketOptionData import get_candles

candles = get_candles(
    asset="EURUSD_otc",
    period=60,  # 1-minute candles
    time=3600,  # Last hour
    ssids=["ssid1", "ssid2"]
)

print(f"Collected {len(candles)} candles")

Configuration

DataCollectorConfig

from ChipaPocketOptionData import DataCollectorConfig, ProxyConfig

config = DataCollectorConfig(
    ssids=["ssid1", "ssid2"],
    proxies=[ProxyConfig(host="proxy.com", port=8080)],
    proxy_support=True,
    max_workers=2,  # Defaults to len(ssids)
    reconnect_on_error=True,
    error_retry_delay=5,  # seconds
    log_level="INFO",
    log_path="./logs"
)
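
The same fields can also be passed directly as keyword arguments to the top-level functions via **config_kwargs, as the Error Handling and Logging sections below do. A sketch, assuming the kwargs mirror the config fields:

from ChipaPocketOptionData import subscribe_symbol

collector = subscribe_symbol(
    asset="EURUSD_otc",
    ssids=["ssid1", "ssid2"],
    reconnect_on_error=True,
    error_retry_delay=5,
    log_level="INFO",
    log_path="./logs"
)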

ProxyConfig

from ChipaPocketOptionData import ProxyConfig

# HTTP proxy with auth
proxy = ProxyConfig(
    host="proxy.example.com",
    port=8080,
    username="user",
    password="pass",
    protocol="http"
)

# SOCKS5 proxy without auth
proxy = ProxyConfig(
    host="proxy.example.com",
    port=1080,
    protocol="socks5"
)
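
With proxy_support=True, each worker pairs one SSID with one proxy by position (Worker N uses SSID N and Proxy N, as in the architecture diagram below), so the two lists should line up. A quick sanity check under that assumption:

from ChipaPocketOptionData import ProxyConfig

ssids = ["ssid1", "ssid2", "ssid3"]
proxies = [
    ProxyConfig(host="proxy1.com", port=8080),
    ProxyConfig(host="proxy2.com", port=8080),
    ProxyConfig(host="proxy3.com", port=1080, protocol="socks5"),
]

# Assumes index-based pairing: worker i uses ssids[i] with proxies[i].
assert len(ssids) == len(proxies), "provide exactly one proxy per SSID"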

πŸ“– Examples

Check out the examples/ directory for more detailed, runnable examples.

πŸ”§ Advanced Usage

Context Manager

from ChipaPocketOptionData import subscribe_symbol_timed

ssids = ["ssid1", "ssid2"]

with subscribe_symbol_timed("EURUSD_otc", 5, ssids=ssids) as collector:
    for i, candle in enumerate(collector):
        print(candle)
        if i >= 100:
            break
# Automatically cleaned up

Error Handling

collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,
    ssids=["ssid1", "ssid2"],
    reconnect_on_error=True,
    error_retry_delay=10
)

for candle in collector:
    if 'error' in candle:
        print(f"Error from {candle['ssid']}: {candle['error']}")
        # Error is logged, connection will be retried
        continue
    
    # Process valid candle
    process_candle(candle)
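
Here process_candle stands in for your own logic. A minimal placeholder, assuming the candle dicts carry the keys shown in the Quick Start:

def process_candle(candle):
    # Hypothetical handler: swap in your own storage or strategy code.
    print(f"{candle['ssid']}: open={candle['open']}, close={candle['close']}")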

Logging

collector = subscribe_symbol_timed(
    asset="EURUSD_otc",
    time_delta=5,
    ssids=["ssid1", "ssid2"],
    log_level="DEBUG",  # DEBUG, INFO, WARN, ERROR
    log_path="./logs"   # Log directory
)

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    Main Process                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚       MultiProcessDataCollector                   β”‚ β”‚
β”‚  β”‚  - Manages worker processes                       β”‚ β”‚
β”‚  β”‚  - Aggregates data from queue                     β”‚ β”‚
β”‚  β”‚  - Handles graceful shutdown                      β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                   β”‚
      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚            β”‚            β”‚
      β–Ό            β–Ό            β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Worker 1 β”‚ β”‚ Worker 2 β”‚ β”‚ Worker N β”‚
β”‚ SSID 1   β”‚ β”‚ SSID 2   β”‚ β”‚ SSID N   β”‚
β”‚ Proxy 1  β”‚ β”‚ Proxy 2  β”‚ β”‚ Proxy N  β”‚
β”‚          β”‚ β”‚          β”‚ β”‚          β”‚
β”‚ BO2 API  β”‚ β”‚ BO2 API  β”‚ β”‚ BO2 API  β”‚
β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
      β”‚            β”‚            β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                   β”‚
                   β–Ό
           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
           β”‚ Shared Queue  β”‚
           β”‚ (Thread-safe) β”‚
           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
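
The pattern itself is plain Python multiprocessing: every worker pushes onto one shared queue and the main process drains it. A simplified, library-independent sketch of the layout above (not the library's actual internals):

import multiprocessing as mp

def worker(ssid, queue):
    # In the real library, each worker connects with its own SSID
    # (and optionally its own proxy) and streams candles into the queue.
    for i in range(3):
        queue.put({"ssid": ssid, "tick": i})
    queue.put({"ssid": ssid, "done": True})

if __name__ == "__main__":
    queue = mp.Queue()
    ssids = ["ssid1", "ssid2", "ssid3"]
    workers = [mp.Process(target=worker, args=(s, queue)) for s in ssids]
    for w in workers:
        w.start()

    finished = 0
    while finished < len(workers):
        item = queue.get()
        if item.get("done"):
            finished += 1
        else:
            print(item)  # the main process aggregates everything from one queue

    for w in workers:
        w.join()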

πŸ“‹ Requirements

  • Python 3.8+
  • BinaryOptionsToolsV2 >= 1.0.0

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Built on top of BinaryOptionsToolsV2
  • Inspired by the need for high-volume data collection from PocketOption

πŸ“§ Support

If you have any questions or issues, please:

  1. Check the examples/ directory
  2. Open an issue on GitHub

⚠️ Disclaimer

This library is for educational and research purposes only. Use at your own risk. Make sure to comply with PocketOption's Terms of Service.


Made with ❀️ by the ChipaDev Team

🐳 Local Development

Run the stack locally with Docker Compose:

# Start all services
docker-compose up

# Build and start in detached mode
docker-compose up -d --build

# View logs
docker-compose logs -f

# Stop services
docker-compose down

Environment Variables

Create a .env file from the template:

cp .env.example .env
# Edit .env with your configuration

4. Deploy to Production

Prerequisites

  • Google Cloud SDK (gcloud) installed and authenticated
  • Docker installed and running
  • A GCP project with billing enabled

Deploy Script

# Deploy using local Docker build (faster for small projects)
./deploy.sh

# Or use Cloud Build (better for complex builds)
USE_LOCAL_BUILD=false ./deploy.sh

# Override GCP project
GCP_PROJECT_ID=my-project ./deploy.sh

The deploy script will:

  1. Read configuration from project.config.json
  2. Enable required GCP APIs
  3. Build Docker image (locally or via Cloud Build)
  4. Push to Google Container Registry
  5. Deploy to Cloud Run
  6. Provide service URL and testing commands

Manual Deployment

# Set your GCP project
gcloud config set project YOUR_PROJECT_ID

# Build and push manually
docker build -t gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest .
docker push gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest

# Deploy to Cloud Run
gcloud run deploy SERVICE_NAME \
  --image gcr.io/YOUR_PROJECT_ID/SERVICE_NAME:latest \
  --region us-central1 \
  --platform managed

πŸ“ Project Structure

.
β”œβ”€β”€ project.config.json      # Central project configuration
β”œβ”€β”€ .env.example             # Environment variable template
β”œβ”€β”€ .env                     # Your environment variables (git-ignored)
β”œβ”€β”€ setup.sh                 # Interactive project setup script
β”œβ”€β”€ deploy.sh                # Automated deployment script
β”œβ”€β”€ Dockerfile               # Container build configuration
β”œβ”€β”€ docker-compose.yml       # Local development services
β”œβ”€β”€ cloudbuild.yaml          # GCP Cloud Build configuration
β”œβ”€β”€ .dockerignore            # Files to exclude from Docker build
β”œβ”€β”€ .gcloudignore            # Files to exclude from Cloud Build
β”œβ”€β”€ docs/                    # Documentation
└── tests/                   # Test files

πŸ”§ Configuration Guide

Docker Configuration

Edit project.config.json to customize:

{
  "docker": {
    "imageName": "my-app-api",
    "containerName": "my-app-api",
    "port": 8080,
    "appDirectory": "/app",
    "tempDirectory": "/tmp/myapp"
  }
}

Cloud Run Settings

Configure deployment settings; these correspond to gcloud run deploy flags (--memory, --cpu, --timeout, --max-instances, --concurrency):

{
  "gcp": {
    "cloudRun": {
      "memory": "2Gi",
      "cpu": "2",
      "timeout": "600",
      "maxInstances": "10",
      "concurrency": "80"
    }
  }
}

Environment-Specific Variables

Define different configurations per environment:

{
  "environment": {
    "production": {
      "LOG_LEVEL": "info",
      "API_BASE_URL": "https://api.example.com"
    },
    "development": {
      "LOG_LEVEL": "debug",
      "API_BASE_URL": "http://localhost:8080"
    }
  }
}
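
However these values reach the container (the local .env file, or environment variables set on the Cloud Run service), the application reads them the usual way. A minimal sketch in Python, using the variable names from the config above:

import os

# Fall back to development defaults when a variable is unset.
log_level = os.environ.get("LOG_LEVEL", "debug")
api_base_url = os.environ.get("API_BASE_URL", "http://localhost:8080")

print(f"LOG_LEVEL={log_level} API_BASE_URL={api_base_url}")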

πŸ› οΈ Customization

For Different Technology Stacks

Node.js/TypeScript

# In Dockerfile, update builder stage:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage:
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]

Python

# Builder stage:
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Install into an isolated prefix so the runtime stage can copy the packages
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
COPY . .

# Runtime:
FROM python:3.11-slim
COPY --from=builder /install /usr/local
COPY --from=builder /app /app
WORKDIR /app
CMD ["python", "main.py"]

Go

# Builder:
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN go build -o server

# Runtime:
FROM alpine:latest
COPY --from=builder /app/server /app/server
CMD ["/app/server"]

Adding Services

Edit docker-compose.yml to add databases, caches, and other services:

services:
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data

# Named volumes must also be declared at the top level
volumes:
  postgres-data:

πŸ“ Best Practices

Security

  • βœ… Use non-root user in containers
  • βœ… Keep secrets in environment variables
  • βœ… Use .env for local development (never commit it)
  • βœ… Use Google Secret Manager for production secrets
  • βœ… Enable Cloud Run authentication if not public

Performance

  • βœ… Use multi-stage builds to minimize image size
  • βœ… Optimize layer caching in Dockerfile
  • βœ… Set appropriate memory and CPU limits
  • βœ… Configure auto-scaling (min/max instances)
  • βœ… Enable Cloud CDN for static assets

Monitoring

# View Cloud Run logs
gcloud run services logs read SERVICE_NAME --limit=100

# Stream logs in real-time
gcloud run services logs tail SERVICE_NAME

# Monitor metrics
gcloud run services describe SERVICE_NAME --region REGION

πŸ› Troubleshooting

Build Failures

# Check Docker daemon is running
docker ps

# Clean Docker cache
docker system prune -a

# View build logs
docker build --progress=plain -t test .

Deployment Issues

# Check GCP authentication
gcloud auth list

# Verify project access
gcloud projects describe PROJECT_ID

# Check Cloud Run service status
gcloud run services list --region REGION

Common Errors

"permission denied": Ensure scripts are executable

chmod +x setup.sh deploy.sh

"project not found": Set correct GCP project

gcloud config set project YOUR_PROJECT_ID

"insufficient permissions": Enable required APIs

gcloud services enable run.googleapis.com cloudbuild.googleapis.com

πŸ”„ Updating the Template

To use this template for a new project:

  1. Clone or download this repository
  2. Run ./setup.sh to initialize configuration
  3. Customize Dockerfile for your tech stack
  4. Add your application code
  5. Test locally with docker-compose up
  6. Deploy with ./deploy.sh

πŸ“š Additional Resources

πŸ“„ License

See the LICENSE file for details.

🀝 Contributing

This is a template repository. Feel free to customize it for your organization's needs.


Need Help? Check the documentation in the docs/ folder or review the configuration files for inline comments and examples.
