A production-ready quantum-first ML platform with async execution, comprehensive verification/profiling/visualization tools, and hardware-agnostic support for quantum computing. Leverages quantum mechanical properties—superposition, entanglement, and tunneling—for quantum-accelerated ML training with 10-20x throughput improvements.
Q-STORE website Link<<<<<<< HEAD
=======
main
- AsyncQuantumExecutor: Non-blocking circuit submission with parallel execution
- Zero-Blocking Storage: Async Zarr/Parquet writers with background tasks
- Result Caching: LRU cache for instant retrieval of repeated circuits
- Connection Pooling: Multi-connection backend clients for better utilization
- Background Polling: Async workers poll quantum backends without blocking training
- PyTorch Integration: Fixed QuantumLayer with proper async support
- Circuit Verification: Equivalence checking, property verification, formal analysis
- Performance Profiling: Gate-level profiling, optimization benchmarks
- State Visualization: Circuit diagrams (ASCII/LaTeX), Bloch sphere, state vectors
- 144 Comprehensive Tests: Full coverage for all verification/profiling/visualization modules
IMPORTANT: Improvements shown are v4.1 vs v4.0 quantum, not quantum vs classical GPU!
| Metric | v4.0 Quantum | v4.1 Quantum | Improvement |
|---|---|---|---|
| Circuit throughput | Sequential | 10-20x parallel | 10-20x faster |
| Storage operations | Blocking | Async (0ms) | ∞ faster |
| Result caching | None | LRU cache | Instant repeats |
| PyTorch integration | Broken | Fixed + async | Production-ready |
| Module count | 22 | 29 | 7 new modules |
| Total Python files | 118 | 145 | 27 new files |
Current NISQ quantum hardware is typically 0.7-1.2x classical GPU speed (often slower)
Why? Circuit overhead, API latency, limited parallelization, measurement shots
Quantum's Value: Better exploration of non-convex loss landscapes, not raw speed
When Quantum Helps:
- ✅ Complex optimization landscapes
- ✅ Small datasets (<10K samples)
- ✅ Problems where classical gets stuck in local minima
- ✅ Research and algorithm development
When Classical GPU Wins:
- ✅ Large datasets (>10K samples)
- ✅ Production workloads
- ✅ Cost-sensitive applications
- ✅ Most practical ML tasks today
- Async Quantum Execution: Non-blocking circuit submission with 10-20x throughput
- Hardware-Agnostic Architecture: Works with Cirq, Qiskit, IonQ, and simulators
- Quantum Feature Extractor: Replace Dense layers with quantum circuits
- Quantum Neural Network Layers: Variational quantum circuits with async execution
- Quantum Gradient Computation: Parameter shift rule and SPSA estimation
- Hybrid Classical-Quantum Pipelines: Seamless PyTorch/TensorFlow integration
- Quantum Data Encoding: Amplitude and angle encoding strategies
- Production Storage: Async Zarr checkpoints and Parquet metrics
- Quantum Transfer Learning: Fine-tune pre-trained quantum models
- Quantum Data Augmentation: Superposition-based data expansion
- Quantum Regularization: Entanglement-based model optimization
- Quantum Adversarial Training: Robust model training with quantum gradients
- Hyperparameter Optimization: Quantum annealing for HPO
- Async Execution Pipeline: Non-blocking quantum circuit execution
- Background Workers: Async polling without blocking training loop
- Result Caching: LRU cache for repeated circuit measurements
- Connection Pooling: Multi-connection quantum backend clients
- Distributed Quantum Training: Multi-backend orchestration (v4.0)
- Training Data Management: Store datasets with async writers
- Model Checkpointing: Zarr-based async checkpoint saves
- Metrics Tracking: Parquet-based async metrics logging
- Framework Integration: PyTorch, TensorFlow, and JAX support
Q-Store provides a hardware-agnostic hybrid classical-quantum database architecture that:
- Stores data in quantum superposition for context-aware retrieval
- Uses entanglement for automatic relationship synchronization
- Applies decoherence as adaptive time-to-live (TTL)
- Leverages quantum tunneling for global pattern discovery
- Trains quantum ML models with variational quantum circuits (8-12x faster in v3.4)
- Supports multiple quantum backends (Cirq/IonQ, Qiskit/IonQ, simulators)
- Integrates with classical ML frameworks (PyTorch, TensorFlow, JAX)
- Scales with Pinecone for classical vector storage
- Optimized IonQ execution with batch API, native gates, and smart caching
Store vectors in superposition of multiple contexts simultaneously. Measurement collapses to the most relevant context for your query.
await db.insert(
id='doc_1',
vector=embedding,
contexts=[
('technical_query', 0.6),
('general_query', 0.3),
('historical_query', 0.1)
],
coherence_time=5000.0 # ms
)Create entangled groups where updates propagate automatically via quantum correlation. No cache invalidation needed.
db.create_entangled_group(
group_id='related_docs',
entity_ids=['doc_1', 'doc_2', 'doc_3'],
correlation_strength=0.85
)Physics-based relevance decay. Old data naturally fades without explicit TTL management.
Physics-based relevance decay. Old data naturally fades without explicit TTL management.
await db.insert(
id='hot_data',
vector=embedding,
coherence_time=1000 # ms - stays relevant
)Escape local optima to find globally optimal patterns that classical methods miss.
results = await db.query(
vector=query_embedding,
enable_tunneling=True, # Find distant patterns
mode=QueryMode.EXPLORATORY,
top_k=10
)Train quantum neural networks with hardware-agnostic quantum circuits.
QuantumLayer - Variational quantum circuit layer for neural networks QuantumTrainer - Training orchestration with quantum gradient computation QuantumGradientComputer - Parameter shift rule for gradient calculation QuantumDataEncoder - Classical-to-quantum data encoding (amplitude/angle) IonQBatchClient (v3.4) - Parallel circuit submission with connection pooling SmartCircuitCache (v3.4) - Template-based circuit caching IonQNativeGateCompiler (v3.4) - Native gate optimization
# Define quantum neural network layer
quantum_layer = QuantumLayer(
n_qubits=10,
depth=4,
backend=backend,
entanglement='linear'
)
# Train quantum model with v3.4 optimizations
trainer = QuantumTrainer(config, backend_manager)
await trainer.train(
model=quantum_model,
train_loader=data_loader,
epochs=100 # Now 8x faster with v3.4!
)New users: See docs/QUICKSTART.md for a step-by-step beginner guide.
- Python 3.11+
- Conda package manager (recommended) or pip
- Pinecone API key
- IonQ API key (optional for quantum hardware)
- Choose quantum SDK: Cirq or Qiskit (for hardware-agnostic support)
- Clone the repository:
git clone https://github.com/yucelz/q-store.git
cd q-store- Create conda environment:
conda env create -f environment.yml
conda activate q-store- Install the package in development mode:
# Install with all dependencies
pip install -e ".[dev,backends]"
# Or use the Makefile
make install-dev- Install required libraries:
# Install the new Pinecone SDK (not pinecone-client)
pip install pinecone
# Verify installation
python -c "import pinecone; print('Pinecone installed successfully')"- Configure your API keys in
.envfile:
Create a .env file in the project root:
# Required: Pinecone for vector storage
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=us-east-1
# Optional: IonQ for quantum features
IONQ_API_KEY=your_ionq_api_key
# Quantum SDK selection (cirq or qiskit)
QUANTUM_SDK=cirq # or 'qiskit' for hardware-agnostic support
QUANTUM_TARGET=simulator # or 'qpu.aria', 'qpu.forte'Get your API keys:
- Pinecone: Sign up at pinecone.io and get your API key from the dashboard
- IonQ (Optional): Get your API key from cloud.ionq.com/settings/keys
- First Test - Run the Quickstart Example:
# Verify installation
python verify_installation.py
# Run the full quickstart demo
python examples/quantum_db_quickstart.pyExpected output from verification:
============================================================
Q-Store Installation Verification
============================================================
Checking imports...
✓ NumPy
✓ SciPy
✓ Cirq
✓ Pinecone
✓ Q-Store
Checking .env file...
✓ .env file exists
✓ PINECONE_API_KEY set
✓ PINECONE_ENVIRONMENT set
Testing basic functionality...
✓ DatabaseConfig created
✓ QuantumDatabase instantiated
============================================================
✓ All checks passed!
============================================================
Expected output from quickstart:
============================================================
QUANTUM DATABASE - INTERACTIVE DEMO
============================================================
=== Quantum Database Setup ===
Configuration:
- Pinecone Index: quantum-demo
- Pinecone Environment: us-east-1
- Dimension: 768
- Quantum Enabled: True
- Superposition: True
- IonQ Target: simulator
Initializing database...
INFO:q_store.quantum_database:Pinecone initialized with environment: us-east-1
INFO:q_store.quantum_database:Creating Pinecone index: quantum-demo
INFO:q_store.quantum_database:Pinecone index 'quantum-demo' created successfully
✓ Database initialized successfully
=== Example 1: Basic Operations ===
...
Note: The first run will create Pinecone indexes (quantum-demo and production-index). Subsequent runs will use existing indexes.
- Create a
.envfile in your project root:
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=us-east-1
IONQ_API_KEY=your_ionq_api_key # Optional- Run the quickstart example:
python examples/quantum_db_quickstart.pyThe example automatically loads credentials from .env using python-dotenv.
import asyncio
import numpy as np
from dotenv import load_dotenv
from q_store import QuantumDatabase, DatabaseConfig, QueryMode
# Load environment variables
load_dotenv()
async def main():
# Configure database (reads from .env automatically)
config = DatabaseConfig(
# Pinecone settings
pinecone_api_key=os.getenv('PINECONE_API_KEY'),
pinecone_environment=os.getenv('PINECONE_ENVIRONMENT', 'us-east-1'),
pinecone_index_name='my-index',
pinecone_dimension=768,
# Quantum backend (hardware-agnostic)
quantum_sdk=os.getenv('QUANTUM_SDK', 'cirq'), # 'cirq' or 'qiskit'
ionq_api_key=os.getenv('IONQ_API_KEY'),
ionq_target=os.getenv('QUANTUM_TARGET', 'simulator'),
enable_quantum=True,
enable_superposition=True
)
# Initialize database with context manager
db = QuantumDatabase(config)
async with db.connect():
# Insert vector with quantum superposition
embedding = np.random.randn(768)
await db.insert(
id='item_1',
vector=embedding,
contexts=[('context_a', 0.7), ('context_b', 0.3)],
metadata={'category': 'example'}
)
# Query with context-aware collapse
results = await db.query(
vector=embedding,
context='context_a',
mode=QueryMode.BALANCED,
top_k=5
)
# Display results
for result in results:
print(f"ID: {result.id}, Score: {result.score:.4f}")
print(f"Quantum Enhanced: {result.quantum_enhanced}")
# Run
asyncio.run(main())from q_store import QuantumTrainer, QuantumModel, TrainingConfig
# Configure training
training_config = TrainingConfig(
# Database config
**config,
# ML training settings
learning_rate=0.01,
batch_size=32,
epochs=100,
# Quantum model architecture
n_qubits=10,
circuit_depth=4,
entanglement='linear'
)
async def train_quantum_model():
db = QuantumDatabase(training_config)
async with db.connect():
# Store training data in quantum database
await db.store_training_data(
dataset_id='mnist_train',
data=X_train,
labels=y_train
)
# Create quantum model
model = QuantumModel(
input_dim=784,
n_qubits=10,
output_dim=10,
backend=db.backend_manager.get_backend()
)
# Create trainer
trainer = QuantumTrainer(training_config, db.backend_manager)
# Create data loader
train_loader = db.create_ml_data_loader(
dataset_id='mnist_train',
batch_size=32
)
# Train quantum neural network
await trainer.train(
model=model,
train_loader=train_loader,
epochs=100
)
asyncio.run(train_quantum_model())async with db.connect():
# Prepare batch
batch = [
{
'id': f'doc_{i}',
'vector': np.random.rand(768),
'contexts': [('general', 1.0)],
'metadata': {'index': i}
}
for i in range(100)
]
# Batch insert (efficient)
await db.insert_batch(batch)# Get performance metrics
metrics = db.get_metrics()
print(f"Total Queries: {metrics.total_queries}")
print(f"Cache Hit Rate: {metrics.cache_hits / max(1, metrics.total_queries):.2%}")
print(f"Avg Latency: {metrics.avg_latency_ms:.2f}ms")
print(f"Active Quantum States: {metrics.active_quantum_states}")
# Get comprehensive stats
stats = db.get_stats()
print(stats)1. ModuleNotFoundError: No module named 'q_store'
# Solution: Install the package in development mode
pip install -e .2. ImportError: Pinecone package is required
# Solution: Install the new Pinecone SDK (not pinecone-client)
pip uninstall -y pinecone-client
pip install pinecone3. PINECONE_API_KEY not found
# Solution: Create a .env file in the project root
cat > .env << EOF
PINECONE_API_KEY=your_actual_api_key
PINECONE_ENVIRONMENT=us-east-1
IONQ_API_KEY=your_ionq_key
EOF4. Pinecone index creation fails
- Ensure your Pinecone account has available index quota
- Check that the environment (e.g.,
us-east-1) is valid - Verify your API key has the necessary permissions
5. IonQ quantum features not working
- IonQ API key is optional - the system works without it
- Quantum features will be disabled if
IONQ_API_KEYis not set - Verify your IonQ API key at cloud.ionq.com
6. Package version conflicts
# Solution: Recreate the conda environment
conda deactivate
conda env remove -n q-store
conda env create -f environment.yml
conda activate q-store
pip install -e .
pip install pinecone- Check the examples directory for working code
- Review the design document for architecture details
- Submit issues on GitHub
- Contact: yucelz@gmail.com
# Installation and setup
conda activate q-store # Activate environment
python verify_installation.py # Verify installation
pip install -e . # Install package in dev mode
# Running examples
python examples/quantum_db_quickstart.py # Run quickstart demo
python examples/basic_example.py # Run basic example
python examples/financial_example.py # Run financial example
python examples/ml_training_example.py # Run ML training example
python examples/tinyllama_react_training.py # Run TinyLlama fine-tuning
# Testing
pytest tests/ -v # Run all tests
pytest tests/ -v -k "test_state" # Run specific tests
# Maintenance
conda env update -f environment.yml # Update dependencies
conda deactivate # Deactivate environment┌─────────────────────────────────────────────────┐
│ Application Layer │
│ • PyTorch • TensorFlow • JAX │
└────────────────┬────────────────────────────────┘
│
┌────────────────▼────────────────────────────────┐
│ Quantum Training Engine (v4.1) │
│ • QuantumTrainer • QuantumLayer (Fixed) │
│ • QuantumFeatureExtractor (Async) │
│ • QuantumGradientComputer • QuantumOptimizer │
│ • QuantumDataEncoder • Natural Gradients │
└────────────────┬────────────────────────────────┘
│
┌────────────────▼────────────────────────────────┐
│ Async Execution Layer (v4.1 NEW) │
│ • AsyncQuantumExecutor (Non-blocking) │
│ • ResultCache (LRU) • BackendClient (Pool) │
│ • Background Workers • IonQAdapter │
└────────────────┬────────────────────────────────┘
│
┌────────────────▼────────────────────────────────┐
│ Async Storage Layer (v4.1 NEW) │
│ • AsyncBuffer • AsyncMetricsWriter (Parquet) │
│ • CheckpointManager (Zarr) • AsyncLogger │
└────────────────┬────────────────────────────────┘
│
┌────────┴────────┐
│ │
┌───────▼──────┐ ┌─────▼──────────────────────┐
│ Classical │ │ Quantum Backends (v4.1) │
│ Backend │◄──► • IonQ Hardware │
│ │ │ • Cirq Simulators │
│ • Pinecone │ │ • Qiskit Backends │
│ • Vector DB │ │ • Mock Backends │
│ • Zarr/ │ │ • Multi-Backend Orchestr. │
│ Parquet │ │ │
│ • Async I/O │ │ Verification (v4.0): │
│ │ │ • Equivalence • Properties │
│ │ │ │
│ │ │ Profiling (v4.0): │
│ │ │ • CircuitProfiler │
│ │ │ • PerformanceAnalyzer │
│ │ │ │
│ │ │ Visualization (v4.0): │
│ │ │ • CircuitVisualizer │
│ │ │ • StateVisualizer │
└──────────────┘ └─────────────────────────────┘
from q_store import DatabaseConfig
config = DatabaseConfig(
# Pinecone configuration
pinecone_api_key='your_key',
pinecone_environment='us-east-1',
pinecone_index_name='my-index',
pinecone_dimension=768,
pinecone_metric='cosine',
# Quantum backend (hardware-agnostic)
quantum_sdk='cirq', # or 'qiskit'
ionq_api_key='your_ionq_key',
ionq_target='simulator', # or 'qpu.aria', 'qpu.forte'
# Feature flags
enable_quantum=True,
enable_superposition=True,
enable_entanglement=True,
enable_tunneling=True,
# Performance tuning
max_quantum_states=1000,
classical_candidate_pool=1000,
result_cache_ttl=300, # seconds
# Connection pooling
max_connections=50,
connection_timeout=30,
# Coherence settings
default_coherence_time=1000.0, # ms
decoherence_check_interval=60, # seconds
# Monitoring
enable_metrics=True,
enable_tracing=True
)from q_store import TrainingConfig
training_config = TrainingConfig(
# Inherits all DatabaseConfig options
**config,
# ML Training settings
learning_rate=0.01,
batch_size=32,
epochs=100,
optimizer='adam', # 'adam', 'sgd', 'rmsprop'
# Quantum model architecture
n_qubits=10,
circuit_depth=4,
entanglement='linear', # 'linear', 'circular', 'full'
# Data encoding
encoding_method='amplitude', # or 'angle'
# v3.4 Performance Optimizations (NEW)
use_batch_api=True, # Enable IonQ batch API (8x faster)
use_native_gates=True, # Enable native gate compilation (30% faster)
enable_smart_caching=True, # Enable circuit caching (10x faster)
connection_pool_size=5, # HTTP connection pool size
adaptive_batch_sizing=True, # Automatic batch size optimization
# Regularization
quantum_regularization=True,
entanglement_penalty=0.01,
# Checkpointing
checkpoint_interval=10, # epochs
save_best_only=True,
# Advanced features
enable_data_augmentation=True,
enable_adversarial_training=False,
enable_transfer_learning=False
)async def initialize()
Initialize database and start background tasks.
async def close()
Close database and cleanup resources.
async def connect()
Context manager for database lifecycle.
async def insert(id, vector, contexts=None, coherence_time=None, metadata=None)
Insert vector with optional quantum superposition.
async def insert_batch(vectors: List[Dict])
Batch insert for efficiency.
async def query(vector, context=None, mode=QueryMode.BALANCED, enable_tunneling=None, top_k=10)
Query database with quantum enhancements.
async def store_training_data(dataset_id, data, labels, metadata=None)
Store training dataset in quantum database.
async def load_training_batch(dataset_id, batch_size, shuffle=True)
Load training batch from quantum database.
create_ml_data_loader(dataset_id, batch_size=32, shuffle=True)
Create async data loader for training.
get_metrics() -> Metrics
Get performance metrics.
get_stats() -> Dict
Get comprehensive database statistics.
QuantumLayer
__init__(n_qubits, depth, backend, entanglement='linear')async forward(x: np.ndarray) -> np.ndarray- Forward pass through quantum circuit
QuantumTrainer
__init__(config, backend_manager)async train_epoch(model, data_loader, epoch)- Train for one epoch (8x faster in v3.4)async train(model, train_loader, val_loader=None, epochs=100)- Full training loopasync validate(model, val_loader)- Validation pass
QuantumGradientComputer
async compute_gradients(circuit, loss_function, current_params)- Compute quantum gradients using parameter shift rule
QuantumDataEncoder
amplitude_encode(data: np.ndarray) -> QuantumCircuit- Amplitude encodingangle_encode(data: np.ndarray, n_qubits: int) -> QuantumCircuit- Angle encoding
QuantumOptimizer
__init__(learning_rate, method='adam')step(parameters, gradients)- Update parameters
IonQBatchClient (NEW v3.4)
__init__(api_key, connection_pool_size=5)async submit_batch(circuits: List[Circuit])- Submit circuits in parallelasync get_results(job_ids: List[str])- Retrieve results efficiently
SmartCircuitCache (NEW v3.4)
__init__(max_size=1000)get_or_build(template_key, parameters)- Get cached or build circuitget_statistics()- Cache performance metrics
IonQNativeGateCompiler (NEW v3.4)
__init__()compile_to_native(circuit: Circuit)- Compile to GPi, GPi2, MS gatesestimate_fidelity(circuit: Circuit)- Estimate gate fidelity
QuantumHPOSearch
__init__(config, search_space, backend_manager)async search(model_class, dataset_id, metric, n_trials, use_quantum_annealing=True)- Hyperparameter search
CheckpointManager
__init__(config)async save(model, epoch, metrics)- Save model checkpointasync load(checkpoint_name)- Load model checkpoint
MetricsTracker
__init__(config)log_metrics(epoch, metrics)- Log training metricsget_history()- Get training history
PRECISE: High precision, narrow resultsBALANCED: Balanced precision and coverageEXPLORATORY: Broad exploration, diverse results
CREATED: Newly created stateACTIVE: Active coherent stateMEASURED: State has been measuredDECOHERED: State has lost coherenceARCHIVED: Archived state
Q-Store integrates with multiple quantum backends for hardware-agnostic ML training.
Supported SDKs:
cirq- Google Cirq with IonQ integrationqiskit- IBM Qiskit with IonQ integration- Mock simulators for development and testing
Supported Targets:
simulator- Free simulator (unlimited use)qpu.aria- 25 qubits, #AQ 25 (production)qpu.forte- 36 qubits, #AQ 36 (advanced)qpu.forte.1- 36 qubits, enterprise
IonQ Advantages:
- All-to-all qubit connectivity (no SWAP gates)
- High-fidelity native gates (>99.5% single-qubit, >97% two-qubit)
- Native gate set: RX, RY, RZ, XX (Mølmer-Sørensen)
- Optimal for variational quantum circuits in ML training
Backend Selection: The BackendManager automatically selects the best backend based on:
- Circuit requirements (qubit count, depth)
- Cost constraints
- Latency requirements
- Backend availability
| Operation | Classical | Quantum (v3.3.1) | Quantum (v3.4) | v3.4 Speedup |
|---|---|---|---|---|
| Vector Search | O(N) | O(√N) | O(√N) | Quadratic |
| Pattern Discovery | O(N·M) | O(√(N·M)) | O(√(N·M)) | Quadratic |
| Correlation Updates | O(K²) | O(1) | O(1) | K² (entanglement) |
| Storage Compression | N vectors | log₂(N) qubits | log₂(N) qubits | Exponential |
| Gradient Computation | O(N) backprop | O(N) param shift | O(N) param shift | Comparable* |
| Circuit Execution | Sequential | Sequential | Parallel Batch | 8-12x faster |
| HPO Search | O(M·N) grid | O(√M) tunneling | O(√M) tunneling | Quadratic |
*Quantum gradients enable exploration of non-convex loss landscapes
**v3.4 achieves 8-12x speedup through batch API, native gates, and smart caching
- Quantum neural network training
- Hybrid classical-quantum models
- Transfer learning with quantum layers
- Hyperparameter optimization
- Adversarial training
- Few-shot learning
- Portfolio correlation management
- Crisis pattern detection
- Time-series prediction
- Risk analysis
- Context-aware training data selection
- Hyperparameter optimization
- Multi-task learning
- Active learning
- User preference modeling
- Item similarity
- Cold start problem
- Session-based recommendations
- Molecular similarity search
- Protein structure comparison
- Drug discovery
- Materials science
Contributions are welcome! Please feel free to submit a Pull Request.
See LICENSE file for details.
make install-dev # Install with development dependencies
make test # Run tests
make format # Auto-format code
make lint # Run linters
make verify # Run all checksFor support, submit issues in this repository or contact yucelz@gmail.com.
If you use Q-Store in your research, please cite:
@software{qstore2025,
title={Q-Store: Quantum-Native Database Architecture v3.4},
author={Yucel Zengin},
year={2025},
url={https://github.com/yucelz/q-store}
}- NEW: AsyncQuantumExecutor - Non-blocking circuit execution (10-20x throughput)
- NEW: Async Storage System - Zero-blocking Zarr/Parquet with background writers
- NEW: ResultCache - LRU cache for instant repeated circuit results
- NEW: Connection Pooling - Multi-connection backend clients
- NEW: IonQAdapter - Seamless IonQ hardware backend integration
- FIXED: PyTorch QuantumLayer - n_parameters attribute and async execution
- ENHANCED: QuantumFeatureExtractor - Async execution and multi-basis measurements
- FOUNDATION: Built on v4.0.0 verification/profiling/visualization (144 tests)
- ARCHITECTURE: 145 Python files across 29 specialized modules
- PERFORMANCE: 10-20x circuit throughput improvement over v4.0
- STORAGE: Zero-blocking async I/O for all storage operations
- PRODUCTION: Complete async/await API with comprehensive error handling
- NEW: Verification Module - Circuit equivalence, property verification, formal analysis
- NEW: Profiling Module - Performance profiling, optimization benchmarks
- NEW: Visualization Module - Circuit diagrams, state visualization, Bloch sphere
- NEW: 144 comprehensive tests for verification/profiling/visualization
- NEW: Integration tests for end-to-end workflows
- NEW: Benchmark suite for performance tracking
- IMPROVED: Complete examples directory with basic/advanced/QML/chemistry/error-correction
- PERFORMANCE: Benchmark baselines established for regression testing
- NEW: Multi-backend orchestration for distributed quantum computing
- NEW: Adaptive circuit optimization with dynamic simplification
- NEW: Adaptive shot allocation for smart resource management
- NEW: Natural gradient descent for improved convergence
- PERFORMANCE: 2-3x throughput improvement via multi-backend distribution
- PERFORMANCE: 30-40% faster execution with adaptive optimization
- NEW: IonQ Batch API integration for parallel circuit submission
- NEW: Smart circuit caching with template-based caching
- NEW: IonQ native gate compilation (GPi, GPi2, MS gates)
- NEW: Connection pooling for persistent HTTP connections
- PERFORMANCE: 8-12x faster training (29 min → 3.3 min)
- PERFORMANCE: 5-8 circuits/second (up from 0.5-0.6)
- New: Hardware-agnostic quantum ML training infrastructure
- New: QuantumLayer - Variational quantum circuit layers
- New: QuantumTrainer - Training orchestration with quantum gradients
- New: QuantumGradientComputer - Parameter shift rule implementation
- New: QuantumDataEncoder - Amplitude and angle encoding
- New: QuantumOptimizer - Quantum-aware optimization algorithms
- New: QuantumHPOSearch - Quantum-enhanced hyperparameter optimization
- New: CheckpointManager - Model persistence with quantum states
- New: Support for multiple quantum SDKs (Cirq, Qiskit)
- New: Hybrid classical-quantum model support
- New: Quantum transfer learning capabilities
- New: Quantum data augmentation
- New: Quantum regularization techniques
- New: Training data management in quantum database
- New: BackendManager - Intelligent backend selection
- Improved: Database API extended for ML training workflows
- Improved: StateManager for model parameter storage
- New: Modern Python project structure with src/ layout
- New: pyproject.toml-based configuration (PEP 621)
- New: Modular package organization (core/, backends/, utils/)
- New: Development automation with Makefile
- New: Comprehensive documentation in docs/
- Breaking Changes: Full async/await API
- New: Production-ready architecture with connection pooling
- New: Pinecone integration for classical vector storage
- New: Comprehensive monitoring and metrics
- New: Enhanced configuration system (DatabaseConfig)
- New: Type-safe API with full type hints
- New: Lifecycle management with context managers
- New: Result caching for improved performance
- New: Comprehensive test suite
- Improved: State management with background decoherence loops
- Improved: Error handling and retry logic
- Improved: Documentation and examples
- Initial release
- Basic quantum database features
- IonQ integration
- Simple examples
Note: Q-Store v3.4 delivers production-ready quantum ML training with 8-12x performance improvements over v3.3.1. The system features hardware-agnostic support, seamless integration with classical ML frameworks (PyTorch, TensorFlow, JAX), and optimized IonQ execution through batch API, native gates, and smart caching. For mission-critical applications, additional validation and optimization are recommended.
# Clone repository
git clone https://github.com/yucelz/q-store.git
cd q-store
# Install in development mode with all dependencies
pip install -e ".[dev,backends,all]"
# Install pre-commit hooks
pip install pre-commit
pre-commit installQ-Store uses automated code quality tools configured in pyproject.toml and .pre-commit-config.yaml:
Formatting:
# Format code with black (line length: 100)
black src/q_store
# Sort imports with isort
isort src/q_store --profile blackLinting:
# Run ruff (fast Python linter)
ruff check src/q_store
# Run flake8
flake8 src/q_store
# Run mypy for type checking
mypy src/q_storePre-commit Hooks: All code quality checks run automatically on commit:
- Trailing whitespace removal
- End-of-file fixing
- YAML/JSON/TOML validation
- Black formatting
- Import sorting (isort)
- Ruff linting
- Type checking (mypy)
Run All Checks Manually:
pre-commit run --all-files# Run all tests
pytest
# Run with coverage
pytest --cov=src/q_store --cov-report=html
# Run specific test file
pytest tests/test_quantum_database.py
# Run with specific markers
pytest -m "not slow"
pytest -m integration- Fork the repository
- Create a feature branch:
git checkout -b feature/my-feature - Make your changes
- Run code quality tools:
pre-commit run --all-files - Run tests:
pytest - Commit changes (pre-commit hooks will run automatically)
- Push to your fork:
git push origin feature/my-feature - Create a Pull Request