fix: track tx depth improvements #289
Conversation
📝 Walkthrough

Adds multiple design documents for DashJ sync, networking, and optimizations; introduces an executor to offload CoinJoin message processing; makes minor CoinJoin cleanups; replaces the wallet's manual confidence list with a per-transaction counter; and marks Transaction.isSimple() as deprecated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Node as DashJ Node
    participant PeerGroup as PeerGroup
    participant Peer as Peer (many)
    participant Storage as Headers DB / Block Store
    participant Wallet as Wallet/Validators
    Node->>PeerGroup: start sync (headers-first)
    PeerGroup->>Peer: request headers (GetHeaders)
    Peer-->>PeerGroup: send headers (chunked)
    PeerGroup->>Storage: validate & store headers (checkpoint)
    PeerGroup->>Peer: request MNLIST diffs
    Peer-->>PeerGroup: send SimplifiedMasternodeListDiff
    PeerGroup->>Storage: apply MNLIST / quorum state
    PeerGroup->>Peer: request filtered blocks (GetFilteredBlock / GetBlocks)
    Peer-->>PeerGroup: stream filtered blocks
    PeerGroup->>Wallet: deliver blocks/txs for validation (InstantSend/ChainLocks)
    Wallet-->>Storage: commit validated blocks
    PeerGroup->>Node: progress updates / complete when headers==blocks
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @designdocs/proposals/reverse-sync.md:
- Around line 1-1034: PR title is misleading: it references "track tx depth
improvements" but the changes add design docs about reverse block
synchronization and networking; update the PR title and description to reflect
the actual content. Change the PR title to something like "docs: add
reverse-sync and blockchain sync design proposals" (or "docs: add blockchain
sync design documents"), and update the PR description/body to list the included
documents (e.g., reverse-sync.md and related network/proposal docs), summarizing
key changes so reviewers know this is documentation-only and not a tx-depth code
fix; ensure any commit messages referencing tx-depth are corrected or split into
a separate PR if there are actual code changes for transaction depth tracking.
🧹 Nitpick comments (8)
designdocs/proposals/reverse-sync.md (2)
243-274: Reassess severity after mitigation strategy.

Pitfalls #6 and #7 are marked as CRITICAL, but the Phase 4 implementation (lines 689-748) proposes using the already-synced masternode list from DIP-16's MNLIST stage as a mitigation. Consider adding a "Severity After Mitigation" assessment for these pitfalls, similar to how pitfalls #1, #3, #8, and #11 received updated severity ratings.
580-612: Transaction validation complexity may be understated.

The topological sorting of transactions (line 600) is marked as HIGH complexity, which is appropriate. However, the simplified code example doesn't fully convey the challenges of:
- Handling transaction chains across multiple blocks
- Detecting circular dependencies
- Managing mempool transactions that reference block transactions
- Dealing with transaction malleability
Consider noting that this component would require substantial testing with real transaction patterns.
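For reviewers who want to see the shape of the problem, here is a minimal sketch of such an ordering pass (hypothetical `Tx` interface, not DashJ code; assumes all in-batch dependencies are resolvable):

```java
import java.util.*;

// Hypothetical sketch (not DashJ code): order a batch of transactions so that
// outputs are created before they are spent. Assumes Tx exposes its own id
// and the ids of the transactions its inputs spend.
final class TxTopoSort {
    interface Tx {
        String id();
        List<String> spentTxIds();
    }

    static List<Tx> sort(Collection<Tx> txs) {
        Map<String, Tx> byId = new HashMap<>();
        for (Tx tx : txs) byId.put(tx.id(), tx);

        Map<String, Integer> inDegree = new HashMap<>();
        Map<String, List<String>> children = new HashMap<>();
        for (Tx tx : txs) {
            inDegree.putIfAbsent(tx.id(), 0);
            for (String parent : tx.spentTxIds()) {
                if (!byId.containsKey(parent)) continue; // parent already confirmed elsewhere
                children.computeIfAbsent(parent, k -> new ArrayList<>()).add(tx.id());
                inDegree.merge(tx.id(), 1, Integer::sum);
            }
        }

        // Kahn's algorithm: repeatedly emit transactions with no unmet dependencies.
        Deque<String> ready = new ArrayDeque<>();
        inDegree.forEach((id, d) -> { if (d == 0) ready.add(id); });

        List<Tx> ordered = new ArrayList<>();
        while (!ready.isEmpty()) {
            String id = ready.poll();
            ordered.add(byId.get(id));
            for (String child : children.getOrDefault(id, Collections.emptyList())) {
                if (inDegree.merge(child, -1, Integer::sum) == 0) ready.add(child);
            }
        }

        // Anything left over participates in a cycle; impossible for valid
        // transactions, but malleated or corrupt data must be handled.
        if (ordered.size() != txs.size())
            throw new IllegalStateException("circular transaction dependency detected");
        return ordered;
    }
}
```

A leftover transaction with nonzero in-degree doubles as the circular-dependency detection mentioned above.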
designdocs/proposals/network-optimization-strategies.md (5)
173-231: Batch size increase requires peer compatibility testing.

The proposed increase from 500 to 1000-2000 blocks per request is promising. However, the actual limit depends not just on the protocol's MAX_INV_SIZE (50,000) but also on:
- Peer implementation limits
- Peer memory constraints
- Timeout thresholds for large batches
- Network MTU and fragmentation
The risks section mentions "need to verify peer compatibility" but consider adding more emphasis on testing with diverse peer implementations (Core, Dash Core versions, etc.) to avoid interoperability issues.
451-696: Excellent multi-peer design with coordinator pattern.

The parallel download design is well-architected:
- Correctly identifies that NIO already provides parallel network I/O
- Proposes sequential block processing via merge queue to maintain blockchain invariants
- Includes handling for out-of-order block arrival
However, note the interaction with the threading model: if the single NIO thread is busy processing blocks in the merge queue, it cannot receive data from peers. This reinforces the recommendation from `peer-networking-threading-model.md` to move message processing to a thread pool (Option 1). Consider cross-referencing that document here.
1260-1318: Implementation timeline may be optimistic.

The roadmap structure is clear, but the time estimates appear aggressive:
Phase 1 (1-2 weeks): TCP optimizations, peer performance tracking, AND GetBlocksMessage pipelining. For production-quality code with comprehensive testing, 1-2 weeks for all three seems optimistic.
Phase 3 (5-8 weeks): Multi-peer parallel downloads involves significant complexity (coordinator, merge queue, range assignment, error handling, peer disconnection, race conditions). The implementation section (lines 584-658) shows the coordinator alone is ~75 lines, and that's simplified. 4 weeks seems tight for production quality.
Consider adding 50-100% buffer to time estimates, especially for Phase 3. Also note that the percentage improvements assume optimizations are independent and additive—actual gains may be lower due to interactions.
1322-1361: Testing strategy could be more specific for complex scenarios.

The outlined tests are a good start, but given the complexity of multi-peer coordination and threading interactions, consider adding specific test scenarios:
Concurrency & Error Handling:
- Peer disconnection during batch download
- Slow peer causing download stalls
- Race conditions in merge queue
- Out-of-order block arrival
Performance & Resource:
- Memory pressure with 3 peers downloading simultaneously
- CPU utilization on single-core Android devices
- Battery impact measurement
- Network bandwidth saturation
Edge Cases:
- Chain reorganization during parallel download
- Duplicate blocks from multiple peers
- Block validation failure mid-sync
1398-1425: Well-designed configuration options with appropriate defaults.

The `NetworkConfig` class provides good tuning flexibility while marking experimental features (parallel download) as opt-in. This allows gradual rollout and A/B testing. Consider adding configuration for the following (sketched after this list):
- Minimum peer performance threshold before switching
- Maximum memory for block merge queue
- Timeout values for slow peers
- Debug/verbose logging toggle
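A sketch of what those additions might look like (names and defaults are illustrative only, not part of the proposed `NetworkConfig`):

```java
// Hypothetical additions to the proposed NetworkConfig; field names and
// default values are illustrative, not part of the actual proposal.
public class NetworkConfigExtras {
    // Minimum blocks/sec a peer must sustain before we switch away from it.
    public double minPeerBlocksPerSecond = 0.5;

    // Cap on memory held by the block merge queue during parallel download.
    public int maxMergeQueueBytes = 32 * 1024 * 1024; // 32 MB

    // Declare a peer stalled if no data arrives for this long.
    public long slowPeerTimeoutMs = 15_000;

    // Verbose logging for download coordination (off by default).
    public boolean debugDownloadLogging = false;
}
```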
designdocs/blockchain-sync-bip37-dip16.md (1)
1-1020: Optional: Address markdown linting issues for improved formatting consistency.

The static analysis tool flagged a few minor markdown formatting issues:
- Missing language specifiers on fenced code blocks (lines 212, 288): adding language identifiers improves syntax highlighting and parsing.
- Emphasis used instead of headings (lines 955, 961): using proper markdown heading syntax (e.g., `###`) instead of bold text improves document structure parsing.

These are purely stylistic issues that don't affect readability, but fixing them would improve markdown linting compliance.
Example fixes for code block language specifiers

When you have a code block, specify the language:

````diff
-```
+```text
 Some ASCII diagram or text content
````

For emphasis that should be headings, use proper heading syntax:

```diff
-**DIP-16 Headers-First Synchronization**
+### DIP-16 Headers-First Synchronization
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- designdocs/blockchain-sync-bip37-dip16.md
- designdocs/peer-networking-threading-model.md
- designdocs/proposals/network-optimization-strategies.md
- designdocs/proposals/reverse-sync.md
🧰 Additional context used
🪛 LanguageTool
designdocs/peer-networking-threading-model.md
[grammar] ~334-~334: Ensure spelling is correct
Context: ...ding utilities ## Recommendations For dashj optimization, consider: 1. **Profile f...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~343-~343: Ensure spelling is correct
Context: ...CPU) - Message processing is fast (< 1ms per message) - Memory is constrained...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
designdocs/proposals/reverse-sync.md
[style] ~119-~119: This phrase is redundant. Consider writing “advances”.
Context: ...forward-only assumptions: - Ring cursor advances forward: `setRingCursor(buffer, buffer.position...
(ADVANCE_FORWARD)
[style] ~713-~713: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ...y need to skip ChainLock validation for very old blocks Implementation: ```java pub...
(EN_WEAK_ADJECTIVE)
[grammar] ~1018-~1018: Use a hyphen to join words.
Context: .... Alternative UX Improvements (lower hanging fruit): - Show estimated bala...
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
designdocs/proposals/network-optimization-strategies.md
7-7: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
542-542: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
559-559: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
765-765: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1149-1149: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1174-1174: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1193-1193: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1263-1263: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1278-1278: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1289-1289: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
1431-1431: Bare URL used
(MD034, no-bare-urls)
1432-1432: Bare URL used
(MD034, no-bare-urls)
1433-1433: Bare URL used
(MD034, no-bare-urls)
1434-1434: Bare URL used
(MD034, no-bare-urls)
designdocs/blockchain-sync-bip37-dip16.md
212-212: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
288-288: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
955-955: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
961-961: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: JAVA 11 OS macOS-latest Gradle
- GitHub Check: Analyze (java)
- GitHub Check: JAVA 11 OS ubuntu-latest Gradle
🔇 Additional comments (15)
designdocs/proposals/reverse-sync.md (4)
1-29: Clear motivation and well-structured overview.

The document effectively establishes the user experience benefits of reverse synchronization and correctly identifies the DIP-16 BLOCKS stage as the modification point. The proposed approach is logical.
32-73: Excellent insight about DIP-16 headers-first enabling reverse sync.

The recognition that having complete headers before the BLOCKS stage fundamentally changes the feasibility of reverse synchronization is a critical architectural insight. The code examples clearly demonstrate how headers enable canonical chain knowledge without block bodies.
949-999: Excellent pragmatic recommendation for hybrid approach.

The two-phase sync strategy (reverse preview of 500-1000 blocks + forward historical sync) is a well-balanced solution that:
- Achieves the primary UX goal (fast recent transaction visibility)
- Avoids most critical pitfalls by limiting reverse sync scope
- Reuses existing infrastructure
- Provides graceful degradation
The complexity/risk/benefit assessment is realistic. This appears to be the most implementable approach in the document.
1002-1027: Well-reasoned conclusion with practical recommendations.

The three-tiered recommendation structure (production hybrid, research prototype, alternative improvements) provides clear guidance for implementation priorities. The acknowledgment that the hybrid approach "balances innovation with pragmatism" accurately reflects the analysis.
designdocs/peer-networking-threading-model.md (4)
7-151: Accurate description of single-threaded NIO architecture.

The document clearly explains how NioClientManager uses a single selector thread for all peer connections, with serialized message processing. The code examples from actual source files (NioClientManager, PeerSocketHandler, Peer) provide concrete evidence of the architecture. The identification that "large data transfers from different peers cannot happen concurrently" is a key insight for optimization work.
192-215: Performance implications align with optimization proposals.

The identified disadvantages (serialized large data transfers, head-of-line blocking, no parallelism) directly support the optimization strategies proposed in `network-optimization-strategies.md`. The observation that "when downloading a 2MB block from Peer A, Peer B's messages wait" explains the 70% network wait time documented in the optimization analysis. This cross-document consistency strengthens both documents.
231-322: Well-explained alternative architectures.

The three alternatives are clearly presented with appropriate pros/cons. Option 1 (message processing thread pool) is correctly recommended as it maintains the NIO model's efficiency while addressing the parallelism limitation.

Note: the network-optimization-strategies.md document discusses multi-peer parallel downloads (lines 451-696). That document correctly identifies that bitcoinj's NIO architecture already supports concurrent network I/O, and the challenge is coordinating block processing, which aligns with this document's recommendation.
332-348: Pragmatic recommendations with appropriate caveats.

The emphasis on profiling first (measure I/O wait vs CPU processing time) is sound advice. The conditions for keeping vs changing the threading model are reasonable and reflect engineering pragmatism.
designdocs/proposals/network-optimization-strategies.md (2)
3-51: Excellent data-driven performance analysis.

The analysis clearly identifies network wait time (70%) as the primary bottleneck, with a detailed breakdown into request latency (403.74s) and inter-block streaming delays (~2010s). This data-driven approach provides strong justification for the proposed optimizations and helps prioritize them.
The recognition that message deserialization (1.9%) and disk I/O (0.6%) are NOT bottlenecks is equally valuable for avoiding premature optimization.
94-157: Remove the proposed `blockChainDownload()` call: this method does not exist in the codebase.

The review proposes executing `Threading.THREAD_POOL.execute(() -> blockChainDownload(Sha256Hash.ZERO_HASH))`, but there is no public `blockChainDownload()` method. The codebase only contains:
- `blockChainDownloadLocked(Sha256Hash toHash)`: private method requiring the lock to be held
- `startBlockChainDownload()`: public method that acquires the lock and calls `blockChainDownloadLocked()`

To implement pipelining from a background thread, call the existing public API:

```java
Threading.THREAD_POOL.execute(() -> {
    try {
        peer.startBlockChainDownload();
    } catch (Exception e) {
        log.error("Error requesting next block batch", e);
    }
});
```

`Threading.THREAD_POOL` is available and already used elsewhere in the codebase (PaymentSession, TransactionBroadcast). The `startBlockChainDownload()` method properly handles locking internally, making it thread-safe to call from background threads.

Likely an incorrect or invalid review comment.
designdocs/blockchain-sync-bip37-dip16.md (5)
1-486: Excellent comprehensive documentation of DIP-16 Headers-First Synchronization.

The multi-stage sync architecture is clearly explained with:
- Well-structured six-stage progression (OFFLINE through COMPLETE)
- Accurate code snippets with file/line references
- Clear ASCII flow diagram showing stage transitions
- Detailed explanation of why headers-first is important for Dash (masternodes, LLMQ quorums)
- Good coverage of checkpoint security and progress tracking
The integration of masternode list synchronization and LLMQ quorum data before block download is well-articulated, providing clear context for developers.
489-733: Well-documented BIP37 bloom filter implementation and sync flow.

Strong coverage of:
- Complete bloom filter lifecycle with clear diagrams
- Filter parameters and privacy considerations
- Recalculation triggers (new keys, scripts, false positive rate threshold)
- Protocol message sequences between client and peer
- Block locator construction with exponential backoff
The explanation of filter merging via `FilterMerger` and distribution to all peers provides good insight into the architecture.
736-876: Thorough coverage of optimization and edge case handling.

Key strengths:
- Fast catchup optimization clearly explained with transition logic at `fastCatchupTimeSecs`
- Filter exhaustion detection and recovery process well-documented
- Critical detail: ping/pong ensures the filter is applied before resuming download
- The `awaitingFreshFilter` queue mechanism prevents missed transactions

These edge cases are important for reliability, and the documentation provides clear implementation guidance.
879-957: Strong thread safety documentation and useful method references.

Highlights:
- Clear explanation of ReentrantLock usage in both Peer and PeerGroup
- Comprehensive list of guarded variables with annotations
- Thread-safe collection choices documented (CopyOnWriteArrayList)
- Executor pattern explained for avoiding lock contention
- Method reference tables provide quick lookup with thread safety notes
This section will help developers understand and maintain thread safety in the synchronization code.
960-1020: Excellent summary and comprehensive references.

The summary effectively synthesizes:
- DIP-16 multi-stage synchronization features
- BIP37 bloom filtering capabilities
- Core synchronization features
- Dash-specific functionality (masternodes, LLMQ, InstantSend, ChainLocks)
The reference section provides complete links to specifications (BIP37, DIP-3/4/6/16) and source files, enabling readers to dive deeper into specific areas.
## Priority 6: Headers-First Download with Parallel Body Fetching ⭐

### Problem
- Current BIP37 approach downloads filtered blocks sequentially
- Cannot parallelize easily because we don't know future block hashes

### Solution
1. Download all block headers first (very fast - headers are only 80 bytes)
2. Once headers are known, fetch block bodies in parallel from multiple peers

### Header Storage Challenge

**Problem**: SPVBlockStore only maintains ~5000 recent headers. For 1.39M blocks, we need a different storage strategy.

**Header Storage Requirements**:
- 1.39M headers × 80 bytes = **111 MB** of raw header data
- Plus indexes, metadata, and overhead
- Need fast random access by height and hash
- Must work on mobile devices with limited resources

### Header Storage Options

#### Option 1: Streaming Headers (Recommended for Mobile) ⭐⭐⭐

**Concept**: Don't store all headers permanently - just verify the chain as headers arrive, then discard old headers.

**Implementation**:
```java
public class StreamingHeaderValidator {
    private final NetworkParameters params;
    private StoredBlock checkpoint;                // Last known checkpoint
    private StoredBlock currentTip;                // Current chain tip
    private LinkedList<StoredBlock> recentHeaders; // Keep last 5000

    // Verify header chain without storing everything
    public void processHeader(Block header) throws VerificationException {
        // 1. Verify header connects to previous
        verifyHeaderConnects(header);

        // 2. Verify proof of work
        verifyProofOfWork(header);

        // 3. Update tip (build() accumulates this header's work onto the parent's chainwork)
        currentTip = currentTip.build(header);

        // 4. Add to recent headers (keep last 5000)
        recentHeaders.addLast(currentTip);
        if (recentHeaders.size() > 5000) {
            recentHeaders.removeFirst();
        }

        // 5. Periodically save checkpoint
        if (currentTip.getHeight() % 10000 == 0) {
            checkpoint = currentTip;
            saveCheckpoint(checkpoint);
        }
    }

    // After headers sync, we know:
    // - Final chain tip (verified)
    // - Last 5000 headers (in memory)
    // - Checkpoints every 10K blocks (on disk)

    // This is enough to fetch block bodies
}
```
**Phase 1: Headers Download with Streaming**
```java
// Request all headers and verify as they arrive
StreamingHeaderValidator validator = new StreamingHeaderValidator(params);

Sha256Hash startHash = blockChain.getChainHead().getHeader().getHash();
Sha256Hash stopHash = Sha256Hash.ZERO_HASH; // Get all headers

while (!validator.isFullySynced()) {
    GetHeadersMessage request = new GetHeadersMessage(params, startHash, stopHash);
    peer.sendMessage(request);

    // As headers arrive, validate and discard
    List<Block> headers = waitForHeaders();
    for (Block header : headers) {
        validator.processHeader(header);
    }

    // Update start for next batch
    startHash = validator.getCurrentTip().getHeader().getHash();
}

// Now we have verified chain tip and recent headers
// Can fetch bodies starting from our last stored block
```

**Pros**:
- ✅ Minimal memory usage (~400KB for 5000 headers)
- ✅ Minimal disk usage (checkpoints only)
- ✅ Perfect for mobile/Android
- ✅ Can resume from checkpoints on interruption

**Cons**:
- ❌ Can't randomly access old headers
- ❌ Must fetch bodies sequentially from last stored block
- ❌ Limits parallelization (can only fetch forward from known blocks)

---
#### Option 2: Temporary File-Backed Header Cache ⭐⭐

**Concept**: Store all headers temporarily in a memory-mapped file, discard after body sync completes.

**Implementation**:
```java
public class TemporaryHeaderStore implements AutoCloseable {
    private static final int HEADER_SIZE = 80;
    private final NetworkParameters params; // needed to deserialize headers
    private final File tempFile;
    private final RandomAccessFile raf;
    private final MappedByteBuffer buffer;
    private final Map<Sha256Hash, Integer> hashToOffset;

    public TemporaryHeaderStore(NetworkParameters params, int estimatedHeaders) throws IOException {
        this.params = params;

        // Create temp file
        tempFile = File.createTempFile("headers-", ".tmp");
        tempFile.deleteOnExit();

        // Map file to memory
        raf = new RandomAccessFile(tempFile, "rw");
        long fileSize = (long) estimatedHeaders * HEADER_SIZE;
        buffer = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, fileSize);

        hashToOffset = new HashMap<>(estimatedHeaders);
    }

    public void storeHeader(int height, Block header) throws IOException {
        int offset = height * HEADER_SIZE;
        buffer.position(offset);

        byte[] headerBytes = header.bitcoinSerialize();
        buffer.put(headerBytes, 0, HEADER_SIZE);

        hashToOffset.put(header.getHash(), offset);
    }

    public Block getHeader(int height) throws IOException {
        int offset = height * HEADER_SIZE;
        buffer.position(offset);

        byte[] headerBytes = new byte[HEADER_SIZE];
        buffer.get(headerBytes);

        return new Block(params, headerBytes);
    }

    public Block getHeaderByHash(Sha256Hash hash) throws IOException {
        Integer offset = hashToOffset.get(hash);
        if (offset == null) return null;

        buffer.position(offset);
        byte[] headerBytes = new byte[HEADER_SIZE];
        buffer.get(headerBytes);

        return new Block(params, headerBytes);
    }

    @Override
    public void close() {
        buffer.clear();
        try { raf.close(); } catch (IOException e) { /* best-effort cleanup */ }
        tempFile.delete();
    }
}
```

**Usage**:
```java
// Phase 1: Download and store all headers
try (TemporaryHeaderStore headerStore = new TemporaryHeaderStore(params, 1_400_000)) {
    // Download all headers
    for (Block header : downloadAllHeaders()) {
        headerStore.storeHeader(currentHeight, header);
        currentHeight++;
    }

    // Phase 2: Now fetch bodies in parallel using stored headers
    ParallelBodyDownloader downloader = new ParallelBodyDownloader(headerStore);
    downloader.downloadBodies(startHeight, endHeight, peers);

} // Auto-cleanup temp file
```

**Pros**:
- ✅ Enables full parallelization (random access to any header)
- ✅ Memory-mapped I/O is fast
- ✅ Auto-cleanup on close
- ✅ ~111MB disk usage (reasonable)

**Cons**:
- ❌ Requires 111MB temporary disk space
- ❌ Memory-mapped files may not work well on all Android versions
- ❌ Hash lookup requires in-memory HashMap (~50MB)

---
#### Option 3: Sparse Header Storage with Checkpoints ⭐⭐⭐

**Concept**: Store checkpoints (every 2,016 blocks) + recent headers + headers we need for the current download.

**Implementation**:
```java
public class SparseHeaderStore {
    private static final int CHECKPOINT_INTERVAL = 2016; // ~2 weeks of Bitcoin blocks

    // Permanent storage
    private final Map<Integer, StoredBlock> checkpoints; // Every 2016 blocks
    private final SPVBlockStore recentHeaders;           // Last 5000 blocks

    // Temporary active range (for current parallel download)
    private final Map<Integer, StoredBlock> activeRange;
    private int activeRangeStart = 0;
    private int activeRangeEnd = 0;

    public void downloadHeaders() {
        int currentHeight = 0;

        while (currentHeight < targetHeight) {
            List<Block> headers = requestHeaders(currentHeight);

            for (Block header : headers) {
                // Always verify
                verifyHeader(header);

                // Store checkpoint?
                if (currentHeight % CHECKPOINT_INTERVAL == 0) {
                    checkpoints.put(currentHeight, new StoredBlock(header, work, currentHeight));
                }

                // Store recent?
                if (targetHeight - currentHeight < 5000) {
                    recentHeaders.put(new StoredBlock(header, work, currentHeight));
                }

                currentHeight++;
            }
        }
    }

    public void loadRangeForDownload(int startHeight, int endHeight) {
        activeRange.clear();
        activeRangeStart = startHeight;
        activeRangeEnd = endHeight;

        // Re-download just the headers we need for this range
        List<Block> rangeHeaders = requestHeaders(startHeight, endHeight);
        for (int i = 0; i < rangeHeaders.size(); i++) {
            activeRange.put(startHeight + i,
                new StoredBlock(rangeHeaders.get(i), work, startHeight + i));
        }
    }

    public void clearActiveRange() {
        activeRange.clear();
    }

    public StoredBlock getHeader(int height) {
        // Check active range first
        if (height >= activeRangeStart && height <= activeRangeEnd) {
            return activeRange.get(height);
        }

        // Check recent headers
        StoredBlock recent = recentHeaders.get(height);
        if (recent != null) return recent;

        // Check checkpoints
        return checkpoints.get(height);
    }
}
```

**Usage**:
```java
SparseHeaderStore headerStore = new SparseHeaderStore();

// Phase 1: Download all headers, store checkpoints and recent
headerStore.downloadHeaders(); // Stores ~1400 checkpoints + 5000 recent

// Phase 2: Download bodies in ranges
for (int rangeStart = 0; rangeStart < targetHeight; rangeStart += 50000) {
    int rangeEnd = Math.min(rangeStart + 50000, targetHeight);

    // Load headers for this range (re-download if needed)
    headerStore.loadRangeForDownload(rangeStart, rangeEnd);

    // Download bodies for this range
    downloadBodiesInRange(rangeStart, rangeEnd);

    // Clear range to free memory
    headerStore.clearActiveRange();
}
```

**Pros**:
- ✅ Very low memory usage (~2MB: 1400 checkpoints + 5000 recent)
- ✅ Low disk usage (~200KB permanent)
- ✅ Enables range-based parallelization
- ✅ Excellent for mobile

**Cons**:
- ❌ Need to re-download headers for each range
- ❌ More complex logic
- ❌ Slightly slower overall (re-downloading headers)

---
#### Option 4: SQLite Database (Production Quality) ⭐⭐⭐⭐

**Concept**: Use SQLite for efficient, indexed header storage.

**Implementation**:
```java
public class SQLiteHeaderStore {
    private final NetworkParameters params;
    private final Connection db;

    public SQLiteHeaderStore(NetworkParameters params, File dbFile) throws SQLException {
        this.params = params;
        db = DriverManager.getConnection("jdbc:sqlite:" + dbFile.getAbsolutePath());
        createSchema();
    }

    private void createSchema() throws SQLException {
        // One statement per execute() call; many JDBC drivers only run the first
        db.createStatement().execute(
            "CREATE TABLE IF NOT EXISTS headers (" +
            "  height INTEGER PRIMARY KEY," +
            "  hash BLOB NOT NULL," +
            "  header BLOB NOT NULL," +
            "  chainwork BLOB NOT NULL" +
            ")");
        db.createStatement().execute(
            "CREATE INDEX IF NOT EXISTS idx_hash ON headers(hash)");

        // Use WAL mode for better concurrent access
        db.createStatement().execute("PRAGMA journal_mode=WAL;");

        // Optimize for fast inserts during sync
        db.createStatement().execute("PRAGMA synchronous=NORMAL;");
    }

    public void storeHeaders(List<Block> headers, int startHeight) throws SQLException {
        db.setAutoCommit(false);

        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT OR REPLACE INTO headers (height, hash, header, chainwork) VALUES (?, ?, ?, ?)")) {

            for (int i = 0; i < headers.size(); i++) {
                Block header = headers.get(i);
                int height = startHeight + i;

                stmt.setInt(1, height);
                stmt.setBytes(2, header.getHash().getBytes());
                stmt.setBytes(3, header.bitcoinSerialize());
                stmt.setBytes(4, calculateChainWork(header).toByteArray());
                stmt.addBatch();
            }

            stmt.executeBatch();
            db.commit();
        } catch (SQLException e) {
            db.rollback();
            throw e;
        }
    }

    public StoredBlock getHeader(int height) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT header, chainwork FROM headers WHERE height = ?")) {

            stmt.setInt(1, height);
            ResultSet rs = stmt.executeQuery();

            if (rs.next()) {
                byte[] headerBytes = rs.getBytes("header");
                byte[] chainwork = rs.getBytes("chainwork");
                Block header = new Block(params, headerBytes);
                return new StoredBlock(header, new BigInteger(chainwork), height);
            }
            return null;
        }
    }

    public StoredBlock getHeaderByHash(Sha256Hash hash) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT height, header, chainwork FROM headers WHERE hash = ?")) {

            stmt.setBytes(1, hash.getBytes());
            ResultSet rs = stmt.executeQuery();

            if (rs.next()) {
                int height = rs.getInt("height");
                byte[] headerBytes = rs.getBytes("header");
                byte[] chainwork = rs.getBytes("chainwork");
                Block header = new Block(params, headerBytes);
                return new StoredBlock(header, new BigInteger(chainwork), height);
            }
            return null;
        }
    }

    public void compact() throws SQLException {
        // After body sync completes, remove old headers
        // Keep only recent 5000 + checkpoints
        db.createStatement().execute(
            "DELETE FROM headers WHERE " +
            "  height < (SELECT MAX(height) - 5000 FROM headers) AND " +
            "  height % 2016 != 0"); // Keep checkpoints
        db.createStatement().execute("VACUUM;");
    }
}
```

**Usage**:
```java
File headerDb = new File(walletDir, "headers.db");
SQLiteHeaderStore headerStore = new SQLiteHeaderStore(params, headerDb);

// Phase 1: Download and store all headers
int height = 0;
while (height < targetHeight) {
    List<Block> headers = downloadHeaders(height);
    headerStore.storeHeaders(headers, height);
    height += headers.size();
}

// Phase 2: Parallel body download with random access
ParallelBodyDownloader downloader = new ParallelBodyDownloader(headerStore);
downloader.download(0, targetHeight, peers);

// Phase 3: Cleanup
headerStore.compact(); // Reduce to ~200KB
```

**Pros**:
- ✅ Full random access to any header
- ✅ Excellent performance with proper indexes
- ✅ Mature, battle-tested technology
- ✅ Built into Android (no extra dependencies)
- ✅ Can compact after sync completes
- ✅ Transactional integrity

**Cons**:
- ❌ Initial disk usage: ~150MB (compacts to ~200KB after)
- ❌ Slightly higher complexity

---
### Recommended Approach

**For Mobile/Android: Option 3 (Sparse Storage) + Option 1 (Streaming)**

```java
public class MobileHeadersFirstSync {
    private final SparseHeaderStore headerStore;

    public void sync() {
        // Phase 1: Stream headers, store checkpoints + recent
        streamAndValidateHeaders(); // ~2MB storage

        // Phase 2: Download bodies in ranges
        for (BlockRange range : getRanges()) {
            // Re-fetch headers for this range (cheap, headers are small)
            headerStore.loadRangeForDownload(range.start, range.end);

            // Download bodies in parallel (3-5 peers)
            downloadBodiesInParallel(range, 3);

            // Free range headers
            headerStore.clearActiveRange();
        }
    }
}
```

**For Desktop: Option 4 (SQLite)**

Full-featured, reliable, and disk space is not a concern.

---

### Performance Comparison

| Strategy | Memory | Disk | Parallelization | Complexity | Mobile-Friendly |
|----------|--------|------|-----------------|------------|-----------------|
| Streaming (Option 1) | ~400KB | ~200KB | Limited | Low | ✅ Excellent |
| Temp File (Option 2) | ~50MB | ~111MB | Full | Medium | ⚠️ Moderate |
| Sparse (Option 3) | ~2MB | ~200KB | Range-based | Medium | ✅ Excellent |
| SQLite (Option 4) | ~5MB | ~150MB¹ | Full | Medium | ✅ Good |

¹ Compacts to ~200KB after sync

---
**Phase 2: Parallel Body Download**
```java
// Now that we have all block hashes, fetch bodies in parallel
ParallelBlockDownloader downloader = new ParallelBlockDownloader();
downloader.downloadBlockBodies(
    allBlockHashes,
    availablePeers,
    blockChain
);
```
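`ParallelBlockDownloader` is referenced above but not defined in this document. A minimal sketch of the intended shape, assuming bitcoinj's future-based `Peer.getBlock()` API and ignoring batching/windowing concerns:

```java
import com.google.common.util.concurrent.ListenableFuture;
import org.bitcoinj.core.AbstractBlockChain;
import org.bitcoinj.core.Block;
import org.bitcoinj.core.Peer;
import org.bitcoinj.core.Sha256Hash;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: issue block requests round-robin across peers so the
// network I/O overlaps, but hand blocks to the chain strictly in order.
public class ParallelBlockDownloader {
    public void downloadBlockBodies(List<Sha256Hash> hashes,
                                    List<Peer> peers,
                                    AbstractBlockChain chain) throws Exception {
        // Fan out: one getdata per hash, spread across the available peers.
        List<ListenableFuture<Block>> pending = new ArrayList<>(hashes.size());
        for (int i = 0; i < hashes.size(); i++) {
            Peer peer = peers.get(i % peers.size());
            pending.add(peer.getBlock(hashes.get(i)));
        }

        // Merge: futures complete out of order, but consuming them in request
        // order preserves the chain's sequential-processing invariant.
        for (ListenableFuture<Block> f : pending) {
            Block block = f.get(60, TimeUnit.SECONDS); // illustrative timeout
            chain.add(block);
        }
    }
}
```

A production version would cap the number of outstanding requests per peer and reassign ranges on peer failure, as discussed in the multi-peer coordinator design.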
### Expected Impact
- Enable true parallelization
- Headers download: ~100-200s (much faster than full sync)
- Body download: can use all available peers efficiently
- Potential total sync time: **800-1200s** (vs current 2887s)
- **60-70% reduction** in sync time

### Risks & Considerations
- **Major architectural change** - requires significant refactoring
- Changes sync model from BIP37 filtered blocks to headers-first
- May require changes to wallet notification model
- Need to maintain bloom filters during body fetch
- More complex error handling
🛠️ Refactor suggestion | 🟠 Major
Headers-first storage analysis is thorough but understates architectural scope.
The four header storage options are well-analyzed with realistic pros/cons. The recommendations (Sparse for mobile, SQLite for desktop) are sound.
However, Priority 6 involves a fundamental shift from BIP37 filtered blocks to headers-first synchronization. This isn't just an optimization—it's an architectural redesign that affects:
- Bloom filter lifecycle and updates
- Wallet transaction discovery timing
- Confidence/confirmation tracking
- Integration with existing SPV wallet implementations
- Testing and validation requirements
Consider elevating this to a separate design proposal rather than "Priority 6" in an optimization list, given its scope and risk.
# Reverse Block Synchronization for DashJ

## Overview

This document explores the concept of **reverse block synchronization** - downloading filtered blocks in reverse chronological order (newest to oldest) rather than the traditional forward order. The goal is to prioritize recent transactions that are more likely to be relevant to the user, providing faster "time-to-first-transaction" in the wallet UI.

### Motivation

Traditional blockchain sync downloads blocks from genesis (or the fast-catchup point) forward to the chain tip. For users, this means:
- **Long wait time** before seeing recent transactions
- **Poor UX** during initial wallet setup (restoration)
- **Delayed gratification** - users can't see their most recent payments until full sync completes

Reverse sync would:
- **Show recent transactions first** - users see their latest balance quickly
- **Better user experience** - immediate feedback on wallet state
- **Incremental completion** - wallet becomes useful faster

### Proposed Approach

Following DIP-16 headers-first synchronization:
1. **HEADERS stage**: Download all headers forward (as normal) → Establishes chain tip
2. **MNLIST stage**: Sync masternode lists and LLMQ quorums (as normal) → Required for validation
3. **PREBLOCKS stage**: Optional preprocessing (as normal)
4. **BLOCKS stage (MODIFIED)**: Download filtered blocks in **reverse** order, 500 blocks at a time
   - Start from chain tip (headerChain.getChainHead())
   - Request blocks in batches: [tip-499, tip-498, ..., tip-1, tip]
   - Work backwards to the fast-catchup point or genesis

---

## Key Advantage: Headers Already Downloaded (DIP-16)

**CRITICAL INSIGHT**: With DIP-16 headers-first synchronization, by the time we reach the BLOCKS stage, we already have:

✅ **Complete header chain** (`headerChain`) from genesis to tip
✅ **All block hashes** for every block in the canonical chain
✅ **Block heights** mapped to hashes
✅ **Parent-child relationships** (via `prevBlockHash` in headers)
✅ **Cumulative chainwork** for the entire chain
✅ **Checkpoint validation** already passed during HEADERS stage

This **fundamentally changes** the reverse sync feasibility because:

1. **We know the canonical chain structure** - No ambiguity about which blocks to request
2. **We can validate block-to-header matching** - Verify downloaded blocks match their headers
3. **We can build accurate locators** - Reference blocks by header hash even without bodies
4. **We avoid orphan handling complexity** - We know exactly where each block fits
5. **We can defer only transaction validation** - Block structure is already validated

### What Headers Enable

**From headerChain, we can access:**

```java
// Get header for any height (assumes a height-indexed header store)
StoredBlock headerAtHeight = headerChain.getBlockStore().get(targetHeight);

// Get block hash without having the block body
Sha256Hash blockHash = headerAtHeight.getHeader().getHash();

// Get parent hash
Sha256Hash parentHash = headerAtHeight.getHeader().getPrevBlockHash();

// Verify a downloaded block matches its expected header
boolean matches = downloadedBlock.getHash().equals(headerAtHeight.getHeader().getHash());

// Get chainwork for validation
BigInteger chainWork = headerAtHeight.getChainWork();
```

**This solves or mitigates many pitfalls discussed below!**

---
## Critical Pitfalls (Re-evaluated with Headers)

> **Note**: The following pitfalls are re-evaluated considering that we have complete headers from DIP-16.

### 1. **Block Chain Validation Dependency**

**Problem**: Blocks validate against their parent blocks. Validation requires:
- Previous block's hash matches `block.getPrevBlockHash()`
- Cumulative difficulty/chainwork from genesis
- Transaction inputs spending outputs from previous blocks

**Impact**: Cannot validate blocks in reverse order without their parents.

**Severity**: 🔴 **CRITICAL** - Core blockchain invariant violated

**✅ MITIGATED BY HEADERS**: Can validate block hash matches header! Can skip PoW validation.

**With headers, we can**:
```java
// Validate block matches its expected header
StoredBlock expectedHeader = headerChain.getBlockStore().get(blockHeight);
if (!downloadedBlock.getHash().equals(expectedHeader.getHeader().getHash())) {
    throw new VerificationException("Block doesn't match header at height " + blockHeight);
}

// Verify parent relationship (even in reverse)
if (!downloadedBlock.getPrevBlockHash().equals(expectedHeader.getHeader().getPrevBlockHash())) {
    throw new VerificationException("Block parent mismatch");
}

// Skip PoW validation - already done on headers
// Just verify transactions match merkle root
```

**Remaining Issue**: Transaction input validation still requires forward order (outputs before spends).

**Severity After Headers**: 🟡 **MEDIUM** - Block structure validated, only transaction validation deferred

---

### 2. **SPVBlockStore Ring Buffer Design**

**Problem**: SPVBlockStore uses a ring buffer with forward-only assumptions:
- Ring cursor advances forward: `setRingCursor(buffer, buffer.position())`
- Capacity of 5000 blocks (DEFAULT_CAPACITY)
- Wraps around when full
- Get operations assume sequential forward insertion

**Impact**:
- Reverse insertion would corrupt the ring buffer ordering
- Chain head tracking assumes forward progression
- Ring cursor movement would be backwards

**From SPVBlockStore.java:184-200:**
```java
public void put(StoredBlock block) throws BlockStoreException {
    lock.lock();
    try {
        int cursor = getRingCursor(buffer);
        if (cursor == fileLength) {
            cursor = FILE_PROLOGUE_BYTES; // Wrap around
        }
        buffer.position(cursor);
        // Write block at cursor
        setRingCursor(buffer, buffer.position()); // Advance forward
        blockCache.put(hash, block);
    } finally {
        lock.unlock();
    }
}
```

**Severity**: 🔴 **CRITICAL** - Storage layer incompatible with reverse insertion

---
### 3. **Orphan Block Handling Reversal**

**Problem**: In forward sync, orphan blocks are blocks received before their parent. In reverse sync, **every block is initially an orphan** (its parent hasn't been downloaded yet).

**Impact**:
- Orphan block storage would explode in memory
- `tryConnectingOrphans()` assumes forward chain building
- Orphan eviction policies designed for rare edge cases, not normal operation

**From AbstractBlockChain.java:130,468:**
```java
private final LinkedHashMap<Sha256Hash, OrphanBlock> orphanBlocks = new LinkedHashMap<>();

// In normal sync:
orphanBlocks.put(block.getHash(), new OrphanBlock(block, filteredTxHashList, filteredTxn));
tryConnectingOrphans(); // Tries to connect orphans to chain
```

**In reverse sync**: Every single block would be orphaned initially!

**Severity**: 🔴 **CRITICAL** - Memory exhaustion, wrong orphan semantics

**✅ COMPLETELY SOLVED BY HEADERS**: No orphan handling needed!

**With headers, we know**:
```java
// We know exactly which block to request at each height
for (int height = tipHeight; height >= fastCatchupHeight; height -= 500) {
    // Request blocks by height range - no orphans possible
    StoredBlock headerAtHeight = headerChain.getBlockStore().get(height);
    Sha256Hash expectedHash = headerAtHeight.getHeader().getHash();

    // When block arrives, we know exactly where it goes
    // No orphan storage needed!
}
```

**Why this works**:
- Headers define the canonical chain
- We request blocks in a specific order (even if reverse)
- Each block's position is pre-determined by its header
- No ambiguity about block relationships

**Severity After Headers**: 🟢 **SOLVED** - Orphan handling not needed

---
### 4. **Transaction Input Validation**

**Problem**: SPV clients validate transactions by checking:
- Inputs reference outputs from bloom filter-matched transactions
- Outputs are created before being spent
- UTXO set consistency

**Impact**: In reverse order:
- Transaction spends appear **before** the outputs they're spending
- Cannot validate input scripts without the referenced output
- Bloom filter might not include outputs we discover later

**Example**:
```text
Block 1000: TX_A creates output X
Block 1001: TX_B spends output X
Reverse sync receives:
  1. Block 1001 first → TX_B tries to spend X (doesn't exist yet!)
  2. Block 1000 later → TX_A creates X (now B makes sense)
```

**Severity**: 🔴 **CRITICAL** - Transaction validation impossible
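One mitigation sketch (hypothetical, not existing DashJ code): park spends whose referenced outputs have not been seen yet, and complete validation when the older block containing the output arrives.

```java
import java.util.*;

// Hypothetical sketch: during reverse sync, record spends that reference
// outputs we have not seen yet; resolve them once the creating transaction
// arrives in an older block.
final class PendingSpendTracker {
    // outpoint "txid:index" -> the spending txid waiting for it
    private final Map<String, String> pendingSpends = new HashMap<>();
    private final Set<String> knownOutputs = new HashSet<>();

    void onSpend(String spendingTxId, String outpoint) {
        if (!knownOutputs.contains(outpoint)) {
            pendingSpends.put(outpoint, spendingTxId); // defer input validation
        }
    }

    void onOutputCreated(String outpoint) {
        knownOutputs.add(outpoint);
        String spender = pendingSpends.remove(outpoint);
        if (spender != null) {
            // Both sides now exist; full input validation can run here.
        }
    }

    boolean allResolved() {
        return pendingSpends.isEmpty(); // true once every spend has its output
    }
}
```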
---
### 5. **Bloom Filter Incompleteness**

**Problem**: Bloom filters are created based on:
- Known wallet addresses
- Known public keys
- Previously received outputs

**Impact**: In reverse sync:
- Filter may not include outputs we haven't discovered yet
- HD wallet key lookahead might miss transactions
- P2PK outputs wouldn't trigger filter updates properly

**From blockchain-sync-bip37.md**: Filter exhaustion handling assumes forward progression to detect missing keys.

**Severity**: 🟡 **HIGH** - May miss transactions, incorrect balance

---

### 6. **Masternode List State Consistency**

**Problem**: Deterministic masternode lists build forward from genesis:
- `mnlistdiff` messages are incremental forward deltas
- Quorum commitments reference historical block heights
- InstantSend/ChainLock validation requires the correct quorum at block height

**Impact**:
- Cannot validate ChainLocks on blocks without knowing historical quorum state
- InstantSend locks reference quorums that we haven't validated yet (in reverse)
- Masternode list state would be inconsistent going backwards

**Severity**: 🔴 **CRITICAL** - Dash-specific features broken

---

### 7. **LLMQ Quorum Validation**

**Problem**: LLMQ quorums have lifecycle events:
- Formation at specific heights
- Rotation based on block count
- Signature aggregation across time

**Impact**:
- Quorum validation expects forward time progression
- ChainLock signatures reference future (in reverse) quorums
- Cannot verify quorum commitments in reverse

**From QuorumState.java**: Quorum state builds forward through block processing.

**Severity**: 🔴 **CRITICAL** - ChainLock/InstantSend validation broken

---

### 8. **Block Locator Construction**

**Problem**: Block locators assume forward chain building:
- Exponential backoff from chain head
- Last 100 blocks sequential

**Impact**:
- Reverse block locators would need to reference future blocks (not yet downloaded)
- Peer would be confused by requests that don't match chain topology

**From blockchain-sync-bip37.md**:
```text
Build locator: [head, head-1, ..., head-99, head-101, head-105, ..., genesis]
```

**In reverse**: Head is known (from headers), but intermediate blocks aren't in blockChain yet.

**Severity**: 🟡 **HIGH** - Protocol incompatibility

**✅ COMPLETELY SOLVED BY HEADERS**: Can build perfect locators!

**With headers**:
```java
// Build locator using headerChain (already has all headers)
private BlockLocator buildReverseBlockLocator(int targetHeight) {
    BlockLocator locator = new BlockLocator();

    // Use headerChain, not blockChain
    StoredBlock cursor = headerChain.getBlockStore().get(targetHeight);

    // Standard locator construction works perfectly
    for (int i = 0; i < 100 && cursor != null; i++) {
        locator.add(cursor.getHeader().getHash());
        cursor = headerChain.getBlockStore().get(cursor.getHeight() - 1);
    }

    int step = 1;
    while (cursor != null && cursor.getHeight() > 0) {
        locator.add(cursor.getHeader().getHash());
        step *= 2;
        cursor = headerChain.getBlockStore().get(cursor.getHeight() - step);
    }

    return locator;
}
```

**Severity After Headers**: 🟢 **SOLVED** - Headers enable perfect locators

---

### 9. **Checkpoint Validation**

**Problem**: Checkpoints validate forward progression:
- `params.passesCheckpoint(height, hash)` checks blocks connect to known checkpoints
- Assumes building up to checkpoints, not down from them

**Impact**: Checkpoint validation would fail or give false security in reverse order.

**Severity**: 🟡 **MEDIUM** - Security feature degraded

**✅ COMPLETELY SOLVED BY HEADERS**: Checkpoints already validated!

**With headers**:
- All headers passed checkpoint validation during the HEADERS stage
- Blocks must match headers (which already passed checkpoints)
- No additional checkpoint validation needed during the BLOCKS stage

**Severity After Headers**: 🟢 **SOLVED** - Checkpoints already enforced on headers

---
### 10. **Progress Tracking Inversion**

**Problem**: Download progress assumes forward sync:
- "Blocks left" calculation: `peer.getBestHeight() - blockChain.getChainHead().getHeight()`
- Progress percentage based on catching up to tip

**Impact**: Progress would appear to go backwards, confusing UX.

**Severity**: 🟢 **LOW** - UX issue only, fixable
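A sketch of a reverse-aware progress metric (hypothetical helper; the target range is fixed once headers are synced):

```java
// Hypothetical sketch: reverse sync counts down from the tip toward the
// fast-catchup height, so progress is blocks-fetched over the fixed span.
double reverseSyncProgress(int tipHeight, int fastCatchupHeight, int currentHeight) {
    int total = tipHeight - fastCatchupHeight + 1; // blocks in the target range
    int done = tipHeight - currentHeight;          // blocks fetched so far
    return total <= 0 ? 1.0 : Math.min(1.0, (double) done / total);
}
```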
---
### 11. **Reorganization Detection**

**Problem**: Reorgs are detected by:
- A new block having more chainwork than the current chain head
- Finding the split point going backwards from both heads

**Impact**: In reverse sync:
- Cannot detect reorgs properly (don't have the chain to compare against)
- Split point finding assumes a forward-built chain exists

**Severity**: 🟡 **HIGH** - Cannot handle chain reorgs during sync

**✅ PARTIALLY SOLVED BY HEADERS**: Reorgs detected at header level!

**With headers**:
- If the chain reorgs during the BLOCKS stage, the HEADERS stage would detect it first
- The headers chain is canonical - blocks just need to match
- A reorg during block download would manifest as a header mismatch

**However**:
- Need to handle the case where we're downloading blocks for a header chain that reorgs mid-download
- Solution: Validate blocks match the current headerChain; restart if the headerChain changes

**Severity After Headers**: 🟡 **MEDIUM** - Detectable, requires restart on reorg

---

### 12. **Fast Catchup Time Interaction**

**Problem**: Fast catchup downloads only headers before a timestamp, then switches to full blocks:
```java
if (header.getTimeSeconds() >= fastCatchupTimeSecs) {
    this.downloadBlockBodies = true;
}
```

**Impact**: In reverse sync, we'd start with full blocks (newest) and switch to headers-only (oldest) - the opposite semantics.

**Severity**: 🟡 **MEDIUM** - Optimization strategy incompatible

---

### 13. **Wallet Transaction Dependency Order**

**Problem**: Wallets track:
- Transaction chains (tx A creates an output, tx B spends it)
- Balance updates (credits before debits)
- Confidence building (confirmations increase forward)

**Impact**: In reverse:
- Debits appear before credits
- Transaction chains appear in reverse dependency order
- Confidence would decrease as we go back in time (confusing)

**Severity**: 🟡 **MEDIUM** - Wallet state confusion

---

### 14. **Peer Protocol Assumptions**

**Problem**: P2P protocol messages assume forward sync:
- `GetBlocksMessage` requests blocks after a locator (forward direction)
- `InvMessage` announces blocks in forward order
- Peers expect sequential requests

**Impact**: Would need to reverse the protocol semantics or work around peer expectations.

**Severity**: 🟡 **HIGH** - Protocol violation, peers may reject

---
### 15. **Memory Pressure During Reverse Accumulation**

**Problem**: In forward sync, blocks are validated and added to the chain immediately. In reverse sync, blocks must be:
- Stored in memory until we have their parents
- Held for batch validation
- Queued for out-of-order processing

**Impact**:
- Memory usage proportional to the number of unvalidated blocks
- 500 blocks × average size = significant memory (see the estimate below)
- Risk of OOM on mobile devices
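
A rough estimate makes the constraint concrete; the per-block sizes below are illustrative assumptions, not measurements:

```java
// Back-of-the-envelope memory estimate for one reverse batch.
int batchBlocks = 500;
int avgFilteredBlockBytes = 20 * 1024;   // assume ~20 KB: header + merkle branch + matched txs
int avgFullBlockBytes = 300 * 1024;      // assume ~300 KB for a partially full block

long filteredBatch = (long) batchBlocks * avgFilteredBlockBytes; // ≈ 10 MB - tolerable
long fullBatch = (long) batchBlocks * avgFullBlockBytes;         // ≈ 150 MB - risky on mobile
```

Under these assumptions, reverse accumulation is only practical with filtered blocks or smaller batch sizes.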
|
|
**Severity**: 🟡 **MEDIUM** - Resource constraint on mobile

---
## Implementation Requirements

To implement reverse block synchronization safely, the following changes would be necessary:
### Phase 1: Storage Layer Modifications

#### 1. **Dual-Mode SPVBlockStore**

**Requirement**: Extend SPVBlockStore to support reverse insertion without corrupting the ring buffer.

**Approach**:
- Add a `putReverse(StoredBlock block)` method
- Maintain a separate reverse ring cursor
- Use temporary storage for reverse blocks
- Preserve forward-only chain head semantics

**Implementation**:

```java
public class SPVBlockStore {
    // Existing forward cursor
    private int forwardCursor;

    // NEW: Reverse insertion cursor
    private int reverseCursor;

    // NEW: Temporary reverse block storage
    private TreeMap<Integer, StoredBlock> reverseBlockBuffer;

    public void putReverse(StoredBlock block) throws BlockStoreException {
        // Store in temporary buffer, not the ring
        reverseBlockBuffer.put(block.getHeight(), block);
    }

    public void finalizeReverseBlocks() throws BlockStoreException {
        // Once we have all blocks, insert them forward into the ring buffer
        for (StoredBlock block : reverseBlockBuffer.values()) {
            put(block); // Use normal forward insertion
        }
        reverseBlockBuffer.clear();
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Requires careful buffer management

---
#### 2. **Temporary Reverse Chain Structure**

**Requirement**: Create a parallel chain structure to hold reverse-downloaded blocks until validation.

**Approach**:
- A `ReverseBlockChain` class holds blocks by height
- Maps block hash → Block for lookup
- Ordered by height descending (tip to oldest)
- Not connected to the main `blockChain` until finalized

**Implementation**:

```java
public class ReverseBlockChain {
    private final TreeMap<Integer, Block> blocksByHeight = new TreeMap<>(Collections.reverseOrder());
    private final Map<Sha256Hash, Block> blocksByHash = new HashMap<>();
    private final int startHeight; // Chain tip height
    private final int endHeight;   // Fast-catchup or genesis height

    public void addBlock(Block block, int height) {
        blocksByHeight.put(height, block);
        blocksByHash.put(block.getHash(), block);
    }

    public boolean isComplete() {
        // Check if we have all blocks from startHeight down to endHeight
        return blocksByHeight.size() == (startHeight - endHeight + 1);
    }

    public List<Block> getBlocksForwardOrder() {
        // blocksByHeight iterates newest → oldest, so reverse for forward order
        return Lists.reverse(new ArrayList<>(blocksByHeight.values()));
    }
}
```

**Complexity**: 🟢 **LOW** - Straightforward data structure

---
### Phase 2: Validation Deferral

#### 3. **Deferred Block Validation**

**Requirement**: Skip validation during the reverse download, then batch-validate after completion.

**Approach**:
- Add a `deferValidation` flag to `AbstractBlockChain.add()`
- Store blocks without validation
- After reverse sync completes, validate in forward order
- Roll back on validation failure

**Implementation**:

```java
public class AbstractBlockChain {
    private boolean deferValidation = false;
    private List<Block> deferredBlocks = new ArrayList<>();

    public void enableDeferredValidation() {
        this.deferValidation = true;
    }

    public boolean add(Block block) throws VerificationException {
        if (deferValidation) {
            deferredBlocks.add(block);
            return true; // Assume valid for now
        }
        // Normal validation
        return addWithValidation(block);
    }

    public void validateDeferredBlocks() throws VerificationException {
        deferValidation = false;
        for (Block block : deferredBlocks) {
            if (!addWithValidation(block)) {
                throw new VerificationException("Deferred block failed validation: " + block.getHash());
            }
        }
        deferredBlocks.clear();
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Requires careful state management

---
#### 4. **Transaction Validation Queue**

**Requirement**: Queue transaction validations until we have the full block range.

**Approach**:
- Skip input validation during reverse sync
- Record transactions for later validation
- Validate transaction chains in forward order after completion

**Implementation**:

```java
public class WalletTransactionValidator {
    private Map<Sha256Hash, Transaction> pendingValidation = new HashMap<>();

    public void queueForValidation(Transaction tx) {
        pendingValidation.put(tx.getTxId(), tx);
    }

    public void validateQueuedTransactions(Wallet wallet) throws VerificationException {
        // Sort by block height (if known) or topologically
        List<Transaction> sorted = topologicalSort(pendingValidation.values());
        for (Transaction tx : sorted) {
            wallet.validateTransaction(tx);
        }
        pendingValidation.clear();
    }
}
```

**Complexity**: 🔴 **HIGH** - Topological sorting, dependency tracking

---
### Phase 3: Protocol Adaptation

#### 5. **Reverse Block Locator**

**Requirement**: Create block locators that reference the tip (known) and work backwards.

**Approach**:
- Use the headerChain (already complete) to build locators
- Reference blocks by header hash (not in the blockChain yet)
- The peer responds with blocks going forward from the locator match

**Implementation**:

```java
public class Peer {
    private BlockLocator buildReverseBlockLocator(int targetHeight) {
        BlockLocator locator = new BlockLocator();

        // Use the headerChain since it has all headers
        StoredBlock cursor = headerChain.getBlockStore().get(targetHeight);

        // Add up to 100 blocks going backward from the target
        for (int i = 0; i < 100 && cursor != null; i++) {
            locator.add(cursor.getHeader().getHash());
            cursor = headerChain.getBlockStore().get(cursor.getHeight() - 1);
        }

        // Exponential backoff going further back
        int step = 1;
        while (cursor != null && cursor.getHeight() > 0) {
            locator.add(cursor.getHeader().getHash());
            step *= 2;
            cursor = headerChain.getBlockStore().get(cursor.getHeight() - step);
        }

        return locator;
    }
}
```

**Complexity**: 🟢 **LOW** - Leverages existing headerChain

---
#### 6. **Reverse GetBlocks Request**

**Requirement**: Request blocks in reverse order, 500 at a time.

**Approach**:
- Use `GetBlocksMessage` with a locator pointing to (tip - 500)
- Request filtered blocks from (tip - 499) to tip
- Move backwards in 500-block chunks

**Implementation**:

```java
public class Peer {
    private void reverseBlockChainDownloadLocked(int startHeight) {
        int endHeight = Math.max(startHeight - 500, fastCatchupHeight);

        // Build a locator pointing to endHeight
        BlockLocator locator = buildReverseBlockLocator(endHeight);

        // stopHash is the tip of this range
        Sha256Hash stopHash = headerChain.getBlockStore().get(startHeight).getHeader().getHash();

        GetBlocksMessage message = new GetBlocksMessage(params, locator, stopHash);
        sendMessage(message);

        // The peer will respond with an InvMessage containing blocks from endHeight to startHeight
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Protocol semantics adapted

---
### Phase 4: Dash-Specific Handling

#### 7. **Masternode List State Snapshot**

**Requirement**: Use the already-synced masternode list from the MNLIST stage (DIP-16).

**Approach**:
- The masternode list is already synced to the chain tip during the MNLIST stage
- Use this state for all ChainLock/InstantSend validations
- Do NOT attempt to rebuild the masternode list in reverse

**Rationale**: DIP-16 already solved this - we have the full masternode list before the BLOCKS stage starts. A sketch of consuming that snapshot follows below.
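
A minimal sketch of how the BLOCKS stage would consume that snapshot; `getListAtChainTip()` mirrors the SimplifiedMasternodeListManager accessor, but treat the exact signature as an assumption:

```java
public class ReverseSyncValidators {
    private final SimplifiedMasternodeListManager masternodeListManager; // state from the MNLIST stage

    public ReverseSyncValidators(SimplifiedMasternodeListManager manager) {
        this.masternodeListManager = manager;
    }

    public SimplifiedMasternodeList validationList() {
        // Always validate against the tip snapshot - never rebuild the list in reverse
        return masternodeListManager.getListAtChainTip();
    }
}
```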
|
|
**Complexity**: 🟢 **LOW** - Already available from DIP-16

---
#### 8. **ChainLock Validation with Forward State**

**Requirement**: Validate ChainLocks using the quorum state from the MNLIST stage.

**Approach**:
- The quorum state is already at the chain tip (from the MNLIST stage)
- Historical ChainLocks can be validated if we have the quorum at that height
- May need to skip ChainLock validation for sufficiently old blocks

**Implementation**:

```java
public class ChainLocksHandler {
    public boolean validateChainLockInReverse(Block block, ChainLockSignature cls) {
        // We have the current quorum state from the MNLIST stage.
        // Can we validate this historical ChainLock?
        int quorumHeight = block.getHeight() - (block.getHeight() % LLMQParameters.interval);

        if (quorumStateAtHeight(quorumHeight) != null) {
            return verifyChainLockSignature(block, cls);
        } else {
            // Too old, quorum state not available
            log.warn("Skipping ChainLock validation for old block: {}", block.getHeight());
            return true; // Assume valid
        }
    }
}
```

**Complexity**: 🟡 **MEDIUM** - May lose some validation guarantees

---
#### 9. **InstantSend Lock Handling**

**Requirement**: Handle InstantSend locks in reverse.

**Approach**:
- InstantSend locks reference transactions
- In reverse, a transaction might appear before its lock
- Queue locks for validation until the transaction appears (sketch below)
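
A minimal sketch of the queueing; the hook names (`onLock`, `onTransaction`, `processInstantSendLock`) are illustrative, not existing DashJ API:

```java
public class ReverseInstantSendQueue {
    // ISLocks whose transaction hasn't been seen yet, keyed by txid
    private final Map<Sha256Hash, InstantSendLock> pendingLocks = new HashMap<>();

    public void onLock(Sha256Hash txId, InstantSendLock islock) {
        pendingLocks.put(txId, islock);
    }

    public void onTransaction(Transaction tx, InstantSendManager manager) {
        InstantSendLock islock = pendingLocks.remove(tx.getTxId());
        if (islock != null) {
            manager.processInstantSendLock(islock); // validate now that the tx is known
        }
    }
}
```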
|
|
**Complexity**: 🟡 **MEDIUM** - Reverse dependency handling

---
### Phase 5: Wallet Integration

#### 10. **Wallet Notification Order**

**Requirement**: Notify the wallet of transactions in reverse but maintain balance consistency.

**Approach**:
- Hold wallet notifications until the batch is complete
- Sort transactions by height before notifying
- Update the balance in forward order (oldest to newest)

**Implementation**:

```java
public class Wallet {
    private List<WalletTransaction> pendingNotifications = new ArrayList<>();

    public void queueReverseSyncTransaction(Transaction tx, int height) {
        pendingNotifications.add(new WalletTransaction(tx, height));
        // Don't notify listeners yet
    }

    public void flushReverseSyncNotifications() {
        // Sort by height ascending
        pendingNotifications.sort(Comparator.comparingInt(WalletTransaction::getHeight));

        // Notify in forward order
        for (WalletTransaction wtx : pendingNotifications) {
            notifyTransactionListeners(wtx.tx);
        }

        pendingNotifications.clear();
    }
}
```

**Complexity**: 🟢 **LOW** - Straightforward batching

---
#### 11. **Bloom Filter Pre-population**

**Requirement**: Ensure the bloom filter includes outputs we'll discover in reverse.

**Approach**:
- Increase the bloom filter lookahead depth
- Use a larger filter initially
- Recalculate the filter after each reverse batch completes

**Implementation**:

```java
public class PeerGroup {
    public void prepareForReverseSync() {
        // Increase lookahead for all wallets
        for (Wallet wallet : wallets) {
            wallet.setKeyLookaheadSize(200); // Increased from 100
        }

        // Force a larger bloom filter
        bloomFilterMerger.setBloomFilterFPRate(0.00001); // Lower FP rate = larger filter
        recalculateFastCatchupAndFilter(FilterRecalculateMode.FORCE_SEND_FOR_REFRESH);
    }
}
```

**Complexity**: 🟢 **LOW** - Parameter tuning

---
### Phase 6: Progress & UX

#### 12. **Reverse Progress Tracking**

**Requirement**: Update the progress calculation for reverse sync.

**Approach**:
- Track "blocks remaining" going backwards
- Show the user recent transactions first (better UX)
- Reverse the progress percentage calculation

**Implementation**:

```java
public class DownloadProgressTracker {
    private int reverseStartHeight;
    private int reverseEndHeight;

    public void startReverseSync(int startHeight, int endHeight) {
        this.reverseStartHeight = startHeight;
        this.reverseEndHeight = endHeight;
    }

    @Override
    public void onBlocksDownloaded(Peer peer, Block block, @Nullable FilteredBlock fb, int blocksLeft) {
        if (isReverseSync) {
            int downloaded = reverseStartHeight - block.getHeight();
            int total = reverseStartHeight - reverseEndHeight;
            double progress = (double) downloaded / total;

            // Notify UI: "Syncing recent blocks: 65% (showing newest first)"
            notifyProgress(progress, "recent-first");
        }
    }
}
```

**Complexity**: 🟢 **LOW** - UX improvement

---
#### 13. **Hybrid Sync Strategy**

**Requirement**: Combine reverse and forward sync for optimal UX.

**Approach**:
1. Download the last 500-1000 blocks in reverse (most recent transactions)
2. Show the wallet UI as "partially synced"
3. Then download the remaining blocks in forward order
4. Finalize validation when complete

**Benefits**:
- User sees recent activity immediately
- Less memory pressure (smaller reverse batch)
- Still get a full sync eventually

**Complexity**: 🟡 **MEDIUM** - Coordination logic (see the implementation outline in the Hybrid Approach section below)

---
### Phase 7: Finalization & Validation

#### 14. **Batch Validation After Reverse Completion**

**Requirement**: Validate all reverse-downloaded blocks in forward order once complete.

**Approach**:

```java
public class ReverseSyncCoordinator {
    private ReverseBlockChain reverseChain;
    private AbstractBlockChain blockChain;

    public void finalizeReverseSync() throws BlockStoreException, VerificationException {
        log.info("Reverse sync complete, validating {} blocks in forward order",
                reverseChain.size());

        // Get blocks in forward order (oldest to newest)
        List<Block> blocksForward = reverseChain.getBlocksForwardOrder();

        // Validate and add to the main chain
        for (Block block : blocksForward) {
            if (!blockChain.add(block)) {
                throw new VerificationException("Block failed validation during finalization: "
                        + block.getHash());
            }
        }

        // Flush wallet notifications
        for (Wallet wallet : wallets) {
            wallet.flushReverseSyncNotifications();
        }

        log.info("Reverse sync finalization complete");
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Critical validation step

---
#### 15. **Rollback on Validation Failure**

**Requirement**: Handle the case where reverse-downloaded blocks fail validation.

**Approach**:
- Keep the reverse chain separate until validation passes
- On failure, discard the reverse chain
- Fall back to traditional forward sync
- Notify the user of the sync failure (sketch below)
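
A minimal sketch of the fallback path, building on the `finalizeReverseSync()` outline above; `startForwardSync()` and `notifySyncFallback()` are hypothetical hooks:

```java
public void finalizeOrFallBack() {
    try {
        finalizeReverseSync();
    } catch (VerificationException | BlockStoreException e) {
        log.warn("Reverse sync failed validation, falling back to forward sync", e);
        reverseChain = new ReverseBlockChain(startHeight, endHeight); // discard the bad data
        startForwardSync();    // traditional forward sync path
        notifySyncFallback(e); // surface the failure to the UI
    }
}
```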
|
|
**Complexity**: 🟡 **MEDIUM** - Error handling

---
## Summary of Complexity

| Category | Requirements | Complexity | Risk |
|----------|--------------|------------|------|
| **Storage** | Dual-mode SPVBlockStore, Reverse chain structure | 🟡 MEDIUM | 🟡 MEDIUM |
| **Validation** | Deferred validation, Transaction queuing | 🔴 HIGH | 🔴 HIGH |
| **Protocol** | Reverse locators, Adapted GetBlocks | 🟡 MEDIUM | 🟡 MEDIUM |
| **Dash-Specific** | Masternode state, ChainLock validation | 🟡 MEDIUM | 🔴 HIGH |
| **Wallet** | Notification order, Bloom filter | 🟢 LOW | 🟢 LOW |
| **UX** | Progress tracking, Hybrid strategy | 🟢 LOW | 🟢 LOW |
| **Finalization** | Batch validation, Rollback | 🟡 MEDIUM | 🔴 HIGH |

**Overall Assessment**: 🔴 **HIGH COMPLEXITY, HIGH RISK**

---
## Alternative: Hybrid Approach (Recommended)

Given the significant challenges of full reverse sync, a **hybrid approach** may be more practical:

### Two-Phase Sync Strategy

**Phase 1: Reverse "Preview" Sync (500-1000 blocks)**
- Download ONLY the most recent 500-1000 blocks in reverse
- Use temporary storage (not SPVBlockStore)
- Show transactions to the user as "preliminary" or "syncing"
- Skip full validation (rely on ChainLocks for recent blocks)

**Phase 2: Forward Historical Sync**
- After the preview, download the remaining blocks in forward order (traditional)
- Validate fully as normal
- Merge with the preview data
- Mark the wallet as "fully synced"

### Benefits
- ✅ User sees recent transactions in ~30 seconds
- ✅ Avoids most validation issues (only 500 blocks held in memory)
- ✅ Reuses existing forward sync infrastructure
- ✅ Lower risk, easier to implement
- ✅ Graceful degradation (if the preview fails, continue with forward sync)

### Implementation Outline

```java
public class HybridSyncStrategy {
    private static final int PREVIEW_BLOCKS = 500;

    public void syncBlockchain() {
        // DIP-16 Stages 1-3 (as normal)
        downloadHeaders();
        downloadMasternodeLists();

        // Phase 1: Reverse preview
        List<Block> recentBlocks = downloadRecentBlocksReverse(PREVIEW_BLOCKS);
        showPreviewToUser(recentBlocks); // "Syncing: showing recent activity"

        // Phase 2: Forward historical
        downloadRemainingBlocksForward(); // Traditional sync
        finalizeAndValidate();
        markWalletFullySynced();
    }
}
```

**Complexity**: 🟢 **MEDIUM** (much lower than full reverse)
**Risk**: 🟡 **MEDIUM** (acceptable for UX improvement)
**UX Gain**: 🟢 **HIGH** (fast initial feedback)

---
## Conclusion

Full reverse block synchronization presents **15 critical pitfalls** spanning storage, validation, protocol, and Dash-specific concerns. While theoretically possible, the implementation complexity and risk are substantial.

**Recommendations**:

1. **For Production**: Implement the **Hybrid Approach** (reverse preview + forward historical)
   - Achieves the primary UX goal (fast recent-transaction visibility)
   - Manageable complexity and risk
   - Reuses existing infrastructure

2. **For Research**: Prototype full reverse sync as a proof of concept
   - Validate the feasibility of deferred validation
   - Measure memory pressure with real data
   - Test Dash-specific feature compatibility

3. **Alternative UX Improvements** (lower-hanging fruit):
   - Show an estimated balance based on headers + ChainLocks
   - Display a "syncing" state with partial data
   - Parallel sync of multiple block ranges (multi-peer)
   - Faster header validation with batch PoW checks

The **hybrid approach balances innovation with pragmatism**, delivering improved UX without the extreme engineering challenges of full reverse synchronization.

---

## References

- **blockchain-sync-bip37.md** - Current synchronization implementation
- **SPVBlockStore.java** (lines 40-200) - Ring buffer storage constraints
- **AbstractBlockChain.java** (lines 130, 468) - Orphan block handling
- **Peer.java** (lines 1595-1775) - Block download protocol
- **DIP-16** - Headers-first synchronization stages
PR title doesn't match document content.
The PR is titled "fix: track tx depth improvements" but this document (and the other design docs in the PR) focuses on blockchain synchronization architecture, peer networking, and optimization strategies. There's no mention of transaction depth tracking in any of the added documents.
Consider updating the PR title to accurately reflect the content, such as:
- "docs: add blockchain sync design documents"
- "docs: reverse-sync and network optimization proposals"
🧰 Tools
🪛 LanguageTool
[style] ~119-~119: This phrase is redundant. Consider writing “advances”.
Context: ...forward-only assumptions: - Ring cursor advances forward: `setRingCursor(buffer, buffer.position...
(ADVANCE_FORWARD)
[style] ~713-~713: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ...y need to skip ChainLock validation for very old blocks Implementation: ```java pub...
(EN_WEAK_ADJECTIVE)
[grammar] ~1018-~1018: Use a hyphen to join words.
Context: .... Alternative UX Improvements (lower hanging fruit): - Show estimated bala...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In @designdocs/proposals/reverse-sync.md around lines 1 - 1034, PR title is
misleading: it references "track tx depth improvements" but the changes add
design docs about reverse block synchronization and networking; update the PR
title and description to reflect the actual content. Change the PR title to
something like "docs: add reverse-sync and blockchain sync design proposals" (or
"docs: add blockchain sync design documents"), and update the PR
description/body to list the included documents (e.g., reverse-sync.md and
related network/proposal docs), summarizing key changes so reviewers know this
is documentation-only and not a tx-depth code fix; ensure any commit messages
referencing tx-depth are corrected or split into a separate PR if there are
actual code changes for transaction depth tracking.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java (1)
508-520: Good approach to offload DSQueue processing; fix typo on line 513.
Offloading DSQueue messages to a thread pool prevents blocking the network I/O thread — good design choice.
Minor issue: Line 513 has a typo: "dsq meessages" → "dsq messages".
📝 Fix typo
```diff
- // Return null as dsq meessages are only processed above
+ // Return null as dsq messages are only processed above
```

core/src/main/java/org/bitcoinj/wallet/Wallet.java (1)
2658-2677: Guard `appearedAtChainHeight` before computing transaction depth in manual tracking mode.
The depth computation at line 2667 (`lastBlockSeenHeight - confidence.getAppearedAtChainHeight() + 1`) doesn't check if `appearedAtChainHeight` is unset (default -1). This results in incorrect depths like `lastBlockSeenHeight + 2` when the field hasn't been initialized. Additionally, `setDepthInBlocks()` has no validation, allowing wrong values to propagate.
lastBlockSeenHeight - confidence.getAppearedAtChainHeight() + 1) doesn't check ifappearedAtChainHeightis unset (default -1). This results in incorrect depths likelastBlockSeenHeight + 2when the field hasn't been initialized. Additionally,setDepthInBlocks()has no validation, allowing wrong values to propagate.While the code is currently protected by a lock against concurrent modification (unlike the implied risk), defensively using snapshot iteration is still a good practice.
Suggested fix
```diff
-        for (Transaction tx : manualConfidenceChangeTransactions.keySet()) {
+        for (Transaction tx : new ArrayList<>(manualConfidenceChangeTransactions.keySet())) {
             if (ignoreNextNewBlock.contains(tx.getTxId())) {
                 // tx was already processed in receive() due to it appearing in this block, so we don't want to
                 // increment the tx confidence depth twice, it'd result in miscounting.
                 ignoreNextNewBlock.remove(tx.getTxId());
             } else {
                 TransactionConfidence confidence = tx.getConfidence();
                 if (confidence.getConfidenceType() == ConfidenceType.BUILDING) {
                     // Erase the set of seen peers once the tx is so deep that it seems unlikely to ever go
                     // pending again. We could clear this data the moment a tx is seen in the block chain, but
                     // in cases where the chain re-orgs, this would mean that wallets would perceive a newly
                     // pending tx has zero confidence at all, which would not be right: we expect it to be
                     // included once again. We could have a separate was-in-chain-and-now-isn't confidence type
                     // but this way is backwards compatible with existing software, and the new state probably
                     // wouldn't mean anything different to just remembering peers anyway.
-                    confidence.setDepthInBlocks(lastBlockSeenHeight - confidence.getAppearedAtChainHeight() + 1);
-                    if (confidence.getDepthInBlocks() > context.getEventHorizon())
+                    final int appearedAtHeight = confidence.getAppearedAtChainHeight();
+                    if (appearedAtHeight >= 0) {
+                        final int depth = lastBlockSeenHeight - appearedAtHeight + 1;
+                        if (depth > 0)
+                            confidence.setDepthInBlocks(depth);
+                    }
+                    if (confidence.getDepthInBlocks() > context.getEventHorizon())
                         confidence.clearBroadcastBy();
                     confidenceChanged.put(tx, TransactionConfidence.Listener.ChangeReason.DEPTH);
                 }
             }
         }
```
🤖 Fix all issues with AI agents
In `@core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java`:
- Around line 100-101: The messageProcessingExecutor is created once as a final
field and shut down in stop(), so subsequent start() calls or
preMessageReceivedEventListener submissions can hit RejectedExecutionException;
fix by either (A) moving initialization of messageProcessingExecutor into
start() (recreate a new ExecutorService there) and only shut it down in close(),
or (B) add a guard in preMessageReceivedEventListener that checks
messageProcessingExecutor.isShutdown()/isTerminated() and skips submission (or
logs and drops) if executor is not accepting tasks; update stop()/close() to
match the chosen lifecycle (ensure no double-shutdown) and add a brief JavaDoc
to start()/stop()/close() describing the non-restartable or restartable
contract, and correct the typo "meessages" → "messages" referenced near the
comment at line 513.
🧹 Nitpick comments (3)
core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java (1)
114-116: LGTM! Consider applying the same lambda refactor to `isTrySubmitDenominate`.

The lambda simplification is clean. However, `isTrySubmitDenominate` (lines 130-135) still uses an anonymous inner class for the same `anyMatch` pattern. Consider refactoring it for consistency:

♻️ Suggested refactor for consistency

```diff
 private boolean isTrySubmitDenominate(Masternode dmn) {
-    return coinJoinManager.coinJoinClientManagers.values().stream().anyMatch(new Predicate<CoinJoinClientManager>() {
-        @Override
-        public boolean test(CoinJoinClientManager coinJoinClientManager) {
-            return coinJoinClientManager.trySubmitDenominate(dmn.getService());
-        }
-    });
+    return coinJoinManager.coinJoinClientManagers.values().stream().anyMatch(
+        coinJoinClientManager -> coinJoinClientManager.trySubmitDenominate(dmn.getService())
+    );
 }
```

core/src/main/java/org/bitcoinj/wallet/Wallet.java (2)
238-240: Prefer `@GuardedBy("lock")` + consider keying manual tracking by txid (or canonicalize the tx instance).

The ref-count map is a good direction, but using `Transaction` as the key can be brittle if a tx is ever mutated in a way that changes its txid (hashCode/equals), and it can also retain a non-canonical `Transaction` instance if callers pass a different object with the same txid. At minimum, I'd annotate this as lock-guarded to make the threading contract explicit.

Proposed tweak (annotation + interface type)

```diff
-    private final HashMap<Transaction, Integer> manualConfidenceChangeTransactions = Maps.newHashMap();
+    @GuardedBy("lock")
+    private final Map<Transaction, Integer> manualConfidenceChangeTransactions = Maps.newHashMap();
```
6588-6611: Canonicalize the tracked tx (and simplify the decrement) so you always update the wallet's instance.

If callers pass a different `Transaction` object with the same txid, the map may "pin" that non-wallet instance and `notifyNewBestBlock` will update the wrong object's confidence. It's safer to normalize to the wallet's canonical transaction (when present). You can also simplify the decrement logic with `computeIfPresent`.

Proposed fix (canonicalize + computeIfPresent)

```diff
 public void addManualNotifyConfidenceChangeTransaction(Transaction tx) {
     lock.lock();
     try {
-        manualConfidenceChangeTransactions.merge(tx, 1, Integer::sum);
+        checkNotNull(tx);
+        Transaction canonicalTx = transactions.get(tx.getTxId());
+        if (canonicalTx != null)
+            tx = canonicalTx;
+        manualConfidenceChangeTransactions.merge(tx, 1, Integer::sum);
     } finally {
         lock.unlock();
     }
 }

 public void removeManualNotifyConfidenceChangeTransaction(Transaction tx) {
     lock.lock();
     try {
-        Integer count = manualConfidenceChangeTransactions.get(tx);
-        if (count != null) {
-            if (count == 1) {
-                manualConfidenceChangeTransactions.remove(tx);
-            } else {
-                manualConfidenceChangeTransactions.put(tx, count - 1);
-            }
-        }
+        checkNotNull(tx);
+        Transaction canonicalTx = transactions.get(tx.getTxId());
+        if (canonicalTx != null)
+            tx = canonicalTx;
+        manualConfidenceChangeTransactions.computeIfPresent(tx, (k, v) -> v == 1 ? null : v - 1);
     } finally {
         lock.unlock();
     }
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java
- core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java
- core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java
- core/src/main/java/org/bitcoinj/wallet/Wallet.java
✅ Files skipped from review due to trivial changes (1)
- core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-25T15:26:22.477Z
Learnt from: HashEngineering
Repo: dashpay/dashj PR: 284
File: core/src/main/java/org/bitcoinj/wallet/WalletEx.java:342-429
Timestamp: 2025-08-25T15:26:22.477Z
Learning: In WalletEx.java, the mapOutpointRoundsCache for CoinJoin rounds should only be cleared during blockchain reorganizations via the reorganize() method, not during normal wallet operations like adding transactions or moving transactions between pools. This design choice prioritizes performance for large wallets while ensuring correctness when the blockchain structure changes.
Applied to files:
- core/src/main/java/org/bitcoinj/wallet/Wallet.java
- core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java
📚 Learning: 2025-09-18T22:12:09.613Z
Learnt from: HashEngineering
Repo: dashpay/dashj PR: 286
File: core/src/main/java/org/bitcoinj/wallet/Wallet.java:6371-6373
Timestamp: 2025-09-18T22:12:09.613Z
Learning: In bitcoinj's Wallet.java, methods returning List<ECKey> can safely return new LinkedList<>(detkeys) where detkeys is List<DeterministicKey>, because DeterministicKey extends ECKey and Java's type inference handles this covariance correctly during list construction.
Applied to files:
core/src/main/java/org/bitcoinj/wallet/Wallet.java
🧬 Code graph analysis (1)
core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java (1)
core/src/main/java/org/bitcoinj/utils/ContextPropagatingThreadFactory.java (1)
ContextPropagatingThreadFactory(26-62)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: JAVA 11 OS macOS-latest Gradle
- GitHub Check: JAVA 11 OS ubuntu-latest Gradle
- GitHub Check: Analyze (java)
🔇 Additional comments (2)
core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java (2)
214-226: Shutdown logic is well-implemented.

The shutdown sequence is correct: a graceful `shutdown()`, awaiting termination with a timeout, then `shutdownNow()` if needed. The `InterruptedException` handling correctly re-interrupts the thread.
283-286: Defensive shutdown check is acceptable.

The `isShutdown()` check before `shutdown()` is slightly racy but harmless since `shutdown()` is idempotent. This provides a reasonable safety net.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```java
private final ExecutorService messageProcessingExecutor = Executors.newFixedThreadPool(5,
        new ContextPropagatingThreadFactory("CoinJoin-MessageProcessor"));
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the CoinJoinManager.java file to see the full lifecycle methods
fd -t f CoinJoinManager.java | head -1 | xargs wc -l

Repository: dashpay/dashj
Length of output: 128
🏁 Script executed:
# Get the complete file to understand the lifecycle
fd -t f CoinJoinManager.java | head -1 | xargs cat -n

Repository: dashpay/dashj
Length of output: 25617
🏁 Script executed:
# Search for start(), stop(), close() method definitions in CoinJoinManager
rg -n 'public.*(?:void\s+)?(start|stop|close)\(' core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java -A 20

Repository: dashpay/dashj
Length of output: 2138
🏁 Script executed:
# Search for usage patterns of stop() and start() calls in the codebase
rg -n 'coinJoinManager\.(stop|start|close)\(' --type=java -B 2 -A 2

Repository: dashpay/dashj
Length of output: 3053
🏁 Script executed:
# Check if there's any recreation or reinitialization of CoinJoinManager
rg -n 'new CoinJoinManager\(' --type=java -B 2 -A 2

Repository: dashpay/dashj
Length of output: 782
Executor lifecycle constraint: reuse not supported after stop() is called.
The executor is declared final and initialized once at field declaration. The start() method does not reinitialize it. Once stop() shuts it down (lines 216-226), calling start() again—or receiving messages before close() removes listeners—will cause preMessageReceivedEventListener to submit tasks to a shutdown executor, resulting in RejectedExecutionException.
While the current codebase follows a safe pattern (start → stop → close without restart), this architectural constraint should be documented and enforced. Consider either:
- Recreating the executor in `start()` and shutting it down only in `close()`
- Adding a guard in `preMessageReceivedEventListener` to check executor state before submission
- Documenting that `start()` must not be called after `stop()`
Additionally, line 513 contains a typo: "meessages" should be "messages".
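A minimal sketch of the first option, assuming the surrounding CoinJoinManager fields; the names mirror the review, not a final implementation:

```java
// Option (A): make the manager restartable by recreating the executor in start()
// and shutting it down only in close(). Volatile so the listener sees the swap.
private volatile ExecutorService messageProcessingExecutor;

public void start() {
    messageProcessingExecutor = Executors.newFixedThreadPool(5,
            new ContextPropagatingThreadFactory("CoinJoin-MessageProcessor"));
    // ... register listeners as before
}

public void close() {
    ExecutorService executor = messageProcessingExecutor;
    if (executor != null && !executor.isShutdown()) {
        executor.shutdown();
    }
}
```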
🤖 Prompt for AI Agents
In `@core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java` around
lines 100 - 101, The messageProcessingExecutor is created once as a final field
and shut down in stop(), so subsequent start() calls or
preMessageReceivedEventListener submissions can hit RejectedExecutionException;
fix by either (A) moving initialization of messageProcessingExecutor into
start() (recreate a new ExecutorService there) and only shut it down in close(),
or (B) add a guard in preMessageReceivedEventListener that checks
messageProcessingExecutor.isShutdown()/isTerminated() and skips submission (or
logs and drops) if executor is not accepting tasks; update stop()/close() to
match the chosen lifecycle (ensure no double-shutdown) and add a brief JavaDoc
to start()/stop()/close() describing the non-restartable or restartable
contract, and correct the typo "meessages" → "messages" referenced near the
comment at line 513.
Issue being fixed or feature implemented
What was done?
How Has This Been Tested?
Breaking Changes
Checklist:
For repository code-owners and collaborators only
Summary by CodeRabbit
Documentation
Improvements
Style
Deprecation
✏️ Tip: You can customize this high-level summary in your review settings.