Conversation
📝 Walkthrough

This PR refactors the dash-spv and dash-spv-ffi libraries to simplify internal concurrency patterns. Key changes include unwrapping the FFI client's inner mutex-option wrapper into a direct field with separate callback fields, converting shared state to Arc-wrapped locks, making select methods async for lock acquisition, removing the command-channel interface in favor of direct calls, and eliminating the command receiver parameter from run().
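A minimal sketch of the concurrency shape described above; the `Handle` struct, field names, and types are illustrative assumptions, not the actual dash-spv definitions:

```rust
// Illustrative only: Arc-wrapped locks shared between a caller-held handle and a
// spawned monitoring task; acquiring the lock is what makes accessors async.
use std::sync::Arc;
use tokio::sync::{Mutex, RwLock};

#[derive(Clone)]
struct Handle {
    config: Arc<RwLock<String>>,    // read-heavy state behind an RwLock
    peers: Arc<Mutex<Vec<String>>>, // exclusive-access state behind a Mutex
}

impl Handle {
    // Formerly-sync getters now need `.await` because they take an async lock.
    async fn peer_count(&self) -> usize {
        self.peers.lock().await.len()
    }

    async fn network_name(&self) -> String {
        self.config.read().await.clone()
    }
}
```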
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
1aecaf3 to bd27a1b (Compare)
ZocoLini left a comment
This is more or less what I wanted to try to implement today; it's nice to see it's already done. This leaves us with a much cleaner facade pattern for the SPVClient. TY
I saw some compile issues; I'll try to use it on iOS and let you know how it goes.
All fields wrapped in `Arc` with locks. Callers hold a handle while the monitoring loop runs in a spawned task via `run()`, getting rid of the command-channel pattern.
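A rough usage sketch of that pattern under stated assumptions — `StubClient`, its fields, and the use of `tokio_util::sync::CancellationToken` as the shutdown token are stand-ins, not the real dash-spv-ffi API:

```rust
// Sketch only: a caller keeps a cloneable handle while `run(shutdown_token)`
// loops in a spawned task; direct calls on the handle replace the command channel.
use std::sync::Arc;
use tokio::sync::Mutex;
use tokio_util::sync::CancellationToken;

#[derive(Clone)]
struct StubClient {
    ticks: Arc<Mutex<u64>>,
}

impl StubClient {
    async fn run(&self, token: CancellationToken) {
        loop {
            tokio::select! {
                _ = token.cancelled() => break,
                _ = tokio::time::sleep(std::time::Duration::from_millis(100)) => {
                    *self.ticks.lock().await += 1; // stand-in for monitor_network work
                }
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let client = StubClient { ticks: Arc::new(Mutex::new(0)) };
    let token = CancellationToken::new();

    // The caller keeps its handle; the monitoring loop runs in a spawned task.
    let monitor = tokio::spawn({
        let (client, token) = (client.clone(), token.clone());
        async move { client.run(token).await }
    });

    // Direct calls on the handle replace the old command-channel round trip.
    println!("ticks so far: {}", *client.ticks.lock().await);

    token.cancel();
    let _ = monitor.await;
}
```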
bd27a1b to 9c48b45 (Compare)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
dash-spv-ffi/src/client.rs (2)
575-601: ⚠️ Potential issue | 🟠 Major

`dash_spv_ffi_client_destroy` races with `monitor_network`'s own shutdown path.

The order here is: cancel token → stop inner → join threads, but `stop_client_internal` (which is correct) uses: cancel → join threads → stop inner. When the token is cancelled (line 580), the monitoring thread begins its own async shutdown inside `monitor_network` (acquiring the `sync_coordinator` lock, calling `shutdown()`). Immediately calling `client.inner.stop().await` (lines 583–585) on the tokio runtime races against that in-flight shutdown. Joining threads only after (lines 588–597) is too late.

🐛 Proposed fix – match the order used by `stop_client_internal`
```diff
     client.shutdown_token.cancel();

-    // Stop the SPV client
-    client.runtime.block_on(async {
-        let _ = client.inner.stop().await;
-    });
-
-    // Join all active threads to ensure clean shutdown
+    // Join threads first so monitor_network's shutdown completes before we call stop()
     let threads = {
         let mut threads_guard = client.active_threads.lock().unwrap();
         std::mem::take(&mut *threads_guard)
     };
-
     for handle in threads {
         if let Err(e) = handle.join() {
             tracing::error!("Failed to join thread during cleanup: {:?}", e);
         }
     }
+
+    // Safe to call stop() now that the monitoring thread is fully wound down
+    client.runtime.block_on(async {
+        let _ = client.inner.stop().await;
+    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 575 - 601, dash_spv_ffi_client_destroy currently cancels the shutdown token then immediately calls client.inner.stop(), racing with monitor_network which expects threads to be joined before stopping; reorder to match stop_client_internal: after cancelling client.shutdown_token, first take and join all handles from client.active_threads (using the same mutex/take pattern) and only after all threads have been joined call client.runtime.block_on(async { let _ = client.inner.stop().await; }); keep the shutdown_token.cancel() call first and retain the tracing/error logging around thread joins to preserve diagnostics.
547-568: ⚠️ Potential issue | 🟠 Major

`dash_spv_ffi_client_clear_storage` doesn't stop active monitoring threads before wiping storage.

`client.inner.stop().await` sets the `running` flag and shuts down the sync coordinator, but the monitoring thread spawned by `dash_spv_ffi_client_run` (which holds a clone of `client.inner`) may be mid-tick — executing an in-flight storage write — when `clear_storage()` runs. This can cause data corruption or crashes.

The function takes an immutable reference (`&(*client)`) and cannot call `stop_client_internal`, which correctly sequences cancel → join → stop. At minimum, the function should take `&mut FFIDashSpvClient` and call `stop_client_internal` before clearing:

🐛 Proposed fix
```diff
 pub unsafe extern "C" fn dash_spv_ffi_client_clear_storage(client: *mut FFIDashSpvClient) -> i32 {
     null_check!(client);

-    let client = &(*client);
+    let client = &mut (*client);
+
+    // Safely stop all monitoring threads before wiping storage
+    if let Err(e) = stop_client_internal(client) {
+        tracing::warn!("Failed to stop client before clearing storage: {}", e);
+    }

     let result = client.runtime.block_on(async {
-        // Try to stop before clearing to ensure no in-flight writes race the wipe.
-        if let Err(e) = client.inner.stop().await {
-            tracing::warn!("Failed to stop client before clearing storage: {}", e);
-        }
-
         client.inner.clear_storage().await
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 547 - 568, dash_spv_ffi_client_clear_storage currently takes an immutable FFIDashSpvClient reference and calls client.inner.stop().await, which can race with the monitoring thread; change the function to accept a mutable pointer (use &mut FFIDashSpvClient) and call the existing stop_client_internal routine (which performs cancel → join → stop) on the client before invoking client.inner.clear_storage().await so the monitor spawned by dash_spv_ffi_client_run is properly cancelled and joined prior to wiping storage; preserve the FFI error handling (set_last_error / FFIErrorCode) after clear_storage completes.

dash-spv/src/client/sync_coordinator.rs (1)
71-96: ⚠️ Potential issue | 🟠 Major

`run()` hangs permanently if `client_task` exits without Ctrl-C being pressed.

`tokio::join!` waits for both `client_task` and `shutdown_task`. `shutdown_task` only returns after `tokio::signal::ctrl_c()` resolves. If `client_task` exits for any reason other than token cancellation (e.g., a network failure propagating out of `monitor_network`, an external `stop()` call setting the `running` flag, or `ctrl_c()` returning `Err`), `run()` blocks indefinitely.

The symmetric fix is to await `client_task` and abort `shutdown_task` afterwards:

🐛 Proposed fix
```diff
-    let (client_result, _) = tokio::join!(client_task, shutdown_task);
-    client_result.map_err(|e| SpvError::General(format!("client_task panicked: {e}")))?
+    let client_result = client_task.await;
+    shutdown_task.abort();
+    client_result.map_err(|e| SpvError::General(format!("client_task panicked: {e}")))?
```
`shutdown_task` is still useful: it runs in the background on the tokio runtime and cancels the token on Ctrl-C, which causes `client_task` (via `token.cancelled()`) to exit cleanly. Dropping the `JoinHandle` detaches the task, so it needs an explicit `abort()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/sync_coordinator.rs` around lines 71 - 96, run() currently uses tokio::join! which waits for both client_task and shutdown_task, causing run to hang if client_task finishes first; change the logic to await client_task first, capture its Result, then abort the shutdown_task JoinHandle (call shutdown_task.abort()) and optionally await its abort completion, and finally propagate client_task's result (mapping panics to SpvError as before). Locate the spawned tasks named client_task and shutdown_task and adjust the control flow so monitor_network()/client_task is awaited first, stop() is still called on client exit, and shutdown_task is explicitly aborted to avoid permanent blocking.

dash-spv/tests/wallet_integration_test.rs (1)
18-21: ⚠️ Potential issue | 🟡 Minor

Keep `TempDir` alive for the duration of the test.

`TempDir::new().unwrap().path()` creates a temporary directory that is immediately dropped at the end of the statement. When `TempDir` is dropped, the underlying directory is deleted, so the path passed to `with_storage_path()` points to a non-existent directory. This pattern is used in both `wallet_integration_test.rs` (line 20) and `peer_test.rs` (line 21).

While `fs::create_dir_all()` in `DiskStorageManager::new()` will recreate the directory, it's inefficient and masks the underlying issue. The temporary directory should remain in scope for as long as it's needed.

🛡️ Suggested fix
```diff
 async fn create_test_client(
 ) -> DashSpvClient<WalletManager<ManagedWalletInfo>, PeerNetworkManager, DiskStorageManager> {
+    let temp_dir = TempDir::new().unwrap();
     let config = ClientConfig::testnet()
         .without_filters()
-        .with_storage_path(TempDir::new().unwrap().path())
+        .with_storage_path(temp_dir.path())
         .without_masternodes();
```

Note: The `temp_dir` variable must remain in scope for the entire duration of the test, or be returned alongside the client.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/wallet_integration_test.rs` around lines 18 - 21, The test currently constructs a TempDir inline and immediately drops it, so the path passed to ClientConfig::testnet().with_storage_path(...) points to a removed directory; create a TempDir binding (e.g., let temp_dir = TempDir::new().unwrap()) before building the ClientConfig, pass temp_dir.path() into with_storage_path, and keep temp_dir in scope for the duration of wallet_integration_test (or return/store it alongside the client); ensure the same change is applied in peer_test to avoid premature deletion.
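A self-contained sketch of the "return it alongside the client" variant mentioned in the note above; `StubClient` and its constructor are placeholders, not the actual dash-spv test helper:

```rust
// Sketch only: keep the TempDir alive by handing it back to the caller, so the
// storage path outlives the test body instead of being deleted immediately.
use std::path::PathBuf;
use tempfile::TempDir;

struct StubClient {
    storage_path: PathBuf,
}

fn create_test_client() -> (StubClient, TempDir) {
    let temp_dir = TempDir::new().expect("create temp dir");
    let client = StubClient { storage_path: temp_dir.path().to_path_buf() };
    // Returning temp_dir transfers ownership to the caller; the directory is
    // removed only when the caller drops it.
    (client, temp_dir)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn storage_path_exists_while_tempdir_lives() {
        let (client, _temp_dir) = create_test_client();
        assert!(client.storage_path.exists());
    } // _temp_dir dropped here; the directory disappears only now
}
```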
🧹 Nitpick comments (6)
dash-spv/src/client/transactions.rs (1)
14-27: `network_guard` is held across the async `broadcast()` call, blocking other network-lock waiters.

`self.network` is a `tokio::sync::Mutex`, so holding its guard across `network.broadcast(message).await` is safe, but any other task that needs the network lock while a broadcast is in flight (e.g., `subscribe_network_events()`, peer-count queries) will be blocked for the full round-trip duration.

For an infrequent operation like transaction broadcast this is acceptable today, but consider whether `broadcast()` can be invoked without holding the top-level network mutex — e.g., by extracting a channel sender or an `Arc<PeerNetworkManager>` handle before releasing the lock.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/transactions.rs` around lines 14 - 27, The code holds network_guard across the await on network.broadcast(...) which blocks other tasks waiting on self.network; to fix, acquire the guard, downcast to PeerNetworkManager and clone or extract a lightweight handle (e.g., an Arc<crate::network::manager::PeerNetworkManager> or a channel sender) and then drop the network_guard before calling broadcast so broadcast(...) runs without holding the tokio::sync::Mutex; update the transaction broadcast path where network_guard, network, network.peer_count(), and network.broadcast(...) are used (and consider using the same extracted handle in subscribe_network_events() / peer_count queries).
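A sketch of the suggestion above with stand-in types — `NetworkHandle` and its `broadcast()` are assumptions, not the real `PeerNetworkManager` API — showing the guard being dropped before the slow await:

```rust
// Sketch with stub types: clone a cheap handle out of the mutex-guarded manager,
// drop the guard, then await the slow call without blocking other lock waiters.
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(Clone)]
struct NetworkHandle; // imagine this wrapping a channel sender or Arc<PeerNetworkManager>

impl NetworkHandle {
    async fn broadcast(&self, _msg: &str) { /* slow network round-trip elided */ }
}

struct Client {
    network: Arc<Mutex<NetworkHandle>>,
}

impl Client {
    async fn broadcast_transaction(&self, msg: &str) {
        // Hold the lock only long enough to extract a handle…
        let handle = {
            let guard = self.network.lock().await;
            guard.clone()
        }; // …the guard is dropped here…

        // …so peer-count queries and event subscriptions are not blocked while
        // the broadcast round-trip is in flight.
        handle.broadcast(msg).await;
    }
}
```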
dash-spv-ffi/src/client.rs (1)

335-394: Unnecessary `resubscribe()` — use the original receivers directly.
`subscribe_sync_events()`, `subscribe_network_events()`, and the wallet receiver are obtained fresh in `block_on` (lines 335–344). Each is then immediately discarded in favour of `receiver.resubscribe()` (lines 352, 363, 387). `resubscribe()` creates a second receiver at the current tail while the first is dropped, wasting a slot in each broadcast channel.

♻️ Proposed fix – use original receivers directly
```diff
     let (sync_event_rx, network_event_rx, progress_rx, wallet_event_rx) =
         client.runtime.block_on(async {
             let wallet_rx = client.inner.wallet().read().await.subscribe_events();
             (
                 client.inner.subscribe_sync_events().await,
                 client.inner.subscribe_network_events().await,
                 client.inner.subscribe_progress().await,
                 wallet_rx,
             )
         });
     ...
     if client.sync_event_callbacks.lock().unwrap().is_some() {
         let handle = spawn_broadcast_monitor(
             "Sync event",
-            sync_event_rx.resubscribe(),
+            sync_event_rx,
             ...
         );
     }
     if client.network_event_callbacks.lock().unwrap().is_some() {
         let handle = spawn_broadcast_monitor(
             "Network event",
-            network_event_rx.resubscribe(),
+            network_event_rx,
             ...
         );
     }
     // progress_rx.clone() is already correct for watch receivers
     if client.wallet_event_callbacks.lock().unwrap().is_some() {
         let handle = spawn_broadcast_monitor(
             "Wallet event",
-            wallet_event_rx.resubscribe(),
+            wallet_event_rx,
             ...
         );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 335 - 394, The code unnecessarily calls .resubscribe() on freshly obtained receivers (sync_event_rx, network_event_rx, wallet_event_rx) which creates extra broadcast slots; instead pass the original receiver variables into spawn_broadcast_monitor (i.e. replace sync_event_rx.resubscribe(), network_event_rx.resubscribe(), wallet_event_rx.resubscribe() with sync_event_rx, network_event_rx, wallet_event_rx respectively) so the first receivers are used directly when calling spawn_broadcast_monitor (leave spawn_progress_monitor(progress_rx.clone(), ...) as-is).

dash-spv/src/client/lifecycle.rs (2)
176-178: Two `Mutex` locks held simultaneously across an `.await` boundary.

`self.sync_coordinator.lock().await` and `self.network.lock().await` are both held for the duration of `start(...).await`. If any code path inside `start()` tries to acquire `sync_coordinator` or `network` independently, this will deadlock. This is likely safe during startup (no concurrent callers yet), but it's fragile. Consider acquiring the network lock first, then passing it in:

♻️ Suggested restructure
```diff
-        if let Err(e) =
-            self.sync_coordinator.lock().await.start(&mut *self.network.lock().await).await
-        {
+        let mut network_guard = self.network.lock().await;
+        if let Err(e) =
+            self.sync_coordinator.lock().await.start(&mut *network_guard).await
+        {
```

This makes the lock acquisition order explicit and easier to audit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/lifecycle.rs` around lines 176 - 178, The code currently holds two MutexGuards across an await by calling self.sync_coordinator.lock().await.start(&mut *self.network.lock().await).await which can deadlock; change the order to acquire the network lock first (self.network.lock().await) then acquire or pass only the necessary sync_coordinator handle into start so you do not hold both guards across the await: obtain the network guard, create a mutable reference or wrapper for the network to pass into SyncCoordinator::start (or change start to accept a network guard/borrow), then call sync_coordinator.lock().await.start(...) while ensuring only one MutexGuard is held during the await. Ensure you update the SyncCoordinator::start signature or call site accordingly and keep references to the unique symbols sync_coordinator, network, and start to locate the fix.
88-90: `expect()` in library code — prefer propagating an error.

Line 90 uses `.expect("Masternode list engine must exist if masternodes are enabled")`. While the invariant is guaranteed by the `if` guard on line 87, the coding guidelines state to avoid `unwrap()`/`expect()` in library code. As per coding guidelines: "Avoid `unwrap()` and `expect()` in library code; use explicit error types."

♻️ Suggested fix
```diff
         if config.enable_masternodes {
             let masternode_list_engine = masternode_engine
                 .clone()
-                .expect("Masternode list engine must exist if masternodes are enabled");
+                .ok_or_else(|| SpvError::Config("Masternode list engine not initialized but masternodes are enabled".to_string()))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/lifecycle.rs` around lines 88 - 90, The use of .expect on masternode_engine should be replaced by explicit error propagation: change the enclosing function to return Result<...> (or use the crate's error type), then replace let masternode_list_engine = masternode_engine.clone().expect(...) with something like let masternode_list_engine = masternode_engine.clone().ok_or_else(|| YourError::MissingMasternodeEngine)? (or map_err to convert to the appropriate error type); ensure you add a new error variant (e.g., MissingMasternodeEngine) or use an existing error type and update callers to handle the Result.

dash-spv/src/client/mempool.rs (1)
122-134: Duplicate `MempoolFilter` construction logic with `lifecycle.rs`.

Lines 126–132 here are identical to `lifecycle.rs` lines 143–149 (same config fields, same `HashSet::new()`, same `mempool_state.clone()`). Consider extracting a helper like `fn create_mempool_filter(&self, config: &ClientConfig) -> Arc<MempoolFilter>` to avoid the duplication and reduce the risk of the two paths diverging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/mempool.rs` around lines 122 - 134, Duplicate construction of MempoolFilter in update_mempool_filter is error-prone; extract the shared logic into a helper like fn create_mempool_filter(&self, config: &ClientConfig) -> Arc<MempoolFilter> that builds Arc::new(MempoolFilter::new(...)) using config.mempool_strategy, config.max_mempool_transactions, self.mempool_state.clone(), HashSet::new(), and config.network, then replace the inline construction in update_mempool_filter with a call to create_mempool_filter(&config) and update the identical call site in lifecycle.rs to use the same helper so both paths share one implementation.
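A sketch of the suggested helper using stub types; the constructor arguments mirror the review prompt and may not match the real `MempoolFilter` signature:

```rust
// Stub types standing in for the dash-spv ones; only the shape of the extraction
// is the point here — one construction site shared by both call paths.
use std::collections::HashSet;
use std::sync::Arc;

struct MempoolState;

struct MempoolFilter {
    strategy: u8,
    max_transactions: usize,
    state: Arc<MempoolState>,
    watched: HashSet<[u8; 32]>,
    network: u8,
}

struct ClientConfig {
    mempool_strategy: u8,
    max_mempool_transactions: usize,
    network: u8,
}

struct Client {
    mempool_state: Arc<MempoolState>,
}

impl Client {
    // Single construction point that both the lifecycle and mempool paths call,
    // so the two copies of this logic cannot drift apart.
    fn create_mempool_filter(&self, config: &ClientConfig) -> Arc<MempoolFilter> {
        Arc::new(MempoolFilter {
            strategy: config.mempool_strategy,
            max_transactions: config.max_mempool_transactions,
            state: self.mempool_state.clone(),
            watched: HashSet::new(),
            network: config.network,
        })
    }
}
```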
dash-spv/src/client/core.rs (1)

107-118: Document lock acquisition order in `start()` method to prevent future deadlock hazards.

The struct uses `RwLock` for read-heavy fields (`config`, `wallet`) and `Mutex` for exclusive-access fields (`network`, `storage`, `sync_coordinator`). While the current lock acquisition pattern in `start()` (lifecycle.rs lines 138–189) is safe, there is a nested lock at line 177 (`self.sync_coordinator.lock().await.start(&mut *self.network.lock().await)`) that should be documented. As the codebase evolves, explicit documentation of the expected lock acquisition order will help prevent future deadlock bugs.

Consider whether `Arc<Mutex<N>>` and `Arc<Mutex<S>>` are the most appropriate wrappers for network and storage managers in an async context, or if interior mutability patterns might better support concurrent access elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/core.rs` around lines 107 - 118, Add explicit documentation describing the lock acquisition order used by DashSpvClient to prevent deadlocks: document that callers should always acquire locks in the canonical order (e.g., first acquire RwLock fields like config and wallet for read, then acquire Mutex fields like network and storage, then sync_coordinator; specifically note the nested acquisition at self.sync_coordinator.lock().await.start(&mut *self.network.lock().await) in start()), and record why this ordering is required and that any new code must follow it; also add a note next to the struct definition about reconsidering Arc<Mutex<N>> / Arc<Mutex<S>> for async-friendly interior mutability if concurrent access patterns change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv-ffi/src/platform_integration.rs`:
- Around line 184-190: The match block in platform_integration.rs that sets
`height` from `network` currently hardcodes activation heights (notably
`dashcore::Network::Dash => 1_888_888`), which violates the "no hardcoded
network parameters" rule; update the code to read activation heights from the
configuration instead: add an Option<u32> field `platform_activation_height` to
`ClientConfig` and use `client_config.platform_activation_height` when present,
otherwise fall back to a documented default sourced from a central
`NetworkParams` constant (or return an error/require explicit opt-in for
placeholder values) instead of embedding literals in the `match` block that
includes `dashcore::Network::Dash`, `Testnet`, `Devnet` etc. Ensure callers can
supply authoritative values via `ClientConfig` and remove the placeholder
literal `1_888_888` from the FFI function.
In `@dash-spv/src/client/core.rs`:
- Around line 195-210: The update_config method currently blocks network changes
unconditionally but returns an error message implying a running-state check; fix
by either (A) enforcing the running check: read the client's running state
(self.running) inside update_config and only return SpvError::Config("Cannot
change network on running client") when the client is actually running, then
allow network updates when not running, or (B) if network must never change
after construction, update the error text to a clear invariant (e.g., "Cannot
change network after client creation") and any relevant docs; locate function
update_config and the self.running field to implement the chosen fix.
In `@dash-spv/src/client/queries.rs`:
- Around line 26-33: Remove the redundant async method get_peer_count() from the
type that also defines peer_count(), since it is an unused duplicate and its doc
comment is misleading; delete the get_peer_count() function and its doc comment,
and run the test/build to ensure no callers exist (if any call sites are found,
update them to use peer_count()). Reference: peer_count() and get_peer_count().
---
Outside diff comments:
In `@dash-spv-ffi/src/client.rs`:
- Around line 575-601: dash_spv_ffi_client_destroy currently cancels the
shutdown token then immediately calls client.inner.stop(), racing with
monitor_network which expects threads to be joined before stopping; reorder to
match stop_client_internal: after cancelling client.shutdown_token, first take
and join all handles from client.active_threads (using the same mutex/take
pattern) and only after all threads have been joined call
client.runtime.block_on(async { let _ = client.inner.stop().await; }); keep the
shutdown_token.cancel() call first and retain the tracing/error logging around
thread joins to preserve diagnostics.
- Around line 547-568: dash_spv_ffi_client_clear_storage currently takes an
immutable FFIDashSpvClient reference and calls client.inner.stop().await, which
can race with the monitoring thread; change the function to accept a mutable
pointer (use &mut FFIDashSpvClient) and call the existing stop_client_internal
routine (which performs cancel → join → stop) on the client before invoking
client.inner.clear_storage().await so the monitor spawned by
dash_spv_ffi_client_run is properly cancelled and joined prior to wiping
storage; preserve the FFI error handling (set_last_error / FFIErrorCode) after
clear_storage completes.
In `@dash-spv/src/client/sync_coordinator.rs`:
- Around line 71-96: run() currently uses tokio::join! which waits for both
client_task and shutdown_task, causing run to hang if client_task finishes
first; change the logic to await client_task first, capture its Result, then
abort the shutdown_task JoinHandle (call shutdown_task.abort()) and optionally
await its abort completion, and finally propagate client_task's result (mapping
panics to SpvError as before). Locate the spawned tasks named client_task and
shutdown_task and adjust the control flow so monitor_network()/client_task is
awaited first, stop() is still called on client exit, and shutdown_task is
explicitly aborted to avoid permanent blocking.
In `@dash-spv/tests/wallet_integration_test.rs`:
- Around line 18-21: The test currently constructs a TempDir inline and
immediately drops it, so the path passed to
ClientConfig::testnet().with_storage_path(...) points to a removed directory;
create a TempDir binding (e.g., let temp_dir = TempDir::new().unwrap()) before
building the ClientConfig, pass temp_dir.path() into with_storage_path, and keep
temp_dir in scope for the duration of wallet_integration_test (or return/store
it alongside the client); ensure the same change is applied in peer_test to
avoid premature deletion.
---
Nitpick comments:
In `@dash-spv-ffi/src/client.rs`:
- Around line 335-394: The code unnecessarily calls .resubscribe() on freshly
obtained receivers (sync_event_rx, network_event_rx, wallet_event_rx) which
creates extra broadcast slots; instead pass the original receiver variables into
spawn_broadcast_monitor (i.e. replace sync_event_rx.resubscribe(),
network_event_rx.resubscribe(), wallet_event_rx.resubscribe() with
sync_event_rx, network_event_rx, wallet_event_rx respectively) so the first
receivers are used directly when calling spawn_broadcast_monitor (leave
spawn_progress_monitor(progress_rx.clone(), ...) as-is).
In `@dash-spv/src/client/core.rs`:
- Around line 107-118: Add explicit documentation describing the lock
acquisition order used by DashSpvClient to prevent deadlocks: document that
callers should always acquire locks in the canonical order (e.g., first acquire
RwLock fields like config and wallet for read, then acquire Mutex fields like
network and storage, then sync_coordinator; specifically note the nested
acquisition at self.sync_coordinator.lock().await.start(&mut
*self.network.lock().await) in start()), and record why this ordering is
required and that any new code must follow it; also add a note next to the
struct definition about reconsidering Arc<Mutex<N>> / Arc<Mutex<S>> for
async-friendly interior mutability if concurrent access patterns change.
In `@dash-spv/src/client/lifecycle.rs`:
- Around line 176-178: The code currently holds two MutexGuards across an await
by calling self.sync_coordinator.lock().await.start(&mut
*self.network.lock().await).await which can deadlock; change the order to
acquire the network lock first (self.network.lock().await) then acquire or pass
only the necessary sync_coordinator handle into start so you do not hold both
guards across the await: obtain the network guard, create a mutable reference or
wrapper for the network to pass into SyncCoordinator::start (or change start to
accept a network guard/borrow), then call
sync_coordinator.lock().await.start(...) while ensuring only one MutexGuard is
held during the await. Ensure you update the SyncCoordinator::start signature or
call site accordingly and keep references to the unique symbols
sync_coordinator, network, and start to locate the fix.
- Around line 88-90: The use of .expect on masternode_engine should be replaced
by explicit error propagation: change the enclosing function to return
Result<...> (or use the crate's error type), then replace let
masternode_list_engine = masternode_engine.clone().expect(...) with something
like let masternode_list_engine = masternode_engine.clone().ok_or_else(||
YourError::MissingMasternodeEngine)? (or map_err to convert to the appropriate
error type); ensure you add a new error variant (e.g., MissingMasternodeEngine)
or use an existing error type and update callers to handle the Result.
In `@dash-spv/src/client/mempool.rs`:
- Around line 122-134: Duplicate construction of MempoolFilter in
update_mempool_filter is error-prone; extract the shared logic into a helper
like fn create_mempool_filter(&self, config: &ClientConfig) ->
Arc<MempoolFilter> that builds Arc::new(MempoolFilter::new(...)) using
config.mempool_strategy, config.max_mempool_transactions,
self.mempool_state.clone(), HashSet::new(), and config.network, then replace the
inline construction in update_mempool_filter with a call to
create_mempool_filter(&config) and update the identical call site in
lifecycle.rs to use the same helper so both paths share one implementation.
In `@dash-spv/src/client/transactions.rs`:
- Around line 14-27: The code holds network_guard across the await on
network.broadcast(...) which blocks other tasks waiting on self.network; to fix,
acquire the guard, downcast to PeerNetworkManager and clone or extract a
lightweight handle (e.g., an Arc<crate::network::manager::PeerNetworkManager> or
a channel sender) and then drop the network_guard before calling broadcast so
broadcast(...) runs without holding the tokio::sync::Mutex; update the
transaction broadcast path where network_guard, network, network.peer_count(),
and network.broadcast(...) are used (and consider using the same extracted
handle in subscribe_network_events() / peer_count queries).
All fields wrapped in `Arc` with locks. Callers hold a handle while the monitoring loop runs in a spawned task via `run()`, getting rid of the command-channel pattern.

Summary by CodeRabbit
Refactor
Breaking Changes
- `network()`, `peer_count()`, `subscribe_progress()`, `progress()`, `subscribe_sync_events()`, and `subscribe_network_events()` now require `.await`.
- `start()`, `stop()`, `shutdown()`, `clear_storage()`, and `update_config()` now accept immutable references (`&self`) instead of mutable references (`&mut self`).
- `DashSpvClientInterface` and the command-based query API removed; use direct client methods instead.
- `run()` method signature changed: now accepts only `shutdown_token`, removing the command receiver parameter.

Chores

- `futures` dependency.