RedStone Stellar oracle contract relayer support #359
Conversation
…act-relayer-support
@coderabbitai Review this PR.
✅ Actions performed: Review triggered.
Important: Review skipped (draft detected). Please check the settings in the CodeRabbit UI.
📝 Walkthrough
This pull request introduces a new RedStone oracle adapter contract for NEAR with accompanying integration into the relayer service. It abstracts oracle functionality through a generic Spec trait, implements both Pyth and RedStone oracle specifications, adds a Node.js bridge service for RedStone price feed fetching, and updates the relayer to support optional multi-oracle configuration with a multi-stage Docker build.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 28
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
service/relayer/src/route/universal_account/relay.rs (1)
227-237: ⚠️ Potential issue | 🟠 Major: Guard optional Pyth handle to prevent panic
`app.pyth.as_ref().unwrap()` will panic when Pyth is not configured, crashing the relayer. Collect the updates first, skip when empty, and return a clear error when Pyth isn't configured but is required.
🛡️ Safer handling
```diff
-        let request_price_updates = interacted_price_identifiers
-            .intersection(&update_price_feeds)
-            .copied();
-
-        if let Err(e) = app
-            .pyth
-            .as_ref()
-            .unwrap()
-            .update(request_price_updates.collect())
-            .await
-        {
+        let request_price_updates: Box<[PriceIdentifier]> = interacted_price_identifiers
+            .intersection(&update_price_feeds)
+            .copied()
+            .collect();
+
+        if !request_price_updates.is_empty() {
+            let Some(pyth) = app.pyth.as_ref() else {
+                return SimpleResponse::Failure {
+                    error: "Pyth oracle is not configured".to_string(),
+                };
+            };
+            if let Err(e) = pyth.update(request_price_updates).await {
                 tracing::error!(error = ?e, "Failed to update requested Pyth prices");
                 return SimpleResponse::Failure {
                     error: e.to_string(),
                 };
-            }
+            }
+        }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/relayer/src/route/universal_account/relay.rs` around lines 227 - 237, The code currently calls app.pyth.as_ref().unwrap() which can panic if Pyth isn't configured; first collect request_price_updates into a Vec via request_price_updates.collect::<Vec<_>>(), return/skip early if that Vec is empty, then check app.pyth.as_ref().ok_or(...) (or is_some()) before calling update; if Pyth is missing return a clear error indicating Pyth is required for these updates, otherwise call app.pyth.as_ref().unwrap().update(collected_vec).await and handle the Result as before. Use the symbols request_price_updates, app.pyth, and the update(...) method to locate and change the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Cargo.toml`:
- Line 54: Remove the workspace-level soroban-sdk entry and instead declare it
as a dev-dependency inside the specific crate that uses it: delete the
soroban-sdk = "23.0.3" line from the workspace Cargo.toml and add soroban-sdk
(or the actual crate name needed, e.g., soroban-client/soroban-sdk matching the
import) under [dev-dependencies] in the universal-account crate's Cargo.toml,
since only test code in universal-account/src/encoding/stellar/public_key.rs
(which imports soroban_client::keypair::{Keypair, KeypairBehavior}) requires it.
In `@contract/redstone-adapter/Cargo.toml`:
- Line 15: The crate redstone-adapter currently pins hex-literal = "1.0.0" which
diverges from the workspace-managed version; change the dependency entry for
hex-literal in the redstone-adapter Cargo.toml to use the workspace-sourced
version by replacing the literal version with hex-literal.workspace = true (or
alternatively, update the workspace's hex-literal version if you truly need
1.0.0) so it matches the other workspace members (lst-oracle, fuzz, test-utils).
In `@contract/redstone-adapter/src/config/mod.rs`:
- Around line 36-51: The recover_public_key function currently uses
Signature::try_from(...).unwrap(), RecoveryId::try_from(...).unwrap(), and
VerifyingKey::recover_from_prehash(...).unwrap() which can panic; replace those
unwraps with proper error propagation returning redstone::CryptoError variants
(e.g., map Signature::try_from failures to CryptoError::InvalidSignatureLen,
RecoveryId::try_from failures to CryptoError::RecoveryByte, and
VerifyingKey::recover_from_prehash failures to CryptoError::RecoverPreHash)
using ? or map_err so the function returns Err(...) instead of panicking; update
the result construction to propagate errors from Signature::try_from,
RecoveryId::try_from, and VerifyingKey::recover_from_prehash within
recover_public_key.
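The error-propagation shape this prompt asks for can be sketched in plain Rust. `CryptoError` and the two parsing helpers below are stand-ins for the real redstone/k256 types; only the `map_err`/`?` pattern replacing the `unwrap()` calls is the point.

```rust
// Hypothetical error enum mirroring the redstone::CryptoError variants
// named in the review comment.
#[derive(Debug, PartialEq)]
enum CryptoError {
    InvalidSignatureLen(usize),
    RecoveryByte(u8),
    RecoverPreHash,
}

fn parse_signature(sig: &[u8]) -> Result<[u8; 64], CryptoError> {
    // Analogue of `Signature::try_from`: a bad length becomes an Err,
    // not a panic.
    sig.try_into()
        .map_err(|_| CryptoError::InvalidSignatureLen(sig.len()))
}

fn parse_recovery_id(v: u8) -> Result<u8, CryptoError> {
    // Analogue of `RecoveryId::try_from`: only 0..=3 are valid.
    if v <= 3 { Ok(v) } else { Err(CryptoError::RecoveryByte(v)) }
}

fn recover_public_key(sig: &[u8], v: u8) -> Result<([u8; 64], u8), CryptoError> {
    // `?` propagates each failure to the caller instead of aborting
    // the contract call.
    let signature = parse_signature(sig)?;
    let recovery_id = parse_recovery_id(v)?;
    Ok((signature, recovery_id))
}

fn main() {
    assert!(recover_public_key(&[0u8; 64], 1).is_ok());
    assert_eq!(
        recover_public_key(&[0u8; 10], 1),
        Err(CryptoError::InvalidSignatureLen(10))
    );
}
```

The real fix would map `VerifyingKey::recover_from_prehash` failures to `CryptoError::RecoverPreHash` the same way.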
In `@contract/redstone-adapter/src/lib.rs`:
- Line 95: Remove the development debug macros from the contract by deleting the
eprintln! and dbg! calls (specifically the
eprintln!("{timestamp_verification:?}"); and the dbg! usages) in lib.rs so they
don't consume gas in production; search for any remaining dbg! or eprintln!
invocations (e.g., around the timestamp_verification and related verification
logic) and remove them or replace them with a no-cost production-safe
alternative (such as conditional debug feature gating) if runtime diagnostics
are required.
- Around line 64-75: The get_prices method currently calls
get_prices_from_payload(...).unwrap() and calls
templar_common::panic_with_message on mismatched lengths; replace both panics
with a proper error-returning flow: change get_prices to return Result<GetPrice,
_> (or use the contract's existing error type) and propagate errors from
get_prices_from_payload using map_err to convert parsing failures into
descriptive contract errors, and validate prices.len() against feed_ids.len()
returning a clear Err instead of calling templar_common::panic_with_message;
update callers/tests accordingly to handle the Result.
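A minimal sketch of the Result-returning flow described above, with a hypothetical `PriceError` and a stand-in payload parser in place of `get_prices_from_payload`:

```rust
// Hypothetical error type; the real contract would use its existing one.
#[derive(Debug, PartialEq)]
enum PriceError {
    Payload(String),
    LengthMismatch { expected: usize, got: usize },
}

// Stand-in for the real RedStone payload decoding.
fn parse_payload(payload: &[u8]) -> Result<Vec<u128>, String> {
    if payload.is_empty() {
        return Err("empty payload".to_string());
    }
    Ok(payload.iter().map(|&b| u128::from(b)).collect())
}

fn get_prices(feed_ids: &[&str], payload: &[u8]) -> Result<Vec<u128>, PriceError> {
    // map_err turns a parse failure into a descriptive error instead of
    // an unwrap() panic.
    let prices = parse_payload(payload).map_err(PriceError::Payload)?;
    // Validate lengths and return Err instead of panic_with_message.
    if prices.len() != feed_ids.len() {
        return Err(PriceError::LengthMismatch {
            expected: feed_ids.len(),
            got: prices.len(),
        });
    }
    Ok(prices)
}

fn main() {
    assert_eq!(get_prices(&["BTC", "ETH"], &[1, 2]), Ok(vec![1, 2]));
    assert!(matches!(
        get_prices(&["BTC"], &[1, 2]),
        Err(PriceError::LengthMismatch { expected: 1, got: 2 })
    ));
}
```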
- Around line 141-152: The read_prices method currently panics via expect() and
unwrap() (calls: self.db.get in read_prices, self.check_price_data, and
primitive_types::U256::from_str) so change its signature to return a
Result<Vec<U256>, E> (or Result with a custom error type) and replace
expect()/unwrap() with proper error handling/propagation: map missing feed from
self.db.get into an error, propagate check_price_data errors instead of
unwrapping, and convert U256 parsing errors into the Result; ensure callers of
read_prices are updated to handle the Result.
- Around line 203-208: Remove the dbg! macros wrapping values during config
creation and payload processing: in the block that builds the config via
self.config.redstone_config::<redstone::network::StdEnv>((), dbg!(feed_ids),
block_timestamp.into(),) and the subsequent call dbg!(process_payload(&mut
config, payload.to_vec()))?, replace the dbg! wrappers with the original
expressions (pass feed_ids directly and call process_payload(&mut config,
payload.to_vec())?), removing any debug printing; if you need runtime visibility
keep structured logging instead of dbg! so signatures around redstone_config and
process_payload remain unchanged.
- Line 73: The current mapping that builds prices uses price.low_u128() inside
Vec::from_iter(...map(|(_, price)| price.low_u128().into())), which silently
truncates U256 values larger than u128::MAX; instead validate or convert safely
by checking each price fits into U128 (e.g., compare price >
U256::from(u128::MAX) or use price.try_into() if available) and return/propagate
an error when overflow would occur, or switch the destination type to a larger
integer to avoid truncation; update the map closure in the prices construction
to perform that check and produce a Result or larger-type value accordingly.
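The truncation hazard can be shown in miniature. `primitive_types::U256` isn't available in a std-only sketch, so this uses u128 to u64; the comparison-before-narrowing follows the same `price > U256::from(u128::MAX)` pattern the comment suggests.

```rust
// Checked narrowing: returns Err on overflow instead of silently
// dropping high bits the way a low_u128()-style cast does.
fn narrow_price(price: u128) -> Result<u64, String> {
    u64::try_from(price).map_err(|_| format!("price {price} overflows u64"))
}

fn main() {
    assert_eq!(narrow_price(42), Ok(42));
    // A value above the target max is rejected...
    assert!(narrow_price(u128::from(u64::MAX) + 1).is_err());
    // ...whereas an `as`-cast (the low_u128 analogue) truncates to 0.
    assert_eq!((u128::from(u64::MAX) + 1) as u64, 0);
}
```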
In `@contract/redstone-adapter/src/utils.rs`:
- Line 13: The code uses String::from_utf8_lossy(trimmed).to_string(), which
silently replaces invalid UTF-8; change this to validate explicitly by using
String::from_utf8 (or equivalent) on the byte buffer (the variable named
trimmed) and handle the Result error path: return an Err or propagate a clear
error if feed IDs must be valid UTF‑8, or document/convert deliberately if you
choose lossy behavior; update the function in utils.rs that performs this
conversion to either return Result<String, Utf8Error> (or map the error into
your crate error type) or log/handle the invalid-UTF8 case instead of silently
replacing bytes.
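The strict-versus-lossy distinction in isolation (feed-ID bytes here are illustrative):

```rust
// String::from_utf8 surfaces invalid bytes as an Err; from_utf8_lossy
// would silently substitute U+FFFD and hide the corruption.
fn feed_id_to_string(trimmed: &[u8]) -> Result<String, std::string::FromUtf8Error> {
    String::from_utf8(trimmed.to_vec())
}

fn main() {
    assert_eq!(feed_id_to_string(b"BTC").unwrap(), "BTC");
    // 0xFF is never valid UTF-8, so validation fails loudly.
    assert!(feed_id_to_string(&[0x42, 0xFF]).is_err());
    // The lossy path would have masked the problem:
    assert_eq!(String::from_utf8_lossy(&[0x42, 0xFF]), "B\u{FFFD}");
}
```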
- Around line 36-39: Replace the weak assert_ne(...) check with explicit
assertions that the conversion functions produce the expected string outputs:
call convert_btc_feed_id_to_string(...) and
convert_non_btc_feed_id_to_string(...) with the test inputs used in the diff and
assert_equals (or assert_eq!) against the known expected strings for those
inputs; keep the existing assert_ne if you want to also ensure they differ but
primarily add assert_eq! checks for the exact expected outputs to catch
regressions in convert_btc_feed_id_to_string and
convert_non_btc_feed_id_to_string.
In `@contract/redstone-adapter/tests/test.rs`:
- Line 15: The test instantiates RedStoneAdapter with production config; update
the call site to use the test configuration by replacing config::prod() with
config::test() when creating RedStoneAdapter (RedStoneAdapter::new), ensuring
the test uses test() instead of prod().
- Around line 21-23: The test currently only prints price_data and has no
assertions, so add explicit assertions after calling
ra.read_price_data(prices.clone()) to validate correctness: assert the returned
price_data length matches the input, assert key fields (e.g., price_data entries
for expected asset symbols and their price or timestamp fields) match the values
produced by write_prices/read_price_data, and include an assertion that
ra.write_prices (if invoked earlier in this test) succeeded or returned expected
result; use the price_data variable and the read_price_data function name to
locate where to insert these checks.
In `@service/relayer/build.rs`:
- Around line 6-10: The script currently runs npm run build via the Command
stored in the npm variable without installing dependencies first, which fails in
clean checkouts/CI; add a prior Command invocation that runs "npm install" in
the same directory (current_dir("redstone-bridge")) and wait for it to complete
before calling the existing
npm.arg("run").arg("build").spawn().unwrap().wait().unwrap(); ensure the install
Command uses spawn().unwrap().wait().unwrap() (or equivalent status check) so
the build only starts after dependencies are installed.
In `@service/relayer/Dockerfile`:
- Around line 32-34: The Dockerfile RUN line that performs package installation
should add the --no-install-recommends flag to apt-get install to avoid
installing extra recommended packages; update the RUN command that currently
reads "apt-get install -y ca-certificates libssl3" (in the Dockerfile RUN step)
to include --no-install-recommends so it becomes "apt-get install -y
--no-install-recommends ca-certificates libssl3", keeping the surrounding
apt-get update and cleanup (rm -rf /var/lib/apt/lists/*) intact.
- Around line 22-40: The Redstone bridge runtime in the Dockerfile currently
copies only the transpiled files (COPY --from=node-builder
/usr/src/redstone-bridge/dist/* /app/redstone-bridge) but not its production
node_modules, so runtime deps like `@redstone-finance/protocol`,
`@redstone-finance/sdk` and zod will be missing; fix by ensuring the final image
contains either a bundled artifact or the production dependencies: update the
node-builder stage (where package*.json and npm install/run build are executed)
to run npm ci --production (or npm prune --production) and copy node_modules
into the final image, or adjust the build to produce a self-contained bundle and
copy the full dist/ directory instead of only dist/* so the relayer can require
the Redstone bridge at runtime.
In `@service/relayer/redstone-bridge/package.json`:
- Line 5: The package.json "main" entry currently points to index.js in the
project root but should reference the compiled output; update the "main" field
in package.json to point to the compiled entry (e.g., "dist/index.js") and
ensure the build step actually emits that file (adjust tsconfig/outDir or build
scripts if needed) so imports resolve to the compiled module.
In `@service/relayer/redstone-bridge/src/args.ts`:
- Around line 4-15: The DataServiceId schema currently uses z.literal with an
array which is invalid; replace it with z.enum to define a union of string
literals: update the DataServiceId definition (symbol DataServiceId) to use
z.enum([...]) passing the same string values, and adjust any imports if needed
so the type and validation behave correctly (ensure downstream uses of
DataServiceId still work or change to DataServiceId.parse/type as appropriate).
In `@service/relayer/redstone-bridge/src/handle.ts`:
- Around line 13-33: Add a default branch to the switch on message.method to
make it exhaustive: add a `default` case that uses an exhaustiveness assertion
(e.g., take the unreachable value as `never` and throw/assert) so TypeScript
will error if `Request` gains new variants; reference the existing
`message.method` switch and the `Request` type when adding the `default` case
(or a small helper like `assertUnreachable(x: never): never`) to enforce
compile-time coverage.
In `@service/relayer/redstone-bridge/src/index.ts`:
- Around line 10-24: The TCP client in index.ts lacks error handling and will
crash on connection or parsing failures; add a client.on("error", ...) handler
to log the error and exit or attempt reconnect as appropriate, and wrap the data
handler logic (the callback using Request.parse and await handle) in a try/catch
to catch JSON.parse/Request.parse and handle() exceptions — on catch, log the
error, optionally write an error response back over client.write (e.g., a
structured error message or status) and avoid unhandled rejections so the
process doesn't crash; reference the client object, the "data" event handler,
Request.parse, and handle(...) when implementing these changes.
- Around line 12-19: The data handler assumes each "data" event contains one
complete JSON message which is unsafe; change the client.on("data", ...) logic
to buffer incoming bytes and parse newline-delimited JSON frames instead:
accumulate incoming chunk(s) into a buffer, split by "\n" to extract complete
lines, for each non-empty line run JSON.parse + Request.parse and call
handle(args, message) and client.write for that response, and keep any trailing
partial fragment in the buffer for the next data event (alternatively use a Node
readline interface on the socket to emit complete lines). Ensure Request.parse,
handle, and client.write are used per complete framed message rather than per
raw data event.
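The bridge itself is TypeScript, but the buffering logic is language-agnostic; here it is as a std-only Rust sketch. Complete newline-terminated frames are split off, and a trailing partial fragment stays in the buffer for the next chunk.

```rust
// Accumulate a chunk into `buffer` and return every complete
// newline-delimited frame; any partial tail remains buffered.
fn drain_frames(buffer: &mut String, chunk: &str) -> Vec<String> {
    buffer.push_str(chunk);
    let mut frames = Vec::new();
    while let Some(pos) = buffer.find('\n') {
        let line: String = buffer.drain(..=pos).collect();
        let line = line.trim_end_matches('\n');
        if !line.is_empty() {
            frames.push(line.to_string());
        }
    }
    frames
}

fn main() {
    let mut buf = String::new();
    // One TCP chunk may carry one and a half JSON messages...
    assert_eq!(drain_frames(&mut buf, "{\"id\":1}\n{\"id\""), vec!["{\"id\":1}"]);
    // ...and the next chunk completes the second message.
    assert_eq!(drain_frames(&mut buf, ":2}\n"), vec!["{\"id\":2}"]);
    assert!(buf.is_empty());
}
```

Each returned frame is then safe to feed to `JSON.parse` + `Request.parse` on the TypeScript side.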
In `@service/relayer/src/client/oracle/mod.rs`:
- Around line 58-60: The worker and Handle use of unwrap() on channel operations
can panic; add a ChannelClosed variant to UpdateError and replace all unwraps
with proper map_err conversions to that variant: in the worker loop replace
send.send(client.update(&feed_ids).await).unwrap() with sending that maps a
SendError into UpdateError::ChannelClosed (and handle the Result returned by
send without panicking); in Handle::update replace
self.send.send(...).await.unwrap() so the await's error is mapped via
map_err(|_| UpdateError::ChannelClosed) and returned as Err, and replace the
response recv.await.unwrap() with awaiting the response and mapping a
None/closed case into UpdateError::ChannelClosed (or map_err as appropriate) so
no channel send/recv uses panic-inducing unwraps. Ensure all references to
client.update, feed_ids, send (the Sender), Handle::update, and the response
receiver are updated to propagate ChannelClosed instead of panicking.
In `@service/relayer/src/client/oracle/spec/pyth.rs`:
- Around line 123-145: The test update_actions() in pyth.rs performs network
calls via PythSpec::new() -> update_actions() -> latest_vaa() using a real
reqwest::Client, making it flaky in CI; fix by either marking the test #[ignore]
(or #[tokio::test] #[ignore]) to skip network runs in CI, or refactor PythSpec
to accept an injectable HTTP client (or a trait) and update this test to provide
a mock client that returns deterministic responses so update_actions() can run
offline reliably. Ensure references to update_actions, PythSpec::new, and
latest_vaa are used to locate and change the test or constructor to accept the
mock.
- Around line 48-110: latest_vaa currently deserializes binary.data as a fixed
[Data; 1] which fails when Hermes returns multiple VAAs; change the
ResponseBody/Binary/Data definitions so Binary.data is Vec<Data> (not [Data; 1])
and replace the single-element destructuring (let [vaa] = body.binary.data;
Ok(vaa.0)) with code that handles multiple entries — for example
collect/concatenate all Data.0 vectors into one Vec<u8> (e.g.
body.binary.data.into_iter().flat_map(|d| d.0).collect()) before returning;
touch latest_vaa, ResponseBody, Binary, Data and adjust update_actions usage
accordingly if needed.
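The concatenation step suggested there, shown without the serde types (the inner vectors stand in for Hermes' decoded VAA chunks):

```rust
// Instead of destructuring a fixed one-element array, flatten every
// entry into a single payload, preserving order.
fn join_vaas(data: Vec<Vec<u8>>) -> Vec<u8> {
    data.into_iter().flatten().collect()
}

fn main() {
    // One VAA still works...
    assert_eq!(join_vaas(vec![vec![1, 2]]), vec![1, 2]);
    // ...and multiple VAAs no longer break deserialization.
    assert_eq!(join_vaas(vec![vec![1, 2], vec![3]]), vec![1, 2, 3]);
    assert!(join_vaas(vec![]).is_empty());
}
```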
In `@service/relayer/src/client/oracle/spec/redstone.rs`:
- Around line 127-131: The Unix socket bind in start_messenger using
UnixListener::bind(SOCKET_PATH) can fail if a stale socket file exists; before
calling UnixListener::bind, check for and remove the existing SOCKET_PATH (e.g.,
std::fs::remove_file(SOCKET_PATH) if it exists), ignore NotFound errors and
handle/remove other errors appropriately, then proceed to bind the listener (or
propagate/log the bind error) so the relayer can start reliably.
- Around line 146-151: The current read loop unconditionally unwraps the
read_line result, JSON parse, and pending removal, which will panic on
EOF/malformed data or unknown IDs: change the block handling read.read_line(&mut
buf) to match on the Result and the bytes read (treat 0 as EOF and break/log),
then attempt serde_json::from_str(&buf) with a Result match to log parse errors
and continue, and finally use pending.remove(&received.id) and handle the Option
(log a warning if None instead of unwrap, and only call send when Some(sender)).
Keep using IpcResponse, received.id, buf, pending.remove and tracing for
logging.
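The EOF-aware read loop can be exercised against an in-memory reader; this sketch keeps only the `Ok(0)`/`Ok(n)`/`Err` branching, with logging elided:

```rust
use std::io::{BufRead, BufReader, Cursor};

// Read newline-delimited messages until EOF, never unwrapping.
fn read_messages(input: &[u8]) -> Vec<String> {
    let mut read = BufReader::new(Cursor::new(input));
    let mut messages = Vec::new();
    let mut buf = String::new();
    loop {
        buf.clear();
        match read.read_line(&mut buf) {
            // 0 bytes read means EOF: stop instead of spinning on an
            // empty buffer.
            Ok(0) => break,
            Ok(_) => messages.push(buf.trim_end().to_string()),
            // The real loop would log via tracing and decide whether
            // to continue; here we just stop.
            Err(_) => break,
        }
    }
    messages
}

fn main() {
    assert_eq!(read_messages(b"a\nb\n"), vec!["a", "b"]);
    // EOF terminates cleanly even without a trailing newline.
    assert_eq!(read_messages(b"only"), vec!["only"]);
    assert!(read_messages(b"").is_empty());
}
```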
- Around line 48-59: Replace the panicking unwraps in the tokio::spawn task:
handle the result of cmd.spawn() instead of calling cmd.spawn().unwrap() (log
the error with tracing::error and signal shutdown via kill.send(()) on Err), and
replace process.kill().await.unwrap() with a match that logs any Err from
process.kill().await and signals shutdown (use let _ = kill.send(()) to avoid
secondary panics); also handle the Result from process.wait() so the Err branch
logs the error and signals shutdown rather than panicking. Target the spawn
closure containing cmd.spawn(), the process variable, process.kill().await,
process.wait(), on_kill.changed(), and kill.send() when making these changes.
- Around line 251-254: Replace the global tracing initialization call that uses
tracing_subscriber::fmt().with_max_level(tracing::Level::INFO).init() with the
non-panicking variant by calling try_init() instead; locate the init call in the
redstone.rs file (the tracing_subscriber::fmt()...with_max_level(...)
invocation) and swap .init() for .try_init(), handling the Result (e.g., ignore
Err or log it) so tests don’t panic when tracing is already initialized.
- Around line 222-226: Replace the unwraps with proper error propagation so
bridge/send/recv and hex decoding failures return Spec::Error (backed by
std::io::Error) instead of panicking: in the code around Request::fetch(...) and
the subsequent self.bridge_send.send(req).await, await on recv, and
hex::decode(&payload_string_hex) handle each Result/Option by mapping the
bridge/send errors and recv timeouts into std::io::Error (e.g. using map_err or
.map(|v| v.ok_or_else(...)?).and_then(...)) and propagate via the function's
Result so the Spec::Error is returned; update the path that constructs
payload_vec to return Err(std::io::Error::new(...)) on hex::decode failures and
propagate that error out of the surrounding method.

---
Outside diff comments:
In `@service/relayer/src/route/universal_account/relay.rs`:
- Around line 227-237: The code currently calls app.pyth.as_ref().unwrap() which
can panic if Pyth isn't configured; first collect request_price_updates into a
Vec via request_price_updates.collect::<Vec<_>>(), return/skip early if that Vec
is empty, then check app.pyth.as_ref().ok_or(...) (or is_some()) before calling
update; if Pyth is missing return a clear error indicating Pyth is required for
these updates, otherwise call
app.pyth.as_ref().unwrap().update(collected_vec).await and handle the Result as
before. Use the symbols request_price_updates, app.pyth, and the update(...)
method to locate and change the logic.
Actionable comments posted: 24
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
mock/oracle/src/lib.rs (1)
140-142: ⚠️ Potential issue | 🟡 Minor: Potential buffer size mismatch in `custom_getrandom`.
`env::random_seed_array()` returns a 32-byte array, but `buf` can be any size. If `buf.len() != 32`, this will panic at runtime. Consider handling variable buffer sizes:
🛡️ Proposed fix to handle variable buffer sizes
```diff
 #[allow(clippy::unnecessary_wraps)]
 pub fn custom_getrandom(buf: &mut [u8]) -> Result<(), Error> {
-    buf.copy_from_slice(&env::random_seed_array());
+    let seed = env::random_seed_array();
+    for (i, byte) in buf.iter_mut().enumerate() {
+        *byte = seed[i % seed.len()];
+    }
     Ok(())
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mock/oracle/src/lib.rs` around lines 140 - 142, custom_getrandom currently assumes buf is 32 bytes and will panic when buf.len() != 32; update the function to safely handle variable-length buffers by obtaining the 32-byte seed from env::random_seed_array(), copying only the minimum of buf.len() and 32 bytes into buf (e.g. let seed = env::random_seed_array(); let n = buf.len().min(seed.len()); buf[..n].copy_from_slice(&seed[..n]);), and then deterministically fill any remaining bytes (if buf.len() > 32) with zeros or another defined value, or return an Err when buf.len() > 32 depending on the intended semantics; make this change inside custom_getrandom to avoid slice panics.
service/relayer/src/app/mod.rs (1)
78-79: ⚠️ Potential issue | 🟡 Minor: Database initialization failure will panic without meaningful error context.
The `#[allow(clippy::unwrap_used)]` suppresses the lint, but `Database::new` failing would cause a panic with limited diagnostic information. Consider propagating the error or adding context.
♻️ Proposed improvement
```diff
-        #[allow(clippy::unwrap_used)]
-        let database = Database::new(&args.database_url, kill.clone()).unwrap();
+        let database = Database::new(&args.database_url, kill.clone())
+            .expect("Failed to initialize database connection");
```
Or better, have `App::new` return a `Result` to allow proper error handling by callers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/relayer/src/app/mod.rs` around lines 78 - 79, Change App::new to return Result<App, E> (propagating errors) and replace the unwrap on Database::new(&args.database_url, kill.clone()).unwrap() with a fallible call that uses the ? operator (or map_err/anyhow::Context) so the database creation error is returned with added context (include args.database_url and a brief message). Specifically, modify the App::new signature and its call sites to propagate the Result, and inside App::new replace the unwrap on Database::new with a contextualized error propagation referencing Database::new, args.database_url, and kill.
♻️ Duplicate comments (3)
contract/redstone-adapter/tests/test.rs (1)
37-39: ⚠️ Potential issue | 🟠 Major: Test lacks assertions; will pass regardless of correctness.
The parameterized `payload` test only prints results via `eprintln!` but never asserts expected values for the success cases (`stellar` and `js_sdk`). The `#[should_panic]` cases are properly validated, but the success paths need assertions similar to the `output()` test.
💚 Proposed fix to add meaningful assertions
```diff
     let price_data = ra.read_price_data(prices.clone());
-    eprintln!("{price_data:#?}");
+    assert_eq!(price_data.len(), 2, "Expected 2 price entries");
+    for (feed_id, data) in &price_data {
+        assert!(!data.price.is_zero(), "Price should not be zero");
+        assert!(data.package_timestamp.0 > 0, "Package timestamp should be set");
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contract/redstone-adapter/tests/test.rs` around lines 37 - 39, The test currently only prints price_data from ra.read_price_data(prices.clone()) and lacks assertions for the success cases; update the parameterized payload test to replace the eprintln!("{price_data:#?}") with explicit assertions (e.g., assert_eq! or assert_matches!) that compare price_data to the expected value used in the output() test or an equivalent expected structure for the "stellar" and "js_sdk" cases, ensuring failing cases remain covered by #[should_panic] and that you reference the read_price_data result (price_data) and the input prices/prices.clone() to build the expected assertions.
service/relayer/src/client/oracle/spec/redstone.rs (1)
175-199: ⚠️ Potential issue | 🟡 Minor: Handle EOF condition from `read_line`.
The `read_line` branch doesn't check the return value. When the bridge closes the connection, `read_line` returns `Ok(0)` (EOF) but the code continues looping with an empty buffer, potentially causing spurious parse errors.
🛡️ Proposed fix to handle EOF
```diff
-                _ = read.read_line(&mut line) => {
+                result = read.read_line(&mut line) => {
+                    match result {
+                        Ok(0) => {
+                            tracing::warn!("Bridge closed connection (EOF)");
+                            let _ = kill.send(());
+                            break;
+                        }
+                        Ok(_) => {}
+                        Err(e) => {
+                            tracing::error!(error = ?e, "Failed to read from socket");
+                            continue;
+                        }
+                    }
                     tracing::debug!(line, "Received IPC message");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/relayer/src/client/oracle/spec/redstone.rs` around lines 175 - 199, The read_line branch must inspect the Result<usize> from read.read_line(&mut line): match the call, handle Err by logging and continue, handle Ok(0) as EOF by logging that the bridge closed and breaking the loop (so we don't try to parse an empty buffer), and only proceed to serde_json::from_str and pending.remove(&received.id) when read returned Ok(n) with n > 0; adjust the read.read_line usage in redstone.rs (the read.read_line(&mut line) branch) and keep existing symbols like IpcResponse, pending.remove, line, and on_kill.changed.
service/relayer/src/client/oracle/mod.rs (1)
93-103: ⚠️ Potential issue | 🟠 Major: Handle channel errors gracefully instead of panicking.
The `Handle::update` method (lines 250-258) uses `.unwrap()` on channel operations that can fail during shutdown or if the worker task panics. This will crash the caller. The past review suggested adding a `ChannelClosed` variant to `UpdateError`.
🐛 Proposed fix
```diff
 #[derive(thiserror::Error, Debug)]
 pub enum UpdateError {
+    #[error("Oracle service channel closed")]
+    ChannelClosed,
     #[error("Failed to construct update transaction: {0}")]
     UpdateActions(Box<dyn std::error::Error + Send + Sync>),
```
And in `Handle::update`:
```diff
-    #[allow(clippy::unwrap_used)]
     #[tracing::instrument(skip(self))]
     pub async fn update(
         &self,
         oracle_id: AccountId,
         feed_ids: Box<[S::FeedId]>,
     ) -> Result<Option<CryptoHash>, Arc<UpdateError>> {
         let (send, recv) = oneshot::channel();
         self.send
             .send(Request::Update {
                 oracle_id,
                 feed_ids,
                 send,
             })
             .await
-            .unwrap();
-        recv.await.unwrap()
+            .map_err(|_| Arc::new(UpdateError::ChannelClosed))?;
+        recv.await.map_err(|_| Arc::new(UpdateError::ChannelClosed))?
     }
```
Also applies to: 242-258
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/relayer/src/client/oracle/mod.rs` around lines 93 - 103, Add a ChannelClosed variant to UpdateError (e.g., ChannelClosed) and update Handle::update to avoid unwraps on channel send/recv; instead map send/recv failures to UpdateError::ChannelClosed (or wrap appropriately) and return Err(UpdateError::ChannelClosed) when the worker channel is closed or the response receive fails. Specifically modify the channel send call(s) and the awaiting of the response in Handle::update to handle Result::Err from sending and receiving and convert those errors into the new UpdateError::ChannelClosed variant so callers don't panic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@common/src/oracle/price_transformer.rs`:
- Around line 109-125: The two structs PriceTransformer and
ProxyPriceTransformer are identical except for the price_id type; refactor by
introducing a generic struct GenericPriceTransformer<P> with fields price_id: P,
call: Call, action: Action, then replace PriceTransformer and
ProxyPriceTransformer with type aliases PriceTransformer =
GenericPriceTransformer<PriceIdentifier> and ProxyPriceTransformer =
GenericPriceTransformer<OraclePriceId>; ensure to apply the same derives/near
serializers to GenericPriceTransformer so serialization behaviour remains
unchanged and update the lst constructor function signature to impl
GenericPriceTransformer<OraclePriceId> or provide an associated fn on the Proxy
type alias as needed.
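The suggested deduplication in miniature: one generic struct, two type aliases. The field types below are simplified stand-ins for the real `Call`/`Action`/`PriceIdentifier` types, and the derives are illustrative (the real code would keep its near serializers).

```rust
// One generic struct parameterized over the price-id type.
#[derive(Debug, Clone, PartialEq)]
struct GenericPriceTransformer<P> {
    price_id: P,
    call: String,   // stand-in for Call
    action: String, // stand-in for Action
}

// The two former structs become aliases differing only in P.
type PriceTransformer = GenericPriceTransformer<[u8; 32]>;
type ProxyPriceTransformer = GenericPriceTransformer<String>;

fn main() {
    let pyth = PriceTransformer {
        price_id: [0u8; 32],
        call: "get_price".into(),
        action: "relay".into(),
    };
    let proxy = ProxyPriceTransformer {
        price_id: "near-usd".into(),
        call: pyth.call.clone(),
        action: pyth.action.clone(),
    };
    assert_eq!(proxy.price_id, "near-usd");
}
```

Serialization behaviour is unchanged as long as the same derives move onto the generic struct.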
In `@common/src/oracle/proxy.rs`:
- Around line 63-76: The Oracle enum and OracleIds struct are
feature-inconsistent: gate the RedStone variant with the same cfg used for the
field (add #[cfg(feature = "redstone")] to the Oracle::RedStone variant) so
Oracle and OracleIds match, and update any matches or uses of Oracle::RedStone
to be conditionally compiled (or add #[cfg(feature = "redstone")] blocks) to
avoid referencing the variant when the feature is disabled; alter the Oracle
enum declaration (Oracle::RedStone) and any match arms that reference it to use
the feature gate.
In `@common/src/oracle/redstone/adapter.rs`:
- Around line 36-40: The FeedPrice struct is marked with an unclear TODO; either
remove it or document/track why it remains: search for references to FeedPrice
(and types FeedId, U256) and if there are no usages delete the struct and run
cargo build/tests to ensure nothing breaks; if it must stay, replace the `//
TODO: Remove` with a clear comment that references an issue or PR number (create
one if none) explaining why it remains and when it can be removed, and ensure
the comment names the responsible owner or condition for removal.
In `@common/src/oracle/redstone/config/config_test.rs`:
- Around line 10-31: The constant REDSTONE_PRIMARY_PROD_ALLOWED_SIGNERS is
misnamed because it contains Hardhat/Anvil test addresses; rename it to
something like REDSTONE_PRIMARY_TEST_ALLOWED_SIGNERS (and update any other
related constants if needed) and update all references—particularly where it's
used in test()—to the new name to avoid implying these are production signers;
ensure any documentation/comments reflect they are test addresses.
In `@common/src/oracle/redstone/feed_data.rs`:
- Around line 64-65: Remove the debug output and the panic-prone check: delete the `eprintln!("10^{e} = {modulus}");` line and the `assert_eq!(modulus, U256::exp10(e));` line in feed_data.rs. If a development-only check is still wanted, use `debug_assert_eq!(modulus, U256::exp10(e));` or move the comparison into a unit test for the `u256_exp10` logic instead of asserting at runtime.
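A self-contained sketch of the debug-only variant, with `u128` standing in for `U256` and both functions invented to mirror the shapes described above:

```rust
// Stand-in for the crate's u256_exp10 loop (u128 instead of U256).
fn exp10(e: u32) -> u128 {
    (0..e).fold(1u128, |acc, _| acc * 10)
}

fn modulus_for(e: u32) -> u128 {
    let modulus = 10u128.pow(e);
    // Checked only in debug builds; compiled out entirely in release,
    // unlike eprintln! + assert_eq! which always run.
    debug_assert_eq!(modulus, exp10(e));
    modulus
}
```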
- Around line 28-44: u256_exp10 reimplements functionality primitive-types already provides. Delete the custom exponentiation and call `U256::exp10(exponent as usize)` at each call site (or change the parameter type to usize), handling the u32-to-usize conversion so signatures stay consistent.
In `@contract/proxy-oracle/src/lib.rs`:
- Around line 336-341: `u64::try_from(price.publish_time).unwrap_or(0)` silently treats a negative publish_time as the epoch, hiding corrupted timestamps. Handle the conversion explicitly: check `price.publish_time < 0` (or match the Result from u64::try_from) and on failure log the bad value and return None, so the price_age_ms calculation and the max_age_ms check only ever see valid non-negative timestamps instead of silently falling back to 0.
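One way to shape the check, sketched as a free function with assumed units (seconds in, milliseconds out) and assumed caller behavior — None means "log and skip this price":

```rust
// Sketch: a negative publish_time is corrupted data; surface it as None
// rather than clamping to the epoch with unwrap_or(0).
fn price_age_ms(publish_time: i64, now_ms: u64) -> Option<u64> {
    let publish_ms = u64::try_from(publish_time).ok()?.checked_mul(1000)?;
    // checked_sub also rejects a publish time in the future.
    now_ms.checked_sub(publish_ms)
}
```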
- Around line 392-394: custom_getrandom unconditionally copies the 32-byte env::random_seed_array() into buf, which may have a different length. Fetch the seed, compare lengths, and return an Err (using this crate's error type) when `buf.len() != seed.len()`; only then call `buf.copy_from_slice(&seed)`, so a mismatch yields a clear error instead of a panic.
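A sketch of that guard, with a fixed `seed` parameter standing in for `env::random_seed_array()` and a unit `Error` standing in for the crate's real error type:

```rust
// Stand-in for the crate's error type (e.g. getrandom::Error).
#[derive(Debug, PartialEq)]
struct Error;

// Sketch: validate the destination length against the 32-byte seed
// instead of letting copy_from_slice panic on a mismatch.
fn fill_from_seed(buf: &mut [u8], seed: &[u8; 32]) -> Result<(), Error> {
    if buf.len() != seed.len() {
        return Err(Error); // report the mismatch instead of panicking
    }
    buf.copy_from_slice(seed);
    Ok(())
}
```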
In `@contract/proxy-oracle/tests/proxy_oracle.rs`:
- Around line 20-29: The test helper norm_price silently maps negative pyth::Price values to 0 via `u64::try_from(price.price.0).ok().map_or(0, ...)`, which can hide bugs. Make it fail loudly — e.g. `u64::try_from(price.price.0).expect(...)` inside norm_price — so negative values surface in tests, keeping the existing price.expo scaling logic intact.
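A rough sketch of the fail-loud helper, with plain i64/i32 arguments standing in for pyth::Price's fields and the scaling shown purely for illustration:

```rust
// Sketch of the test helper: a negative price now panics with a clear
// message instead of quietly becoming 0 and masking the bug.
fn norm_price(price: i64, expo: i32) -> u64 {
    let p = u64::try_from(price).expect("price must be non-negative in tests");
    if expo >= 0 {
        p * 10u64.pow(expo as u32)
    } else {
        p / 10u64.pow((-expo) as u32)
    }
}
```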
In `@service/relayer/Dockerfile`:
- Around line 34-36: The package-install step uses floating versions (`RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates libssl3 ...`). Pin explicit versions for reproducibility (e.g. `ca-certificates=<version> libssl3=<version>`), using `apt-cache policy` or a dry-run `apt-get -s` to discover versions available in the base image's repositories, and document the chosen versions in the build workflow. Keep the rest of the RUN chain (apt-get update and `rm -rf /var/lib/apt/lists/*`) intact, and make sure the pinned versions actually exist so builds don't fail.
In `@service/relayer/redstone-bridge/jest.config.js`:
- Around line 1-12: The Jest config uses ts-jest's CommonJS preset. Switch to createDefaultEsmPreset(): import and call it, assign its .transform to the same tsJestTransformCfg variable, and drop the manual extensionsToTreatAsEsm override so the ESM preset manages those settings itself.
In `@service/relayer/redstone-bridge/package.json`:
- Around line 15-25: The devDependencies entry for `@types/node` references the non-existent 25.x line. Change `"@types/node": "^25.2.3"` to a published 24.x release such as `"^24.10.2"` (keeping the caret) and re-run the package manager (npm install or yarn) to refresh the lockfile.
In `@service/relayer/redstone-bridge/src/args.test.ts`:
- Around line 5-15: parseArgs defaults the socket to the static path "/tmp/templar_redstone_bridge.sock", which clashes when multiple relayer instances run concurrently. Make the default dynamic in parseArgs and the Args type (e.g. `/tmp/templar_redstone_bridge_${process.pid}.sock` or a generated UUID), or make the socket argument required. Then update this test to match: either assert the new dynamic default or assert that parseArgs([]) throws when the socket is omitted.
In `@service/relayer/redstone-bridge/src/args.ts`:
- Line 29: The ["authorized-signers"] argument is typed as DataServiceId (a union of RedStone service IDs) but should accept Ethereum addresses or signer public keys. Change its schema in args.ts from DataServiceId.optional() to an address/public-key list (e.g. a comma-separated string parsed into an array), validate each entry with the existing address validator (or a new isValidEthAddress helper), and split/trim/normalize the entries wherever ["authorized-signers"] is read.
In `@service/relayer/redstone-bridge/src/handle.ts`:
- Around line 34-41: The catch block in handle.ts stringifies the caught value with `e + ""`, losing stack traces. Log the full error server-side with `console.error("Unknown error", e)` (including the stack when e is an Error), and set the returned message to `e.message` when `e instanceof Error`, otherwise `String(e)`. Keep the returned object's `id` and `status: "failure"` fields, and avoid leaking stack traces to clients.
In `@service/relayer/redstone-bridge/src/msg.test.ts`:
- Around line 4-10: The test "can deserialize Rust request" calls Request.parse(rustMessage) without checking the result. Capture the return value (`const req = Request.parse(rustMessage)`) and assert `req.id === 123`, `req.method === "fetch"` (or whatever value the Request type uses), and that `req.params` deep-equals `["ETH", "BTC"]`, optionally also asserting the parsed shape or type.
- Line 2: `Response` is imported but unused in msg.test.ts. Either drop it from `import { Request, Response } from "./msg"`, or add a test that exercises Response's deserializer with a representative Response object; then re-run the tests to confirm no unused-import lint failures remain.
In `@service/relayer/redstone-bridge/tsconfig.json`:
- Line 36: Remove the `"jsx": "react-jsx"` entry from tsconfig.json; this is a pure Node.js IPC process with no React/JSX usage, so the option only misrepresents the project's runtime.
In `@service/relayer/src/app/args.rs`:
- Around line 28-30: App::new() should validate that at least one oracle is configured: if both the `pyth` and `redstone` Option fields on the parsed args are None, return an error or exit with a clear message (e.g. "must configure at least one oracle: pyth or redstone") immediately after parsing, so the relayer can never be constructed without an oracle.
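The check above can be sketched as a standalone validation step; the `Args` struct and `Option<String>` field types are assumptions mirroring the field names in the comment:

```rust
// Assumed shape of the parsed CLI args.
struct Args {
    pyth: Option<String>,
    redstone: Option<String>,
}

// Sketch: run right after parsing, before anything else is constructed.
fn validate(args: &Args) -> Result<(), String> {
    if args.pyth.is_none() && args.redstone.is_none() {
        return Err("must configure at least one oracle: pyth or redstone".into());
    }
    Ok(())
}
```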
In `@service/relayer/src/client/near.rs`:
- Around line 570-605: query_oracle_type treats any Err from self.view(...) as "method not supported" because it only checks is_ok(), so RPC/network failures get misclassified as PythDirect. Match on each view Result instead: treat the interface as absent only when the error indicates a missing method (e.g. an is_method_not_found helper that checks for a MethodNotFound error code or message), and propagate every other Err upward as a ViewError. Apply this to the test_proxy and test_lst probes and the paths that call oracle_ids and oracle_id, still returning Ok(OracleType::Proxy / PythLst) when the view succeeds.
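The three-way classification can be sketched like this; the `ViewError` shape and `probe` function are assumptions standing in for the real view-call result handling:

```rust
// Assumed error shape: a distinguishable "method not found" variant
// versus everything else (network, timeout, ...).
#[derive(Debug)]
enum ViewError {
    MethodNotFound,
    Rpc(String),
}

// Sketch: Ok(true) = interface present, Ok(false) = interface absent,
// Err = transient failure that must not be mistaken for absence.
fn probe(result: Result<(), ViewError>) -> Result<bool, ViewError> {
    match result {
        Ok(()) => Ok(true),
        Err(ViewError::MethodNotFound) => Ok(false),
        Err(e) => Err(e), // bubble up instead of misclassifying
    }
}
```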
- Around line 647-650: When proxy.0 is empty the code logs an error but still returns Ok((oracle_id, price_identifier.into())), masking the misconfiguration. Add an error variant (e.g. ProxyDefinitionEmpty or ConfigError::EmptyProxyDefinition) and return it when proxy.0.first() is None, keeping the tracing::error! call and including oracle_id and price_identifier in the error context; update call sites to handle the new variant as needed.
In `@service/relayer/src/client/oracle/mod.rs`:
- Line 89: The unconditional `tokio::time::sleep(Duration::from_millis(5)).await` at the end of the loop adds latency to every iteration. Remove it, or move it into the one branch that actually needs rate-limiting (e.g. the batch_timer.tick() arm), so select! can block normally waiting for events and only the intended path pays the 5ms delay.
In `@service/relayer/src/route/universal_account/relay.rs`:
- Around line 242-264: In the fold that partitions interacted_price_identifiers (filtered by update_price_feeds) into the pyth and redstone maps via OraclePriceId::Pyth / OraclePriceId::RedStone, replace or_insert_with(HashSet::new) with the simpler `.entry(oracle_id).or_default().insert(id)` in both branches; keep the same variables (pyth, redstone) and overall fold logic.
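A minimal illustration of the `or_default()` form, with u32 keys and values standing in for the real oracle and price identifier types:

```rust
use std::collections::{HashMap, HashSet};

// Sketch: entry(..).or_default() replaces or_insert_with(HashSet::new);
// Default for HashSet is the empty set, so behavior is identical.
fn group(pairs: &[(u32, u32)]) -> HashMap<u32, HashSet<u32>> {
    let mut map: HashMap<u32, HashSet<u32>> = HashMap::new();
    for &(oracle_id, id) in pairs {
        map.entry(oracle_id).or_default().insert(id);
    }
    map
}
```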
In `@service/relayer/tests/near.rs`:
- Around line 17-19: `tracing_subscriber::fmt().with_max_level(tracing::Level::DEBUG).init()` panics when a global subscriber is already set, which happens across parallel tests. Call `try_init()` instead and handle the Result (e.g. ignore or log the Err) so repeated initialization is harmless.
---
Outside diff comments:
In `@mock/oracle/src/lib.rs`:
- Around line 140-142: custom_getrandom assumes buf is exactly 32 bytes and panics otherwise. Copy only the overlapping prefix from env::random_seed_array() (e.g. `let seed = env::random_seed_array(); let n = buf.len().min(seed.len()); buf[..n].copy_from_slice(&seed[..n]);`), then either deterministically fill any remainder (zeros or another defined value) or return an Err when buf.len() > 32, depending on the intended semantics, so slice copies inside custom_getrandom can never panic.
In `@service/relayer/src/app/mod.rs`:
- Around line 78-79: Make App::new fallible: change its signature to return a Result and replace the unwrap on `Database::new(&args.database_url, kill.clone())` with `?` (or map_err/anyhow::Context) so the database-creation error propagates with context naming args.database_url; update App::new's call sites to handle the Result instead of panicking.
---
Duplicate comments:
In `@contract/redstone-adapter/tests/test.rs`:
- Around line 37-39: The parameterized payload test only prints price_data from ra.read_price_data(prices.clone()) and asserts nothing for the success cases. Replace `eprintln!("{price_data:#?}")` with explicit assertions (assert_eq! or assert_matches!) comparing price_data against the expected structure used in the output() test (or an equivalent) for the "stellar" and "js_sdk" cases, while the failing cases stay covered by #[should_panic].
In `@service/relayer/src/client/oracle/mod.rs`:
- Around line 93-103: Add a ChannelClosed variant to UpdateError and stop unwrapping channel operations in Handle::update: map both the send failure and the response-receive failure to Err(UpdateError::ChannelClosed) so a closed worker channel surfaces as an error instead of a panic.
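The error-mapping shape can be sketched with std channels standing in for the relayer's async ones; the `update` signature and u32 payload are illustrative only:

```rust
use std::sync::mpsc;

#[derive(Debug, PartialEq)]
enum UpdateError {
    ChannelClosed,
}

// Sketch: both the request send and the response receive convert their
// failure into ChannelClosed instead of unwrapping.
fn update(
    tx: &mpsc::Sender<u32>,
    rx: &mpsc::Receiver<u32>,
) -> Result<u32, UpdateError> {
    tx.send(1).map_err(|_| UpdateError::ChannelClosed)?;
    rx.recv().map_err(|_| UpdateError::ChannelClosed)
}
```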
In `@service/relayer/src/client/oracle/spec/redstone.rs`:
- Around line 175-199: Inspect the Result<usize> from read.read_line(&mut line): on Err, log and continue; on Ok(0), treat it as EOF, log that the bridge closed, and break the loop (never parsing an empty buffer); only on Ok(n) with n > 0 proceed to serde_json::from_str and pending.remove(&received.id). Keep the existing IpcResponse, pending.remove, line, and on_kill.changed handling.
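The three outcomes of a read_line call can be sketched as follows; the `LineRead` enum and `next_line` helper are invented for illustration, and the real loop would parse `Line` payloads with serde_json:

```rust
use std::io::{BufRead, Cursor};

#[derive(Debug, PartialEq)]
enum LineRead {
    Line(String), // hand off to the JSON parser
    Eof,          // Ok(0): the bridge closed; break the loop
    Failed,       // Err: log and continue
}

// Sketch: never parse the buffer unless read_line returned Ok(n > 0).
fn next_line<R: BufRead>(read: &mut R) -> LineRead {
    let mut line = String::new();
    match read.read_line(&mut line) {
        Err(_) => LineRead::Failed,
        Ok(0) => LineRead::Eof,
        Ok(_) => LineRead::Line(line),
    }
}
```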
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `service/relayer/redstone-bridge/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (53)
- `Cargo.toml`
- `clippy.toml`
- `common/Cargo.toml`
- `common/src/oracle/mod.rs`
- `common/src/oracle/price_transformer.rs`
- `common/src/oracle/proxy.rs`
- `common/src/oracle/redstone/adapter.rs`
- `common/src/oracle/redstone/config/config_prod.rs`
- `common/src/oracle/redstone/config/config_test.rs`
- `common/src/oracle/redstone/config/mod.rs`
- `common/src/oracle/redstone/event.rs`
- `common/src/oracle/redstone/feed_data.rs`
- `common/src/oracle/redstone/feed_id.rs`
- `common/src/oracle/redstone/mod.rs`
- `common/src/oracle/redstone/serializable_u256.rs`
- `contract/market/tests/liquidation.rs`
- `contract/proxy-oracle/Cargo.toml`
- `contract/proxy-oracle/src/lib.rs`
- `contract/proxy-oracle/tests/proxy_oracle.rs`
- `contract/redstone-adapter/Cargo.toml`
- `contract/redstone-adapter/src/lib.rs`
- `contract/redstone-adapter/tests/test.rs`
- `contract/registry/tests/deployment.rs`
- `mock/oracle/Cargo.toml`
- `mock/oracle/src/lib.rs`
- `service/relayer/Cargo.toml`
- `service/relayer/Dockerfile`
- `service/relayer/redstone-bridge/jest.config.js`
- `service/relayer/redstone-bridge/package.json`
- `service/relayer/redstone-bridge/src/args.test.ts`
- `service/relayer/redstone-bridge/src/args.ts`
- `service/relayer/redstone-bridge/src/handle.test.ts`
- `service/relayer/redstone-bridge/src/handle.ts`
- `service/relayer/redstone-bridge/src/index.ts`
- `service/relayer/redstone-bridge/src/msg.test.ts`
- `service/relayer/redstone-bridge/tsconfig.json`
- `service/relayer/src/app/args.rs`
- `service/relayer/src/app/mod.rs`
- `service/relayer/src/client/near.rs`
- `service/relayer/src/client/oracle/mod.rs`
- `service/relayer/src/client/oracle/spec/mod.rs`
- `service/relayer/src/client/oracle/spec/pyth.rs`
- `service/relayer/src/client/oracle/spec/redstone.rs`
- `service/relayer/src/lib.rs`
- `service/relayer/src/route/universal_account/relay.rs`
- `service/relayer/tests/near.rs`
- `service/relayer/tests/relayer.rs`
- `test-utils/src/controller/market.rs`
- `test-utils/src/controller/mock_oracle.rs`
- `test-utils/src/controller/mod.rs`
- `test-utils/src/controller/proxy_oracle.rs`
- `test-utils/src/controller/redstone_adapter.rs`
- `test-utils/src/lib.rs`
This change is