cuprated: RPC handlers #355
Conversation
```rust
for (_, index_vec) in outputs {
    for (_, out) in index_vec {
        let out_key = OutKeyBin {
            key: out.key.map_or([0; 32], |e| e.compress().0),
            mask: out.commitment.compress().0,
            unlocked: helper::timelock_is_unlocked(&mut state, out.time_lock).await?,
            height: usize_to_u64(out.height),
            txid: if request.get_txid { out.txid } else { [0; 32] },
        };

        outs.push(out_key);
    }
}
```
1/2
Fixed the timelock check, although it is now a `for` loop due to async.
TODO: optimized async iterators?
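For reference, one shape an "optimized async iterator" could take is a buffered `futures` stream, sketched below. This is only an illustration: the real handler's `&mut state` borrow would prevent running the checks concurrently as written, and `is_unlocked` is a stand-in for the actual async check.

```rust
use futures::{stream, StreamExt, TryStreamExt};

// Drive the per-output async checks through a stream with bounded concurrency
// instead of awaiting them one at a time in a `for` loop.
async fn check_unlocked(time_locks: Vec<u64>) -> Result<Vec<bool>, anyhow::Error> {
    stream::iter(time_locks)
        .map(|lock| async move { is_unlocked(lock).await })
        .buffered(8) // up to 8 checks in flight at once
        .try_collect()
        .await
}

// Placeholder for the real async timelock check.
async fn is_unlocked(_time_lock: u64) -> Result<bool, anyhow::Error> {
    Ok(true)
}
```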
Checking an output's unlock time is not done just by comparing the timestamp/block height; it has specific rules. Here is the function in Cuprate:
cuprate/consensus/rules/src/transactions.rs
Lines 220 to 223 in 9842535
```rust
/// Checks if an outputs unlock time has passed.
///
/// <https://monero-book.cuprate.org/consensus_rules/transactions/unlock_time.html>
pub const fn output_unlocked(
```
You will need to get the blockchain context for the unlock time & chain height:
cuprate/consensus/context/src/lib.rs
Lines 138 to 141 in 9842535
```rust
/// Returns the timestamp the should be used when checking locked outputs.
///
/// ref: <https://cuprate.github.io/monero-book/consensus_rules/transactions/unlock_time.html#getting-the-current-time>
pub fn current_adjusted_timestamp_for_time_lock(&self) -> u64 {
```
That only has to be done once; you could reuse the context for each output.
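For illustration, a minimal sketch of that structure, using stand-in types rather than the real Cuprate items quoted above (the exact consensus comparisons are simplified; see the linked unlock_time chapter for the real rules):

```rust
// Stand-in for the blockchain context; not the real Cuprate type.
struct Context {
    chain_height: u64,
    adjusted_timestamp_for_time_lock: u64,
}

// Placeholder for the real blockchain-context request.
async fn fetch_context() -> Context {
    Context { chain_height: 0, adjusted_timestamp_for_time_lock: 0 }
}

// Simplified: values below 500_000_000 are height locks, the rest are
// timestamp locks. The exact comparisons live in `output_unlocked` (linked above).
fn is_unlocked(ctx: &Context, time_lock: u64) -> bool {
    if time_lock < 500_000_000 {
        time_lock <= ctx.chain_height
    } else {
        time_lock <= ctx.adjusted_timestamp_for_time_lock
    }
}

async fn check_outputs(time_locks: &[u64]) -> Vec<bool> {
    // Fetch the context once, not once per output.
    let ctx = fetch_context().await;
    time_locks.iter().map(|&t| is_unlocked(&ctx, t)).collect()
}
```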
@Boog900 ready for review again, although WASM CI for cuprate-rpc-types still needs fixing.
```rust
async fn get_blocks(
    mut state: CupratedRpcHandler,
    request: GetBlocksRequest,
) -> Result<GetBlocksResponse, Error> {
```
Putting the comment here for lack of a better place: because the request type uses `Bytes` internally, it is good to drop it as soon as you can.
What this means is that we should extract all the data we need out of the request before doing any async/blocking operations.
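For illustration, a minimal sketch of the "extract, then drop" pattern (the request type here is a stand-in, not the real `GetBlocksRequest`):

```rust
// Stand-in request type; in the real type the fields are `Bytes`-backed.
struct ExampleRequest {
    block_ids: Vec<[u8; 32]>,
}

async fn handle(request: ExampleRequest) -> Vec<[u8; 32]> {
    // Copy the data we need out of the request...
    let block_hashes: Vec<[u8; 32]> = request.block_ids.clone();

    // ...and drop it (releasing the `Bytes` buffers) before any awaits.
    drop(request);

    lookup_blocks(block_hashes).await
}

// Placeholder for the real blockchain read.
async fn lookup_blocks(hashes: Vec<[u8; 32]>) -> Vec<[u8; 32]> {
    hashes
}
```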
```rust
let block_hashes: Vec<[u8; 32]> = (&request.block_ids).into();

let (blocks, missing_hashes, blockchain_height) =
    blockchain::block_complete_entries(&mut state.blockchain_read, block_hashes).await?;

if !missing_hashes.is_empty() {
    return Err(anyhow!("Missing blocks"));
}
```
This seems to be incorrect: monerod gets the next hashes in the chain, using the incoming hashes as a representation of the client's current chain, and returns the blocks for those future hashes instead: https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L733
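For illustration, a self-contained sketch of that behavior; this is not Cuprate's (or monerod's) actual API, and the chain is modeled as a plain slice of hashes:

```rust
// Treat the request's hashes as the client's view of the chain, find the most
// recent one we know about, and return the blocks that follow it.
fn blocks_after_split(
    our_chain: &[[u8; 32]],        // our hashes by height, genesis first
    client_block_ids: &[[u8; 32]], // client's hashes, newest first
) -> Vec<[u8; 32]> {
    // The first client hash we recognize is the split point.
    let split_height = client_block_ids
        .iter()
        .find_map(|id| our_chain.iter().position(|h| h == id));

    match split_height {
        // Return the blocks the client is missing, starting after the split.
        Some(height) => our_chain[height + 1..].to_vec(),
        // No common block: the client is on an unknown chain.
        None => Vec::new(),
    }
}
```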
```rust
// FIXME: is there a cheaper way to get this?
let difficulty = blockchain_context::batch_get_difficulties(
    &mut state.blockchain_context,
    vec![(height, hardfork)],
)
.await?
.first()
.copied()
.ok_or_else(|| anyhow!("Failed to get block difficulty"))?;
```
@hinto-janai can you add a link to this comment in the code please.
```rust
/// [`cuprate_types::blockchain::BlockchainResponse::ChainHeight`] minus 1.
pub(super) async fn top_height(state: &mut CupratedRpcHandler) -> Result<(u64, [u8; 32]), Error> {
    let (chain_height, hash) = blockchain::chain_height(&mut state.blockchain_read).await?;
    let height = chain_height.saturating_sub(1);
```
I would rather this just be a normal subtraction, or a checked one with an unwrap; saturating is confusing IMO.
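For illustration, the checked version could look like this (assuming the invariant that at least the genesis block always exists):

```rust
/// Sketch of the reviewer's preference: an explicit checked subtraction that
/// panics loudly if the invariant is ever violated.
fn top_height_from_chain_height(chain_height: u64) -> u64 {
    chain_height
        .checked_sub(1)
        .expect("chain height is always >= 1 (the genesis block exists)")
}
```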
```rust
}

let too_many_blocks = || {
    request.end_height.saturating_sub(request.start_height) + 1 > RESTRICTED_BLOCK_HEADER_RANGE
```
I don't think this needs to be `saturating_sub` here.
```rust
// This method can be a bit heavy, so rate-limit restricted use.
if state.is_restricted() {
    tokio::time::sleep(Duration::from_millis(100)).await;
}
```
This IMO is not a great way to do rate limiting; it should be handled with a Semaphore, or even better, just rate-restrict the whole RPC interface. When we have performance characteristics we can look at rate limiting further.
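For illustration, a minimal sketch of the Semaphore alternative (the permit count and handler shape are assumptions, not Cuprate code):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Hypothetical cap; the right number would come from real measurements.
const RESTRICTED_PERMITS: usize = 8;

fn restricted_semaphore() -> Arc<Semaphore> {
    Arc::new(Semaphore::new(RESTRICTED_PERMITS))
}

async fn handle_restricted(semaphore: Arc<Semaphore>) {
    // Callers queue here once all permits are taken, instead of every caller
    // paying a fixed 100ms penalty.
    let _permit = semaphore
        .acquire()
        .await
        .expect("the semaphore is never closed");

    // ... do the heavy work while holding the permit ...
}
```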
Agreed, I should have noted that this is temporary. I don't think this endpoint is high priority; is a FIXME for later ok?
> it should be handled with a Semaphore or even better just rate restrict the whole RPC interface.

I would ideally prefer this to be a Layer as I would like tower-governor to be implemented at some point. So the whole RPC interface is better.
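For illustration, a sketch of limiting the whole interface as a layer, here using tower's built-in limiters rather than tower-governor, and assuming the RPC interface is exposed as a tower `Service`:

```rust
use std::{convert::Infallible, time::Duration};
use tower::{service_fn, ServiceBuilder};

fn main() {
    // Placeholder standing in for whatever `Service` ends up handling RPC.
    let handler = service_fn(|req: String| async move { Ok::<_, Infallible>(req) });

    // Limit the whole interface in one place instead of per-handler sleeps.
    let _rpc = ServiceBuilder::new()
        .rate_limit(100, Duration::from_secs(1)) // interface-wide requests per second
        .concurrency_limit(64)                   // interface-wide in-flight cap
        .service(handler);
}
```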
> is a FIXME for later ok

yes
```rust
impl EpeeObjectBuilder<Transaction> for () {
    fn add_field<B: Buf>(&mut self, _: &str, _: &mut B) -> error::Result<bool> {
        unreachable!()
    }

    fn finish(self) -> error::Result<Transaction> {
        unreachable!()
    }
}

#[cfg(feature = "epee")]
impl EpeeObject for Transaction {
    type Builder = ();

    fn number_of_fields(&self) -> u64 {
        unreachable!()
    }

    fn write_fields<B: BufMut>(self, _: &mut B) -> error::Result<()> {
        unreachable!()
    }
}
```
is this a TODO?
No, this is a cuprate_types::json type; it only has an epee impl due to:
Lines 83 to 88 in 3c86c5e
```
# (De)serialization invariants
Due to how types are defined in this library internally (all through a single macro),
most types implement both `serde` and `epee`.
However, some of the types will panic with [`unimplemented`]
or will otherwise have undefined implementation in the incorrect context.
```
AFAICT only the JSON representation for Transaction is used in the (daemon) RPC API.
Added some docs on this.
storage/blockchain/src/ops/output.rs (Outdated)
```rust
let height = u32_to_usize(rct_output.height);
let tx_idx = u64_to_usize(rct_output.tx_idx);
if let Some(hash) = table_block_txs_hashes.get(&height)?.get(tx_idx) {
    *hash
} else {
    let miner_tx_id = table_block_infos.get(&height)?.mining_tx_index;
    let tx_blob = table_tx_blobs.get(&miner_tx_id)?;
    Transaction::read(&mut tx_blob.0.as_slice())?.hash()
}
```
I am pretty sure this isn't correct: the `get(tx_idx)` should use the local index of the tx in the block, right? This currently uses the global index.
Suggested change, from:

```rust
let height = u32_to_usize(rct_output.height);
let tx_idx = u64_to_usize(rct_output.tx_idx);
if let Some(hash) = table_block_txs_hashes.get(&height)?.get(tx_idx) {
    *hash
} else {
    let miner_tx_id = table_block_infos.get(&height)?.mining_tx_index;
    let tx_blob = table_tx_blobs.get(&miner_tx_id)?;
    Transaction::read(&mut tx_blob.0.as_slice())?.hash()
}
```

to:

```rust
let height = u32_to_usize(rct_output.height);
let tx_idx = u64_to_usize(rct_output.tx_idx);
let miner_tx_id = table_block_infos.get(&height)?.mining_tx_index;
if miner_tx_id == tx_idx {
    let tx_blob = table_tx_blobs.get(&miner_tx_id)?;
    Transaction::read(&mut tx_blob.0.as_slice())?.hash()
} else {
    table_block_txs_hashes.get(&height)?[tx_idx - miner_tx_id - 1]
}
```
storage/blockchain/src/ops/output.rs (Outdated)
```rust
let txid = {
    let height = u32_to_usize(output.height);
    let tx_idx = u64_to_usize(output.tx_idx);
    if let Some(hash) = table_block_txs_hashes.get(&height)?.get(tx_idx) {
        *hash
    } else {
        let miner_tx_id = table_block_infos.get(&height)?.mining_tx_index;
        let tx_blob = table_tx_blobs.get(&miner_tx_id)?;
        Transaction::read(&mut tx_blob.0.as_slice())?.hash()
    }
};

Ok(OutputOnChain {
    height: output.height as usize,
    time_lock,
    key,
    commitment,
    txid,
})
```
Yes, I think it should be; this is called to get outputs when syncing, so hashing miner txs/extra lookups is just wasted work.
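For illustration, making the lookup optional could be as simple as gating it on the request flag; the names here are stand-ins, not the real code:

```rust
/// Sketch: only pay for the txid lookup (and possible miner-tx hashing) when
/// the caller actually asked for it. `fetch_txid` stands in for the existing
/// table lookup / miner-tx hashing path (hypothetical signature).
fn txid_if_requested(
    get_txid: bool,
    fetch_txid: impl FnOnce() -> [u8; 32],
) -> Option<[u8; 32]> {
    get_txid.then(fetch_txid)
}
```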
```rust
SendRawTransaction,
```
I think the line numbers above this are incorrect: https://github.com/monero-project/monero/blob/8d6aff95908e029d6b131638fbbf845e8cff04fc/src/rpc/core_rpc_server_commands_defs.h#L631-L683
```rust
//! # 64-bit invariant
//! This module is available on 32-bit arches although panics
//! will occur between lossy casts, e.g. [`u64_to_usize`] where
//! the input is larger than [`u32::MAX`].
//!
//! On 64-bit arches, all functions are lossless.
// TODO:
// These casting functions are heavily used throughout the codebase
// yet it is not enforced that all usages are correct in 32-bit cases.
// Panicking may be a short-term solution - find a better fix for 32-bit arches.
```
The issue for wasm32-unknown-unknown builds was:
- `cuprate_rpc_types` depends on `cuprate_types` with the `json` feature
- the `json` feature requires `cuprate_helper::cast`
To fix this, I am making `cuprate_helper::cast` work on 32-bit targets but panic when the cast is lossy. I don't think this is the correct long-term solution, but it does work for now.
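For illustration, a sketch of that stop-gap (not necessarily the exact `cuprate_helper::cast` code):

```rust
/// Keep the cast available on 32-bit targets, but panic if it would actually
/// lose information; on 64-bit targets it is lossless.
pub const fn u64_to_usize(u: u64) -> usize {
    #[cfg(target_pointer_width = "32")]
    assert!(u <= u32::MAX as u64, "lossy u64 -> usize cast on a 32-bit target");

    u as usize
}
```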
Approved, as this has undergone enough review and an extensive test suite is being worked on (#422) which should catch any mismatches in behavior. I would like that to be completed before any integration.
What

Implements the `{json-rpc, binary, json}` handlers in `cuprated`; adds various types and changes some misc things as needed.

How

See the review below.