Conversation
I think this is about done. This was a massive task, but the outcome is great: the database is now smaller and faster at both reading and writing (in practice, some operations on some tables may be slower). This is probably going to take some time to review. The old docs have all been removed, as I didn't want to spend the time updating them all while everything was still changing. It's the same with the lower-level tests; the higher-level tests are still present. The format of the code is still roughly the same.
Otherwise, this will error in debug builds
```rust
#[inline]
pub fn remove_tx(tx_hash: &TxHash, tables: &mut impl TablesMut) -> DbResult<(TxId, Transaction)> {
    //------------------------------------------------------ Transaction data
    let tx_id = tables.tx_ids_mut().take(tx_hash)?;
```
`main` used to have `.take()`, which deletes the tx_hash. I don't see `remove_tx_from_dynamic_tables` deleting the tx_hash from `tx_ids`, only key images and outputs.
edit: it looks like this impacts `add_alt_transaction_blob` in `src/ops/alt_block/tx.rs` as well: it sees that the tx hash exists and skips adding the blob.
Oops, I forgot to add the explicit removal when I moved to a write batch instead of an atomic tx. Should be fixed now.
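A minimal sketch of the missing step, using hypothetical stand-in types (a plain `HashMap` instead of the real `TablesMut` tables): with a write batch, the `tx_hash -> tx_id` entry has to be removed explicitly rather than relying on `.take()` doing double duty, otherwise later lookups (like the `add_alt_transaction_blob` case above) still see the hash.

```rust
use std::collections::HashMap;

// Stand-ins for the real crate types; `TxHash`, `TxId`, and this table
// layout are simplified, hypothetical versions of the code under review.
type TxHash = [u8; 32];
type TxId = u64;

pub struct Tables {
    pub tx_ids: HashMap<TxHash, TxId>,
    // key images, outputs, etc. omitted
}

/// With a write batch there is no atomic `.take()`: the tx_hash -> tx_id
/// entry must be removed explicitly, or re-adding the tx blob later is
/// skipped because the hash still appears to exist.
pub fn remove_tx(tx_hash: &TxHash, tables: &mut Tables) -> Option<TxId> {
    tables.tx_ids.remove(tx_hash)
}
```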
```rust
monero_oxide::io::VarInt::write(&block_txs.len(), &mut block)
    .expect("The number of txs per block will not exceed u64::MAX");
let cumulative_rct_outs = tapes
    .read_entry(&db.block_infos, block_height as u64 - 1)?
```
nit, to match L215
```diff
-    .read_entry(&db.block_infos, block_height as u64 - 1)?
+    .read_entry(&db.block_infos, block_height.saturating_sub(1) as u64)?
```
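For context on the suggestion: `saturating_sub(1)` only differs from `- 1` at height 0, where plain subtraction on an unsigned integer underflows (a panic in debug builds). A tiny illustration with a hypothetical helper name:

```rust
// `saturating_sub` clamps at zero instead of wrapping or panicking, so
// the genesis block (height 0) reads entry 0 rather than underflowing.
pub fn prev_height(block_height: usize) -> u64 {
    block_height.saturating_sub(1) as u64
}
```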
```rust
let top_block_height = tapes
    .fixed_sized_tape_len(&db.block_infos)
    .expect("Required tape not open")
    - 1;
```
Style nit: could port the same defensive check from `chain_height`. Unnecessary, since the genesis block always gets added, but good for consistency.
```diff
-let top_block_height = tapes
-    .fixed_sized_tape_len(&db.block_infos)
-    .expect("Required tape not open")
-    - 1;
+let chain_height = tapes
+    .fixed_sized_tape_len(&db.block_infos)
+    .expect("Required tape not open");
+if chain_height == 0 {
+    return Err(BlockchainError::NotFound);
+}
+let top_block_height = chain_height - 1;
```
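A self-contained sketch of the defensive pattern the suggestion ports from `chain_height`, with a stand-in for the real error enum:

```rust
#[derive(Debug, PartialEq)]
pub struct NotFound; // stand-in for BlockchainError::NotFound

// Defensive variant: `len - 1` on an empty tape underflows a u64, which
// panics in debug builds, so check for an empty chain first.
pub fn top_block_height(chain_height: u64) -> Result<u64, NotFound> {
    if chain_height == 0 {
        return Err(NotFound);
    }
    Ok(chain_height - 1)
}
```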
```rust
block_height(db, &tx_ro, &first_known_block_hash)?.ok_or(BlockchainError::NotFound)?;
```

```diff
-let chain_height = crate::ops::blockchain::chain_height(table_block_heights)?;
+let chain_height = crate::ops::blockchain::chain_height(db, &tapes)?;
```
same here
```rust
let chain_height = crate::ops::blockchain::chain_height(db, &tapes)?;
if chain_height == 0 {
    return Err(BlockchainError::NotFound);
}
```
```rust
let height = u64_to_usize(
    tapes
        .fixed_sized_tape_len(&db.block_infos)
        .expect("Require tape not found")
```
```diff
-table_block_heights.len().map(|height| height as usize)
+Ok(tapes
+    .fixed_sized_tape_len(&db.block_infos)
+    .expect("Required tape must exists") as usize)
```
here too
```diff
-    .expect("Required tape must exists") as usize)
+    .expect("Required tape must exist") as usize)
```
```rust
tapes.commit(Persistence::Buffer)?;

let mut pre_rct_numb_outputs_cache = db.pre_rct_numb_outputs_cache.lock().unwrap();

let mut tx_rw = db.fjall.batch().durability(Some(PersistMode::Buffer));

for block in blocks {
    crate::ops::block::add_block_to_dynamic_tables(
        db,
        &block.block,
        &block.block_hash,
        block.txs.iter().map(|tx| Cow::Borrowed(&tx.tx)),
        &mut numb_transactions,
        &mut tx_rw,
        &mut pre_rct_numb_outputs_cache,
    )?;
}

tx_rw.commit()?;
```
note: with graceful shutdown, all errors after the tapes commit but before fjall successfully commits should be distinguished, since we are out of sync after that point. `BlockchainError::DatabaseOutOfSync`?
handler.rs panics on error now, so it's not an issue right now, but graceful shutdown changes this.
idea: a recovery function that replays only the out-of-sync blocks from tapes into fjall when we get that kind of error, instead of a full chain nuke on desync.
Yeah, I was going to do that originally; however, a full fjall nuke is much easier to handle and is not too slow. You can test it by just deleting the fjall DB but keeping the tapes.
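A rough sketch of the "full fjall nuke" approach described here, with purely illustrative types (the real tapes/fjall APIs are not modeled): the tapes are treated as the source of truth, and the derived side is rebuilt wholesale whenever it is missing or out of sync.

```rust
// Illustrative stand-ins only: `Tapes` plays the append-only block store,
// `Index` plays the derived (fjall-side) tables.
pub struct Tapes { pub blocks: Vec<u64> }
pub struct Index { pub entries: Vec<u64> }

/// Returns `true` if a rebuild was performed.
pub fn recover(tapes: &Tapes, index: &mut Index) -> bool {
    if index.entries == tapes.blocks {
        return false; // already in sync, nothing to do
    }
    // Full nuke: simpler to implement than replaying only the
    // out-of-sync suffix, and the rebuild cost is acceptable.
    index.entries.clear();
    index.entries.extend_from_slice(&tapes.blocks);
    true
}
```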
```rust
/// - The blockchain has 0 blocks => this returns `Err(BlockchainError::KeyNotFound)`
/// - The blockchain has 1 block (height 0) => this returns `Ok(0)`
/// - The blockchain has 2 blocks (height 1) => this returns `Ok(1)`
///
/// Note that in cases where no blocks have been written to the
```

```diff
-/// database yet, an error is returned: `Err(RuntimeError::KeyNotFound)`.
+/// database yet, an error is returned: `Err(BlockchainError::KeyNotFound)`.
```
