Update/extend/transfer Spark name without paying twice for leftover time #1780
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough: Adds a new consensus start-block field nSparkNamesV21StartBlock and wires V2.1-specific Spark Name extension/fee logic across core validation, state, scripts, and Qt UI (extend flows, selection, receive dialog). Tests for transfer and V2.1 extension behavior are added; a generated Overview UI header was removed.
Sequence Diagram(s):

sequenceDiagram
rect rgba(200,200,255,0.5)
participant User
participant AddressBook as AddressBookPage (UI)
participant CreatePage as CreateSparkNamePage (UI)
participant SparkMgr as SparkNameManager
participant Chain as Blockchain/chainActive
participant Consensus as Consensus.Params
end
User->>AddressBook: select Spark name and click Extend
AddressBook->>CreatePage: open with setExtendMode(name,address)
CreatePage->>SparkMgr: query existing record -> get expiryHeight
CreatePage->>Chain: request current block height
CreatePage->>Consensus: read nSparkNamesV21StartBlock
alt currentHeight >= nSparkNamesV21StartBlock
CreatePage->>CreatePage: compute remainingBlocks, totalValidity = newYearsBlocks + remainingBlocks
else
CreatePage->>CreatePage: compute totalValidity = newYearsBlocks (no preservation)
end
CreatePage->>User: show fee and new expiration estimate
User->>CreatePage: confirm extend -> submit tx
CreatePage->>SparkMgr: submit extend transaction
SparkMgr->>Chain: include tx in block -> state update applies V2.1 validation
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/qt/addressbookpage.cpp`:
- Around line 290-308: The code in AddressBookPage::on_extendAddress_clicked
assumes the selected label always begins with '@' by using .mid(1), which can
corrupt the name if the prefix is missing; update the selectionLabel handling to
check if the string startsWith('@') and only remove the first character in that
case (otherwise keep the whole string), then pass that validated name to
dialog->setExtendMode(name, address); apply this change where name is
constructed from selectionLabel.at(0).data(...).
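A minimal sketch of the suggested guard, using std::string in place of the QString used in the actual UI code (the helper name is illustrative):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: strip the leading '@' only when it is actually present,
// so a label that lacks the prefix is passed through unchanged instead of
// losing its first character.
std::string StripSparkPrefix(const std::string& label) {
    if (!label.empty() && label[0] == '@')
        return label.substr(1);
    return label;
}
```

The Qt version would use label.startsWith('@') and label.mid(1) with the same branch structure.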
In `@src/sparkname.cpp`:
- Around line 343-350: ValidateSparkNameData reads chainActive.Height() without
holding cs_main which can race with block-processing; protect that access by
acquiring LOCK(cs_main) around the call to chainActive.Height() (and the
subsequent use of ::Params().GetConsensus() if that relies on chain state) or
alternatively change the ValidateSparkNameData signature to accept an nHeight
parameter (as done in CheckSparkNameTx) and use the caller-provided nHeight;
update references to nHeight and consensusParams in the block that checks
sparkNameData.sparkNameValidityBlocks accordingly so no chainActive access
occurs without cs_main.
- Around line 177-203: The reads of sparkNames in CheckSparkNameTx (the
existence check, sparkAddress compare and existingExpirationHeight read) must be
protected by the same mutex used elsewhere; wrap those accesses in the
cs_spark_name lock (e.g. add LOCK(cs_spark_name) or equivalent RAII guard before
checking sparkNames and reading sparkNames[ToUpper(...)]), or copy the needed
fields while holding LOCK(cs_spark_name) into local variables and then release
the lock; ensure you reference the sparkNames container and
existingExpirationHeight/fUpdateExistingRecord logic inside that locked region
so access is consistent with AddBlock/RemoveBlock which use cs_spark_name.
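The copy-out variant can be sketched like this (std::mutex stands in for cs_spark_name; the map layout is simplified to name → expiration height):

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

std::mutex cs_spark_name_sketch; // stand-in for cs_spark_name

// Copy the needed field while holding the lock, then release it before any
// further validation work. Returns -1 when the name is unknown.
int LookupExpiration(const std::map<std::string, int>& names, const std::string& upperName) {
    std::lock_guard<std::mutex> lock(cs_spark_name_sketch);
    auto it = names.find(upperName);
    return it == names.end() ? -1 : it->second;
}
```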
🧹 Nitpick comments (6)
src/spark/state.cpp (1)
344-356: Prefer catching std::exception instead of catch (...).

Using a bare catch (...) can silently swallow unexpected errors (e.g., std::bad_alloc). Since GetSparkNameBlockHeight throws std::runtime_error when the name isn't found (see src/sparkname.cpp lines 93-100), catching that specific type (or at least std::exception) makes the intent clearer and avoids masking real failures.

♻️ Suggested fix

-    } catch (...) {
-        // name doesn't exist yet, no adjustment needed
-    }
+    } catch (const std::runtime_error&) {
+        // name doesn't exist yet, no adjustment needed
+    }

src/qt/createsparknamepage.cpp (2)
112-134: Magic constant nBlocksPerHour = 24 is duplicated across files.

This constant also appears implicitly in sparkname.cpp (line 191: 365*24*24). A single shared constant (e.g., in sparkname.h) would prevent future drift between the UI estimate and the consensus logic.
184-189: Confirmation dialog text says "register" even when extending a Spark name.

In extend mode, the confirmation dialog still reads "Are you sure you want to register spark name?" Consider adjusting the text to reflect the extend operation for better UX clarity.
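For the duplicated-constant nitpick above, a single shared definition would suffice; a sketch (placement in sparkname.h is a suggestion, not existing code):

```cpp
#include <cassert>

// One shared definition keeps the UI fee estimate and the consensus logic in
// agreement; 365 * 24 * nBlocksPerHour matches the literal 365*24*24 in sparkname.cpp.
constexpr int nBlocksPerHour = 24;
constexpr int nBlocksPerYear = 365 * 24 * nBlocksPerHour;
```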
src/qt/addressbookpage.cpp (3)
91-107: extendAction is created and connected but never added to the context menu.

The action at line 91 is wired to on_extendAddress_clicked (line 107) but isn't added to contextMenu (lines 94–99). If this is intentional (extend only via button), consider removing the unused action and connection to avoid dead code. Otherwise, add it to the context menu conditionally (e.g., only for SparkNameMine entries).
437-457: Intentional fallthrough from case 3 to case 2 should use the [[fallthrough]] attribute.

The comment at line 442 documents intent, but compilers (with -Wimplicit-fallthrough) may still warn. Use the C++17 [[fallthrough]] attribute for proper suppression.

Proposed fix

 case 3:
     ui->newAddress->setVisible(false);
     ui->extendAddress->setVisible(true);
-    // fallthrough
+    [[fallthrough]];
 case 2:
166-174: Pre-existing Qt5 version check should be removed.

The #if QT_VERSION < 0x050000 block is dead code since the project no longer supports Qt5. While not introduced in this PR, since you're already modifying this file, consider cleaning it up.

Based on learnings: "Qt5-specific code blocks in the src/qt directory are no longer needed in this repository. When reviewing files under src/qt/, audit for Qt5-only code paths."
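The attribute's effect can be shown with a standalone sketch (the logic is illustrative, not the actual addressbookpage code):

```cpp
#include <cassert>

// case 3 intentionally falls through into case 2; the C++17 [[fallthrough]]
// attribute documents this to both readers and -Wimplicit-fallthrough.
int Classify(int idx) {
    int r = 0;
    switch (idx) {
    case 3:
        r += 10;
        [[fallthrough]];
    case 2:
        r += 1;
        break;
    default:
        break;
    }
    return r;
}
```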
     return;

-    if (ui->addressType->currentText() == AddressTableModel::SparkName) {
+    if (ui->addressType->currentData() == SparkName || ui->addressType->currentData() == SparkNameMine) {
updateSpark() missing SparkName and SparkNameMine options
Medium Severity
The updateSpark() function was not updated to match the changes made to setModel(). While setModel() now adds SparkName and SparkNameMine options to the address type dropdown, updateSpark() only adds Spark and Transparent options. When updateAddressbook() is called from WalletView, the spark names options will disappear from the dropdown, breaking the extend functionality.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/qt/addressbookpage.cpp`:
- Around line 290-309: AddressBookPage::on_extendAddress_clicked currently
dereferences model via model->getWalletModel() without a null check, risking a
crash; add the same guard used in on_newAddress_clicked by verifying model is
non-null (and optionally that model->getWalletModel() is non-null) before
calling dialog->setModel(...) and dialog->setExtendMode(...), returning early if
the check fails to avoid dereferencing a null pointer.
In `@src/sparkname.cpp`:
- Around line 346-357: ValidateSparkNameData currently compares only
sparkNameData.sparkNameValidityBlocks to the 10/15-year caps, but
CheckSparkNameTx adds any leftover time from an existing registration before
enforcing the cap; update ValidateSparkNameData to mirror that leftover-aware
logic (sum existing registration's remaining blocks with
sparkNameData.sparkNameValidityBlocks before comparing to the year caps) using
the same consensus thresholds (Params().GetConsensus().nSparkNamesV21StartBlock
and the 365*24*24*10/15 limits) and the same lookup used by CheckSparkNameTx to
obtain the existing registration's remaining blocks (i.e., reference the same
functions/fields that CheckSparkNameTx uses to compute leftover time) so wallet
validation rejects transactions that would exceed the cap at consensus.
- Around line 346-357: ValidateSparkNameData currently holds cs_spark_name and
then acquires cs_main when calling chainActive.Height(), creating a deadlock
with CheckSparkNameTx which acquires cs_main then cs_spark_name; fix by reading
nHeight from chainActive.Height() before taking cs_spark_name (or modify
ValidateSparkNameData to accept an nHeight parameter). Locate the block
referencing sparkNameData.sparkNameValidityBlocks and the call to
chainActive.Height() inside the cs_main lock, and move that height-read (nHeight
= chainActive.Height()) out to occur prior to acquiring cs_spark_name (or add an
nHeight argument and pass it in from callers) so locking order is consistent
with CheckSparkNameTx.
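The consistent-ordering fix can be sketched as follows (std::mutex objects stand in for cs_main and cs_spark_name; the validity check is reduced to a single comparison):

```cpp
#include <cassert>
#include <mutex>

std::mutex cs_main_sketch;       // stand-in for cs_main
std::mutex cs_spark_name_sketch; // stand-in for cs_spark_name
int tipHeight = 1000;            // stand-in for chainActive.Height()

// Read the chain height under cs_main FIRST, then take cs_spark_name, so the
// acquisition order matches CheckSparkNameTx (cs_main -> cs_spark_name).
bool ValidateSketch(int validityBlocks, int maxBlocks) {
    int nHeight;
    {
        std::lock_guard<std::mutex> lockMain(cs_main_sketch);
        nHeight = tipHeight;
    }
    std::lock_guard<std::mutex> lockNames(cs_spark_name_sketch);
    (void)nHeight; // in the real code, nHeight feeds the V2.1 activation check
    return validityBlocks <= maxBlocks;
}
```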
🧹 Nitpick comments (2)
src/qt/addressbookpage.cpp (2)
91-107: extendAction is wired but never added to contextMenu.

The action is created (line 91) and its triggered signal is connected (line 107), but it is never added to contextMenu (lines 93–100). When the extend feature is unhidden, right-click won't offer "Extend". Consider adding it conditionally (e.g., only when the address type is SparkNameMine), or removing the dead wiring until the feature is ready.
438-458: Fragile: switch on positional combo index instead of item data.

The cases 2 and 3 assume a fixed combo-item order (Spark, Transparent, SparkName, SparkNameMine). If items are reordered, conditionally omitted, or new items inserted, this silently breaks. Using ui->addressType->itemData(idx).toInt() would be resilient to layout changes, matching what contextualMenu already does on line 409.

Proposed refactor

 void AddressBookPage::chooseAddressType(int idx)
 {
     if(!proxyModel)
         return;
-    switch (idx)
-    {
-    case 3:
-        ui->newAddress->setVisible(false);
-        ui->extendAddress->setVisible(true);
-        // fallthrough
-    case 2:
-        model->ProcessPendingSparkNameChanges();
-        ui->deleteAddress->setEnabled(false);
-        deleteAction->setEnabled(false);
-        break;
-
-    default:
-        selectionChanged();
-        break;
-    }
-
-    if (idx != 3) {
-        ui->extendAddress->setVisible(false);
-        ui->newAddress->setVisible(true);
+    int type = ui->addressType->itemData(idx).toInt();
+
+    if (type == (int)SparkNameMine) {
+        ui->newAddress->setVisible(false);
+        ui->extendAddress->setVisible(true);
+    } else {
+        ui->extendAddress->setVisible(false);
+        ui->newAddress->setVisible(true);
+    }
+
+    if (type == (int)SparkName || type == (int)SparkNameMine) {
+        model->ProcessPendingSparkNameChanges();
+        ui->deleteAddress->setEnabled(false);
+        deleteAction->setEnabled(false);
+    } else {
+        selectionChanged();
     }

     proxyModel->setTypeFilter(ui->addressType->itemData(idx).toInt());
 }
bugbot run
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/qt/addressbookpage.cpp (1)
92-108: ⚠️ Potential issue | 🟡 Minor

extendAction is connected but never added to contextMenu.

The action is created (line 92) and its signal is connected (line 108), but it is never added to contextMenu between lines 94–101. This means the "Extend" option will never appear in the right-click context menu. If this is intentional (only via the toolbar button), the action creation and connection are dead code. If it should appear in the context menu for Spark name entries, it needs to be added.

If the context menu entry is intended, add it conditionally:

     if(tab == SendingTab)
         contextMenu->addAction(deleteAction);
+    contextMenu->addAction(extendAction);
     contextMenu->addSeparator();

You may also want to show/hide extendAction dynamically in contextualMenu() based on whether a Spark name is selected, similar to how deleteAction is toggled.
🧹 Nitpick comments (1)
src/qt/addressbookpage.cpp (1)
465-489: Fragile: chooseAddressType switches on combo-box index rather than the semantic enum value.

The switch (idx) uses hardcoded positional indices (case 2, case 3) that assume the combo box is always populated in the exact order Spark/Transparent/SparkName/SparkNameMine. If items are reordered, conditionally omitted, or new items inserted, these cases silently break. Consider switching on the item's data value instead:

Proposed refactor

-    switch (idx)
-    {
-    case 3:
-        ui->newAddress->setVisible(false);
-        ui->extendAddress->setVisible(true);
-        // fallthrough
-    case 2:
-        model->ProcessPendingSparkNameChanges();
-        ui->deleteAddress->setEnabled(false);
-        deleteAction->setEnabled(false);
-        break;
-
-    default:
-        selectionChanged();
-        break;
-    }
-
-    if (idx != 3) {
-        ui->extendAddress->setVisible(false);
-        ui->newAddress->setVisible(true);
-    }
+    int type = ui->addressType->itemData(idx).toInt();
+    switch (type)
+    {
+    case SparkNameMine:
+        ui->newAddress->setVisible(false);
+        ui->extendAddress->setVisible(true);
+        model->ProcessPendingSparkNameChanges();
+        ui->deleteAddress->setEnabled(false);
+        deleteAction->setEnabled(false);
+        break;
+    case SparkName:
+        ui->extendAddress->setVisible(false);
+        ui->newAddress->setVisible(true);
+        model->ProcessPendingSparkNameChanges();
+        ui->deleteAddress->setEnabled(false);
+        deleteAction->setEnabled(false);
+        break;
+    default:
+        ui->extendAddress->setVisible(false);
+        ui->newAddress->setVisible(true);
+        selectionChanged();
+        break;
+    }

This also eliminates the implicit fallthrough between case 3 and case 2, which can confuse readers and may produce compiler warnings.
    validityBlocks = std::max(validityBlocks, existingExpirationHeight - nHeight + validityBlocks);
    // after nSparkNamesV21StartBlock, max validity is 15 years
    if (validityBlocks > nBlockPerYear * 15)
        return state.DoS(100, error("CheckSparkNameTx: can't be valid for more than 15 years"));
V2.1 validity cap has one-block mismatch
Low Severity
CheckSparkNameTx computes preserved validity using existingExpirationHeight - nHeight, but mempool calls pass current chainActive.Height() while block connect uses next-block height. Near the 15-year cap, this one-block difference can reject transactions in mempool that would be valid when mined.
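The leftover-aware cap, with the mempool path normalized to next-block height, can be sketched as follows (all names are hypothetical stand-ins for the fields CheckSparkNameTx uses):

```cpp
#include <cassert>

constexpr int kBlocksPerYear = 365 * 24 * 24;

// Leftover time from an existing registration is added to the newly purchased
// blocks before the 15-year cap is enforced. Using tipHeight + 1 for mempool
// checks anticipates the height the transaction will actually be mined at,
// removing the one-block mismatch described above.
bool WithinV21Cap(int purchasedBlocks, int existingExpirationHeight, int tipHeight, bool inMempool) {
    int nHeight = inMempool ? tipHeight + 1 : tipHeight;
    int remaining = existingExpirationHeight - nHeight;
    int total = purchasedBlocks + (remaining > 0 ? remaining : 0);
    return total <= kBlocksPerYear * 15;
}
```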
    if (spark) {
        ui->addressType->addItem(tr("Spark"), Spark);
    }
    ui->addressType->addItem(tr("Transparent"), Transparent);
Spark-name-mine bypasses edit guard
Low Severity
Adding SparkNameMine to addressType introduces a path where onEditAction no longer treats spark-name rows as non-editable, because its guard only checks AddressTableModel::SparkName text. The new “My own spark names” option can open edit flow for entries not backed by normal editable address-book records.
    connect(copyLabelAction, &QAction::triggered, this, &AddressBookPage::onCopyLabelAction);
    connect(editAction, &QAction::triggered, this, &AddressBookPage::onEditAction);
    connect(deleteAction, &QAction::triggered, this, &AddressBookPage::on_deleteAddress_clicked);
    connect(extendAction, &QAction::triggered, this, &AddressBookPage::on_extendAddress_clicked);
🧹 Nitpick comments (2)
src/test/sparkname_tests.cpp (2)
435-460: Missing hashFailsafe retry loop around Scalar::SetHex.

ModifySparkNameTx (line 124) uses a retry loop with an incrementing hashFailsafe nonce in case SetHex throws for an invalid scalar value. The transfer proof generation here (and the duplicate at line 549) calls SetHex without a try/catch, so the test would crash if the hash is not representable as a valid scalar. While extremely unlikely, it's inconsistent with the pattern used elsewhere in this file.

Suggested approach

Extract a helper (e.g., GenerateTransferProof(...)) that mirrors the retry pattern from ModifySparkNameTx, and reuse it in both proof-generation blocks (lines 435-460 and 534-558) to eliminate the duplication as well.
605-608: Line 608 assertion is tautological given line 606.

Since line 606 already asserts expirationAfterPreV21Extend == extendHeightPre + nBlockPerYear, line 608 reduces to 0 < remainingBeforeExtend, which is already verified on line 596. Consider replacing it with a comment explaining the implication, or removing it to avoid giving a false sense of additional coverage.
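The retry pattern referenced above looks roughly like this; ParseScalar is a contrived stand-in for Scalar::SetHex, which throws on input it cannot represent:

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <string>

// Contrived stand-in for Scalar::SetHex: fails for some (hex, nonce) pairs.
uint32_t ParseScalar(const std::string& hex, uint32_t nonce) {
    if ((hex.size() + nonce) % 3 == 0)
        throw std::runtime_error("invalid scalar");
    return nonce;
}

// Retry with an incrementing failsafe nonce until parsing succeeds, mirroring
// the hashFailsafe loop in ModifySparkNameTx.
uint32_t ParseWithFailsafe(const std::string& hex) {
    for (uint32_t hashFailsafe = 0;; ++hashFailsafe) {
        try {
            return ParseScalar(hex, hashFailsafe);
        } catch (const std::runtime_error&) {
            // bump the nonce and try again
        }
    }
}
```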
Cursor Bugbot has reviewed your changes and found 3 potential issues.
Autofix Details
Bugbot Autofix prepared fixes for all 3 issues found in the latest run.
- ✅ Fixed: Overly broad catch swallows all exceptions silently
  - Narrowed catch (...) to catch (const std::runtime_error&) so only the expected 'name not found' exception is caught, preventing silent swallowing of unrelated errors in consensus-critical code.
- ✅ Fixed: UI caps years at 10 despite V2.1 allowing 15
  - Raised the spin box max to 15 and made updateFee() dynamically check the V2.1 activation height to determine whether the max is 10 or 15 years.
- ✅ Fixed: Missing null check on model before dereferencing
  - Added an if(!model) return; guard at the top of on_extendAddress_clicked(), consistent with the sibling on_newAddress_clicked() function.
Or push these changes by commenting:
@cursor push 456100d128
Preview (456100d128)
diff --git a/src/qt/addressbookpage.cpp b/src/qt/addressbookpage.cpp
--- a/src/qt/addressbookpage.cpp
+++ b/src/qt/addressbookpage.cpp
@@ -310,6 +310,9 @@
void AddressBookPage::on_extendAddress_clicked()
{
+ if(!model)
+ return;
+
if (!ui->tableView || !ui->tableView->selectionModel())
return;
diff --git a/src/qt/createsparknamepage.cpp b/src/qt/createsparknamepage.cpp
--- a/src/qt/createsparknamepage.cpp
+++ b/src/qt/createsparknamepage.cpp
@@ -26,7 +26,7 @@
feeText = ui->feeTextLabel->text();
ui->numberOfYearsEdit->setValue(1);
- ui->numberOfYearsEdit->setRange(1, 10);
+ ui->numberOfYearsEdit->setRange(1, 15);
updateFee();
}
@@ -101,7 +101,14 @@
QString sparkName = ui->sparkNameEdit->text();
int numberOfYears = ui->numberOfYearsEdit->value();
- if (sparkName.isEmpty() || cmp::greater(sparkName.length(), CSparkNameManager::maximumSparkNameLength) || numberOfYears == 0 || numberOfYears > 10) {
+ int maxYears;
+ {
+ LOCK(cs_main);
+ const auto& consensusParams = Params().GetConsensus();
+ maxYears = chainActive.Height() >= consensusParams.nSparkNamesV21StartBlock ? 15 : 10;
+ }
+
+ if (sparkName.isEmpty() || cmp::greater(sparkName.length(), CSparkNameManager::maximumSparkNameLength) || numberOfYears == 0 || numberOfYears > maxYears) {
ui->feeTextLabel->setText(feeText.arg("?"));
return;
}
diff --git a/src/spark/state.cpp b/src/spark/state.cpp
--- a/src/spark/state.cpp
+++ b/src/spark/state.cpp
@@ -350,7 +350,7 @@
int remainingBlocks = existingExpirationHeight - pindexNew->nHeight;
if (remainingBlocks > 0)
validityBlocks += remainingBlocks;
- } catch (...) {
+ } catch (const std::runtime_error&) {
// name doesn't exist yet, no adjustment needed
}
        validityBlocks += remainingBlocks;
    } catch (...) {
        // name doesn't exist yet, no adjustment needed
    }
Overly broad catch swallows all exceptions silently
Medium Severity
The catch (...) block is intended to handle the case where GetSparkNameBlockHeight throws std::runtime_error because the name doesn't exist yet. However, it silently swallows all exceptions, including unrelated ones like std::bad_alloc or lock failures. This could mask real errors during block connection, a consensus-critical code path.
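The narrowing this comment asks for can be shown standalone (GetHeightOrThrow is a hypothetical stand-in for GetSparkNameBlockHeight):

```cpp
#include <cassert>
#include <stdexcept>

// Stand-in for GetSparkNameBlockHeight: throws std::runtime_error when the
// name is unknown.
int GetHeightOrThrow(bool exists) {
    if (!exists)
        throw std::runtime_error("spark name not found");
    return 100;
}

// Only the expected "not found" error is absorbed; anything else (e.g.
// std::bad_alloc) propagates instead of being silently swallowed.
int RemainingBlocks(bool exists, int currentHeight) {
    try {
        return GetHeightOrThrow(exists) - currentHeight;
    } catch (const std::runtime_error&) {
        return 0; // name doesn't exist yet, no adjustment needed
    }
}
```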
    dialog->setModel(model->getWalletModel());
    dialog->setExtendMode(name, address);
    dialog->show();
}
Missing null check on model before dereferencing
Low Severity
on_extendAddress_clicked dereferences model at model->getWalletModel() without a null check. The sibling function on_newAddress_clicked guards with if(!model) return; at the top. While unlikely to be null in practice (the button is only visible after model is set), the inconsistency creates a potential null pointer dereference.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/sparkname.cpp (1)
212-226: ⚠️ Potential issue | 🟠 Major

Zero-purchase updates/transfers still fail here.

nYears is derived only from newly purchased blocks, so a pure carry-forward update/transfer can legitimately produce nameFee == 0. This branch still requires payoutFound, though, which means those transactions are rejected unless they add a synthetic fee-address output.

Proposed fix
-    bool payoutFound = false;
+    bool payoutFound = nameFee == 0;

     // Up until stage 4.1, the fee is paid to the development fund address. Afterwards, it is paid to the community fund address.
     // Graceful period allows to register spark names with the old address for the payment
     if (nHeight < consensusParams.stage41StartBlockDevFundAddressChange + consensusParams.stage41SparkNamesGracefulPeriod)
         payoutFound = CheckPaymentToTransparentAddress(tx, consensusParams.stage3DevelopmentFundAddress, nameFee);
     if (nHeight >= consensusParams.stage41StartBlockDevFundAddressChange)
         payoutFound = payoutFound || CheckPaymentToTransparentAddress(tx, consensusParams.stage3CommunityFundAddress, nameFee);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sparkname.cpp` around lines 212 - 226, The validation rejects pure carry-forward updates because nYears and nameFee can be zero yet the code still requires a payout; modify the logic in the CheckSparkNameTx flow to skip payment verification when nameFee == 0 (i.e. treat no-fee updates as valid without requiring CheckPaymentToTransparentAddress). Concretely, after computing nYears and nameFee, if nameFee == 0 set payoutFound = true (or bypass the subsequent checks) so the stage41 address checks using CheckPaymentToTransparentAddress against consensusParams.stage3DevelopmentFundAddress and consensusParams.stage3CommunityFundAddress only run when nameFee > 0; preserve existing graceful-period logic for nonzero fees.
♻️ Duplicate comments (1)
src/sparkname.cpp (1)
376-387: ⚠️ Potential issue | 🔴 Critical

ValidateSparkNameData is still out of sync with consensus and lock ordering.

This still takes cs_spark_name before cs_main, preserving the ABBA deadlock against CheckSparkNameTx, and it still enforces the max-validity cap on purchased blocks only instead of purchased-plus-leftover time. That means wallet validation can both hang and green-light renewals that consensus later rejects.

🤖 Prompt for AI Agents
cs_spark_namebeforecs_main, preserving the ABBA deadlock againstCheckSparkNameTx, and it still enforces the max-validity cap on purchased blocks only instead of purchased-plus-leftover time. That means wallet validation can both hang and green-light renewals that consensus later rejects.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sparkname.cpp` around lines 376 - 387, ValidateSparkNameData currently locks cs_spark_name before taking cs_main (creating ABBA with CheckSparkNameTx) and only checks sparkNameData.sparkNameValidityBlocks against the max cap; change the lock ordering to acquire cs_main (or avoid holding cs_spark_name) before accessing chainActive.Height(), and update the validity check to consider purchased-plus-leftover time (sum existing leftover time with sparkNameData.sparkNameValidityBlocks) when comparing to consensusParams.nSparkNamesV21StartBlock thresholds (and when setting errorDescription), using the same consensus symbols (nSparkNamesV21StartBlock and the 15-year/10-year caps) so wallet validation matches consensus and avoids deadlocks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/script/interpreter.cpp`:
- Around line 356-358: The new case for OP_SPARKNAMEID currently turns 0xe1 into
a live NOP unconditionally; instead gate its execution behind a script-verify
flag that the caller sets at the fork height (do not rely on EvalScript() having
a height). Modify the OP_SPARKNAMEID handling in EvalScript() so that if the
fork flag (e.g., a new SCRIPT_VERIFY_SPARKNAMEID or a helper
IsSparkNameEnabled(flags)) is not set it behaves as BAD_OPCODE (or remains
non-executed marker), and only when the flag is present treat it as a no-op/tag;
reference OP_SPARKNAMEID, EvalScript(), and nSparkNamesV21StartBlock in your
change to ensure the feature is enabled by the external script flags rather than
unconditionally.
In `@src/script/script.cpp`:
- Around line 364-375: CScript::IsSparkNameFee currently only checks for P2PKH +
OP_SPARKNAMEID + min length, allowing trailing garbage; update it to fully parse
the suffix and require an exact format: after confirming the P2PKH prefix in
IsSparkNameFee, verify byte 25 == OP_SPARKNAMEID, then parse two consecutive
pushdata operations (accepting legitimate push opcodes and their data lengths)
each immediately followed by OP_DROP, and ensure the script iterator (pc)
reaches end() with no extra bytes; implement this by walking the script after
OP_SPARKNAMEID using existing push-data decoding logic (or inline handling of
OP_PUSHDATA1/2/4 and small pushes) and checking the two OP_DROP bytes and final
pc == end().
In `@src/script/standard.cpp`:
- Around line 96-103: The classification of Spark name fee scripts
(IsSparkNameFee / TX_SPARKNAMEFEE handling in the scriptPubKey parsing) is
incomplete: it only checks the P2PKH prefix and OP_SPARKNAMEID but does not
validate the full suffix (pushdata opcodes for name, the subsequent OP_DROP, the
spark address push, trailing OP_DROP) nor ensure the opcode parser consumes to
pc==end(), allowing malformed tails to be accepted. Fix by extending the
validation in the TX_SPARKNAMEFEE branch to parse and validate the full expected
sequence using safe GetOp/pushdata checks: require the name be a valid push
opcode and length, require the next opcode to be OP_DROP, require the spark
address to be parsed as a valid push and then require the final opcode to be
OP_DROP and that pc == end() before accepting and pushing the extracted
hashBytes into vSolutionsRet; reference IsSparkNameFee, OP_SPARKNAMEID, OP_DROP,
GetOp, scriptPubKey, vSolutionsRet, and TX_SPARKNAMEFEE when locating the code
to change.
- Around line 96-103: Solver() currently returns TX_SPARKNAMEFEE but consumers
(sign.cpp: functions handling signing) and ismine.cpp (ownership checks) lack a
TX_SPARKNAMEFEE branch; either change Solver() to classify these scripts as
TX_PUBKEYHASH instead, or update the consumers to handle TX_SPARKNAMEFEE by
calling GetBaseScriptFromSparkNameFee() to strip the
OP_SPARKNAMEID/<name>/suffix and then proceed as usual (use the base script for
signing and ismine checks). Locate Solver(), TX_SPARKNAMEFEE, sign-related
functions in sign.cpp and ownership checks in ismine.cpp and implement the
branch that maps TX_SPARKNAMEFEE -> base script via
GetBaseScriptFromSparkNameFee() before existing logic, or revert the Solver()
classification to TX_PUBKEYHASH so existing code continues to work.
In `@src/script/standard.h`:
- Around line 89-91: The header declares functions using std::string
(GetScriptForSparkNameFee, ExtractSparkNameFromScript,
GetBaseScriptFromSparkNameFee) but doesn't include <string>; add `#include`
<string> at the top of src/script/standard.h so the header is self-contained and
std::string is defined for those declarations.
In `@src/validation.cpp`:
- Around line 730-737: The activation-height gate for Spark-name fee outputs
currently lives inside CheckTransaction (using isVerifyDB and nTxHeight) which
relies on ad-hoc height semantics (INT_MAX/mempool and pindexDelete->pprev
during DisconnectTip) and thus rejects the first post-activation tx too late or
too early; move this check into callers that have the canonical candidate block
height (e.g., block validation path that calls CheckSparkTransaction and mempool
admission/DisconnectTip call sites) or add an explicit mode/parameter to
CheckTransaction (e.g., bool enforceActivationAtHeight or an explicit int
candidateHeight) so the code that checks
::Params().GetConsensus().nSparkNamesV21StartBlock is performed by the caller
using the real candidate block height and not by CheckTransaction using
isVerifyDB/nTxHeight. Ensure references to vout.scriptPubKey.IsSparkNameFee(),
CheckSparkTransaction, and block.sparkTxInfo are updated accordingly and remove
the ad-hoc activation check from CheckTransaction.
---
Outside diff comments:
In `@src/sparkname.cpp`:
- Around line 212-226: The validation rejects pure carry-forward updates because
nYears and nameFee can be zero yet the code still requires a payout; modify the
logic in the CheckSparkNameTx flow to skip payment verification when nameFee ==
0 (i.e. treat no-fee updates as valid without requiring
CheckPaymentToTransparentAddress). Concretely, after computing nYears and
nameFee, if nameFee == 0 set payoutFound = true (or bypass the subsequent
checks) so the stage41 address checks using CheckPaymentToTransparentAddress
against consensusParams.stage3DevelopmentFundAddress and
consensusParams.stage3CommunityFundAddress only run when nameFee > 0; preserve
existing graceful-period logic for nonzero fees.
---
Duplicate comments:
In `@src/sparkname.cpp`:
- Around line 376-387: ValidateSparkNameData currently locks cs_spark_name
before taking cs_main (creating ABBA with CheckSparkNameTx) and only checks
sparkNameData.sparkNameValidityBlocks against the max cap; change the lock
ordering to acquire cs_main (or avoid holding cs_spark_name) before accessing
chainActive.Height(), and update the validity check to consider
purchased-plus-leftover time (sum existing leftover time with
sparkNameData.sparkNameValidityBlocks) when comparing to
consensusParams.nSparkNamesV21StartBlock thresholds (and when setting
errorDescription), using the same consensus symbols (nSparkNamesV21StartBlock
and the 15-year/10-year caps) so wallet validation matches consensus and avoids
deadlocks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: c77a1318-e328-4409-ab29-4f864709b0ec
📒 Files selected for processing (8)
src/script/interpreter.cpp, src/script/script.cpp, src/script/script.h, src/script/standard.cpp, src/script/standard.h, src/sparkname.cpp, src/sparkname.h, src/validation.cpp
    case OP_SPARKNAMEID:
        // NOP - used only to tag fee outputs with spark name data
        break;
Gate OP_SPARKNAMEID before making it executable.
Lines 356-358 convert 0xe1 from BAD_OPCODE into a valid no-op unconditionally. EvalScript() has no height input, so nSparkNamesV21StartBlock cannot affect this branch; any executed script containing OP_SPARKNAMEID becomes consensus-valid as soon as this binary is deployed. Please tie this to a new script-verify flag activated at the fork height, or avoid introducing a brand-new executed opcode for the tag at all.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/script/interpreter.cpp` around lines 356 - 358, The new case for
OP_SPARKNAMEID currently turns 0xe1 into a live NOP unconditionally; instead
gate its execution behind a script-verify flag that the caller sets at the fork
height (do not rely on EvalScript() having a height). Modify the OP_SPARKNAMEID
handling in EvalScript() so that if the fork flag (e.g., a new
SCRIPT_VERIFY_SPARKNAMEID or a helper IsSparkNameEnabled(flags)) is not set it
behaves as BAD_OPCODE (or remains non-executed marker), and only when the flag
is present treat it as a no-op/tag; reference OP_SPARKNAMEID, EvalScript(), and
nSparkNamesV21StartBlock in your change to ensure the feature is enabled by the
external script flags rather than unconditionally.
```cpp
bool CScript::IsSparkNameFee() const {
    // Minimum: 25 (P2PKH) + 1 (OP_SPARKNAMEID) + 2 (min push + OP_DROP) + 2 (min push + OP_DROP)
    if (this->size() < 30)
        return false;
    // Must start with standard P2PKH pattern
    if ((*this)[0] != OP_DUP || (*this)[1] != OP_HASH160 || (*this)[2] != 0x14 ||
        (*this)[23] != OP_EQUALVERIFY || (*this)[24] != OP_CHECKSIG)
        return false;
    // Byte after OP_CHECKSIG must be OP_SPARKNAMEID
    if ((*this)[25] != OP_SPARKNAMEID)
        return false;
    return true;
}
```
Tighten the fee-script matcher to the exact suffix format.
Lines 364-375 only prove “P2PKH + OP_SPARKNAMEID at byte 25 + minimum length”. That means malformed scripts with arbitrary trailing opcodes will still be classified as Spark Name fee outputs. For consensus/policy paths, this helper should parse the full script and require the exact <P2PKH><OP_SPARKNAMEID><push><OP_DROP><push><OP_DROP> shape with pc == end().
Suggested direction

```diff
 bool CScript::IsSparkNameFee() const {
-    // Minimum: 25 (P2PKH) + 1 (OP_SPARKNAMEID) + 2 (min push + OP_DROP) + 2 (min push + OP_DROP)
-    if (this->size() < 30)
-        return false;
-    // Must start with standard P2PKH pattern
-    if ((*this)[0] != OP_DUP || (*this)[1] != OP_HASH160 || (*this)[2] != 0x14 ||
-        (*this)[23] != OP_EQUALVERIFY || (*this)[24] != OP_CHECKSIG)
-        return false;
-    // Byte after OP_CHECKSIG must be OP_SPARKNAMEID
-    if ((*this)[25] != OP_SPARKNAMEID)
-        return false;
-    return true;
+    const_iterator pc = begin();
+    opcodetype opcode;
+    std::vector<unsigned char> data;
+
+    if (!GetOp(pc, opcode) || opcode != OP_DUP) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_HASH160) return false;
+    if (!GetOp(pc, opcode, data) || data.size() != 20) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_EQUALVERIFY) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_CHECKSIG) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_SPARKNAMEID) return false;
+    if (!GetOp(pc, opcode, data)) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_DROP) return false;
+    if (!GetOp(pc, opcode, data)) return false;
+    if (!GetOp(pc, opcode) || opcode != OP_DROP) return false;
+
+    return pc == end();
 }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```cpp
bool CScript::IsSparkNameFee() const {
    const_iterator pc = begin();
    opcodetype opcode;
    std::vector<unsigned char> data;

    if (!GetOp(pc, opcode) || opcode != OP_DUP) return false;
    if (!GetOp(pc, opcode) || opcode != OP_HASH160) return false;
    if (!GetOp(pc, opcode, data) || data.size() != 20) return false;
    if (!GetOp(pc, opcode) || opcode != OP_EQUALVERIFY) return false;
    if (!GetOp(pc, opcode) || opcode != OP_CHECKSIG) return false;
    if (!GetOp(pc, opcode) || opcode != OP_SPARKNAMEID) return false;
    if (!GetOp(pc, opcode, data)) return false;
    if (!GetOp(pc, opcode) || opcode != OP_DROP) return false;
    if (!GetOp(pc, opcode, data)) return false;
    if (!GetOp(pc, opcode) || opcode != OP_DROP) return false;

    return pc == end();
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/script/script.cpp` around lines 364 - 375, CScript::IsSparkNameFee
currently only checks for P2PKH + OP_SPARKNAMEID + min length, allowing trailing
garbage; update it to fully parse the suffix and require an exact format: after
confirming the P2PKH prefix in IsSparkNameFee, verify byte 25 == OP_SPARKNAMEID,
then parse two consecutive pushdata operations (accepting legitimate push
opcodes and their data lengths) each immediately followed by OP_DROP, and ensure
the script iterator (pc) reaches end() with no extra bytes; implement this by
walking the script after OP_SPARKNAMEID using existing push-data decoding logic
(or inline handling of OP_PUSHDATA1/2/4 and small pushes) and checking the two
OP_DROP bytes and final pc == end().
```cpp
// Spark name fee: P2PKH + OP_SPARKNAMEID <name> OP_DROP <sparkaddr> OP_DROP
if (scriptPubKey.IsSparkNameFee())
{
    typeRet = TX_SPARKNAMEFEE;
    std::vector<unsigned char> hashBytes(scriptPubKey.begin()+3, scriptPubKey.begin()+23);
    vSolutionsRet.push_back(hashBytes);
    return true;
}
```
Validate the full suffix before classifying a script as TX_SPARKNAMEFEE.
Lines 364-376 in src/script/script.cpp show that IsSparkNameFee() only checks the P2PKH prefix and the OP_SPARKNAMEID byte. The parser below then uses GetOp() without requiring pushdata opcodes, never consumes the last OP_DROP, and never checks pc == end(). That means malformed tails like ... OP_TRUE OP_DROP OP_TRUE OP_DROP OP_1 still classify/parse as Spark-name fee outputs. With the current standard script flags, that trailing nonzero can change the final stack result after CHECKSIG, so this is a spend-condition bug, not just a parsing nit.
Proposed hardening

```diff
-    if (scriptPubKey.IsSparkNameFee())
+    std::string sparkName;
+    std::string sparkAddress;
+    if (ExtractSparkNameFromScript(scriptPubKey, sparkName, sparkAddress))
     {
         typeRet = TX_SPARKNAMEFEE;
         std::vector<unsigned char> hashBytes(scriptPubKey.begin()+3, scriptPubKey.begin()+23);
         vSolutionsRet.push_back(hashBytes);
         return true;
@@
-    if (!scriptPubKey.GetOp(pc, opcode, nameData))
+    if (!scriptPubKey.GetOp(pc, opcode, nameData) || opcode > OP_PUSHDATA4)
         return false;
@@
-    if (!scriptPubKey.GetOp(pc, opcode, addrData))
+    if (!scriptPubKey.GetOp(pc, opcode, addrData) || opcode > OP_PUSHDATA4)
         return false;
     if (pc >= scriptPubKey.end() || *pc != OP_DROP)
         return false;
+    ++pc;
+    if (pc != scriptPubKey.end())
+        return false;
```

Also applies to: 403-427
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/script/standard.cpp` around lines 96 - 103, The classification of Spark
name fee scripts (IsSparkNameFee / TX_SPARKNAMEFEE handling in the scriptPubKey
parsing) is incomplete: it only checks the P2PKH prefix and OP_SPARKNAMEID but
does not validate the full suffix (pushdata opcodes for name, the subsequent
OP_DROP, the spark address push, trailing OP_DROP) nor ensure the opcode parser
consumes to pc==end(), allowing malformed tails to be accepted. Fix by extending
the validation in the TX_SPARKNAMEFEE branch to parse and validate the full
expected sequence using safe GetOp/pushdata checks: require the name be a valid
push opcode and length, require the next opcode to be OP_DROP, require the spark
address to be parsed as a valid push and then require the final opcode to be
OP_DROP and that pc == end() before accepting and pushing the extracted
hashBytes into vSolutionsRet; reference IsSparkNameFee, OP_SPARKNAMEID, OP_DROP,
GetOp, scriptPubKey, vSolutionsRet, and TX_SPARKNAMEFEE when locating the code
to change.
Finish the TX_SPARKNAMEFEE plumbing before returning it from Solver().
Lines 67-129 in src/script/sign.cpp still have no TX_SPARKNAMEFEE branch, and Lines 61-104 in src/script/ismine.cpp still only special-case TX_PUBKEYHASH/TX_EXCHANGEADDRESS. As soon as one of these outputs hits the wallet, it won't be recognized as ours or signed again. Either keep the generic path as TX_PUBKEYHASH, or teach those consumers to strip the suffix via the GetBaseScriptFromSparkNameFee() helper added below.
Also applies to: 296-300
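The "strip the suffix" approach can be sketched without Firo's `CScript` type. This is a hedged illustration of what a `GetBaseScriptFromSparkNameFee()`-style helper would do: recover the plain 25-byte P2PKH prefix so `ismine`/signing code can treat a tagged fee output like any other P2PKH. The opcode values mirror Bitcoin-style scripts; the helper and fixture are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum : uint8_t {
    OP_DUP = 0x76, OP_HASH160 = 0xa9, OP_EQUALVERIFY = 0x88,
    OP_CHECKSIG = 0xac, OP_SPARKNAMEID = 0xe1, OP_DROP = 0x75
};

using Script = std::vector<uint8_t>;

// If the script is <25-byte P2PKH><OP_SPARKNAMEID><suffix>, return just the
// P2PKH part; otherwise return the script unchanged.
Script BaseScriptFromTaggedFee(const Script& s) {
    bool tagged = s.size() > 26 &&
                  s[0] == OP_DUP && s[1] == OP_HASH160 && s[2] == 0x14 &&
                  s[23] == OP_EQUALVERIFY && s[24] == OP_CHECKSIG &&
                  s[25] == OP_SPARKNAMEID;
    if (!tagged)
        return s;
    return Script(s.begin(), s.begin() + 25); // keep only the P2PKH prefix
}

// Build a minimal tagged fixture: zeroed P2PKH skeleton + tag + one push/drop.
Script MakeTaggedFixture() {
    Script s(25, 0);
    s[0] = OP_DUP; s[1] = OP_HASH160; s[2] = 0x14;
    s[23] = OP_EQUALVERIFY; s[24] = OP_CHECKSIG;
    s.push_back(OP_SPARKNAMEID);
    s.push_back(0x01); s.push_back('a'); s.push_back(OP_DROP);
    return s;
}
```

With such a helper, `ismine.cpp` and `sign.cpp` could normalize `TX_SPARKNAMEFEE` outputs to their base script before running the existing P2PKH logic.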
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/script/standard.cpp` around lines 96 - 103, Solver() currently returns
TX_SPARKNAMEFEE but consumers (sign.cpp: functions handling signing) and
ismine.cpp (ownership checks) lack a TX_SPARKNAMEFEE branch; either change
Solver() to classify these scripts as TX_PUBKEYHASH instead, or update the
consumers to handle TX_SPARKNAMEFEE by calling GetBaseScriptFromSparkNameFee()
to strip the OP_SPARKNAMEID/<name>/suffix and then proceed as usual (use the
base script for signing and ismine checks). Locate Solver(), TX_SPARKNAMEFEE,
sign-related functions in sign.cpp and ownership checks in ismine.cpp and
implement the branch that maps TX_SPARKNAMEFEE -> base script via
GetBaseScriptFromSparkNameFee() before existing logic, or revert the Solver()
classification to TX_PUBKEYHASH so existing code continues to work.
```cpp
CScript GetScriptForSparkNameFee(const CTxDestination& dest, const std::string& sparkName, const std::string& sparkAddress);
bool ExtractSparkNameFromScript(const CScript& scriptPubKey, std::string& sparkName, std::string& sparkAddress);
CScript GetBaseScriptFromSparkNameFee(const CScript& scriptPubKey);
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -e
echo "std::string declarations:"
rg -n 'std::string' src/script/standard.h
echo
echo "<string> includes:"
rg -n '#include <string>' src/script/standard.h || true
```

Repository: firoorg/firo
Length of output: 345

🏁 Script executed:

```shell
# Check the includes section of standard.h
head -50 src/script/standard.h | cat -n
```

Repository: firoorg/firo
Length of output: 1819

🏁 Script executed:

```shell
# Get the includes and the relevant function declarations
head -100 src/script/standard.h | tail -60 | cat -n
```

Repository: firoorg/firo
Length of output: 2759
Include <string> in this public header.
Lines 89-91 declare std::string parameters but the header does not explicitly include <string>. The code currently relies on transitive includes from other headers, which is fragile and violates C++ header self-containment principles.
Proposed fix

```diff
 #include "script/interpreter.h"
 #include "uint256.h"
 #include <boost/variant.hpp>
+#include <string>
 #include <stdint.h>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/script/standard.h` around lines 89 - 91, the header declares functions using std::string (GetScriptForSparkNameFee, ExtractSparkNameFromScript, GetBaseScriptFromSparkNameFee) but doesn't include <string>; add `#include <string>` at the top of src/script/standard.h so the header is self-contained and std::string is defined for those declarations.
```cpp
// Spark name fee outputs (with OP_SPARKNAMEID) are only allowed after nSparkNamesV21StartBlock
for (const auto &vout : tx.vout) {
    if (vout.scriptPubKey.IsSparkNameFee()) {
        if (!isVerifyDB && nTxHeight < ::Params().GetConsensus().nSparkNamesV21StartBlock)
            return state.DoS(100, false, REJECT_INVALID, "bad-sparkname-fee-output");
        break;
    }
}
```
There was a problem hiding this comment.
Don't tie this activation check to CheckTransaction()'s ad-hoc height semantics.
CheckTransaction() is called from mempool admission with INT_MAX on Line 1047, which resolves to chainActive.Height() here, so the first post-activation Spark-name transaction is rejected one block late. It's also called from DisconnectTip() with pindexDelete->pprev->nHeight on Lines 3487-3488, so disconnecting the activation block returns here before CheckSparkTransaction() repopulates block.sparkTxInfo. This gate needs to live in the callers that know the candidate block height, or CheckTransaction() needs a mode that doesn't reuse current-tip/prev-block shortcuts.
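The off-by-one under the sentinel semantics can be shown in miniature. This sketch is not Firo code: the activation height is made up, and the two functions contrast the ad-hoc `INT_MAX`-means-current-tip behavior with an explicit candidate-height parameter. A mempool transaction's candidate height is tip + 1, so falling back to the tip height under-estimates it by one block.

```cpp
#include <cassert>
#include <climits>

constexpr int kActivationHeight = 1000; // stand-in for nSparkNamesV21StartBlock

// Ad-hoc semantics: INT_MAX is a sentinel meaning "use the current tip height".
bool ActiveWithSentinel(int nTxHeight, int tipHeight) {
    int h = (nTxHeight == INT_MAX) ? tipHeight : nTxHeight;
    return h >= kActivationHeight;
}

// Explicit mode: the caller passes the real candidate block height.
bool ActiveWithExplicitHeight(int candidateHeight) {
    return candidateHeight >= kActivationHeight;
}
```

With the tip at height 999, the next block (height 1000) is the activation block; the explicit check accepts a transaction for it, while the sentinel path still rejects it, which is exactly the one-block-late behavior the comment describes.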
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/validation.cpp` around lines 730 - 737, The activation-height gate for
Spark-name fee outputs currently lives inside CheckTransaction (using isVerifyDB
and nTxHeight) which relies on ad-hoc height semantics (INT_MAX/mempool and
pindexDelete->pprev during DisconnectTip) and thus rejects the first
post-activation tx too late or too early; move this check into callers that have
the canonical candidate block height (e.g., block validation path that calls
CheckSparkTransaction and mempool admission/DisconnectTip call sites) or add an
explicit mode/parameter to CheckTransaction (e.g., bool
enforceActivationAtHeight or an explicit int candidateHeight) so the code that
checks ::Params().GetConsensus().nSparkNamesV21StartBlock is performed by the
caller using the real candidate block height and not by CheckTransaction using
isVerifyDB/nTxHeight. Ensure references to vout.scriptPubKey.IsSparkNameFee(),
CheckSparkTransaction, and block.sparkTxInfo are updated accordingly and remove
the ad-hoc activation check from CheckTransaction.
```cpp
            .arg(expirationDate.toString("MMMM d, yyyy"));
    } catch (...) {
        label = feeText.arg(QString::number(fee));
    }
```
Overly broad catch-all in UI fee calculation
Low Severity
The catch (...) in updateFee() silently swallows all exceptions — including std::bad_alloc and other critical errors — when computing the extend-mode expiration estimate. Only std::runtime_error (from GetSparkNameBlockHeight when a name is not found) is expected here, and catching that specifically would avoid masking unrelated failures.
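A minimal sketch of the narrower catch, with the name lookup mocked (this is not the actual `GetSparkNameBlockHeight` signature): only the expected `std::runtime_error` from a missing name is absorbed, so `std::bad_alloc` and other failures still propagate.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Mock lookup: throws std::runtime_error when the name is not registered,
// mimicking the "name not found" failure mode described above.
int LookupExpiry(const std::string& name) {
    if (name.empty())
        throw std::runtime_error("spark name not found");
    return 123456; // pretend expiry height
}

// Catch only the expected exception type; anything else escapes unchanged.
std::string ExpiryLabel(const std::string& name) {
    try {
        return "expires at " + std::to_string(LookupExpiry(name));
    } catch (const std::runtime_error&) {
        return "fee only"; // expected miss: fall back without masking other errors
    }
}
```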
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 11 total unresolved issues (including 10 from previous reviews).
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```cpp
    proxyModel->setFilterRole(AddressTableModel::TypeRole);
    proxyModel->setFilterFixedString(AddressTableModel::Send);
    break;
}
```
Proxy model filter permanently skipped after SparkName init
Medium Severity
When initialAddressType is SparkName or SparkNameMine, the proxyModel TypeRole filter (Send vs Receive) is permanently skipped. If the user switches the address type dropdown to Spark or Transparent in the ReceivingTab, both Send-type and Receive-type entries will appear because chooseAddressType never updates the proxyModel filter. Only the fproxyModel filter is updated on type change, but the upstream proxyModel remains unfiltered.
PR intention
This PR allows the fee paid for a Spark name to count as an extension of the already paid registration period, so leftover time is not forfeited.
Note
Medium Risk
Touches consensus/validation and script classification by adding a new opcode and new Spark name rules gated behind nSparkNamesV21StartBlock; bugs here could affect block/tx acceptance once activated.

Overview
Adds Spark Names v2.1 activation (nSparkNamesV21StartBlock) and updates Spark name renewal/transfer semantics. After v2.1, renewals/transfers preserve remaining validity (no longer discarding leftover time) while enforcing a 15-year max total validity.

Introduces a tagged Spark name fee output format. A new OP_SPARKNAMEID NOP suffix marks P2PKH fee outputs with an embedded spark name + spark address; the node classifies these as TX_SPARKNAMEFEE, validates them in CheckSparkNameTx (required after v2.1), and rejects tagged outputs before activation.

Updates wallet/UI and tests for the new behavior. RPC and Qt flows allow up to 15-year purchases, add a (currently hidden) "Extend" spark name action, add "My Spark names" selection in the Receive dialog (using @name labels), and expand unit tests to cover transfers, v2.1 extension semantics, and max-validity limits.

Written by Cursor Bugbot for commit 06ceafc. This will update automatically on new commits.