CASSANDRA-21066 Full Repair for Tracked Keyspaces #4565
base: cep-45-mutation-tracking
Conversation
    import org.junit.Test;

    public class TrackedNonZeroCopyRepairTransferTest extends TrackedRepairTransferTest
These tests, and their ZCS equivalents, could probably just share a cluster and create a keyspace per test.
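A rough sketch of that shape, assuming the in-jvm dtest Cluster API; the base class and keyspace naming below are illustrative, not this PR's code:

    import java.io.IOException;

    import org.junit.AfterClass;
    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.Rule;
    import org.junit.rules.TestName;

    import org.apache.cassandra.distributed.Cluster;

    public abstract class SharedClusterTestBase
    {
        protected static Cluster cluster; // shared across the whole suite

        @Rule
        public final TestName testName = new TestName();

        protected String keyspace;

        @BeforeClass
        public static void startCluster() throws IOException
        {
            cluster = Cluster.build(2).start();
        }

        @AfterClass
        public static void stopCluster()
        {
            if (cluster != null)
                cluster.close();
        }

        @Before
        public void createKeyspace()
        {
            // One keyspace per test gives isolation without paying
            // cluster startup cost for every test method.
            keyspace = "ks_" + testName.getMethodName().toLowerCase();
            cluster.schemaChange("CREATE KEYSPACE " + keyspace +
                                 " WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2}");
        }
    }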
…aces

This comes with lots of comments and a handful of integration tests, but the tests aren't passing and it could use more unit test coverage. There's a refactor (introducing SyncTasks) that's only partially complete. The rest of this commit message is a description of how full repair is intended to work for tracked keyspaces.

Tracked keyspaces cannot accept new data without first registering it in the log. Any unreconciled data that isn't present in the log will break read monotonicity, since mutation tracking uses a single data read and can only read-reconcile mutation IDs that are present in the log. For more information about how bulk transfers work on tracked keyspaces, see TrackedImportTransfer.

Full repair sync tasks also deliver data to replicas, and require integration with the log just like imports do. For more details on a read anomaly that could happen without integration with the bulk transfer machinery, see TrackedKeyspaceRepairSupportTest#testFullRepairPartiallyCompleteAnomaly.

The general design of this integration is to give repair SyncTasks the same two-phase commit as import transfers: we stream SSTables to a pending/ directory, then once sufficient streams complete successfully, we "activate" those streams and move them out of the pending directory and into the live set.

The first step is to ensure that each SyncTask is aligned to a single Mutation Tracking shard, by splitting SyncTasks along the shard boundaries. Each SyncTask then streams data within a single shard, which permits us to assign a single transfer ID to each SyncTask.

Each participant in a repair may receive different SyncTasks (or none at all, if they're already in-sync). This means that TransferActivation needs to be made more flexible, supporting a single TransferActivation with multiple plan IDs, or no plan IDs at all. This increase in flexibility has not yet been implemented.

patch by Abe Ratnofsky; reviewed by Caleb Rackliffe and ? for CASSANDRA-21066
…and AbstractCoordinatedBulkTransfer
…or are different nodes
…when nodes are on the receiving end of multiple sync tasks
- minor renaming and cleanup
- stop trying to reconcile not-yet-committed non-ZCS streams
…now described in TrackedRepairTransferTest
- fix for the case where a remote sync task is split along shard boundaries
- removed the unused alignedToShardBoundaries() method in MTS
    Preconditions.checkState(transfer instanceof TrackedImportTransfer);
    result.transfers.add(transfer.id(), ((TrackedImportTransfer) transfer).sstables);
We need to support TrackedRepairTransfer inclusion here to make sure summaries include activated repairs after restart.
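One hedged sketch of what that inclusion could look like, branching on the concrete type instead of asserting it; the sstables field on TrackedRepairTransfer is an assumption here, not existing API:

    if (transfer instanceof TrackedImportTransfer)
        result.transfers.add(transfer.id(), ((TrackedImportTransfer) transfer).sstables);
    else if (transfer instanceof TrackedRepairTransfer)
        result.transfers.add(transfer.id(), ((TrackedRepairTransfer) transfer).sstables);
    else
        throw new IllegalStateException("Unexpected transfer type: " + transfer.getClass().getName());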
I'm going to attempt to do this by having CoordinatedTransfer and ActivationRequest carry Bounds instead of a Range...
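For context, a rough illustration of the shape of that change; in Cassandra's dht package, Bounds is inclusive on both endpoints while Range is (left, right]. Only the two dht types below are real, the rest is hypothetical:

    import org.apache.cassandra.dht.Bounds;
    import org.apache.cassandra.dht.Token;

    // Illustrative fragment: the covered span travels as an
    // inclusive-both-ends Bounds rather than a (left, right] Range.
    public class ActivationRequest
    {
        private final Bounds<Token> covered; // previously a Range<Token>

        public ActivationRequest(Bounds<Token> covered)
        {
            this.covered = covered;
        }

        public Bounds<Token> covered()
        {
            return covered;
        }
    }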
    this.planId = planId;
    this.transferId = transferId;
Pondering whether we still need planId in SyncStat if it's here now...
I think it probably can...
Ah, I think it's because we key local pending transfers on plan ID.
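A tiny illustration of that keying (all names hypothetical): if local pending transfers are stored by stream plan ID, activation needs the plan ID to find them, which is why SyncStat keeps carrying it.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public final class PendingLocalTransfers<T>
    {
        // planId -> pending transfer
        private final Map<UUID, T> byPlanId = new ConcurrentHashMap<>();

        public void register(UUID planId, T pending)
        {
            byPlanId.put(planId, pending);
        }

        // Called on activation with the plan ID carried by the request.
        public T remove(UUID planId)
        {
            return byPlanId.remove(planId);
        }
    }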
- don't try to run tests in TrackedRepairTransferTestBase and TrackedTransferTestBase
- deduplication in Shard
- supporting all CoordinatedTransfer types in loadFromJournal()
- clean up already committed check in activate()
… (it's possible to have a plan ID but not stream any actual data and therefore not have a pending local transfer to activate)
Tracked keyspaces cannot accept new data without first registering it in
the log. Any unreconciled data that isn't present in the log will break
read monotonicity, since mutation tracking uses a single data read and
can only read-reconcile mutation IDs that are present in the log. For
more information about how bulk transfers work on tracked keyspaces, see
TrackedImportTransfer.
Full repair sync tasks also deliver data to replicas, and require
integration with the log just like imports do. For more details on a
read anomaly that could happen without integration with the bulk
transfer machinery, see
TrackedKeyspaceRepairSupportTest#testFullRepairPartiallyCompleteAnomaly.
The general design of this integration is to give repair SyncTasks the
same two-phase commit as import transfers, where we stream SSTables to a
pending/ directory, then once sufficient streams complete successfully,
we "activate" those streams and move them out of the pending directory
and into the live set.
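A minimal sketch of that two-phase shape; the directory layout and method names below are illustrative, not the PR's actual API:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.List;

    public class PendingStreamActivator
    {
        private final Path pendingDir; // streams land here, invisible to reads
        private final Path liveDir;    // the table's live data directory

        public PendingStreamActivator(Path pendingDir, Path liveDir)
        {
            this.pendingDir = pendingDir;
            this.liveDir = liveDir;
        }

        // Phase 1: each incoming stream writes its components under pending/.
        public Path stagingPathFor(String componentName)
        {
            return pendingDir.resolve(componentName);
        }

        // Phase 2: once enough streams have completed successfully, move
        // the staged components into the live set.
        public void activate(List<Path> stagedComponents) throws IOException
        {
            for (Path component : stagedComponents)
                Files.move(component, liveDir.resolve(component.getFileName()),
                           StandardCopyOption.ATOMIC_MOVE);
        }
    }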
The first step is to ensure that each SyncTask is aligned to a single
Mutation Tracking shard, by splitting SyncTasks along the shard
boundaries. Each SyncTask then streams data within a single shard, which
permits us to assign a single transfer ID to each SyncTask.
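Illustratively, the splitting step might look like the following, with longs standing in for tokens and shard boundaries given in ascending order (nothing below is this PR's code):

    import java.util.ArrayList;
    import java.util.List;

    public final class ShardSplitter
    {
        // Splits [start, end) along shard boundaries so each resulting
        // piece sits inside a single shard; shardBoundaries holds each
        // shard's upper bound, ascending.
        static List<long[]> splitAlongShards(long start, long end, long[] shardBoundaries)
        {
            List<long[]> pieces = new ArrayList<>();
            long cursor = start;
            for (long boundary : shardBoundaries)
            {
                if (boundary <= cursor)
                    continue;                               // boundary falls before the range
                if (boundary >= end)
                    break;                                  // the remainder fits in one shard
                pieces.add(new long[]{ cursor, boundary }); // cut at the shard edge
                cursor = boundary;
            }
            pieces.add(new long[]{ cursor, end });          // tail piece, within one shard
            return pieces;
        }
    }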
Each participant in a repair may receive different SyncTasks (or none at
all, if they're already in-sync). This means that ActivationRequest must be flexible
enough to support transfers with multiple plan IDs or none at all.
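A hedged sketch of that flexibility, with hypothetical fields: a participant that streamed nothing would carry an empty plan-ID collection but still take part in the activation.

    import java.util.Collection;
    import java.util.Collections;
    import java.util.UUID;

    public class ActivationRequest
    {
        private final UUID transferId;
        private final Collection<UUID> planIds; // possibly empty

        public ActivationRequest(UUID transferId, Collection<UUID> planIds)
        {
            this.transferId = transferId;
            this.planIds = planIds == null ? Collections.emptyList() : planIds;
        }

        // False for a participant that was already in sync and streamed nothing.
        public boolean streamedAnything()
        {
            return !planIds.isEmpty();
        }
    }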