This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.
```yaml
name: system-programs
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 60

services:
  redis:
    image: redis:8.0.1
    ports:
      - 6379:6379
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

env:
  REDIS_URL: redis://localhost:6379

strategy:
  matrix:
    include:
      - program: sdk-test-program
        sub-tests: '["cargo-test-sbf -p sdk-native-test"]'
      - program: sdk-anchor-test-program
        sub-tests: '["cargo-test-sbf -p sdk-anchor-test", "cargo-test-sbf -p sdk-pinocchio-test"]'
      - program: sdk-libs
        packages: light-macros light-sdk light-program-test light-client light-batched-merkle-tree
        test_cmd: |
          cargo test -p light-macros
          cargo test -p light-sdk
          cargo test -p light-program-test
          cargo test -p light-client
          cargo test -p client-test
          cargo test -p light-sparse-merkle-tree
          cargo test -p light-batched-merkle-tree --features test-only -- --skip test_simulate_transactions --skip test_e2e

steps:
  - name: Checkout sources
    uses: actions/checkout@v4

  - name: Setup and build
    uses: ./.github/actions/setup-and-build
    with:
      skip-components: "redis"

  - name: build-programs
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/programs

  - name: Run sub-tests for ${{ matrix.program }}
    if: matrix.sub-tests != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli

      IFS=',' read -r -a sub_tests <<< "${{ join(fromJSON(matrix.sub-tests), ', ') }}"
      for subtest in "${sub_tests[@]}"
      do
        echo "$subtest"
        eval "RUSTFLAGS=\"-D warnings\" $subtest"
      done

  - name: Run tests for ${{ matrix.program }}
    if: matrix.test_cmd != null
    run: |
      source ./scripts/devenv.sh
      npx nx build @lightprotocol/zk-compression-cli
      ${{ matrix.test_cmd }}
```
Check warning
Code scanning / CodeQL: Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 7 days ago):
In general, fix this by adding an explicit permissions block that grants only the minimal required scopes for GITHUB_TOKEN. For a test-only workflow that just checks out code and runs local commands, `contents: read` is sufficient.

For this specific file, the simplest, non-functional-changing fix is to add a workflow-level permissions block (so it applies to all current and future jobs) right after the `name: examples-tests` line. Set it to:

```yaml
permissions:
  contents: read
```

This grants only read access to repository contents, which is enough for actions/checkout@v6 and the subsequent test steps. No other scopes (like pull-requests or issues) are required by anything shown in the snippet, and no other code changes or imports are necessary.
```diff
@@ -19,6 +19,9 @@
 name: examples-tests

+permissions:
+  contents: read
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
```
js/compressed-token/src/utils/pack-compressed-token-accounts.ts
Dismissed
* chore: bump toolchain to 1.90
* chore: bump solana to 2.3 and litsvm 0.7

Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](actions/setup-go@v5...v6)

updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Bumps [playwright](https://github.com/microsoft/playwright) from 1.54.2 to 1.55.1.
- [Release notes](https://github.com/microsoft/playwright/releases)
- [Commits](microsoft/playwright@v1.54.2...v1.55.1)

updated-dependencies:
- dependency-name: playwright
  dependency-version: 1.55.1
  dependency-type: direct:development
  update-type: version-update:semver-minor

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* cherrypicked hasher update: support sha in hasher and lighthasher macro, lint, remove unused _output_account_info, update lightdiscriminator macro, chore: add sha flat macro test, perf: sha flat hash in LightAccount avoid double borsh serialization, chore: cleanup tests, add new discriminator test, add anchor discriminator compatibility, test: LightAccount close
* refactor: cpi context account, system program account checks, account compression program nullify create output order, test: program libs diff, fix program integration tests, fix: cpi context has data, feat: InstructionDataInvokeCpiWithReadOnly builder pattern, fix: GetMultipleCompressedAccountsPost200ResponseResult vec<Option>>, fix: light-client & light-program-test tests, chore: use devnet v2 address tree pubkey, add v2 address derivation to light-sdk-types and sdk, add LightSdkError -> LightClientError conversion, fix: to_output_compressed_account_with_packed_context missing data, refactor: add new_with_config to PhotonIndexer, chore: add builder pattern for all instruction data types, refactor: InvokeLightSystemProgram to use CpiAccountsTrait instead of only account infos, implement instruction data helpers for v1 and v2 and gate them, test: add sdk-v1-native-test, fix rebase
* fix: LightAccount close
* add to_in_account()
* add to_output_compressed_account_with_packed_context
* fix CpiAccountsV2 lifetime
* fix: add_system_accounts_v2
* fix: cpi context account has data
* test: v2 sdk address derivation
* fix: test
* regenerate cli accounts
* feat: LightAccount init if needed
* fix: sdk-anchor-test
* feat: stateless js add v2 address trees, fix: empty data parsing
* fix install script
* refactor: empty account, update photon commit
* refactored v1 and v2 into separate directories in light-sdk and light-sdk-types
* refactor: pinocchio sdk v2
* fix: tests
* stash added pack accounts docs
* feat: sdk add cpi context feature
* update light-sdk docs
* light-sdk docs
* updated light-program-test and light-client docs
* fix test

Co-authored-by: Swenschaeferjohann <swen@lightprotocol.com>
* chore: bump pinocchio to v0.9
* fix: test account info
* chore: add new rust release flow
* fix: non nullable data
* remove conversion

Co-authored-by: Swenschaeferjohann <swen@lightprotocol.com>
* feat: enhance transaction processing with active phase checks and improved logging
* feat: improve queue element fetching with chunked requests and enhanced logging
* logging for success, failure, cancellation, and timeout for v1 transactions
* format
* add active phase checks for transaction processing
* adjust rate limiting
* Reduce batch request size to 250 elements
* fetch state batch request by 500
* add support for processing specific tree
* update Photon indexer version in install script
* format
* refactor: update queue handling to use queue index instead of offset
* cleanup
* format
* oclif/test 4
* bump oclif
* update eslint
* fix path
* fix path
* support gnark 12
* refactor: update proving keys and verifying keys for gnark 12 compatibility
* add new verifying keys
* gnark 14
* go mod update
* go mod update
* wip
* cleanup
* clean up checksum generation for proving keys
  - Introduced `CHECKSUM` file for storing key checksums.
  - Updated `generate_keys.sh` to include checksum generation using `generate_checksums.py`.
  - Simplified `generate_checksums.py` code and added a progress indicator.
* Remove `CHECKSUM` file from proving key server
* cleanup
* cleanup
* cleanup
* cleanup
* update keys bucket
* cleanup
* regenerate keys
* update checksum file path in key generation scripts
* fix download_keys.sh
* add `go build` step to prover-test workflow
* update gnark-lean-extractor
* update marshal for gnark v0.14 compatibility
* remove deprecated verifier keys
* rebase on top of sergey/forester-cleanup-2
* fix download_keys.sh, clarify circuit key naming
* format
* match internal proving key versions with file prefixes (v1_*, v2_*): Version 0 => 1, Version 1 => 2
* format
* v1 circuits + tests
* fix v1 unit tests
* add tree_id config
* improve registration info handling and transaction processing
* format
* allow configuration of `max_concurrent_sends` for v1
* update forester docs
* remove unused producer-consumer transaction processing code from v1
* add `max_concurrent_sends` configuration to priority fee tests
* 1..20 / 1..32 v2 inclusion and non-inclusion
* update v2 tests to cover all keys
* update proving keys
* refactor: update response types to use QueueElementsResult in indexer
* add batch append circuit support to r1cs cli
* add ImportBatchAddressAppendSetupWithR1CS
* add v1 R1CS support and optional verifying key output
* add script to regenerate verification keys and update indexer methods and traits
* update proving keys
* add new vkeys
* fix after rebase
* update proving keys to use batch naming convention
* update proving keys
* format
* update proving keys to use full mode in prover tests
* update proving keys to use batch naming convention in download script
* update download_keys.sh

Co-authored-by: Marcin Kostrzewa <marckostrzewa@gmail.com>
Bumps [typescript](https://github.com/microsoft/TypeScript) from 5.9.2 to 5.9.3. - [Release notes](https://github.com/microsoft/TypeScript/releases) - [Changelog](https://github.com/microsoft/TypeScript/blob/main/azure-pipelines.release-publish.yml) - [Commits](microsoft/TypeScript@v5.9.2...v5.9.3) --- updated-dependencies: - dependency-name: typescript dependency-version: 5.9.3 dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* fix: compressed token es module generation
* fix

* chore: add address merkle tree pubkey print to light program test output
* feat: statelessjs add PackedAccounts v1 and v2
* get address tree pubkey
* fix feedback
* fix: ts sdk anchor test
* revert: anchor dev dep bump
* build anchor sdk test program for ci

* fix release
* fix: use cargo publish instead of cargo release
  - Replace 'cargo release publish' with 'cargo publish' in validate-packages.sh
  - cargo-release was removed in commit c7227ba
  - Revert workflow to use PR event data (not hardcoded commits)

* refactor: light program test make anchor programs optional deps
* feat: LightAccount read only support
* fix: add test serial

* chore: remove duplicate program builds
* chore: cli ci mode
* next try
* cleanup
* refactor: caching
* fix solana cache
* remove cli build
* enable cli again
* revert toolchain
* fix clean checkout
* remove manual redis setup
* disable nx cache security
* chore: add multiple prover keys caches
* fix test cli ci
* remove duplicate address program build
* split up program tests more evenly
* add js ci nx commands
…#2267)

* fix: reject rent_payment < 2 for CMint decompression (L-10)
  CMints are always compressible and need minimum 2 epochs of rent prepayment. Previously only rent_payment == 1 was rejected, allowing rent_payment == 0 which could enable DoS by creating underfunded CMints.
* test: update forester mint tests for rent_payment >= 2 requirement
  Update tests to use rent_payment=2 (new minimum) and warp slots forward past the rent period before checking compressibility.
* fix lint
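The L-10 check described above can be sketched as a minimal standalone validator. This is a hypothetical illustration, not the actual light-protocol code: the function and error names are invented, and the real implementation validates inside the decompression path.

```rust
// Hypothetical sketch of the L-10 fix: CMints must prepay at least two
// epochs of rent, so rent_payment values 0 AND 1 are both rejected.
// (The pre-fix code only rejected rent_payment == 1, letting 0 through.)
const MIN_CMINT_RENT_EPOCHS: u64 = 2;

#[derive(Debug, PartialEq)]
pub enum RentError {
    InsufficientRentPrepayment,
}

pub fn validate_cmint_rent_payment(rent_payment: u64) -> Result<(), RentError> {
    if rent_payment < MIN_CMINT_RENT_EPOCHS {
        return Err(RentError::InsufficientRentPrepayment);
    }
    Ok(())
}

fn main() {
    // 0 was the DoS vector: an underfunded CMint nobody is paid to compress.
    assert_eq!(
        validate_cmint_rent_payment(0),
        Err(RentError::InsufficientRentPrepayment)
    );
    assert_eq!(
        validate_cmint_rent_payment(1),
        Err(RentError::InsufficientRentPrepayment)
    );
    assert!(validate_cmint_rent_payment(2).is_ok());
}
```

A `<` comparison against the minimum, rather than an equality check against one bad value, closes the whole range of underfunded inputs at once.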
* fix: reject duplicate accounts in convert_account_infos
  Audit issue #21 (INFO): convert_account_infos created multiple mutable references when duplicate account keys were passed. Add a pre-check that detects duplicate keys and returns InvalidArgument to prevent undefined behavior from aliased mutable references.
* fix: share Rc<RefCell<>> for duplicate accounts instead of rejecting
  Replace blanket rejection of duplicate account keys with Rc sharing that mimics Solana runtime behavior. When the same account appears in multiple positions (e.g., fee_payer and authority are the same signer), the Rc<RefCell<>> from the first occurrence is reused, preventing independent mutable references while allowing legitimate duplicate account usage.
* Update programs/compressed-token/program/src/convert_account_infos.rs
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* fix: deduplicate comment and use pubkey_eq for duplicate key check
  - Remove duplicated comment block
  - Use light_array_map::pubkey_eq with pinocchio keys directly instead of comparing solana_pubkey::Pubkey references

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
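The Rc-sharing approach above can be demonstrated with plain std types. This is a hedged sketch under simplified assumptions (string keys instead of pinocchio pubkeys, a lamport counter instead of full AccountInfo); `share_duplicates` is an invented name.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// Hypothetical sketch of the duplicate-account fix: when the same key
// appears in multiple positions (e.g. fee_payer == authority), reuse the
// Rc<RefCell<..>> from the first occurrence instead of minting a second,
// independently mutable handle to the same underlying account.
type Lamports = u64;

pub fn share_duplicates(keys: &[&str]) -> Vec<Rc<RefCell<Lamports>>> {
    let mut seen: HashMap<String, Rc<RefCell<Lamports>>> = HashMap::new();
    keys.iter()
        .map(|k| {
            seen.entry(k.to_string())
                .or_insert_with(|| Rc::new(RefCell::new(0)))
                .clone() // clone the Rc handle, not the account data
        })
        .collect()
}

fn main() {
    let accounts = share_duplicates(&["payer", "authority", "payer"]);
    // Positions 0 and 2 alias the same cell, mimicking runtime behavior:
    // a mutation through one handle is visible through the other.
    *accounts[0].borrow_mut() += 5;
    assert_eq!(*accounts[2].borrow(), 5);
    // A distinct key still gets its own independent cell.
    assert_eq!(*accounts[1].borrow(), 0);
}
```

Because every position holds an `Rc` to the *same* `RefCell`, the borrow checker (at runtime, via `RefCell`) prevents two simultaneous mutable borrows of one account, which is exactly the aliasing hazard the audit flagged.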
…ype (#2236)

* fix: borsh token account deserialization throw error on invalid account type
* test: borsh deserialization
…2248)

* fix: zero base token bytes before init to prevent IDL buffer attack
  Audit issue #17 (CRITICAL): Token::new_zero_copy only sets mint, owner, and state fields without zeroing amount, delegate, delegated_amount, is_native, or close_authority. An attacker could pre-set the amount field via IDL buffer. Zero all 165 base token bytes before initialization.
* fix: fail explicitly when token account data < 165 bytes
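Both parts of this fix (zeroing before init, and failing explicitly on short buffers) can be sketched together. A minimal illustration with invented names — the real `Token::new_zero_copy` writes mint/owner/state into a zero-copy layout, which is elided here.

```rust
// Hypothetical sketch of the audit-#17 fix: all 165 base token bytes are
// zeroed before initialization, so fields like `amount`, `delegate`, or
// `close_authority` pre-set by an attacker via an IDL buffer cannot survive.
const BASE_TOKEN_LEN: usize = 165;

pub fn init_token_account(data: &mut [u8]) -> Result<(), &'static str> {
    // Fail explicitly on short buffers instead of silently truncating.
    if data.len() < BASE_TOKEN_LEN {
        return Err("token account data < 165 bytes");
    }
    // Zero everything first; only then write the intended fields.
    data[..BASE_TOKEN_LEN].fill(0);
    // ...write mint, owner, and state here (elided in this sketch).
    Ok(())
}

fn main() {
    // An attacker-prepared buffer full of 0xFF (e.g. a huge `amount`).
    let mut attacker_buf = vec![0xFFu8; BASE_TOKEN_LEN];
    init_token_account(&mut attacker_buf).unwrap();
    assert!(attacker_buf.iter().all(|&b| b == 0));
    // Short buffers are rejected, never partially initialized.
    assert!(init_token_account(&mut [0u8; 10]).is_err());
}
```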
* fix: reject rent sponsor self-referencing the token account
  Audit issue #9 (INFO): The rent payer could be the same account as the target token account being created. Add a check that rejects this self-reference to prevent accounting issues.
* test: add failing test rent sponsor self reference
* fix: process metadata add/remove actions in sequential order
  Audit issue #16 (LOW): should_add_key checked for any add and any remove independently, ignoring action ordering. An add-remove-add sequence would incorrectly remove the key. Process actions sequentially so the final state reflects the actual order.
* chore: format
* test: add randomized test for metadata action processing
  Validates that process_extensions_config_with_actions produces correct AdditionalMetadataConfig for random sequences of UpdateMetadataField and RemoveMetadataKey actions, covering the add-remove-add bug from audit issue #16.
* test: add integration test for audit issue #13 (no double rent charge)
  Verifies that two compress operations targeting the same compressible CToken account in a single Transfer2 instruction do not double-charge the rent top-up budget.
* chore: format extensions_metadata test
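The ordering bug above is easy to show in miniature. A hedged sketch with invented names (`MetadataAction`, `should_keep_key`) standing in for the real `should_add_key` / `process_extensions_config_with_actions`; it assumes the key is absent before the actions run.

```rust
// Hypothetical sketch of the audit-#16 fix: instead of asking "was the key
// ever added?" and "was it ever removed?" independently, fold over the
// actions IN ORDER so an add-remove-add sequence correctly keeps the key.
pub enum MetadataAction {
    Add(String),
    Remove(String),
}

pub fn should_keep_key(actions: &[MetadataAction], key: &str) -> bool {
    // Start from `false` (key assumed absent before the actions);
    // the last matching action wins, reflecting the actual order.
    actions.iter().fold(false, |present, action| match action {
        MetadataAction::Add(k) if k.as_str() == key => true,
        MetadataAction::Remove(k) if k.as_str() == key => false,
        _ => present,
    })
}

fn main() {
    use MetadataAction::{Add, Remove};
    let add_remove_add = [
        Add("uri".into()),
        Remove("uri".into()),
        Add("uri".into()),
    ];
    // The buggy any-add/any-remove logic saw one remove and dropped the key;
    // sequential processing keeps it, because the final action is an add.
    assert!(should_keep_key(&add_remove_add, "uri"));
    assert!(!should_keep_key(&[Add("uri".into()), Remove("uri".into())], "uri"));
}
```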
* fix: handle self-transfer in ctoken transfer and transfer_checked
  Validate that the authority is a signer and is the owner or delegate before allowing self-transfer early return. Previously the self-transfer path returned Ok(()) without any authority validation.
* fix: simplify map_or to is_some_and per clippy
* fix: use pubkey_eq for self-transfer check
* refactor: extract self-transfer validation into shared function
  Extract duplicate self-transfer check from default.rs and checked.rs into validate_self_transfer() in shared.rs with cold path for authority validation.
* chore: format
* fix: deduplicate random metadata keys in test_random_mint_action
  Random key generation could produce duplicate keys, causing DuplicateMetadataKey error (18040) with certain seeds.
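The shape of the extracted `validate_self_transfer()` can be sketched with simplified types. This is an assumption-laden illustration — string keys instead of pubkeys, and the error strings are invented; the real code uses `pubkey_eq` and program error codes.

```rust
// Hypothetical sketch of the self-transfer fix: even when source ==
// destination, the early-return path must first confirm the authority is a
// signer AND is either the owner or the delegate. The pre-fix code returned
// Ok(()) for self-transfers without any authority validation.
pub struct TokenAccount<'a> {
    pub owner: &'a str,
    pub delegate: Option<&'a str>,
}

pub fn validate_self_transfer(
    account: &TokenAccount,
    authority: &str,
    authority_is_signer: bool,
) -> Result<(), &'static str> {
    if !authority_is_signer {
        return Err("authority must sign");
    }
    let is_owner = account.owner == authority;
    // `is_some_and` instead of `map_or(false, ..)`, per the clippy fix above.
    let is_delegate = account.delegate.is_some_and(|d| d == authority);
    if is_owner || is_delegate {
        Ok(())
    } else {
        Err("authority is neither owner nor delegate")
    }
}

fn main() {
    let acc = TokenAccount { owner: "alice", delegate: Some("bob") };
    assert!(validate_self_transfer(&acc, "alice", true).is_ok());
    assert!(validate_self_transfer(&acc, "bob", true).is_ok());
    // Without this check, anyone could "self-transfer" someone else's tokens.
    assert!(validate_self_transfer(&acc, "mallory", true).is_err());
    assert!(validate_self_transfer(&acc, "alice", false).is_err());
}
```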
)

* fix: enforce mint extension checks in cToken-to-cToken decompress hot path
  Add enforce_extension_state() to MintExtensionChecks and call it in the Decompress branch when decompress_inputs is None (hot-path, not CompressedOnly restore). This prevents cToken-to-cToken transfers from bypassing pause, transfer fee, and transfer hook restrictions.
* fix test

…mint_extension_cache (#2240)

* chore: reject compress for mints with restricted extensions in mint check
* Update programs/compressed-token/program/src/compressed_token/transfer2/check_extensions.rs
  Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>
* fix: format else-if condition for lint

Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>

…2263)

* fix: add MintCloseAuthority as restricted extension (M-03)
  A mint with MintCloseAuthority can be closed and re-opened with different extensions. Treating it as restricted ensures compressed tokens from such mints require CompressedOnly mode.
* test: add MintCloseAuthority compression_only requirement tests
  Add test coverage for MintCloseAuthority requiring compression_only mode, complementing the fix in f2da063.
* refactor: light program pinocchio macro
* fix: address CodeRabbit review comments on macro codegen
  - Fix account order bug in process_update_config (config=0, authority=1)
  - Use backend-provided serialize/deserialize derives in LightAccountData
  - Remove redundant is_pinocchio() branch for has_le_bytes unpack fields
  - DRY doc attribute generation across 4 struct generation methods
  - Unify unpack_data path using account_crate re-export for both backends

…#2262)

* fix: allow account-level delegate to compress tokens from CToken (M-02)
  check_ctoken_owner() only checked owner and permanent delegate. An account-level delegate (approved via CTokenApprove) could not compress tokens. Added delegate check after permanent delegate.
* test: compress by delegate

* fix: accumulate delegated amount at decompression
* fix lint
* refactor: simplify apply_delegate to single accumulation path
* fix: ignore delegated_amount without delegate
* restore decompress amount check

* feat(dashboard): add next.js ui and wire up status/metrics endpoints
  - add dashboard app (Next.js + Tailwind), components, hooks, and API client
  - include dashboard Dockerfile + env example; update gitignore and main Dockerfile
  - remove legacy static dashboard.html
  - extend Rust backend/CLI to expose status, metrics, epochs, and compressible-related data
  - refactor compressible/config/processor helpers, queue pressure + validation, and related telemetry/metrics
* format

* rm v1 compat in mint action layout
* rm
* bump v
* default to rent sponsored for ata
* bump v
* test cov loading
* fix ts bump layout issue

…ext (#2274)

* feat: integrate photon submodule versioning into build process
* Refactor AccountInterface handling and remove unused functions
  - Removed `make_get_token_account_interface_body` and `make_get_ata_interface_body` functions from the photon API module.
  - Updated various test files to remove references to `ColdContext` and directly use compressed accounts in `AccountInterface`.
  - Adjusted `AccountSpec::Ata` instantiation to wrap `ata_interface` in `Box::new`.
  - Cleaned up imports in multiple test files by removing unused `ColdContext` references.
* format
…ount state is zeroed pre initialization (#2276)
* fix: enforce canonical bump in PDA verification
  Audit issue #15 (HIGH): verify_pda used derive_address which accepts any bump seed, allowing non-canonical bumps for ATAs. Switch to find_program_address to derive the canonical bump and reject any non-canonical bump with InvalidSeeds error.
* fix: use pinocchio::pubkey::find_program_address instead of pinocchio_pubkey
* fix: remove bump from ATA instruction data and derive canonical bump on-chain
  Remove client-provided bump from CreateAssociatedTokenAccountInstructionData and all SDK/test callers. The on-chain program now derives the canonical bump via find_program_address, preventing non-canonical bump attacks (audit #15).
  - Remove bump field from instruction data structs
  - Update verify_pda to derive canonical bump and return it
  - Update validate_ata_derivation and decompress_mint callers
  - Remove _with_bump SDK variants and ATA2 dead code
  - Remove associated_token::bump from macro attribute support
  - Update derive_associated_token_account to return Pubkey only
  - Update all 100+ call sites across SDK, tests, and TypeScript
* fix: update wrong bump test for canonical bump derivation
  With canonical bumps, the program derives the bump internally so providing a wrong bump is no longer possible. Replace with a test that passes a wrong ATA address to verify PDA validation.
* fix test
* fix lint
…#2265)

* fix: interpret max_top_up as units of 1,000 lamports (L-07)
  max_top_up is u16 (max 65,535). As raw lamports this allows only 65,535 lamports (~0.000066 SOL), which is insufficient for many use cases. Interpreting the value as units of 1,000 lamports raises the ceiling to ~65.5M lamports (~0.0655 SOL), covering realistic rent top-up scenarios.
* address comments
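The unit change above is pure arithmetic and can be sketched directly. The function name and constant are hypothetical, not the actual field accessors; the key point is that the wire format stays a `u16` while the interpretation scales by 1,000.

```rust
// Hypothetical sketch of the L-07 fix: max_top_up stays a u16 on the wire,
// but is interpreted as units of 1,000 lamports, raising the effective
// ceiling from 65,535 lamports to 65,535,000 lamports (~0.0655 SOL,
// at 1 SOL = 1_000_000_000 lamports).
const TOP_UP_UNIT_LAMPORTS: u64 = 1_000;

pub fn max_top_up_lamports(max_top_up: u16) -> u64 {
    // Widen before multiplying so the result cannot overflow u16.
    u64::from(max_top_up) * TOP_UP_UNIT_LAMPORTS
}

fn main() {
    // Raw-lamport interpretation capped top-ups at 65_535 lamports total.
    assert_eq!(max_top_up_lamports(u16::MAX), 65_535_000);
    assert_eq!(max_top_up_lamports(1), 1_000);
    // Some(0) remains meaningful: no top-ups allowed at all.
    assert_eq!(max_top_up_lamports(0), 0);
}
```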
* fix: validate mint for all token accounts, not just compressible
  Audit issue #7 (MEDIUM): is_valid_mint was only called inside configure_compression_info, so non-compressible token accounts could be initialized with an invalid mint. Move validation to initialize_ctoken_account so it runs for all account types.
* fix: read mint data once, pass decimals to configure_compression_info
* fix: tests
* fix tests
* fix tests
* fix tests
* fix js tests
* chore: add token_pool test to CI and fix InvalidMint error expectation
  - Add test-compressed-token-token-pool to justfile CI targets
  - Fix failing_tests_add_token_pool to expect InvalidMint error instead of ConstraintSeeds (restricted_seed() parses mint before PDA check)
* fix: enforce extension state checks for SPL compress (H-01 follow-up)
  Add extension state enforcement (paused, non-zero fees, non-nil hook) for SPL Token-2022 compress operations. Previously, SPL compress could bypass these checks, allowing an attacker to:
  1. SPL Compress 10K with transfer fee mint (pool receives 9.9K)
  2. SPL Decompress 10K (pool sends 10K)
  3. Profit from the fee difference, draining pool funds
  Fix follows the same pattern as H-01 (PR #2246) - enforcement at the processing point in process_token_compression(), not in cache building.
  - Add enforce_extension_state() call for Compress mode in Token-2022 branch
  - Update test_spl_to_ctoken_fails_when_mint_paused to expect error 6127 (MintPaused from Light Token program) instead of 67 (SPL Token-2022)
* fix lint
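The pool-drain math in the attack sequence above can be checked with a few lines of arithmetic. A hedged sketch: `pool_delta` is an invented helper assuming a basis-point transfer fee, not the actual fee calculation in Token-2022 (which also has a max-fee cap, ignored here).

```rust
// Hypothetical sketch of the H-01 follow-up pool-drain math: with an
// unchecked transfer-fee mint, the fee is withheld on compress (pool
// receives amount - fee) but the full amount is paid out on decompress,
// so every round trip drains the pool by the fee.
pub fn pool_delta(amount: u64, fee_bps: u64) -> i64 {
    let fee = amount * fee_bps / 10_000;
    let received_on_compress = amount - fee; // fee withheld on the way in
    let sent_on_decompress = amount;         // full amount out on the way out
    received_on_compress as i64 - sent_on_decompress as i64
}

fn main() {
    // Compress 10_000 with a 100 bps (1%) fee: pool receives 9_900,
    // then sends 10_000 on decompress: net -100 per round trip.
    assert_eq!(pool_delta(10_000, 100), -100);
    // With no fee there is nothing to arbitrage.
    assert_eq!(pool_delta(10_000, 0), 0);
}
```

The fix removes the opportunity entirely by rejecting paused/fee/hook mints at compress time, rather than trying to account for the fee.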
* fix: store_data may cache incorrect owner
* fix test
* fix: add owner comparison to new_addresses_eq and test owner fix path
  The new_addresses_eq helper was missing owner() comparison, which is critical since this PR fixes store_data caching the incorrect owner. Add unit tests with non-empty new_address_params to verify store_data sets owner to invoking_program on first and subsequent invocations.

* refactor: max top to be u16::MAX
* fix: correct stale max_top_up doc comments
  Since Some(0) is now meaningful (no top-ups allowed), the doc comments saying "non-zero value" were misleading. Updated SDK structs to say "When set (Some)" and TRANSFER_CHECKED.md to specify [1, u16::MAX-1].

* chore: check compress only is applied correctly
* restore: has delegate check