
Conversation

@Tony2h (Contributor) commented Nov 9, 2025

Description

Please briefly describe the code changes in this pull request.

Checklist

Please check the items in the checklist if applicable.

  • Is the user manual updated?
  • Are the test cases passed and automated?
  • Is there no significant decrease in test coverage?

@gemini-code-assist (bot) commented

Summary of Changes

Hello @Tony2h, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the system's caching infrastructure by adding a specialized cache for stable tag filter conditions. This new cache is designed to optimize queries that involve specific equality-based tag filtering, thereby reducing redundant computations and accelerating data retrieval. The changes encompass new data structures, API extensions, and comprehensive cache management logic to ensure data consistency across various metadata operations.

Highlights

  • New Stable Tag Filter Cache: Introduced a dedicated caching mechanism for tag filter conditions that involve equality (=) and AND operators, aiming to improve query performance by storing pre-filtered table lists.
  • Cache Management APIs: Added new API functions (getStableCachedTableList, putStableCachedTableList) to storageapi.h and their corresponding implementations in metaCache.c for interacting with the new stable tag filter cache (see the usage sketch after this list).
  • Data Structures for Cache: Defined new internal data structures (STagCondFilterEntry, STagConds) within metaCache.c to efficiently organize and manage the cached results based on super table UIDs and tag filter conditions.
  • Cache Invalidation and Updates: Implemented logic to correctly manage the lifecycle of cached entries, including functions to open/close the cache, drop entries for entire super tables, update UIDs when child tables are created/dropped/updated, and remove entries when tag columns are dropped.
  • Query Optimization Integration: Modified executil.c to identify tag filter conditions suitable for the new stable cache. If a condition can be optimized, it attempts to retrieve results from this cache and stores new results in it, using a generated key based on sorted tag data entries.
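
For orientation, the lookup-then-populate flow around these two APIs looks roughly like the sketch below. Only the function names getStableCachedTableList and putStableCachedTableList come from this pull request; the parameter lists and surrounding variables are assumptions for illustration.

    SArray* pUidList = NULL;
    // Hypothetical call shape -- the real signatures live in storageapi.h.
    int32_t code = getStableCachedTableList(pMeta, suid, pKey, keyLen, &pUidList);
    if (code != TSDB_CODE_SUCCESS || pUidList == NULL) {
      // Cache miss: evaluate the tag filter condition to build pUidList (elided),
      // then store the result so later queries with the same condition skip the scan.
      code = putStableCachedTableList(pMeta, suid, pKey, keyLen, pUidList);
    }
    // On a hit (or after populating), pUidList holds the pre-filtered child-table uids.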

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a new caching mechanism for tag filter queries with EQUAL conditions, which is a good performance enhancement. The implementation spans both the query execution and metadata caching layers. I've found several critical issues related to memory management and correctness, such as a use-after-free, potential for memory allocation with a negative size, and unsafe iterator usage during removal. There are also some medium-severity issues like missing NULL checks, code duplication, and minor code style problems. Addressing these issues is important for the stability and correctness of this new feature.

Comment on lines 853 to 857
for (int32_t i = 0; i < taosArrayGetSize(pIdWithVal); ++i) {
  STagDataEntry* pEntry = taosArrayGet(pIdWithVal, i);
  len += sizeof(col_id_t) +
         ((SValueNode*)pEntry->pValueNode)->node.resType.bytes;
}

high

The length calculation for the digest payload is incorrect for variable-length data types: ((SValueNode*)pEntry->pValueNode)->node.resType.bytes returns the size of a pointer for types like VARCHAR, not the actual length of the string data, which results in an incorrect digest and cache misses. You should use logic similar to buildTagDataEntryKey to correctly calculate the length for variable-length types.

    for (int32_t i = 0; i < taosArrayGetSize(pIdWithVal); ++i) {
      STagDataEntry* pEntry = taosArrayGet(pIdWithVal, i);
      SValueNode* pValueNode = (SValueNode*)pEntry->pValueNode;
      len += sizeof(col_id_t);
      if (IS_VAR_DATA_TYPE(pValueNode->node.resType.type)) {
        if (pValueNode->node.resType.type == TSDB_DATA_TYPE_JSON) {
          len += getJsonValueLen(pValueNode->datum.p);
        } else {
          len += varDataLen(pValueNode->datum.p);
        }
      } else {
        len += pValueNode->node.resType.bytes;
      }
    }

Comment on lines +2007 to +2023
*tagCondKeyLen =
    (int32_t)(taosArrayGetSize(*pTagColIds) * (sizeof(col_id_t) + 1) - 1);
*pTagCondKey = (char*)taosMemoryCalloc(1, *tagCondKeyLen);

high

If pTagColIds is empty, taosArrayGetSize(*pTagColIds) returns 0, making *tagCondKeyLen equal to 0 * (sizeof(col_id_t) + 1) - 1, i.e. -1. Passing -1 to taosMemoryCalloc on the next line converts it to a huge unsigned size, so the allocation will fail or corrupt memory downstream. You should handle the case where pTagColIds is empty.
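
A minimal guard along these lines would avoid the negative size (a sketch; treating an empty column list as "nothing to cache" is an assumption about the intended semantics):

    int32_t nCols = (int32_t)taosArrayGetSize(*pTagColIds);
    if (nCols == 0) {
      // No tag columns to build a key from; skip the cache path for this condition.
      *pTagCondKey = NULL;
      *tagCondKeyLen = 0;
      return TSDB_CODE_SUCCESS;
    }
    *tagCondKeyLen = (int32_t)(nCols * (sizeof(col_id_t) + 1) - 1);
    *pTagCondKey = (char*)taosMemoryCalloc(1, *tagCondKeyLen);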

Comment on lines +1112 to +1115
    code = taosHashRemove((*pTagConds)->set, key, keyLen);
    TSDB_CHECK_CODE(code, lino, _end);
  }
  pIter = taosHashIterate((*pTagConds)->set, pIter);

high

The iterator pIter might be invalidated by taosHashRemove on line 1112, but it is used again on line 1115 to get the next element, which can lead to undefined behavior such as skipping elements or crashing. You should fetch the next iterator before the potential removal, as in the sketch below.
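
A minimal sketch of that pattern (found, key, and keyLen stand in for the surrounding loop's variables):

    void* pIter = taosHashIterate((*pTagConds)->set, NULL);
    while (pIter != NULL) {
      // Fetch the successor before any removal so we never advance from a freed node.
      void* pNextIter = taosHashIterate((*pTagConds)->set, pIter);
      if (found) {
        code = taosHashRemove((*pTagConds)->set, key, keyLen);
        TSDB_CHECK_CODE(code, lino, _end);
      }
      pIter = pNextIter;
    }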

int32_t tagCondKeyLen;
SArray* pTagColIds = NULL;
buildTagCondKey(pTagCond, &pTagCondKey, &tagCondKeyLen, &pTagColIds);
taosArrayDestroy(pTagColIds);

medium

The pTagColIds array is created by buildTagCondKey and then immediately destroyed without being used, which wastes an allocation. Consider giving buildTagCondKey an option to skip computing pTagColIds when the caller does not need it; one possible shape is sketched below.
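
One way to do this is to make the out-parameter optional. This is a hypothetical variant of the function, not part of the patch; the NULL-tolerant signature is an assumption:

    // Callers that only need the key pass NULL for pTagColIds.
    static int32_t buildTagCondKey(SNode* pTagCond, char** pTagCondKey,
                                   int32_t* tagCondKeyLen, SArray** pTagColIds) {
      if (pTagColIds != NULL) {
        *pTagColIds = taosArrayInit(4, sizeof(col_id_t));
        if (*pTagColIds == NULL) return terrno;
        // ... collect tag column ids only when requested ...
      }
      // ... build the key exactly as before ...
      return TSDB_CODE_SUCCESS;
    }

The call site above then becomes buildTagCondKey(pTagCond, &pTagCondKey, &tagCondKeyLen, NULL), with no throwaway array to destroy.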

Comment on lines +832 to +881
STagConds** pTagConds =
    (STagConds**)taosHashGet(pTableEntry, &suid, sizeof(uint64_t));
if (pTagConds == NULL) {
  // add new (suid -> tag conds) entry
  STagConds* pEntry = (STagConds*)taosMemoryMalloc(sizeof(STagConds));
  TSDB_CHECK_NULL(pEntry, code, lino, _end, terrno);

  pEntry->hitTimes = 0;
  pEntry->set = taosHashInit(
      1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY),
      false, HASH_NO_LOCK);
  taosHashSetFreeFp(pEntry->set, freeTagFilterEntryFp);
  TSDB_CHECK_NULL(pEntry->set, code, lino, _end, terrno);

  code = taosHashPut(
      pTableEntry, &suid, sizeof(uint64_t), &pEntry, POINTER_BYTES);
  TSDB_CHECK_CODE(code, lino, _end);

  pTagConds = (STagConds**)taosHashGet(pTableEntry, &suid, sizeof(uint64_t));
  TSDB_CHECK_NULL(pTagConds, code, lino, _end, terrno);
}

STagCondFilterEntry** pFilterEntry =
    (STagCondFilterEntry**)taosHashGet(
        (*pTagConds)->set, pTagCondKey, tagCondKeyLen);
if (pFilterEntry == NULL) {
  // add new (tag cond -> filter entry) entry
  STagCondFilterEntry* pEntry =
      (STagCondFilterEntry*)taosMemoryMalloc(sizeof(STagCondFilterEntry));
  TSDB_CHECK_NULL(pEntry, code, lino, _end, terrno);

  pEntry->hitTimes = 0;
  pEntry->set = taosHashInit(
      1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY),
      false, HASH_NO_LOCK);
  taosHashSetFreeFp(pEntry->set, freeSArrayPtr);
  TSDB_CHECK_NULL(pEntry->set, code, lino, _end, terrno);

  code = taosHashPut(
      (*pTagConds)->set, pTagCondKey, tagCondKeyLen, &pEntry, POINTER_BYTES);
  TSDB_CHECK_CODE(code, lino, _end);

  pFilterEntry = (STagCondFilterEntry**)taosHashGet(
      (*pTagConds)->set, pTagCondKey, tagCondKeyLen);
  TSDB_CHECK_NULL(pFilterEntry, code, lino, _end, terrno);
  (*pFilterEntry)->pColIds = taosArrayDup(pTagColIds, NULL);
} else {
  // pColIds is already set, so we can destroy the new one
  taosArrayDestroy(pTagColIds);
}

medium

The code for creating a new STagConds entry (lines 834-852) and a new STagCondFilterEntry (lines 857-881) is quite verbose and repeats the same allocate-initialize-insert pattern. Refactoring it into helper functions such as get_or_create_tag_conds and get_or_create_filter_entry would improve readability and maintainability; one possible shape is sketched below.
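
A rough shape for one such helper (the name, signature, and error handling are illustrative only, not part of the patch):

    // Hypothetical helper: look up the per-suid STagConds entry, creating it on miss.
    static int32_t getOrCreateTagConds(SHashObj* pTableEntry, uint64_t suid,
                                       _hash_free_fn_t freeFp, STagConds*** ppTagConds) {
      STagConds** pTagConds = (STagConds**)taosHashGet(pTableEntry, &suid, sizeof(uint64_t));
      if (pTagConds == NULL) {
        STagConds* pEntry = (STagConds*)taosMemoryMalloc(sizeof(STagConds));
        if (pEntry == NULL) return terrno;
        pEntry->hitTimes = 0;
        pEntry->set = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY),
                                   false, HASH_NO_LOCK);
        if (pEntry->set == NULL) {
          taosMemoryFree(pEntry);
          return terrno;
        }
        taosHashSetFreeFp(pEntry->set, freeFp);
        int32_t code = taosHashPut(pTableEntry, &suid, sizeof(uint64_t), &pEntry, POINTER_BYTES);
        if (code != TSDB_CODE_SUCCESS) return code;
        pTagConds = (STagConds**)taosHashGet(pTableEntry, &suid, sizeof(uint64_t));
        if (pTagConds == NULL) return terrno;
      }
      *ppTagConds = pTagConds;
      return TSDB_CODE_SUCCESS;
    }

The STagCondFilterEntry branch could share the same shape, parameterized by its own free function.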

pFilterEntry = (STagCondFilterEntry**)taosHashGet(
    (*pTagConds)->set, pTagCondKey, tagCondKeyLen);
TSDB_CHECK_NULL(pFilterEntry, code, lino, _end, terrno);
(*pFilterEntry)->pColIds = taosArrayDup(pTagColIds, NULL);

medium

The return value of taosArrayDup is not checked. If memory allocation fails, it can return NULL, which would lead to a NULL pointer being assigned to (*pFilterEntry)->pColIds. This would likely cause a crash later. You should check the return value and handle the error.

    (*pFilterEntry)->pColIds = taosArrayDup(pTagColIds, NULL);
    TSDB_CHECK_NULL((*pFilterEntry)->pColIds, code, lino, _end, terrno);

}

// add to cache.
SArray* pPayload = taosArrayDup(pUidList, NULL);

medium

The return value of taosArrayDup is not checked. If memory allocation fails, it can return NULL, which would lead to a NULL pointer being assigned to pPayload. This would likely cause a crash later. You should check the return value and handle the error.

  SArray* pPayload = taosArrayDup(pUidList, NULL);
  TSDB_CHECK_NULL(pPayload, code, lino, _end, terrno);

code = taosThreadRwlockWrlock(pRwlock);
TSDB_CHECK_CODE(code, lino, _end);

tb_uid_t suid = pDroppedTable->ctbEntry.suid;;

medium

There is a stray semicolon at the end of this line.

  tb_uid_t suid = pDroppedTable->ctbEntry.suid;

Comment on lines +1028 to +1029
int32_t keyLen = 0;
char* pKey = NULL;

medium

The variables keyLen and pKey are declared but never used. They should be removed to improve code clarity.

      // rebuild the tagCondFilterKey
      SArray** pArray = (SArray**)taosHashGet(
          pFilterEntry->set, context.digest, tListLen(context.digest));
      if (pArray != NULL) {
        // check and remove the dropped table uid from the array
        // TODO(Tony Zhang): optimize this scan

medium

The comment leaves a TODO to optimize the linear scan that removes a dropped table's UID from the array. This scan can be expensive when the cached list is large, so if this path is hot, consider a more efficient data structure or lookup, as sketched below.
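
One low-cost option, sketched here under the assumption that each cached uid list can be kept sorted as uids are inserted (and that the sorted-search helper from the util array API is available), is a binary search instead of a linear scan:

    // Comparator for tb_uid_t elements stored in the uid SArray.
    static int32_t uidCompare(const void* p1, const void* p2) {
      tb_uid_t u1 = *(const tb_uid_t*)p1;
      tb_uid_t u2 = *(const tb_uid_t*)p2;
      return (u1 < u2) ? -1 : (u1 > u2) ? 1 : 0;
    }

    // With *pArray kept sorted, the dropped uid is located in O(log n).
    int32_t idx = taosArraySearchIdx(*pArray, &uid, uidCompare, TD_EQ);
    if (idx >= 0) {
      taosArrayRemove(*pArray, idx);
    }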

@Tony2h Tony2h requested a review from a team as a code owner November 9, 2025 08:58
@codecov (bot) commented Nov 9, 2025

Codecov Report

❌ Patch coverage is 59.86239% with 175 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.27%. Comparing base (3d38933) to head (a8d8f75).
⚠️ Report is 271 commits behind head on main.

Files with missing lines Patch % Lines
source/libs/executor/src/executil.c 44.94% 98 Missing ⚠️
source/dnode/vnode/src/meta/metaCache.c 71.42% 64 Missing ⚠️
source/dnode/vnode/src/meta/metaEntry2.c 55.55% 12 Missing ⚠️
source/dnode/vnode/src/meta/metaTable.c 83.33% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main   #33508       +/-   ##
===========================================
+ Coverage   54.05%   71.27%   +17.22%     
===========================================
  Files         472      534       +62     
  Lines      294900   345316    +50416     
  Branches    99139        0    -99139     
===========================================
+ Hits       159397   246137    +86740     
- Misses      83784    99179    +15395     
+ Partials    51719        0    -51719     
Flag Coverage Δ
TDengine 71.27% <59.86%> (+17.23%) ⬆️

Flags with carried forward coverage won't be shown.

