21 changes: 21 additions & 0 deletions README.md
@@ -231,6 +231,27 @@ As long as you have an openai-like endpoint, it should work.
- `/clear` - Clear conversation history
- `/init` - Initialize project context

## Configuration

You can configure the following parameters in `~/.kode.json`:

### autoCompactThreshold

The auto-compact trigger threshold, a float between 0 and 1. When context usage exceeds this fraction of the model's context limit, automatic compression is triggered.

| Value | Effect                                | Use case                                    |
|-------|---------------------------------------|---------------------------------------------|
| 0.80  | Compresses earlier; more conservative | Small-context models (e.g. DeepSeek 131k)   |
| 0.85  | Balanced                              | Medium-context models                       |
| 0.92  | Default; compresses later             | Large-context models (e.g. Claude 200k)     |

Example:
```json
{
"autoCompactThreshold": 0.80
}
```
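
For a concrete sense of when compression triggers, the threshold simply scales the model's context limit. A minimal TypeScript sketch of the arithmetic (the 131,072-token limit below is an assumed example value, not something read from Kode):

```ts
// Illustrative arithmetic only: how autoCompactThreshold maps to a token budget.
const contextLimit = 131_072       // assumed context window (e.g. a 131k model)
const autoCompactThreshold = 0.8   // value configured in ~/.kode.json
const triggerAt = contextLimit * autoCompactThreshold
console.log(Math.floor(triggerAt)) // 104857 -> auto-compact kicks in near ~105k tokens
```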

## Multi-Model Intelligent Collaboration

Unlike official Claude, which supports only a single model, Kode implements **true multi-model collaboration**, allowing you to fully leverage the unique strengths of different AI models.
21 changes: 21 additions & 0 deletions README.zh-CN.md
@@ -140,6 +140,27 @@ Kode 同时使用 `~/.kode` 目录(存放额外数据,如内存文件)和
- `/clear` - 清除对话历史
- `/init` - 初始化项目上下文

## 配置参数

在 `~/.kode.json` 中可以配置以下参数:

### autoCompactThreshold

自动压缩触发阈值(0-1 之间的浮点数),当上下文使用超过模型限制的此比例时触发自动压缩。

| 值 | 效果 | 适用场景 |
|------|------------------|----------------------------------|
| 0.80 | 更早压缩,更保守 | 小上下文模型(如 deepseek 131k) |
| 0.85 | 平衡 | 中等上下文模型 |
| 0.92 | 默认 | 大上下文模型(如 Claude 200k) |

示例:
```json
{
"autoCompactThreshold": 0.80
}
```

## 多模型智能协同

与 CC 仅支持单一模型不同,Kode 实现了**真正的多模型协同工作**,让你能够充分发挥不同 AI 模型的独特优势。
30 changes: 26 additions & 4 deletions src/utils/autoCompactCore.ts
@@ -10,12 +10,31 @@ import { queryLLM } from '@services/claude'
import { selectAndReadFiles } from './fileRecoveryCore'
import { addLineNumbers } from './file'
import { getModelManager } from './model'
import { getGlobalConfig } from './config'

/**
* Threshold ratio for triggering automatic context compression
* When context usage exceeds 92% of the model's limit, auto-compact activates
* Default threshold ratio for triggering automatic context compression
* When context usage exceeds this ratio of the model's limit, auto-compact activates
* Users can override this via `autoCompactThreshold` in global config
*/
const AUTO_COMPACT_THRESHOLD_RATIO = 0.92
const DEFAULT_AUTO_COMPACT_THRESHOLD_RATIO = 0.92

/**
* Gets the auto-compact threshold ratio from config or uses default
* Lower values = more aggressive compression (happens earlier)
* Higher values = less aggressive compression (happens later)
*/
function getAutoCompactThresholdRatio(): number {
const config = getGlobalConfig()
const threshold = config.autoCompactThreshold

// Validate threshold is between 0 and 1
if (typeof threshold === 'number' && threshold > 0 && threshold < 1) {
return threshold
}

return DEFAULT_AUTO_COMPACT_THRESHOLD_RATIO
}

/**
* Retrieves the context length for the main model that should execute compression
@@ -69,10 +88,13 @@ Focus on information essential for continuing the conversation effectively, incl
/**
* Calculates context usage thresholds based on the main model's capabilities
* Uses the main model context length since compression tasks require a capable model
*
* @param tokenCount Current token count of the conversation
* @returns Object with threshold calculation results
*/
async function calculateThresholds(tokenCount: number) {
const contextLimit = await getCompressionModelContextLimit()
const autoCompactThreshold = contextLimit * AUTO_COMPACT_THRESHOLD_RATIO
const autoCompactThreshold = contextLimit * getAutoCompactThresholdRatio()

return {
isAboveAutoCompactThreshold: tokenCount >= autoCompactThreshold,
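
As a rough illustration of how these thresholds might be consumed, here is a hypothetical caller sketch; `maybeCompact`, `compactConversation`, and the `Message` type are made-up names used only for illustration, and only `calculateThresholds` comes from the diff above:

```ts
// Hypothetical call-site sketch, not the actual Kode integration point.
async function maybeCompact(
  tokenCount: number,
  messages: Message[],
): Promise<Message[]> {
  const { isAboveAutoCompactThreshold } = await calculateThresholds(tokenCount)
  if (isAboveAutoCompactThreshold) {
    // Summarize older turns so the conversation drops back under the threshold.
    return compactConversation(messages)
  }
  return messages
}
```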
3 changes: 3 additions & 0 deletions src/utils/config.ts
@@ -173,6 +173,8 @@ export type GlobalConfig = {
defaultModelName?: string // Default model
// Update notifications
lastDismissedUpdateVersion?: string
// Auto-compact threshold ratio (0-1), triggers when context usage exceeds this ratio
autoCompactThreshold?: number
}

export const DEFAULT_GLOBAL_CONFIG: GlobalConfig = {
@@ -201,6 +203,7 @@ export const DEFAULT_GLOBAL_CONFIG: GlobalConfig = {

export const GLOBAL_CONFIG_KEYS = [
'autoUpdaterStatus',
'autoCompactThreshold',
'theme',
'hasCompletedOnboarding',
'lastOnboardingVersion',