27 commits
7e180d2
fix(tee_registry): randomize TEE selection to improve load distribution
amathxbt Mar 23, 2026
cddb413
fix(llm): surface HTTP 402 as actionable error with OPG approval hint
amathxbt Mar 23, 2026
d345fdd
fix(_conversions): return int for zero-decimal fixed-point values
amathxbt Mar 23, 2026
72a6279
fix(alpha): guard against None from inference node + use estimate_gas…
amathxbt Mar 23, 2026
7c786b1
fix: make convert_fixed_point_to_python type-stable (always np.float3…
amathxbt Mar 26, 2026
deff27c
fix: separate ContractLogicError from generic Exception in gas estima…
amathxbt Mar 26, 2026
5dcecd1
test: fix TestGetLlmTee to patch random.choice for deterministic CI
amathxbt Mar 26, 2026
6cfbb68
fix: merge main into llm.py — restore _connect_tee, _refresh_tee, _ca…
amathxbt Mar 26, 2026
7b2157b
test: add HTTP 402 hint coverage for completion, chat, and streaming
amathxbt Mar 27, 2026
23f06db
fix: merge main into llm_test.py — restore retry/SSL tests, add 402 h…
amathxbt Mar 27, 2026
488c7ce
fix: remove stale Union import from _conversions.py (convert_fixed_po…
amathxbt Mar 27, 2026
f5f5c70
chore: sync .github/workflows/release.yml from upstream main
amathxbt Mar 27, 2026
7708dec
chore: sync README.md from upstream main
amathxbt Mar 27, 2026
dece88b
chore: sync WINDOWS_INSTALL.md from upstream main
amathxbt Mar 27, 2026
7fbc758
chore: sync docs/opengradient/client/llm.md from upstream main
amathxbt Mar 27, 2026
fc972b8
chore: sync docs/opengradient/index.md from upstream main
amathxbt Mar 27, 2026
e0927b1
chore: sync integrationtest/llm/test_llm_chat.py from upstream main
amathxbt Mar 27, 2026
de224f5
chore: sync pyproject.toml from upstream main
amathxbt Mar 27, 2026
1769a0e
chore: sync src/opengradient/client/_utils.py from upstream main
amathxbt Mar 27, 2026
1f7aa52
chore: sync src/opengradient/client/opg_token.py from upstream main
amathxbt Mar 27, 2026
7373919
chore: sync tests/opg_token_test.py from upstream main
amathxbt Mar 27, 2026
3450911
chore: sync stresstest/llm.py from upstream main
amathxbt Mar 27, 2026
74fa273
chore: sync stresstest/utils.py from upstream main
amathxbt Mar 27, 2026
6786190
chore: sync uv.lock from upstream main
amathxbt Mar 27, 2026
67542a9
chore: remove stresstest/infer.py (deleted in upstream main)
amathxbt Mar 27, 2026
bd147fb
Merge main into fix branch: rebased llm.py with all PR changes
amathxbt Mar 27, 2026
258e08c
Merge main into fix branch: rebased llm_test.py with all PR test addi…
amathxbt Mar 27, 2026
5 changes: 5 additions & 0 deletions .github/workflows/release.yml
Original file line number Diff line number Diff line change
Expand Up @@ -44,6 +44,11 @@ jobs:
- name: Run tests
run: make test

- name: Run LLM integration tests
env:
PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }}
run: make llm_integrationtest

release:
needs: [check, test]
runs-on: ubuntu-latest
Expand Down
2 changes: 1 addition & 1 deletion README.md
Expand Up @@ -24,7 +24,7 @@ OpenGradient enables developers to build AI applications with verifiable executi
pip install opengradient
```

**Note**: Windows users should temporarily enable WSL during installation (fix in progress).
> **Windows users:** See the [Windows Installation Guide](./WINDOWS_INSTALL.md) for step-by-step setup instructions.

## Network Architecture

Expand Down
36 changes: 36 additions & 0 deletions WINDOWS_INSTALL.md
@@ -0,0 +1,36 @@
# Windows Installation Guide

The `opengradient` package requires a C compiler
to build its native dependencies. Windows does not
have one by default.

## Step 1 — Enable WSL

Open PowerShell as Administrator and run:

wsl --install

Restart your PC when prompted.

## Step 2 — Install Python and uv inside WSL

Open the Ubuntu app and run:

sudo apt update && sudo apt install -y python3 curl
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env

## Step 3 — Install SDK

uv add opengradient

## Step 4 — Verify

uv run python3 -c "import opengradient; print('Ready!')"

## Common Errors

- Visual C++ 14.0 required → Use WSL instead
- wsl: command not found → Update Windows 10 to Build 19041+
- WSL stuck → Enable Virtualization in BIOS
- uv: command not found → Run: source $HOME/.local/bin/env
6 changes: 3 additions & 3 deletions docs/opengradient/client/llm.md
Expand Up @@ -200,8 +200,8 @@ a transaction. Otherwise, sends an ERC-20 approve transaction.

**Arguments**

* **`opg_amount`**: Minimum number of OPG tokens required (e.g. ``0.05``
for 0.05 OPG). Must be at least 0.05 OPG.
* **`opg_amount`**: Minimum number of OPG tokens required (e.g. ``0.1``
for 0.1 OPG). Must be at least 0.1 OPG.

**Returns**

Expand All @@ -211,5 +211,5 @@ Permit2ApprovalResult: Contains ``allowance_before``,

**Raises**

* **`ValueError`**: If the OPG amount is less than 0.05.
* **`ValueError`**: If the OPG amount is less than 0.1.
* **`RuntimeError`**: If the approval transaction fails.
2 changes: 1 addition & 1 deletion docs/opengradient/index.md
Expand Up @@ -6,7 +6,7 @@ opengradient

# Package opengradient

**Version: 0.9.0**
**Version: 0.9.2**

OpenGradient Python SDK for decentralized AI inference with end-to-end verification.

Expand Down
2 changes: 1 addition & 1 deletion integrationtest/llm/test_llm_chat.py
Expand Up @@ -23,7 +23,7 @@
]

# Amount of OPG tokens to fund the test account with
OPG_FUND_AMOUNT = 0.05
OPG_FUND_AMOUNT = 0.1
# Amount of ETH to fund the test account with (for gas)
ETH_FUND_AMOUNT = 0.0001

Expand Down
4 changes: 2 additions & 2 deletions pyproject.toml
Expand Up @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "opengradient"
version = "0.9.1"
version = "0.9.3"
description = "Python SDK for OpenGradient decentralized model management & inference services"
authors = [{name = "OpenGradient", email = "adam@vannalabs.ai"}]
readme = "README.md"
Expand All @@ -27,7 +27,7 @@ dependencies = [
"langchain>=0.3.7",
"openai>=1.58.1",
"pydantic>=2.9.2",
"og-x402==0.0.1.dev2"
"og-x402==0.0.1.dev4"
]

[project.optional-dependencies]
Expand Down
35 changes: 29 additions & 6 deletions src/opengradient/client/_conversions.py
Expand Up @@ -36,11 +36,32 @@ def convert_to_fixed_point(number: float) -> Tuple[int, int]:
return value, decimals


def convert_fixed_point_to_python(value: int, decimals: int) -> np.float32:
"""
Converts a fixed-point representation back to a NumPy float32.

This function is intentionally type-stable and always returns np.float32,
regardless of the value of `decimals`. Callers that require integer
semantics should perform an explicit cast (e.g., int(...)) based on
their own dtype metadata or application logic.

Args:
value: The integer significand stored on-chain.
decimals: The scale factor exponent (value / 10**decimals).

Returns:
np.float32 corresponding to `value / 10**decimals`.
"""
return np.float32(Decimal(value) / (10 ** Decimal(decimals)))


def convert_to_float32(value: int, decimals: int) -> np.float32:
"""
Converts fixed point back into floating point
Deprecated: use convert_fixed_point_to_python() instead.

Returns an np.float32 type
Kept for backwards compatibility. New callers should use
convert_fixed_point_to_python which is type-stable and always
returns np.float32.
"""
return np.float32(Decimal(value) / (10 ** Decimal(decimals)))

Expand Down Expand Up @@ -131,10 +152,11 @@ def convert_to_model_output(event_data: AttributeDict) -> Dict[str, np.ndarray]:
name = tensor.get("name")
shape = tensor.get("shape")
values = []
# Convert from fixed point back into np.float32
# Use convert_fixed_point_to_python, the type-stable conversion that
# always returns np.float32 (supersedes convert_to_float32; see issue #103).
for v in tensor.get("values", []):
if isinstance(v, (AttributeDict, dict)):
values.append(convert_to_float32(value=int(v.get("value")), decimals=int(v.get("decimals"))))
values.append(convert_fixed_point_to_python(value=int(v.get("value")), decimals=int(v.get("decimals"))))
else:
logging.warning(f"Unexpected number type: {type(v)}")
output_dict[name] = np.array(values).reshape(shape)
Expand Down Expand Up @@ -183,10 +205,11 @@ def convert_array_to_model_output(array_data: List) -> ModelOutput:
values = tensor[1]
shape = tensor[2]

# Convert from fixed point into np.float32
# Use convert_fixed_point_to_python, the type-stable conversion that
# always returns np.float32 (supersedes convert_to_float32; see issue #103).
converted_values = []
for value in values:
converted_values.append(convert_to_float32(value=value[0], decimals=value[1]))
converted_values.append(convert_fixed_point_to_python(value=value[0], decimals=value[1]))

number_data[name] = np.array(converted_values).reshape(shape)

Expand Down
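The type-stability guarantee documented in the new docstring can be exercised directly. This sketch inlines the one-line implementation so it runs standalone (the real function lives in `src/opengradient/client/_conversions.py`); both a fractional value and a zero-decimal value come back as `np.float32`:

```python
from decimal import Decimal

import numpy as np


def convert_fixed_point_to_python(value: int, decimals: int) -> np.float32:
    # Type-stable by design: always np.float32, even when decimals == 0.
    return np.float32(Decimal(value) / (10 ** Decimal(decimals)))


# Fractional fixed-point value: 12345 with 2 decimals -> 123.45
fractional = convert_fixed_point_to_python(12345, 2)

# Zero-decimal value: still np.float32, not int; callers needing integer
# semantics must cast explicitly, e.g. int(convert_fixed_point_to_python(7, 0)).
whole = convert_fixed_point_to_python(7, 0)

print(type(fractional), type(whole))
```
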
5 changes: 5 additions & 0 deletions src/opengradient/client/_utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -49,6 +49,9 @@ def run_with_retry(
"""
effective_retries = max_retries if max_retries is not None else DEFAULT_MAX_RETRY

if effective_retries < 1:
raise ValueError(f"max_retries must be at least 1, got {effective_retries}")

for attempt in range(effective_retries):
try:
return txn_function()
Expand All @@ -62,3 +65,5 @@ def run_with_retry(
continue

raise

raise RuntimeError(f"run_with_retry exhausted {effective_retries} attempts without returning or raising")
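The guarded retry loop above can be sketched as follows. `DEFAULT_MAX_RETRY` and the `retry_delay` parameter are assumptions for illustration (the real signature and exception filtering live in `_utils.py`); the trailing `RuntimeError` mirrors the diff's defensive guard, which is unreachable once `effective_retries >= 1` is enforced:

```python
import time

DEFAULT_MAX_RETRY = 3  # assumed default; the real constant is defined in _utils.py


def run_with_retry(txn_function, max_retries=None, retry_delay=0.0):
    # Minimal reconstruction of the retry loop shown in this diff.
    effective_retries = max_retries if max_retries is not None else DEFAULT_MAX_RETRY

    if effective_retries < 1:
        raise ValueError(f"max_retries must be at least 1, got {effective_retries}")

    for attempt in range(effective_retries):
        try:
            return txn_function()
        except Exception:
            if attempt < effective_retries - 1:
                time.sleep(retry_delay)
                continue
            raise  # last attempt: propagate the failure

    # Defensive guard: makes the "always return or raise" contract explicit.
    raise RuntimeError(
        f"run_with_retry exhausted {effective_retries} attempts without returning or raising"
    )


# Usage: a function that fails twice, then succeeds on the third attempt.
calls = []


def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient failure")
    return "ok"


result = run_with_retry(flaky, max_retries=5)
print(result, len(calls))
```
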
37 changes: 33 additions & 4 deletions src/opengradient/client/alpha.py
Original file line number Diff line number Diff line change
Expand Up @@ -119,9 +119,19 @@ def execute_transaction():
model_output = convert_to_model_output(parsed_logs[0]["args"])
if len(model_output) == 0:
# check inference directly from node
parsed_logs = precompile_contract.events.ModelInferenceEvent().process_receipt(tx_receipt, errors=DISCARD)
inference_id = parsed_logs[0]["args"]["inferenceID"]
precompile_logs = precompile_contract.events.ModelInferenceEvent().process_receipt(tx_receipt, errors=DISCARD)
if not precompile_logs:
raise RuntimeError(
"ModelInferenceEvent not found in transaction logs. "
"Cannot fall back to node-side inference result."
)
inference_id = precompile_logs[0]["args"]["inferenceID"]
inference_result = self._get_inference_result_from_node(inference_id, inference_mode)
if inference_result is None:
raise RuntimeError(
f"Inference node returned no result for inference ID {inference_id!r}. "
"The result may not be available yet — retry after a short delay."
)
model_output = convert_to_model_output(inference_result)

return InferenceResult(tx_hash.hex(), model_output)
Expand Down Expand Up @@ -315,7 +325,7 @@ def deploy_transaction():
signed_txn = self._wallet_account.sign_transaction(transaction)
tx_hash = self._blockchain.eth.send_raw_transaction(signed_txn.raw_transaction)

tx_receipt = self._blockchain.eth.wait_for_transaction_receipt(tx_hash, timeout=60)
tx_receipt = self._blockchain.eth.wait_for_transaction_receipt(tx_hash, timeout=INFERENCE_TX_TIMEOUT)

if tx_receipt["status"] == 0:
raise Exception(f"Contract deployment failed, transaction hash: {tx_hash.hex()}")
Expand Down Expand Up @@ -419,11 +429,30 @@ def run_workflow(self, contract_address: str) -> ModelOutput:
nonce = self._blockchain.eth.get_transaction_count(self._wallet_account.address, "pending")

run_function = contract.functions.run()

# Estimate gas instead of using a hardcoded 30M limit, which is wasteful
# and may exceed the block gas limit on some networks.
try:
estimated_gas = run_function.estimate_gas({"from": self._wallet_account.address})
gas_limit = int(estimated_gas * 3)
except ContractLogicError as exc:
# Estimation failed due to a contract revert — simulate the call to
# surface the revert reason and avoid sending a transaction that will fail.
try:
run_function.call({"from": self._wallet_account.address})
except ContractLogicError as call_exc:
# Re-raise the detailed revert reason from the simulated call.
raise call_exc
# If the simulated call somehow doesn't raise, re-raise the original error.
raise exc
except Exception:
gas_limit = 30000000 # Conservative fallback for transient/RPC estimation errors

transaction = run_function.build_transaction(
{
"from": self._wallet_account.address,
"nonce": nonce,
"gas": 30000000,
"gas": gas_limit,
"gasPrice": self._blockchain.eth.gas_price,
"chainId": self._blockchain.eth.chain_id,
}
Expand Down
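The estimate-then-fallback gas logic added above can be distilled to this sketch. The 3× safety multiplier and the 30M conservative fallback mirror the diff; `estimate_gas` and `simulate_call` are stand-in callables for web3's `ContractFunction.estimate_gas` and `ContractFunction.call`, and this local `ContractLogicError` stands in for `web3.exceptions.ContractLogicError`:

```python
class ContractLogicError(Exception):
    """Stand-in for web3.exceptions.ContractLogicError (contract revert)."""


FALLBACK_GAS = 30_000_000  # conservative fallback, as in the diff


def choose_gas_limit(estimate_gas, simulate_call):
    try:
        # Headroom multiplier over the node's estimate, matching the diff.
        return int(estimate_gas() * 3)
    except ContractLogicError as exc:
        # Estimation failed due to a revert: re-run as a read-only call to
        # surface the detailed revert reason before any transaction is sent.
        simulate_call()  # expected to raise ContractLogicError with details
        # If the simulated call somehow doesn't raise, re-raise the original.
        raise exc
    except Exception:
        # Transient/RPC estimation errors: fall back conservatively.
        return FALLBACK_GAS


# Usage: successful estimate, transient RPC failure, and a contract revert.
ok_limit = choose_gas_limit(lambda: 100_000, lambda: None)


def rpc_down():
    raise RuntimeError("rpc unavailable")


fallback_limit = choose_gas_limit(rpc_down, lambda: None)
print(ok_limit, fallback_limit)
```

A revert during estimation is the one case where no gas limit is returned at all: the simulated call re-raises so the caller sees the revert reason instead of paying for a doomed transaction.
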