Add OCI LangChain support for hosted Nemotron workflows #1804
fede-kamel wants to merge 2 commits into NVIDIA:develop
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Walkthrough
Adds OCI Generative AI support.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant WorkflowBuilder
    participant Registry
    participant ProviderFactory
    participant LangChainWrapper
    participant OCI_SDK
    Client->>WorkflowBuilder: build workflow with `oci_llm` config
    WorkflowBuilder->>Registry: resolve provider for OCIModelConfig
    Registry-->>WorkflowBuilder: provider factory (oci_llm)
    WorkflowBuilder->>ProviderFactory: invoke async oci_llm(config)
    ProviderFactory-->>WorkflowBuilder: yield LLMProviderInfo
    alt LangChain wrapper requested
        WorkflowBuilder->>LangChainWrapper: request LangChain wrapper
        LangChainWrapper->>OCI_SDK: instantiate ChatOCIGenAI (model_id, endpoint, auth, model_kwargs)
        OCI_SDK-->>LangChainWrapper: model responses / streaming
        LangChainWrapper-->>Client: return wrapped model interface / outputs
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Force-pushed from `6b1ec5b` to `3daa5eb`.
Actionable comments posted: 5
🧹 Nitpick comments (1)
packages/nvidia_nat_core/src/nat/llm/oci_llm.py (1)
75-78: Type and document the new public provider factory.
`oci_llm` is part of the core public surface, but it has neither a return annotation nor a docstring. Please add the `AsyncIterator[LLMProviderInfo]` return type and a short Google-style docstring while this API is new.

✍️ Proposed cleanup

```diff
+from collections.abc import AsyncIterator
+
 from pydantic import AliasChoices
 from pydantic import ConfigDict
 from pydantic import Field
@@
 @register_llm_provider(config_type=OCIModelConfig)
-async def oci_llm(config: OCIModelConfig, _builder: Builder):
+async def oci_llm(config: OCIModelConfig, _builder: Builder) -> AsyncIterator[LLMProviderInfo]:
+    """Yield provider metadata for an OCI Generative AI model.
+
+    Args:
+        config: OCI model configuration.
+        _builder: Builder instance.
+
+    Yields:
+        `LLMProviderInfo` describing the configured OCI model.
+    """
     yield LLMProviderInfo(config=config, description="An OCI Generative AI model for use with an LLM client.")
```

As per coding guidelines, "All public APIs require Python 3.11+ type hints on parameters and return values" and "Provide Google-style docstrings for every public module, class, function and CLI command".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py` around lines 75 - 78, The public factory function oci_llm lacks a return type and docstring; update its signature to return AsyncIterator[LLMProviderInfo] and add a short Google-style docstring describing that it yields an LLMProviderInfo for an OCI Generative AI model, its parameters (config: OCIModelConfig, _builder: Builder), and its behavior; also import AsyncIterator from collections.abc so the annotation resolves. Ensure references to LLMProviderInfo, OCIModelConfig, and Builder remain accurate in the docstring and signature.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/source/build-workflows/llms/index.md`:
- Around line 31-32: The docs table and sample config for the `oci` provider
describe an OpenAI-compatible API shape (api_key, OCI_GENAI_* env vars) but the
implementation uses the SDK-backed OCIModelConfig with fields like
`compartment_id`, `auth_type`, `auth_profile`, `auth_file_location` and optional
`provider`; update the `oci` entry and all sample snippets (the table row and
the sample config blocks referenced) to reflect the actual `OCIModelConfig`
fields and example values, remove or replace references to
`api_key`/OpenAI-compatible env vars, and ensure examples include
`compartment_id`, `auth_type`, `auth_profile`/`auth_file_location` and the
`provider` field where appropriate so the docs match the `OCIModelConfig`
implementation.
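To make the requested docs change concrete, here is a minimal sketch, as plain Python data, of a sample config carrying the actual `OCIModelConfig` fields instead of an OpenAI-style `api_key`. All values are placeholders, and the `_type` key and model id are assumptions for illustration, not taken from the PR:

```python
# Placeholder values only; "_type" and "model_name" are assumed keys for
# illustration, not confirmed from the PR's implementation.
sample_oci_llm_config = {
    "_type": "oci",
    "model_name": "example.model-id",
    "compartment_id": "ocid1.compartment.oc1..example",
    "auth_type": "API_KEY",
    "auth_profile": "DEFAULT",              # a ~/.oci/config profile name
    "auth_file_location": "~/.oci/config",
    "provider": "meta",                     # optional model-provider hint
}

oci_sdk_fields = {"compartment_id", "auth_type", "auth_profile", "auth_file_location", "provider"}
assert oci_sdk_fields <= sample_oci_llm_config.keys()
assert "api_key" not in sample_oci_llm_config  # no OpenAI-style key field
```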
In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`:
- Around line 63-69: The test currently seeds registry._llm_provider_map
directly which masks registration bugs; instead patch
nat.cli.type_registry.GlobalTypeRegistry and let importing nat.llm.oci_llm
invoke the `@register_llm_provider` decorator, then inspect the mocked registry to
assert that OCIModelConfig is registered to the oci_llm provider (use the same
identifiers: test_oci_provider_registration, GlobalTypeRegistry, OCIModelConfig,
oci_llm, and the register_llm_provider decorator) rather than writing to
_llm_provider_map yourself.
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Around line 239-259: The OCIModelConfig's max_retries and request_timeout are
never applied—update the oci_langchain() wrapper to preconfigure an OCI SDK
client with the desired retry and timeout settings (based on
OCIModelConfig.max_retries and request_timeout) and pass that SDK client into
ChatOCIGenAI via its client= parameter (instead of expecting ChatOCIGenAI to
accept those kwargs), referencing the ChatOCIGenAI(...) instantiation and
OCIModelConfig fields; alternatively, if you choose not to support custom
retry/timeout, remove max_retries and request_timeout from OCIModelConfig and
any docs/usages in oci_langchain() so the public config surface no longer
exposes unsupported fields.
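The suggested retry/timeout wiring can be sketched with stand-in classes: config values flow into the SDK client you build, and that client is injected via `client=`. All names below (`SdkClientSketch`, `ChatWrapperSketch`, their constructor parameters) are illustrative, not the real `oci` SDK or `ChatOCIGenAI` signatures:

```python
from dataclasses import dataclass

@dataclass
class OCIModelConfigSketch:
    """Stand-in for OCIModelConfig; only the two fields under discussion."""
    max_retries: int = 10
    request_timeout: float = 60.0

class SdkClientSketch:
    """Stand-in for an OCI SDK client configured with retry/timeout."""
    def __init__(self, max_attempts: int, timeout: float):
        self.max_attempts = max_attempts
        self.timeout = timeout

class ChatWrapperSketch:
    """Stand-in for ChatOCIGenAI's client= injection point."""
    def __init__(self, client: SdkClientSketch):
        self.client = client

def build_chat_client(cfg: OCIModelConfigSketch) -> ChatWrapperSketch:
    # The config's retry/timeout land on the SDK client, not on the wrapper's
    # kwargs, which is the shape the review comment asks for.
    sdk_client = SdkClientSketch(max_attempts=cfg.max_retries, timeout=cfg.request_timeout)
    return ChatWrapperSketch(client=sdk_client)

chat = build_chat_client(OCIModelConfigSketch(max_retries=3, request_timeout=15.0))
assert chat.client.max_attempts == 3
assert chat.client.timeout == 15.0
```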
In `@packages/nvidia_nat_langchain/tests/test_langchain_agents.py`:
- Around line 145-150: The test sets up OCIModelConfig in llm_config using
os.environ.get("OCI_GENAI_AUTH_PROFILE", "API_KEY_AUTH") which uses an auth type
as the fallback; update the fallback to the OCI profile default by using
"DEFAULT" instead of "API_KEY_AUTH" so OCIModelConfig receives a valid profile
name when OCI_GENAI_AUTH_PROFILE is unset.
In `@packages/nvidia_nat_test/src/nat/test/plugin.py`:
- Line 155: Add explicit return type annotations to the new pytest fixture
functions; for example change def oci_nemotron_endpoint_fixture(fail_missing:
bool): to def oci_nemotron_endpoint_fixture(fail_missing: bool) ->
typing.Generator[Any, None, None]: (and do the same for the other new fixture
defined nearby) and ensure typing.Generator and Any are imported from typing at
the top of the module if not already present.
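A minimal sketch of the requested annotation shape follows; the `@pytest.fixture` decorator is omitted so the snippet stays self-contained, and the endpoint value is a placeholder:

```python
import typing
from typing import Any

# The @pytest.fixture decorator is omitted so this runs standalone; in the
# test suite the same annotated signature would sit under that decorator.
def oci_nemotron_endpoint_fixture(fail_missing: bool) -> typing.Generator[Any, None, None]:
    """Yield a stand-in endpoint, the way a generator-style fixture would."""
    endpoint = "https://example.invalid/nemotron"  # placeholder value
    yield endpoint
    # teardown code would run here after the test finishes

gen = oci_nemotron_endpoint_fixture(fail_missing=False)
assert next(gen) == "https://example.invalid/nemotron"
```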
---
Nitpick comments:
In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py`:
- Around line 75-78: The public factory function oci_llm lacks a return type and
docstring; update its signature to return AsyncIterator[LLMProviderInfo] and add
a short Google-style docstring describing that it yields an LLMProviderInfo for
an OCI Generative AI model, its parameters (config: OCIModelConfig, _builder:
Builder), and its behavior; also import AsyncIterator from collections.abc so
the annotation resolves. Ensure references to
LLMProviderInfo, OCIModelConfig, and Builder remain accurate in the docstring
and signature.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a1bc7293-0222-46e6-b9ee-400069afb2ae
⛔ Files ignored due to path filters (1)
`packages/nvidia_nat_langchain/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- docs/source/build-workflows/llms/index.md
- docs/source/components/integrations/index.md
- docs/source/components/integrations/integrating-oci-generative-ai-models.md
- docs/source/conf.py
- docs/source/get-started/installation.md
- examples/frameworks/agno_personal_finance/pyproject.toml
- examples/frameworks/multi_frameworks/pyproject.toml
- packages/nvidia_nat_agno/pyproject.toml
- packages/nvidia_nat_core/src/nat/llm/oci_llm.py
- packages/nvidia_nat_core/src/nat/llm/register.py
- packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py
- packages/nvidia_nat_langchain/pyproject.toml
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_langchain/tests/test_langchain_agents.py
- packages/nvidia_nat_langchain/tests/test_llm_langchain.py
- packages/nvidia_nat_test/src/nat/test/plugin.py
Actionable comments posted: 2
♻️ Duplicate comments (4)
docs/source/build-workflows/llms/index.md (2)
126-148: ⚠️ Potential issue | 🟠 Major
Documentation describes OpenAI-compatible fields but `OCIModelConfig` uses OCI SDK auth.
The section lists `api_key` and `OCI_GENAI_API_KEY`/`OCI_GENAI_BASE_URL` environment variables, but `OCIModelConfig` in `oci_llm.py` uses OCI SDK authentication with fields like `compartment_id`, `auth_type`, `auth_profile`, and `auth_file_location`. There is no `api_key` field in `OCIModelConfig`. Update this section to match the actual implementation fields: `compartment_id`, `auth_type`, `auth_profile`, `auth_file_location`, and `provider`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/source/build-workflows/llms/index.md` around lines 126 - 148, The docs currently list API key fields but the implementation uses OCI SDK auth; update the OCI Hosted OpenAI-Compatible section to reflect the actual OCIModelConfig fields used in oci_llm.py (class OCIModelConfig) by replacing `api_key`/OCI_GENAI_* entries with the SDK auth fields: `compartment_id`, `auth_type`, `auth_profile`, `auth_file_location`, and `provider`; also note that this provider targets OCI-hosted OpenAI-compatible chat-completions endpoints (Responses API not enabled) and remove or correct any references to `OCI_GENAI_API_KEY`/`api_key` to avoid mismatch with OCIModelConfig.
31-32: ⚠️ Potential issue | 🟠 Major
Table and YAML example need OCI SDK auth fields.
The table description and YAML example should reflect that this uses OCI SDK authentication (`compartment_id`, `auth_type`, `auth_profile`), not OpenAI-style `api_key`. Update the example to include the required OCI fields.
Also applies to: 56-59
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/source/build-workflows/llms/index.md` around lines 31 - 32, Update the OCI entry and its YAML example so they use OCI SDK auth fields instead of OpenAI-style api_key: replace or augment the `oci` provider example to include `compartment_id`, `auth_type`, and `auth_profile` in the sample configuration and update the table description for "OCI Hosted OpenAI-Compatible" to state it uses OCI SDK authentication; ensure any example YAML referencing `api_key` for `oci` is changed to demonstrate `compartment_id`, `auth_type`, and `auth_profile` usage.

packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py (1)
63-69: ⚠️ Potential issue | 🟡 Minor
Registration test is self-seeded and doesn't validate the decorator.
The test writes `{OCIModelConfig: oci_llm}` into `registry._llm_provider_map` then asserts the same entry. This would pass even if `@register_llm_provider` was broken. Test the actual registration by importing the module and inspecting the real registry.

Suggested approach

```python
def test_oci_provider_registration():
    """Verify OCIModelConfig is registered via the decorator."""
    from nat.cli.type_registry import GlobalTypeRegistry

    # Import triggers registration via @register_llm_provider decorator
    from nat.llm.oci_llm import OCIModelConfig, oci_llm

    registry = GlobalTypeRegistry.get()
    assert registry._llm_provider_map.get(OCIModelConfig) is oci_llm
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py` around lines 63 - 69, The test currently seeds registry._llm_provider_map directly and doesn't exercise the `@register_llm_provider` decorator; change test_oci_provider_registration to import the module that defines OCIModelConfig and oci_llm (so the decorator runs), obtain the real registry via GlobalTypeRegistry.get() (or GlobalTypeRegistry.get.return_value if still patching the class but better: remove the patch), and assert registry._llm_provider_map.get(OCIModelConfig) is oci_llm; reference the symbols GlobalTypeRegistry, OCIModelConfig, oci_llm and the test function test_oci_provider_registration when making this change.

packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (1)
232-261: ⚠️ Potential issue | 🟠 Major
`max_retries` and `request_timeout` are exposed in config but not applied.
`OCIModelConfig` exposes `max_retries` and `request_timeout` as public fields, but they are not passed to `ChatOCIGenAI`. According to `langchain-oci` v0.2.4, these must be configured on the underlying OCI SDK client and passed via the `client=` parameter. Either implement proper support or remove these fields from `OCIModelConfig` and documentation.
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` around lines 232 - 261, The OCIModelConfig fields max_retries and request_timeout are not being applied in oci_langchain; update oci_langchain to build/configure the underlying OCI SDK client with those settings and pass it into ChatOCIGenAI via the client= parameter (instead of only using service_endpoint/compartment_id/auth_*), ensuring the SDK client's retry and timeout are set from llm_config.max_retries and llm_config.request_timeout before calling _patch_llm_based_on_config on the returned client.
🧹 Nitpick comments (4)
docs/source/build-workflows/llms/index.md (1)
126-126: Minor: Add hyphen for compound modifier.
Per static analysis hint, "OCI Hosted" should be hyphenated when used as a compound modifier.

Suggested fix

```diff
-### OCI Hosted OpenAI-Compatible
+### OCI-Hosted OpenAI-Compatible
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/source/build-workflows/llms/index.md` at line 126, The heading "OCI Hosted OpenAI-Compatible" should use a hyphenated compound modifier; update the markdown heading text "OCI Hosted OpenAI-Compatible" to "OCI-Hosted OpenAI-Compatible" so the compound modifier is correct (locate the heading string in the file and replace it accordingly).

packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (1)
257-257: Simplify `is_stream` assignment.
`OCIModelConfig` does not define a `stream` attribute, so `getattr(llm_config, "stream", False)` always returns `False`. Consider using `is_stream=False` directly for clarity.

Suggested fix

```diff
-        is_stream=getattr(llm_config, "stream", False),
+        is_stream=False,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` at line 257, The is_stream argument is being set via getattr(llm_config, "stream", False) even though OCIModelConfig has no stream attribute; replace that expression with the literal is_stream=False to make intent explicit and remove the unnecessary getattr usage (update the call site where is_stream=getattr(llm_config, "stream", False) and use is_stream=False, referencing the llm_config variable and the is_stream parameter).

packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py (1)
42-53: Test uses auth type instead of profile name.
Line 47 uses `auth_profile="API_KEY_AUTH"` but `auth_profile` should be an OCI config profile name (e.g., `"DEFAULT"` or `"API_KEY_AUTH"` as a profile section name). If `API_KEY_AUTH` is intended as a profile name, this is fine; otherwise, consider using `"DEFAULT"` for consistency with the model defaults.

Suggested fix for consistency

```diff
-        auth_profile="API_KEY_AUTH",
+        auth_profile="DEFAULT",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py` around lines 42 - 53, The test test_oci_llm_provider_yields_provider_info sets OCIModelConfig.auth_profile to "API_KEY_AUTH" which should be an OCI config profile name; update the test to use a canonical profile name like "DEFAULT" (or replace with the actual profile section name you expect) so OCIModelConfig in the test matches the intended OCI config profile; modify the OCIModelConfig instantiation in test_oci_llm_provider_yields_provider_info (used by oci_llm) to set auth_profile="DEFAULT" (or the real profile name) to keep the test consistent with model defaults and the oci_llm provider expectations.

packages/nvidia_nat_core/src/nat/llm/oci_llm.py (1)
60-60: Potential confusion between `max_retries` and `RetryMixin.num_retries`.
`OCIModelConfig` inherits from `RetryMixin` (which provides `num_retries` for the retry patching mechanism) and also defines its own `max_retries` field. Since `max_retries` isn't currently wired to `ChatOCIGenAI`, consider whether both fields are needed or if they should be consolidated.
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py` at line 60, OCIModelConfig defines max_retries while RetryMixin already provides num_retries, causing confusion; consolidate them by removing the standalone max_retries field and using the RetryMixin.num_retries value throughout, or (to preserve backwards compatibility) sync them by setting RetryMixin.num_retries = max_retries during OCIModelConfig initialization/validation; also update ChatOCIGenAI to read the retry count from config.num_retries (not config.max_retries) so the retry patching mechanism uses the single authoritative value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/source/components/integrations/integrating-oci-generative-ai-models.md`:
- Around line 50-52: The example config uses auth_profile incorrectly—replace
the auth_profile value so it contains an OCI config profile name (e.g.,
"DEFAULT") rather than an auth-type string; update the snippet that currently
sets auth_type: API_KEY and auth_profile: API_KEY_AUTH to use the proper OCI
profile name (per OCIModelConfig) and ensure auth_type remains the appropriate
enum if needed, so consumers know auth_profile refers to a ~/.oci/config profile
section like "DEFAULT".
- Around line 90-92: Update the Usage example so the auth_profile value matches
the earlier example by replacing the auth_profile entry from "API_KEY_AUTH" to
"DEFAULT"; locate the auth_profile setting in the usage block (near
compartment_id and other OCI config entries) and change its value to "DEFAULT"
to ensure consistency across examples.
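The `auth_profile` point in the two comments above is easier to see against the file it names: `~/.oci/config` is INI-style, and the profile selects a section, so the value must be a section name like `DEFAULT`, not an auth-type string. A self-contained illustration using the standard library (the config contents are placeholders; real OCI config parsing is done by the SDK, not `configparser`):

```python
import configparser

# Placeholder contents standing in for ~/.oci/config.
oci_config_text = """\
[DEFAULT]
user=ocid1.user.oc1..example
fingerprint=aa:bb:cc
key_file=~/.oci/key.pem
tenancy=ocid1.tenancy.oc1..example
region=us-ashburn-1
"""

# Shift configparser's special default section aside so [DEFAULT] reads as a
# plain section, the way the OCI config file format treats it.
parser = configparser.ConfigParser(default_section="__not_used__")
parser.read_string(oci_config_text)

# auth_profile="DEFAULT" names a real section; an auth-type string does not.
assert parser.has_section("DEFAULT")
assert not parser.has_section("API_KEY_AUTH")
assert parser["DEFAULT"]["region"] == "us-ashburn-1"
```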
---
Duplicate comments:
In `@docs/source/build-workflows/llms/index.md`:
- Around line 126-148: The docs currently list API key fields but the
implementation uses OCI SDK auth; update the OCI Hosted OpenAI-Compatible
section to reflect the actual OCIModelConfig fields used in oci_llm.py (class
OCIModelConfig) by replacing `api_key`/OCI_GENAI_* entries with the SDK auth
fields: `compartment_id`, `auth_type`, `auth_profile`, `auth_file_location`, and
`provider`; also note that this provider targets OCI-hosted OpenAI-compatible
chat-completions endpoints (Responses API not enabled) and remove or correct any
references to `OCI_GENAI_API_KEY`/`api_key` to avoid mismatch with
OCIModelConfig.
- Around line 31-32: Update the OCI entry and its YAML example so they use OCI
SDK auth fields instead of OpenAI-style api_key: replace or augment the `oci`
provider example to include `compartment_id`, `auth_type`, and `auth_profile` in
the sample configuration and update the table description for "OCI Hosted
OpenAI-Compatible" to state it uses OCI SDK authentication; ensure any example
YAML referencing `api_key` for `oci` is changed to demonstrate `compartment_id`,
`auth_type`, and `auth_profile` usage.
In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`:
- Around line 63-69: The test currently seeds registry._llm_provider_map
directly and doesn't exercise the `@register_llm_provider` decorator; change
test_oci_provider_registration to import the module that defines OCIModelConfig
and oci_llm (so the decorator runs), obtain the real registry via
GlobalTypeRegistry.get() (or GlobalTypeRegistry.get.return_value if still
patching the class but better: remove the patch), and assert
registry._llm_provider_map.get(OCIModelConfig) is oci_llm; reference the symbols
GlobalTypeRegistry, OCIModelConfig, oci_llm and the test function
test_oci_provider_registration when making this change.
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Around line 232-261: The OCIModelConfig fields max_retries and request_timeout
are not being applied in oci_langchain; update oci_langchain to build/configure
the underlying OCI SDK client with those settings and pass it into ChatOCIGenAI
via the client= parameter (instead of only using
service_endpoint/compartment_id/auth_*), ensuring the SDK client's retry and
timeout are set from llm_config.max_retries and llm_config.request_timeout
before calling _patch_llm_based_on_config on the returned client.
---
Nitpick comments:
In `@docs/source/build-workflows/llms/index.md`:
- Line 126: The heading "OCI Hosted OpenAI-Compatible" should use a hyphenated
compound modifier; update the markdown heading text "OCI Hosted
OpenAI-Compatible" to "OCI-Hosted OpenAI-Compatible" so the compound modifier is
correct (locate the heading string in the file and replace it accordingly).
In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py`:
- Line 60: OCIModelConfig defines max_retries while RetryMixin already provides
num_retries, causing confusion; consolidate them by removing the standalone
max_retries field and using the RetryMixin.num_retries value throughout, or (to
preserve backwards compatibility) sync them by setting RetryMixin.num_retries =
max_retries during OCIModelConfig initialization/validation; also update
ChatOCIGenAI to read the retry count from config.num_retries (not
config.max_retries) so the retry patching mechanism uses the single
authoritative value.
In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`:
- Around line 42-53: The test test_oci_llm_provider_yields_provider_info sets
OCIModelConfig.auth_profile to "API_KEY_AUTH" which should be an OCI config
profile name; update the test to use a canonical profile name like "DEFAULT" (or
replace with the actual profile section name you expect) so OCIModelConfig in
the test matches the intended OCI config profile; modify the OCIModelConfig
instantiation in test_oci_llm_provider_yields_provider_info (used by oci_llm) to
set auth_profile="DEFAULT" (or the real profile name) to keep the test
consistent with model defaults and the oci_llm provider expectations.
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Line 257: The is_stream argument is being set via getattr(llm_config,
"stream", False) even though OCIModelConfig has no stream attribute; replace
that expression with the literal is_stream=False to make intent explicit and
remove the unnecessary getattr usage (update the call site where
is_stream=getattr(llm_config, "stream", False) and use is_stream=False,
referencing the llm_config variable and the is_stream parameter).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ce069a94-587f-4879-bad4-fca9f7ec4ba0
⛔ Files ignored due to path filters (2)
`packages/nvidia_nat_langchain/uv.lock` is excluded by `!**/*.lock`
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- docs/source/build-workflows/llms/index.md
- docs/source/components/integrations/index.md
- docs/source/components/integrations/integrating-oci-generative-ai-models.md
- docs/source/conf.py
- docs/source/get-started/installation.md
- examples/frameworks/agno_personal_finance/pyproject.toml
- examples/frameworks/multi_frameworks/pyproject.toml
- packages/nvidia_nat_agno/pyproject.toml
- packages/nvidia_nat_core/src/nat/llm/oci_llm.py
- packages/nvidia_nat_core/src/nat/llm/register.py
- packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py
- packages/nvidia_nat_langchain/pyproject.toml
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
- packages/nvidia_nat_langchain/tests/test_langchain_agents.py
- packages/nvidia_nat_langchain/tests/test_llm_langchain.py
- packages/nvidia_nat_test/src/nat/test/plugin.py
- pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (7)
- examples/frameworks/agno_personal_finance/pyproject.toml
- packages/nvidia_nat_core/src/nat/llm/register.py
- docs/source/get-started/installation.md
- packages/nvidia_nat_langchain/pyproject.toml
- examples/frameworks/multi_frameworks/pyproject.toml
- packages/nvidia_nat_langchain/tests/test_llm_langchain.py
- packages/nvidia_nat_test/src/nat/test/plugin.py
Resolved comments (2): docs/source/components/integrations/integrating-oci-generative-ai-models.md
Force-pushed from `5f441da` to `50b208c`.
Actionable comments posted: 2
🧹 Nitpick comments (5)
packages/nvidia_nat_core/src/nat/llm/oci_llm.py (1)
62-62: Consider documenting the distinction between `max_retries` and `num_retries`.
`OCIModelConfig` inherits `RetryMixin` which provides `num_retries` (used by NAT's `_patch_llm_based_on_config` for application-level retries), while `max_retries` is defined here for OCI SDK-level retries. This dual-retry mechanism is intentional but could be confusing. Consider adding a brief note in the field description clarifying this is for OCI SDK retries specifically.

📝 Suggested clarification

```diff
-    max_retries: int = Field(default=10, description="The max number of retries for the request.")
+    max_retries: int = Field(default=10, description="The max number of OCI SDK-level retries for the request.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py` at line 62, The max_retries field in OCIModelConfig currently reads generically; clarify that it controls OCI SDK-level retries (not NAT's application-level retry loop) by updating the Field description to state it is for OCI SDK/client retries, while num_retries (from RetryMixin) is used by NAT's application-level retry logic in _patch_llm_based_on_config. Reference OCIModelConfig.max_retries, RetryMixin.num_retries and the _patch_llm_based_on_config usage to ensure the distinction is clear in the docstring.

packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py (1)
65-76: LGTM - Registration test properly validates decorator behavior.
The test now correctly patches `GlobalTypeRegistry`, reimports the module to trigger decorator registration, and asserts the registration call was made with the correct config type and build function. This addresses the previous review comment about self-seeding.

Note: The file is missing a trailing newline at line 76.

📝 Add trailing newline

```diff
     assert info.config_type is module.OCIModelConfig
     assert info.build_fn is module.oci_llm
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py` around lines 65 - 76, Add a missing trailing newline at EOF of the test file containing the test_oci_provider_registration function so the file ends with a newline character; simply edit the file to insert a single newline after the last line (after the end of the test function) to satisfy POSIX/formatter expectations.

packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (3)
232-277: Add a docstring for the wrapper function.
Per coding guidelines, public APIs require Google-style docstrings. This wrapper follows the same pattern as other LangChain wrappers (e.g., `aws_bedrock_langchain`, `nim_langchain`) which also lack docstrings, but adding one would improve documentation consistency.

📝 Suggested docstring

```diff
 @register_llm_client(config_type=OCIModelConfig, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
 async def oci_langchain(llm_config: OCIModelConfig, _builder: Builder):
+    """Create a LangChain ChatOCIGenAI client for OCI Generative AI.
+
+    Args:
+        llm_config: OCI model configuration.
+        _builder: Builder instance.
+
+    Yields:
+        A ChatOCIGenAI client configured for the specified OCI model.
+    """
     import oci
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` around lines 232 - 277, Add a Google-style docstring to the public wrapper function oci_langchain describing its purpose, parameters, and yield value consistent with other LangChain wrappers; mention that it registers an OCI Generative AI client, explain the llm_config (OCIModelConfig) and _builder parameters, note returned/yielded value is a configured LangChain-compatible chat model (result of _patch_llm_based_on_config), and include summary, Args, and Yields sections to match project docstring conventions.
58-61: Consider adding a return type hint.
For consistency with coding guidelines requiring type hints on all public APIs, and for better IDE support, consider adding a return type annotation.

♻️ Suggested improvement

```diff
-def _get_langchain_oci_chat_model():
+def _get_langchain_oci_chat_model() -> type:
     from langchain_oci import ChatOCIGenAI
     return ChatOCIGenAI
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` around lines 58 - 61, The helper _get_langchain_oci_chat_model should have an explicit return type; update its signature to include a type hint returning the class object (e.g., -> Type[ChatOCIGenAI]) and ensure typing.Type is imported (either at module top or inside the function) so the signature becomes _get_langchain_oci_chat_model() -> Type[ChatOCIGenAI]; keep the local import of ChatOCIGenAI and only adjust the function signature and imports to satisfy the public API typing guideline.
273-273: Consider adding a `stream` field to `OCIModelConfig` if streaming is intended.
`OCIModelConfig` does not define a `stream` field. The expression `getattr(llm_config, "stream", False)` will always return `False` for OCI configurations. If streaming support is intended for OCI, add a `stream` field to `OCIModelConfig`. Otherwise, simplify this line to `is_stream=False` for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` at line 273, The call to set is_stream uses getattr(llm_config, "stream", False) but OCIModelConfig does not define a stream field, so streaming will never be enabled for OCI; either add a boolean stream attribute to OCIModelConfig (e.g., stream: bool = False) and ensure code that constructs OCIModelConfig can set it, or change the caller to set is_stream=False explicitly; update references around the is_stream assignment and the OCIModelConfig definition to keep them consistent (look for OCIModelConfig and the line using getattr(llm_config, "stream", False) to apply the fix).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py`:
- Line 33: OCIModelConfig inherits SSLVerificationMixin exposing verify_ssl but
that value is never passed to the OCI client; either wire verify_ssl into the
OCI client construction or remove the mixin. Locate the OCI client creation in
oci_llm.py (the code that constructs the oci_langchain/OCI client inside the
OCIModel/OCI wrapper) and if the OCI SDK or oci_langchain wrapper supports
disabling SSL verification, pass OCIModelConfig.verify_ssl into the
client/configuration call (or into any HttpClient/requests adapter used);
otherwise remove SSLVerificationMixin from the OCIModelConfig class definition
so the unused verify_ssl field is not exposed. Ensure you reference
OCIModelConfig and the client construction code when making the change.
In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`:
- Around line 44-55: The test test_oci_llm_provider_yields_provider_info
references a missing pytest fixture mock_builder causing "fixture not found"
failures; add a fixture named mock_builder (returning a MagicMock) either by
creating a conftest.py in the same llm test directory with a pytest.fixture
named "mock_builder" that returns MagicMock(), or define the same fixture at the
top of this test file before test_oci_llm_provider_yields_provider_info; ensure
you import pytest and MagicMock and keep the fixture name exactly "mock_builder"
so the async test using oci_llm can use it.
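A sketch of what the missing fixture could look like. In a real `conftest.py` you would wrap the factory with `@pytest.fixture(name="mock_builder")`; it is shown bare here so the snippet stays self-contained, and the helper name is an assumption, not the repository's actual code:

```python
# Hypothetical stand-in for the missing `mock_builder` fixture.
from unittest.mock import MagicMock


def make_mock_builder() -> MagicMock:
    """Return a stand-in Builder object for provider-factory tests.

    In conftest.py this body would sit under
    `@pytest.fixture(name="mock_builder")` so tests can request it
    by the exact name `mock_builder`.
    """
    builder = MagicMock()
    # Any attribute access returns another MagicMock, which is enough
    # for tests that only need an object shaped like a Builder.
    return builder


builder = make_mock_builder()
assert isinstance(builder, MagicMock)
assert callable(builder.get_llm)  # arbitrary attributes resolve to mocks
```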
---
Nitpick comments:
In `@packages/nvidia_nat_core/src/nat/llm/oci_llm.py`:
- Line 62: The max_retries field in OCIModelConfig currently reads generically;
clarify that it controls OCI SDK-level retries (not NAT's application-level
retry loop) by updating the Field description to state it is for OCI SDK/client
retries, while num_retries (from RetryMixin) is used by NAT's application-level
retry logic in _patch_llm_based_on_config. Reference OCIModelConfig.max_retries,
RetryMixin.num_retries and the _patch_llm_based_on_config usage to ensure the
distinction is clear in the docstring.
In `@packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`:
- Around line 65-76: Add a missing trailing newline at EOF of the test file
containing the test_oci_provider_registration function so the file ends with a
newline character; simply edit the file to insert a single newline after the
last line (after the end of the test function) to satisfy POSIX/formatter
expectations.
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Around line 232-277: Add a Google-style docstring to the public wrapper
function oci_langchain describing its purpose, parameters, and yield value
consistent with other LangChain wrappers; mention that it registers an OCI
Generative AI client, explain the llm_config (OCIModelConfig) and _builder
parameters, note returned/yielded value is a configured LangChain-compatible
chat model (result of _patch_llm_based_on_config), and include summary, Args,
and Yields sections to match project docstring conventions.
- Around line 58-61: The helper _get_langchain_oci_chat_model should have an
explicit return type; update its signature to include a type hint returning the
class object (e.g., -> Type[ChatOCIGenAI]) and ensure typing.Type is imported
(either at module top or inside the function) so the signature becomes
_get_langchain_oci_chat_model() -> Type[ChatOCIGenAI]; keep the local import of
ChatOCIGenAI and only adjust the function signature and imports to satisfy the
public API typing guideline.
- Line 273: The call to set is_stream uses getattr(llm_config, "stream", False)
but OCIModelConfig does not define a stream field, so streaming will never be
enabled for OCI; either add a boolean stream attribute to OCIModelConfig (e.g.,
stream: bool = False) and ensure code that constructs OCIModelConfig can set it,
or change the caller to set is_stream=False explicitly; update references around
the is_stream assignment and the OCIModelConfig definition to keep them
consistent (look for OCIModelConfig and the line using getattr(llm_config,
"stream", False) to apply the fix).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 40b7b228-d85f-4048-930c-fa04f1f216bb
⛔ Files ignored due to path filters (2)
- `packages/nvidia_nat_langchain/uv.lock` is excluded by `!**/*.lock`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- `docs/source/build-workflows/llms/index.md`
- `docs/source/components/integrations/index.md`
- `docs/source/components/integrations/integrating-oci-generative-ai-models.md`
- `docs/source/conf.py`
- `docs/source/get-started/installation.md`
- `examples/frameworks/agno_personal_finance/pyproject.toml`
- `examples/frameworks/multi_frameworks/pyproject.toml`
- `packages/nvidia_nat_agno/pyproject.toml`
- `packages/nvidia_nat_core/src/nat/llm/oci_llm.py`
- `packages/nvidia_nat_core/src/nat/llm/register.py`
- `packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`
- `packages/nvidia_nat_langchain/pyproject.toml`
- `packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`
- `packages/nvidia_nat_langchain/tests/test_langchain_agents.py`
- `packages/nvidia_nat_langchain/tests/test_llm_langchain.py`
- `packages/nvidia_nat_test/src/nat/test/plugin.py`
- `pyproject.toml`
🚧 Files skipped from review as they are similar to previous changes (9)
- packages/nvidia_nat_langchain/tests/test_langchain_agents.py
- docs/source/build-workflows/llms/index.md
- packages/nvidia_nat_langchain/pyproject.toml
- docs/source/components/integrations/integrating-oci-generative-ai-models.md
- packages/nvidia_nat_agno/pyproject.toml
- docs/source/conf.py
- examples/frameworks/multi_frameworks/pyproject.toml
- pyproject.toml
- packages/nvidia_nat_test/src/nat/test/plugin.py
Hello @fede-kamel, is there an issue or some other thread where this feature was requested and discussed with the maintainers?

@AnuradhaKaruppiah thanks for raising this. There was not a linked tracking issue for this work before the PR, so I created #1809 and linked the request/discussion there. The issue captures the main context from this PR: OCI would like to create an official integration path in NAT similar to the existing cloud-provider integrations, rather than keeping OCI support as an ad hoc path. It also calls out the intended scope around a first-class integration. We can use #1809 as the maintainer discussion thread going forward.

@dagardner-nv @willkill07 @AnuradhaKaruppiah — Friendly follow-up! This PR adds full OCI Generative AI support for NeMo Agent Toolkit — async LLM provider, LangChain integration, docs, and tests. Would appreciate a review and the runner vetting when you get a chance. Happy to address any feedback. Thanks!

@dagardner-nv @willkill07 @AnuradhaKaruppiah — Hey team 👋 — just checking in on this one. Happy to address any feedback or make adjustments to scope if that helps move things along. Let me know if there's anything I can do on my end!
Force-pushed from `50b208c` to `1ea626a`
🧹 Nitpick comments (2)
packages/nvidia_nat_langchain/tests/test_llm_langchain.py (2)
203-215: Consider adding assertions for additional `ChatOCIGenAI` kwargs. The test verifies core kwargs, but the source code in `oci_langchain` also passes `auth_type`, `auth_file_location`, `provider`, and `is_stream` to `ChatOCIGenAI`. Adding assertions for these would improve coverage.

Additionally, for consistency and maintainability, consider using `oci_cfg` field references instead of literal values (e.g., `oci_cfg.auth_profile` instead of `"API_KEY_AUTH"`).

♻️ Suggested additions

```diff
 assert kwargs["auth_profile"] == "API_KEY_AUTH"
+assert kwargs["auth_type"] == oci_cfg.auth_type
+assert kwargs["auth_file_location"] == oci_cfg.auth_file_location
+assert kwargs["provider"] == oci_cfg.provider
+assert kwargs["is_stream"] is False  # default when stream not set
 assert kwargs["model_kwargs"] == {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/tests/test_llm_langchain.py` around lines 203 - 215, Update the test assertions in test_llm_langchain.py to verify the additional kwargs passed to ChatOCIGenAI: assert kwargs["auth_type"], kwargs["auth_file_location"], kwargs["provider"], and kwargs["is_stream"] match the expected values from the oci_langchain setup, and replace hardcoded literal comparisons (e.g., "API_KEY_AUTH", endpoint, compartment) with references to the test fixture/config object (oci_cfg.auth_profile, oci_cfg.auth_file_location, oci_cfg.provider, oci_cfg.service_endpoint, oci_cfg.compartment_id) so the assertions use those fields; ensure you still assert client is mock_chat_class.return_value and that model_kwargs remains unchanged.
156-176: Fixture naming is consistent with existing patterns but doesn't follow coding guidelines. The fixtures `oci_cfg` and `oci_cfg_wrong_api` match the naming style used by all other fixtures in this file. However, coding guidelines require fixtures to use a `fixture_` prefix or `_fixture` suffix with the `name` argument in the decorator.

Since this is a file-wide pattern, addressing it here alone would create inconsistency. Consider filing a follow-up issue to update all fixtures in this file if desired.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/tests/test_llm_langchain.py` around lines 156 - 176, The fixtures oci_cfg and oci_cfg_wrong_api do not follow the project's fixture naming convention; update their pytest.fixture declarations to provide a name that uses the required prefix/suffix (for example, change to `@pytest.fixture`(name="fixture_oci_cfg") or rename the function to fixture_oci_cfg and similarly for oci_cfg_wrong_api), and ensure any tests referencing these fixtures use the new fixture names; consider filing a follow-up to apply the same change to other fixtures in this file for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/nvidia_nat_langchain/tests/test_llm_langchain.py`:
- Around line 203-215: Update the test assertions in test_llm_langchain.py to
verify the additional kwargs passed to ChatOCIGenAI: assert kwargs["auth_type"],
kwargs["auth_file_location"], kwargs["provider"], and kwargs["is_stream"] match
the expected values from the oci_langchain setup, and replace hardcoded literal
comparisons (e.g., "API_KEY_AUTH", endpoint, compartment) with references to the
test fixture/config object (oci_cfg.auth_profile, oci_cfg.auth_file_location,
oci_cfg.provider, oci_cfg.service_endpoint, oci_cfg.compartment_id) so the
assertions use those fields; ensure you still assert client is
mock_chat_class.return_value and that model_kwargs remains unchanged.
- Around line 156-176: The fixtures oci_cfg and oci_cfg_wrong_api do not follow
the project's fixture naming convention; update their pytest.fixture
declarations to provide a name that uses the required prefix/suffix (for
example, change to `@pytest.fixture`(name="fixture_oci_cfg") or rename the
function to fixture_oci_cfg and similarly for oci_cfg_wrong_api), and ensure any
tests referencing these fixtures use the new fixture names; consider filing a
follow-up to apply the same change to other fixtures in this file for
consistency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5acd7af2-aaf8-4c28-afd3-060d876dbc24
⛔ Files ignored due to path filters (2)
- `packages/nvidia_nat_langchain/uv.lock` is excluded by `!**/*.lock`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- `docs/source/build-workflows/llms/index.md`
- `docs/source/components/integrations/index.md`
- `docs/source/components/integrations/integrating-oci-generative-ai-models.md`
- `docs/source/conf.py`
- `docs/source/get-started/installation.md`
- `examples/frameworks/agno_personal_finance/pyproject.toml`
- `examples/frameworks/multi_frameworks/pyproject.toml`
- `packages/nvidia_nat_agno/pyproject.toml`
- `packages/nvidia_nat_core/src/nat/llm/oci_llm.py`
- `packages/nvidia_nat_core/src/nat/llm/register.py`
- `packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py`
- `packages/nvidia_nat_langchain/pyproject.toml`
- `packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`
- `packages/nvidia_nat_langchain/tests/test_langchain_agents.py`
- `packages/nvidia_nat_langchain/tests/test_llm_langchain.py`
- `packages/nvidia_nat_test/src/nat/test/plugin.py`
- `pyproject.toml`
✅ Files skipped from review due to trivial changes (9)
- docs/source/components/integrations/index.md
- examples/frameworks/multi_frameworks/pyproject.toml
- docs/source/get-started/installation.md
- packages/nvidia_nat_langchain/pyproject.toml
- docs/source/conf.py
- packages/nvidia_nat_agno/pyproject.toml
- docs/source/build-workflows/llms/index.md
- examples/frameworks/agno_personal_finance/pyproject.toml
- packages/nvidia_nat_core/src/nat/llm/oci_llm.py
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/nvidia_nat_core/src/nat/llm/register.py
- pyproject.toml
- packages/nvidia_nat_langchain/tests/test_langchain_agents.py
- packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py
- packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py
Force-pushed from `077c87b` to `270a90c`
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py (1)
233-234: Add return type and docstring to the new public wrapper.
`oci_langchain` is public and should declare its return type and include a Google-style docstring. As per coding guidelines, "All public APIs require Python 3.11+ type hints on parameters and return values" and "Provide Google-style docstrings for every public module, class, function and CLI command".

♻️ Suggested update

```diff
 @register_llm_client(config_type=OCIModelConfig, wrapper_type=LLMFrameworkEnum.LANGCHAIN)
-async def oci_langchain(llm_config: OCIModelConfig, _builder: Builder):
+async def oci_langchain(llm_config: OCIModelConfig, _builder: Builder) -> AsyncIterator[Any]:
+    """Create a LangChain OCI Generative AI client from `OCIModelConfig`."""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py` around lines 233 - 234, The public function oci_langchain lacks a return type annotation and a Google-style docstring; update the function signature to include the appropriate Python 3.11+ return type hint (e.g., -> <ReturnType>) and add a Google-style docstring above oci_langchain that documents parameters (llm_config: OCIModelConfig, _builder: Builder) and the returned value, including exceptions raised and a short description of behavior so the public API is fully typed and documented.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Around line 260-262: The current code passes llm_config.max_retries directly
into oci.retry.RetryStrategyBuilder as max_attempts, but OCI treats max_attempts
as total attempts (initial + retries); update the call that builds the retry
strategy (the code that sets client_kwargs["retry_strategy"] using
oci.retry.RetryStrategyBuilder) to pass max_attempts=llm_config.max_retries + 1
so that llm_config.max_retries represents the intended number of retries (or
alternatively convert the field to a total-attempts name if you prefer); ensure
you update only the parameter passed to RetryStrategyBuilder to avoid changing
other logic.
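The off-by-one described above can be made explicit with a tiny helper — a sketch, assuming total-attempt semantics (the initial call counts toward `max_attempts`, which is how the review characterizes OCI's `RetryStrategyBuilder`):

```python
def retries_to_max_attempts(num_retries: int) -> int:
    """Convert a 'number of retries' setting into total-attempt semantics.

    Total attempts = 1 initial call + num_retries re-tries.
    """
    if num_retries < 0:
        raise ValueError("num_retries must be non-negative")
    return num_retries + 1


# max_retries=3 should allow 4 calls in total: 1 initial + 3 retries.
assert retries_to_max_attempts(3) == 4
# max_retries=0 still permits the single initial call.
assert retries_to_max_attempts(0) == 1
```

Passing `llm_config.max_retries` straight through as `max_attempts` would instead silently allow one fewer retry than the config promises.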
---
Nitpick comments:
In `@packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.py`:
- Around line 233-234: The public function oci_langchain lacks a return type
annotation and a Google-style docstring; update the function signature to
include the appropriate Python 3.11+ return type hint (e.g., -> <ReturnType>)
and add a Google-style docstring above oci_langchain that documents parameters
(llm_config: OCIModelConfig, _builder: Builder) and the returned value,
including exceptions raised and a short description of behavior so the public
API is fully typed and documented.
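The suggested signature shape — an async factory annotated as yielding its client, with a Google-style docstring — can be sketched generically. `SimpleClient` is a stand-in for the real LangChain model object, not the actual NAT API:

```python
# Sketch of an AsyncIterator-annotated provider wrapper.
import asyncio
from collections.abc import AsyncIterator


class SimpleClient:
    """Stand-in for a configured chat-model client."""

    def __init__(self, model_id: str) -> None:
        self.model_id = model_id


async def example_wrapper(model_id: str) -> AsyncIterator[SimpleClient]:
    """Yield a configured client.

    Args:
        model_id: Identifier of the model to wrap.

    Yields:
        A configured ``SimpleClient`` instance.
    """
    yield SimpleClient(model_id)


async def main() -> str:
    async for client in example_wrapper("demo-model"):
        return client.model_id
    return ""


assert asyncio.run(main()) == "demo-model"
```

Annotating the async generator as `AsyncIterator[...]` documents what callers receive from the registry without changing runtime behavior.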
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: db8aea89-35ba-4c88-8abb-2aa14a2bba1d
📒 Files selected for processing (2)
packages/nvidia_nat_langchain/src/nat/plugins/langchain/llm.pypackages/nvidia_nat_langchain/tests/test_llm_langchain.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nvidia_nat_langchain/tests/test_llm_langchain.py
Introduce OCIModelConfig with region-based endpoint derivation, OCI SDK auth fields, and provider override support. Register the provider alongside existing NIM/OpenAI/Bedrock providers. Includes unit tests for config defaults, region derivation, endpoint alias handling, and decorator-based registration.
Force-pushed from `8442009` to `f6252fe`
Wire oci_langchain wrapper using langchain-oci with OCI SDK client construction, max_completion_tokens for OpenAI models, retry and timeout support. Add OCI integration docs, test fixtures, and declare uv extra conflicts for langchain-oci compatibility.
Force-pushed from `f6252fe` to `ab7f9da`
✅ Full validation against live Nemotron on OCI

Just ran the complete OCI test suite — 11/11 tests pass — against a live Nemotron deployment.

What was tested

Infrastructure

The inference backend is a private OKE cluster running vLLM.

Test output

Rebased on latest `develop`.
@dagardner-nv @willkill07 @AnuradhaKaruppiah — Excited to share that we just validated the full OCI integration end-to-end with a live Nemotron model! 🚀 This also cross-validates with the OKE deployment cookbook over at NVIDIA-NeMo/Nemotron#117 — together they deliver a full OCI story for Nemotron: infrastructure + toolkit integration. Would love to get this across the finish line!
Summary
- `langchain-oci`, matching the workflow-layer integration shape used by AWS Bedrock in this repo
- `uv` extra conflicts so the workspace remains solvable when `langchain-oci` introduces `openai>=2` alongside existing `strands` and `vanna` surfaces

What Was Tested
```shell
PYTHONPATH=$(pwd)/packages/nvidia_nat_core/src:$(pwd)/packages/nvidia_nat_langchain/src:$(pwd)/packages/nvidia_nat_test/src \
  .venv/bin/pytest packages/nvidia_nat_core/tests/nat/llm/test_oci_llm.py \
  packages/nvidia_nat_langchain/tests/test_llm_langchain.py -q -k 'OCI or oci'

OCI_NEMOTRON_BASE_URL=http://127.0.0.1:8080/v1 OCI_NEMOTRON_MODEL=nvidia/Llama-3.1-Nemotron-Nano-8B-v1 \
  PYTHONPATH=$(pwd)/packages/nvidia_nat_core/src:$(pwd)/packages/nvidia_nat_langchain/src:$(pwd)/packages/nvidia_nat_test/src \
  .venv/bin/pytest packages/nvidia_nat_langchain/tests/test_langchain_agents.py -q --run_integration -k oci_hosted_nemotron_openai_compatible_agent

uv lock
```

Notes
- `nvidia/Llama-3.1-Nemotron-Nano-8B-v1`
- `nvidia_nat_langchain`

Summary by CodeRabbit
New Features
Documentation
Tests
Chores