From ab4b3efa5b8a722afed06d1ff77da303aba5ff2e Mon Sep 17 00:00:00 2001 From: cswatt Date: Mon, 20 Apr 2026 10:36:58 -0700 Subject: [PATCH 1/2] batch of changes --- content/en/agentic_onboarding/setup.md | 6 +- content/en/ai_agents_console/_index.md | 8 +- .../en/bits_ai/bits_ai_dev_agent/_index.md | 4 +- content/en/bits_ai/bits_ai_dev_agent/setup.md | 24 ++--- .../en/bits_ai/bits_ai_security_analyst.md | 26 +++--- content/en/bits_ai/bits_ai_sre/configure.md | 18 ++-- .../bits_ai/bits_ai_sre/investigate_issues.md | 36 ++++---- .../bits_ai/bits_ai_sre/knowledge_sources.md | 4 +- content/en/bits_ai/bits_assistant.md | 8 +- content/en/bits_ai/mcp_server/setup.md | 44 +++++----- content/en/change_tracking/_index.md | 18 ++-- content/en/change_tracking/feature_flags.md | 20 ++--- content/en/cloud_cost_management/_index.md | 12 +-- .../allocation/bigquery.md | 6 +- .../container_cost_allocation.mdoc.md | 24 ++--- .../allocation/custom_allocation_rules.md | 26 +++--- .../allocation/tag_pipelines.md | 42 ++++----- .../cost_changes/anomalies.md | 24 ++--- .../cost_changes/real_time_costs.md | 2 +- .../en/cloud_cost_management/datadog_costs.md | 10 +-- .../cloud_cost_management/planning/budgets.md | 66 +++++++------- .../planning/commitment_programs.md | 42 ++++----- .../planning/forecasting.md | 30 +++---- .../recommendations/_index.md | 22 ++--- .../recommendations/custom_recommendations.md | 20 ++--- .../cloud_cost_management/reporting/_index.md | 34 +++---- .../reporting/dashboards.md | 12 +-- .../reporting/explorer.md | 50 +++++------ .../reporting/scheduled_reports.md | 14 +-- content/en/cloud_cost_management/setup/aws.md | 74 ++++++++-------- .../en/cloud_cost_management/setup/azure.md | 88 +++++++++---------- .../en/cloud_cost_management/setup/custom.md | 2 +- .../setup/google_cloud.md | 10 +-- .../cloud_cost_management/setup/saas_costs.md | 70 +++++++-------- .../en/cloud_cost_management/tags/_index.md | 4 +- .../tags/multisource_querying.md | 18 ++-- 
.../tags/tag_explorer.md | 30 +++---- 37 files changed, 474 insertions(+), 474 deletions(-) diff --git a/content/en/agentic_onboarding/setup.md b/content/en/agentic_onboarding/setup.md index f063c5ce36a..1b02689da9c 100644 --- a/content/en/agentic_onboarding/setup.md +++ b/content/en/agentic_onboarding/setup.md @@ -41,7 +41,7 @@ To install the Datadog Onboarding Model Context Protocol (MCP) server, follow th 2. Select the MCP server installed in Step 1. You should see a `disconnected - Enter to login` message. Press Enter. 3. When you see the option to authenticate, press Enter. This brings you to the OAuth screen. -4. After authentication, choose **Open** to continue and grant access to your Datadog account. +4. After authentication, choose {{< ui >}}Open{{< /ui >}} to continue and grant access to your Datadog account. 5. Confirm that MCP tools appear under the **datadog-onboarding-{{< region-param key=dd_datacenter_lowercase >}}** server. {{< /site-region >}} @@ -59,8 +59,8 @@ To install the Datadog Onboarding Model Context Protocol (MCP) server, follow th {{< region-param key=cursor_mcp_install_deeplink >}} -2. In Cursor, click **Install** for the **datadog-onboarding-{{< region-param key=dd_datacenter_lowercase >}}** server. -3. If the MCP server shows a **Needs login** or **Connect** link, select it and complete the OAuth flow. When prompted, choose **Open** to continue and grant access to your Datadog account. +2. In Cursor, click {{< ui >}}Install{{< /ui >}} for the **datadog-onboarding-{{< region-param key=dd_datacenter_lowercase >}}** server. +3. If the MCP server shows a {{< ui >}}Needs login{{< /ui >}} or {{< ui >}}Connect{{< /ui >}} link, select it and complete the OAuth flow. When prompted, choose {{< ui >}}Open{{< /ui >}} to continue and grant access to your Datadog account. 4. After authentication, return to Cursor and confirm that MCP tools appear under the **datadog-onboarding-{{< region-param key=dd_datacenter_lowercase >}}** server. 
{{< /site-region >}} diff --git a/content/en/ai_agents_console/_index.md b/content/en/ai_agents_console/_index.md index 821a14fbaa4..3f2e66bcc36 100644 --- a/content/en/ai_agents_console/_index.md +++ b/content/en/ai_agents_console/_index.md @@ -35,7 +35,7 @@ AI Agents Console supports the following integrations: To monitor Claude Code with AI Agents Console, set up the [Anthropic Usage and Costs][4] integration. -After setup, navigate to the [AI Agents Console][1] and click the **Claude Code** tile to view metrics. +After setup, navigate to the [AI Agents Console][1] and click the {{< ui >}}Claude Code{{< /ui >}} tile to view metrics. #### Option 2: OpenTelemetry (OTLP) @@ -65,7 +65,7 @@ The following procedure configures Claude Code to send telemetry directly to Dat
To set up AI Agents Console for Claude Code across your organization, your IT team can use a Mobile Device Management (MDM) system or server-managed settings to distribute the Claude Code settings file across all managed devices.
4. Restart Claude Code. -After you restart Claude Code, navigate to the [AI Agents Console][1] in Datadog and click on the **Claude Code** tile. Metrics (usage, cost, latency, errors) should appear within a few minutes. +After you restart Claude Code, navigate to the [AI Agents Console][1] in Datadog and click on the {{< ui >}}Claude Code{{< /ui >}} tile. Metrics (usage, cost, latency, errors) should appear within a few minutes. #### Option 3: Forward data through the Datadog Agent @@ -100,13 +100,13 @@ After you restart Claude Code, navigate to the [AI Agents Console][1] in Datadog
To set up AI Agents Console for Claude Code across your organization, your IT team can use a Mobile Device Management (MDM) system or server-managed settings to distribute the Claude Code settings file across all managed devices.
5. Restart Claude Code. -After you restart Claude Code, navigate to the [AI Agents Console][1] in Datadog and click on the **Claude Code** tile. Metrics (usage, cost, latency, errors) should appear within a few minutes. +After you restart Claude Code, navigate to the [AI Agents Console][1] in Datadog and click on the {{< ui >}}Claude Code{{< /ui >}} tile. Metrics (usage, cost, latency, errors) should appear within a few minutes. ### Cursor To monitor Cursor with AI Agents Console, set up the [Cursor][5] integration using the Datadog Extension for Cursor. -After setup, navigate to the [AI Agents Console][1] and click the **Cursor** tile to view metrics. +After setup, navigate to the [AI Agents Console][1] and click the {{< ui >}}Cursor{{< /ui >}} tile to view metrics. ## Further reading diff --git a/content/en/bits_ai/bits_ai_dev_agent/_index.md b/content/en/bits_ai/bits_ai_dev_agent/_index.md index cd6f069bc45..dee153e8c50 100644 --- a/content/en/bits_ai/bits_ai_dev_agent/_index.md +++ b/content/en/bits_ai/bits_ai_dev_agent/_index.md @@ -49,7 +49,7 @@ Bits AI Dev Agent integrates with GitHub to create pull requests, respond to com **Note**: Comment `@Datadog` to prompt Bits for updates to the PR. Bits Dev never auto-merges PRs. -Go to **Bits AI** > **Dev Agent** > **[Code sessions][7]** to see all Dev Agent code sessions and generated PRs. You can search sessions and filter by service, product source, and status. +Go to {{< ui >}}Bits AI{{< /ui >}} > {{< ui >}}Dev Agent{{< /ui >}} > [{{< ui >}}Code sessions{{< /ui >}}][7] to see all Dev Agent code sessions and generated PRs. You can search sessions and filter by service, product source, and status. ### Auto-push @@ -74,7 +74,7 @@ In [Error Tracking][1], Bits AI Dev Agent diagnoses and remediates code issues w - Determines whether an error can be fixed through code and generates a fix with unit tests. - Provides links within the chat to relevant files and methods for streamlined navigation. 
- Analyzes errors asynchronously as they arrive. -- Marks errors with a **Fix available** status and enables filtering to surface those issues. +- Marks errors with a {{< ui >}}Fix available{{< /ui >}} status and enables filtering to surface those issues. [Auto-push](#auto-push) is available for this feature. diff --git a/content/en/bits_ai/bits_ai_dev_agent/setup.md b/content/en/bits_ai/bits_ai_dev_agent/setup.md index 0396c06746e..67de13f6d5d 100644 --- a/content/en/bits_ai/bits_ai_dev_agent/setup.md +++ b/content/en/bits_ai/bits_ai_dev_agent/setup.md @@ -17,22 +17,22 @@ If your organization uses custom roles, an admin must add this permission manual 1. Install the [GitHub integration][2]. For full installation and configuration steps, see the [GitHub integration guide][3]. -1. In your GitHub account, navigate to **Settings** > **Apps** > **Datadog** to configure GitHub permissions. +1. In your GitHub account, navigate to {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Apps{{< /ui >}} > {{< ui >}}Datadog{{< /ui >}} to configure GitHub permissions. 1. To enable basic Dev Agent functionality, set the following permissions: - - **Repository permissions** + - {{< ui >}}Repository permissions{{< /ui >}} - Repository contents: Read & write - Pull requests: Read & write - - **Subscribe to events** + - {{< ui >}}Subscribe to events{{< /ui >}} - Push 1. (Optional) To allow the Dev Agent to use CI logs when iterating on pull requests, you must send CI logs to Datadog and enable the [auto-push](#enable-auto-push) feature. This requires additional permissions: - - **Repository permissions** + - {{< ui >}}Repository permissions{{< /ui >}} - Checks: Read - Commit statuses: Read only - - **Subscribe to events** + - {{< ui >}}Subscribe to events{{< /ui >}} - Check run - Check suite - Issue comment @@ -48,10 +48,10 @@ Bits AI Dev Agent uses the `service` and `version` telemetry tags to match detec To configure telemetry tagging, see [Tag your APM telemetry with Git information][4]. 
-You can also configure service-to-repository mapping manually in the Bits AI Dev Agent settings under [**Repositories**][5] > **Service Repository Mapping**. +You can also configure service-to-repository mapping manually in the Bits AI Dev Agent settings under [{{< ui >}}Repositories{{< /ui >}}][5] > {{< ui >}}Service Repository Mapping{{< /ui >}}. ### Enable auto-push -To enable auto-push, so the Dev Agent can push commits directly to a branch, navigate to [**Bits AI Dev** > **Settings** > **General**][12] , and set the toggle to **Enable**. +To enable auto-push so that the Dev Agent can push commits directly to a branch, navigate to [{{< ui >}}Bits AI Dev{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}General{{< /ui >}}][12], and set the toggle to {{< ui >}}Enable{{< /ui >}}. **Note**: If auto-push is disabled, you must review and approve code in Datadog before the Dev Agent can push it. @@ -67,7 +67,7 @@ The Dev Agent ingests custom instruction files from your repository, including: - `agent.md` -You can also define global custom instructions, which apply to all Dev Agent sessions, in **Bits AI Dev** > [**Settings**][12] > **General**, in the **Global Agent Instructions** section. +You can also define global custom instructions, which apply to all Dev Agent sessions, in {{< ui >}}Bits AI Dev{{< /ui >}} > [{{< ui >}}Settings{{< /ui >}}][12] > {{< ui >}}General{{< /ui >}}, in the {{< ui >}}Global Agent Instructions{{< /ui >}} section. ## Environment setup @@ -75,7 +75,7 @@ Configure the Dev Agent's runtime environment, including network access policies ### Configure internet access -By default, the Dev Agent has **no internet access** during agent execution. To configure which external domains agents can reach, navigate to **Bits AI Dev** > [**Settings**][12] > **General**, and find the **Internet Access** section.
Choose from the following access policies: **No Internet Access**, **Default Allowlist**, **Custom + Default Allowlist**, or **Custom Allowlist**. +By default, the Dev Agent has no internet access during agent execution. To configure which external domains agents can reach, navigate to {{< ui >}}Bits AI Dev{{< /ui >}} > [{{< ui >}}Settings{{< /ui >}}][12] > {{< ui >}}General{{< /ui >}}, and find the {{< ui >}}Internet Access{{< /ui >}} section. Choose from the following access policies: {{< ui >}}No Internet Access{{< /ui >}}, {{< ui >}}Default Allowlist{{< /ui >}}, {{< ui >}}Custom + Default Allowlist{{< /ui >}}, or {{< ui >}}Custom Allowlist{{< /ui >}}. The default allowlist includes the following domains: @@ -96,10 +96,10 @@ Configure a custom environment for the Dev Agent to install dependencies, format To configure a repository environment: -1. Go to **Bits AI Dev** > **Settings** > [**Repositories**][5], and find the **Environments** section. -1. Click **Add Environment** to create a repository configuration: +1. Go to {{< ui >}}Bits AI Dev{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > [{{< ui >}}Repositories{{< /ui >}}][5], and find the {{< ui >}}Environments{{< /ui >}} section. +1. Click {{< ui >}}Add Environment{{< /ui >}} to create a repository configuration: 1. Select a repository from the dropdown. - 1. (Optional) Under **Pre-installed Languages**, click **Select Versions** to specify the language versions the sandbox should use. + 1. (Optional) Under {{< ui >}}Pre-installed Languages{{< /ui >}}, click {{< ui >}}Select Versions{{< /ui >}} to specify the language versions the sandbox should use. 1. (Optional) Define environment variables and secrets. Environment variables are available during both environment setup and Dev Agent execution. Secrets are available as environment variables only during environment setup. 1. (Optional) Add a shell script with setup commands to execute (for example: `pip install -r requirements.txt`). 1. 
Run the setup command to ensure it runs successfully. diff --git a/content/en/bits_ai/bits_ai_security_analyst.md b/content/en/bits_ai/bits_ai_security_analyst.md index 4f5b5ec8fe8..1083e31790a 100644 --- a/content/en/bits_ai/bits_ai_security_analyst.md +++ b/content/en/bits_ai/bits_ai_security_analyst.md @@ -17,7 +17,7 @@ Bits AI Security Analyst is an autonomous AI agent that investigates Cloud SIEM Bits AI Security Analyst investigations are autonomous. If a detection rule is enabled, Bits AI autonomously investigates signals associated with it. -In the [Cloud SIEM Signals Explorer][5], you can click the **Bits AI Security Analyst** tab to only show signals that Bits AI investigated. In the Severity column, a Bits AI status displays as Investigating, until marking the signal as either Benign or Suspicious. +In the [Cloud SIEM Signals Explorer][5], you can click the {{< ui >}}Bits AI Security Analyst{{< /ui >}} tab to only show signals that Bits AI investigated. In the Severity column, a Bits AI status displays as Investigating until it marks the signal as either Benign or Suspicious. {{< img src="bits_ai/bits_ai_security_analyst_signals_explorer.png" alt="The Cloud SIEM signals explorer, on the Bits AI Security Analyst tab" style="width:100%;" >}} @@ -72,32 +72,32 @@ When you enable Bits AI Security Analyst, Datadog analyzes your rules, including Rule eligibility depends on whether Datadog has built the investigation capability for the log source, and whether the Agent is able to investigate the specific rule. If you have new custom rules to evaluate, or want to ask about a rule that wasn't made eligible, contact [Datadog support][1]. -1. In Datadog, go to **Security** > **Settings** > **[Bits AI Security Analyst][3]**. +1. In Datadog, go to {{< ui >}}Security{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > [{{< ui >}}Bits AI Security Analyst{{< /ui >}}][3]. 1. Turn on the toggle to enable Bits AI Security Analyst. Additional settings appear. 1.
(Optional) Configure which rules and which severities you want Bits AI Security Analyst to automatically investigate signals for. There are two ways to do so: - - Click **Rule Settings** to configure investigations for individual rules. You can change the minimum severity for signals to be investigated, and enable or disable individual rules for investigation. - - Click **Query Filter** to write a signal query filter, so Bits AI Security Analyst only investigates signals that match your filter. -1. Some log sources require credentials to run or enhance investigations by accessing logs, telemetry, or other data that isn't in Datadog. To add credentials, click **Edit credentials**. In the **Select or Add Connection** window that opens, follow the prompts to select an [existing connection][4] from Actions Catalog, or add a connection. Datadog securely stores and restricts all credentials using Actions Catalog. + - Click {{< ui >}}Rule Settings{{< /ui >}} to configure investigations for individual rules. You can change the minimum severity for signals to be investigated, and enable or disable individual rules for investigation. + - Click {{< ui >}}Query Filter{{< /ui >}} to write a signal query filter, so Bits AI Security Analyst only investigates signals that match your filter. +1. Some log sources require credentials to run or enhance investigations by accessing logs, telemetry, or other data that isn't in Datadog. To add credentials, click {{< ui >}}Edit credentials{{< /ui >}}. In the {{< ui >}}Select or Add Connection{{< /ui >}} window that opens, follow the prompts to select an [existing connection][4] from Actions Catalog, or add a connection. Datadog securely stores and restricts all credentials using Actions Catalog. - Some log sources require additional setup so you can create HTTP connections. Here's an example: {{< collapse-content title="Configure SentinelOne" level="h4" expanded=false id="sentinelone" >}}
-    1. In SentinelOne, ensure you have permission to create an API token. Create an S1 API service user, then assign the Viewer role to that user.
-    1. In Datadog, in the Select or Add Connection window, in the dropdown, select New Connection, then click the HTTP tile.
+    1. In SentinelOne, ensure you have permission to create an API token. Create an S1 API service user, then assign the {{< ui >}}Viewer{{< /ui >}} role to that user.
+    1. In Datadog, in the {{< ui >}}Select or Add Connection{{< /ui >}} window, in the dropdown, select {{< ui >}}New Connection{{< /ui >}}, then click the {{< ui >}}HTTP{{< /ui >}} tile.
     1. Add the following information:
-        - In the Description field, Datadog recommends adding your token expiry date, to make it easily accessible.
-        - In the Base URL field, enter your SentinelOne Management Console URL.
-        - Under Token Auth, enter a name for your token in the Token Name field, and your API token in the Token Value field.
+        - In the {{< ui >}}Description{{< /ui >}} field, Datadog recommends adding your token expiry date, to make it easily accessible.
+        - In the {{< ui >}}Base URL{{< /ui >}} field, enter your SentinelOne Management Console URL.
+        - Under {{< ui >}}Token Auth{{< /ui >}}, enter a name for your token in the {{< ui >}}Token Name{{< /ui >}} field, and your API token in the {{< ui >}}Token Value{{< /ui >}} field.
-    1. Click Next, Confirm Access to verify your connection.
+    1. Click {{< ui >}}Next, Confirm Access{{< /ui >}} to verify your connection.
{{< /collapse-content >}} ## Disable Bits AI Security Analyst -1. In Datadog, go to **Security** > **Settings** > **[Bits AI Security Analyst][3]**. -1. Scroll to the bottom of the page. Under **Disable Bits AI Security Analyst**, turn off the **Enabled** toggle. +1. In Datadog, go to {{< ui >}}Security{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > [{{< ui >}}Bits AI Security Analyst{{< /ui >}}][3]. +1. Scroll to the bottom of the page. Under {{< ui >}}Disable Bits AI Security Analyst{{< /ui >}}, turn off the {{< ui >}}Enabled{{< /ui >}} toggle.
Disabling Bits AI Security Analyst permanently resets all configuration settings.
## Further reading diff --git a/content/en/bits_ai/bits_ai_sre/configure.md b/content/en/bits_ai/bits_ai_sre/configure.md index 52d688847df..544fe079b04 100644 --- a/content/en/bits_ai/bits_ai_sre/configure.md +++ b/content/en/bits_ai/bits_ai_sre/configure.md @@ -26,15 +26,15 @@ For monitor alert investigations, a summary of the findings is available on the ### Slack 1. Ensure the [Datadog Slack app][3] is installed in your Slack workspace. -1. In your monitor, go to **Configure notifications and automations** and add the `@slack-{channel-name}` handle. This sends monitor notifications to your chosen Slack channel. -1. Lastly, go to [**Bits AI SRE** > **Settings** > **Integrations**][4] and connect your Slack workspace. This allows Bits to write its findings directly under the monitor notification in Slack. +1. In your monitor, go to {{< ui >}}Configure notifications and automations{{< /ui >}} and add the `@slack-{channel-name}` handle. This sends monitor notifications to your chosen Slack channel. +1. Lastly, go to [{{< ui >}}Bits AI SRE{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Integrations{{< /ui >}}][4] and connect your Slack workspace. This allows Bits to write its findings directly under the monitor notification in Slack.
Each Slack workspace can only be connected to one Datadog organization.
### Microsoft Teams (Preview) 1. [Connect your Microsoft tenant to Datadog][12]. -1. In your monitor, go to **Configure notifications and automations** and add the `@teams-{handle-name}` handle. This sends monitor notifications to your chosen MS Teams channel. Bits will append its findings to these notifications. +1. In your monitor, go to {{< ui >}}Configure notifications and automations{{< /ui >}} and add the `@teams-{handle-name}` handle. This sends monitor notifications to your chosen MS Teams channel. Bits will append its findings to these notifications.
The Microsoft Teams integration with Bits AI SRE is in Preview for all customers.
@@ -45,9 +45,9 @@ Datadog Case Management provides a centralized workspace for triaging, tracking, To set up Case Management, and the Jira and ServiceNow integrations: 1. Create a [Case Management project][5] for your team. -1. In Datadog, go to [**Case Management** > **Settings**][6]. In the list of projects, expand your project, go to **Integrations** > **Datadog Monitors**, and turn on the **Enable Datadog Monitors integration for this project** toggle. This generates your project's unique handle: `@case-{project_name}`. -1. On the same page, under **Integrations**, set up the Case Management Jira and/or ServiceNow integrations. When a new case is created, Case Management can automatically open the corresponding Jira ticket or ServiceNow incident. -1. In your monitor, go to **Configure notifications and automations** and add the `@case-{project_name}` handle. When the monitor triggers: +1. In Datadog, go to [{{< ui >}}Case Management{{< /ui >}} > {{< ui >}}Settings{{< /ui >}}][6]. In the list of projects, expand your project, go to {{< ui >}}Integrations{{< /ui >}} > {{< ui >}}Datadog Monitors{{< /ui >}}, and turn on the {{< ui >}}Enable Datadog Monitors integration for this project{{< /ui >}} toggle. This generates your project's unique handle: `@case-{project_name}`. +1. On the same page, under {{< ui >}}Integrations{{< /ui >}}, set up the Case Management Jira and/or ServiceNow integrations. When a new case is created, Case Management can automatically open the corresponding Jira ticket or ServiceNow incident. +1. In your monitor, go to {{< ui >}}Configure notifications and automations{{< /ui >}} and add the `@case-{project_name}` handle. 
When the monitor triggers: - Datadog automatically creates a new case - The case creates a linked Jira ticket or ServiceNow incident - Bits writes its investigation findings directly to the case, which gets appended to Jira as a timeline comment or ServiceNow as a work note @@ -56,7 +56,7 @@ To set up Case Management, and the Jira and ServiceNow integrations: Datadog On-Call is a paging solution that unifies monitoring, paging, and incident response in a single platform. -To set up On-Call, in your monitor, go to **Configure notifications and automations** and add the `@oncall-{team}` handle. Bits' findings can then appear on the On-Call page in the Datadog mobile app, helping your teams triage issues on the go. +To set up On-Call, in your monitor, go to {{< ui >}}Configure notifications and automations{{< /ui >}} and add the `@oncall-{team}` handle. Bits' findings can then appear on the On-Call page in the Datadog mobile app, helping your teams triage issues on the go. ## Pull context from knowledge bases @@ -101,10 +101,10 @@ Organization limit ### Set a rate limit To set a rate limit: -1. Navigate to [**Bits AI SRE** > **Settings** > **Rate Limits**][10]. +1. Navigate to [{{< ui >}}Bits AI SRE{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Rate Limits{{< /ui >}}][10]. 2. Toggle on the rate limit you want to enable. 3. Set the maximum number of investigations you want to run within a rolling 24-hour window. -4. Click **Save**. +4. Click {{< ui >}}Save{{< /ui >}}. 
{{< img src="bits_ai/rate_limits.png" alt="Options to set a rate limit" style="width:60%;" >}} diff --git a/content/en/bits_ai/bits_ai_sre/investigate_issues.md b/content/en/bits_ai/bits_ai_sre/investigate_issues.md index 6c7b3a253f2..418322a5d66 100644 --- a/content/en/bits_ai/bits_ai_sre/investigate_issues.md +++ b/content/en/bits_ai/bits_ai_sre/investigate_issues.md @@ -26,14 +26,14 @@ You can launch a Bits AI SRE investigation from several entry points: You can invoke Bits on an individual monitor alert or warn event from several entry points: #### Option 1: Bits AI SRE Monitors list {#monitor-list} -1. Go to [**Bits AI SRE** > **Monitors** > **Supported**][5]. -1. Click **Investigate Recent Alerts** and select an alert. +1. Go to [{{< ui >}}Bits AI SRE{{< /ui >}} > {{< ui >}}Monitors{{< /ui >}} > {{< ui >}}Supported{{< /ui >}}][5]. +1. Click {{< ui >}}Investigate Recent Alerts{{< /ui >}} and select an alert. #### Option 2: Monitor status page -Navigate to the monitor status page of a [Bits AI SRE-supported monitor](#supported-monitors) and click **Investigate with Bits AI SRE** in the top-right corner. +Navigate to the monitor status page of a [Bits AI SRE-supported monitor](#supported-monitors) and click {{< ui >}}Investigate with Bits AI SRE{{< /ui >}} in the top-right corner. #### Option 3: Monitor event side panel -In the monitor event side panel of a [Bits AI SRE-supported monitor](#supported-monitors), click **Investigate with Bits AI SRE**. +In the monitor event side panel of a [Bits AI SRE-supported monitor](#supported-monitors), click {{< ui >}}Investigate with Bits AI SRE{{< /ui >}}. #### Option 4: Slack To use the Slack integration, [connect your Slack workspace to Bits AI SRE][8]. @@ -48,13 +48,13 @@ Bits AI SRE investigations started from APM latency graphs and APM Watchdog stor #### APM latency graphs on service pages -1. In Datadog, navigate to [APM][1] and open the service or resource page you want to investigate. 
Next to the latency graph, click **Investigate**. +1. In Datadog, navigate to [APM][1] and open the service or resource page you want to investigate. Next to the latency graph, click {{< ui >}}Investigate{{< /ui >}}. 1. Click and drag your cursor over the point plot visualization to make a rectangular selection over a region that shows unusual latency to seed the analysis. Initial diagnostics on the latency issue appear, including the observed user impact, anomalous tags contributing to the issue, and recent changes. For more information, see [APM Investigator][2]. -1. Click **Investigate with Bits AI SRE** to run a deeper investigation. +1. Click {{< ui >}}Investigate with Bits AI SRE{{< /ui >}} to run a deeper investigation. #### APM latency Watchdog stories -On a Watchdog APM latency story, click **Investigate with Bits AI SRE**. +On a Watchdog APM latency story, click {{< ui >}}Investigate with Bits AI SRE{{< /ui >}}. ### Synthetic tests (Preview) @@ -65,15 +65,15 @@ When a Synthetic Browser or API test monitor triggers, you can launch a Bits AI #### From the Synthetic test details page -1. On the [Synthetic Tests][18] page, open the Synthetic test you want to investigate and go to the **Timeline** section. -1. Select the **Alert Triggered** event for the failing test run. -1. Click **Investigate with Bits AI SRE**. +1. On the [Synthetic Tests][18] page, open the Synthetic test you want to investigate and go to the {{< ui >}}Timeline{{< /ui >}} section. +1. Select the {{< ui >}}Alert Triggered{{< /ui >}} event for the failing test run. +1. Click {{< ui >}}Investigate with Bits AI SRE{{< /ui >}}. The investigation opens in a new page, and you can also view it from the test details page after it runs. #### From a Synthetic monitor -Synthetic monitors support the same monitor-based entry points as other supported monitor types. 
See [Monitor alerts](#manual-monitor-alerts) for the available options, or toggle **Auto-Investigate** on a Synthetic monitor to start investigations automatically. For details, see [Enable automatic investigations](#enable-automatic-investigations). +Synthetic monitors support the same monitor-based entry points as other supported monitor types. See [Monitor alerts](#manual-monitor-alerts) for the available options, or toggle {{< ui >}}Auto-Investigate{{< /ui >}} on a Synthetic monitor to start investigations automatically. For details, see [Enable automatic investigations](#enable-automatic-investigations). ### General prompt (Preview) @@ -101,12 +101,12 @@ Starting Bits AI SRE investigations from a prompt is in Preview for all customer In addition to manual investigations, you can configure Bits to run automatically when a monitor transitions to the alert state: #### From the Bits AI SRE Monitors list -1. Go to [**Bits AI SRE** > **Monitors** > **Supported**][5]. -1. Toggle **Auto-Investigate** on for a single monitor, or bulk-edit multiple monitors by selecting multiple monitors, then clicking **Auto-Investigate All**. +1. Go to [{{< ui >}}Bits AI SRE{{< /ui >}} > {{< ui >}}Monitors{{< /ui >}} > {{< ui >}}Supported{{< /ui >}}][5]. +1. Toggle {{< ui >}}Auto-Investigate{{< /ui >}} on for a single monitor, or bulk-edit multiple monitors by selecting multiple monitors, then clicking {{< ui >}}Auto-Investigate All{{< /ui >}}. #### For a single monitor -1. Open the monitor's status page and click **Edit**. -1. Scroll to **Configure notifications & automations** and toggle **Investigate with Bits AI SRE**. +1. Open the monitor's status page and click {{< ui >}}Edit{{< /ui >}}. +1. Scroll to {{< ui >}}Configure notifications & automations{{< /ui >}} and toggle {{< ui >}}Investigate with Bits AI SRE{{< /ui >}}.
@@ -167,14 +167,14 @@ For best practices on maximizing the effectiveness of investigations, see [Knowl ### Investigation display modes There are two display modes: Agent Trace and Investigation. -While an investigation is in progress, Bits captures every step it takes—including how it evaluates evidence and makes decisions—in the **Agent Trace** view. This provides a real-time, detailed record of the agent’s reasoning process. +While an investigation is in progress, Bits captures every step it takes—including how it evaluates evidence and makes decisions—in the {{< ui >}}Agent Trace{{< /ui >}} view. This provides a real-time, detailed record of the agent’s reasoning process. -Once the investigation is complete, you can switch to the **Investigation** view to explore a structured, tree-based visualization of the investigative path, making it easier to understand findings and conclusions at a glance. +Once the investigation is complete, you can switch to the {{< ui >}}Investigation{{< /ui >}} view to explore a structured, tree-based visualization of the investigative path, making it easier to understand findings and conclusions at a glance. ## Reports -The Reports tab enables you to track the number of investigations run over time by monitor, user, service, and team. You can also track the mean time to conclusion to assess the impact of Bits AI SRE on your on-call efficiency. +The {{< ui >}}Reports{{< /ui >}} tab enables you to track the number of investigations run over time by monitor, user, service, and team. You can also track the mean time to conclusion to assess the impact of Bits AI SRE on your on-call efficiency. 
[1]: https://app.datadoghq.com/apm/home [2]: /tracing/guide/latency_investigator/ diff --git a/content/en/bits_ai/bits_ai_sre/knowledge_sources.md b/content/en/bits_ai/bits_ai_sre/knowledge_sources.md index e9f9b0b630b..7b22568f505 100644 --- a/content/en/bits_ai/bits_ai_sre/knowledge_sources.md +++ b/content/en/bits_ai/bits_ai_sre/knowledge_sources.md @@ -30,7 +30,7 @@ To maximize the value of this integration, document the services, dependencies,
Bits.md is in Preview for all customers.
-You can proactively guide how Bits investigates your environment by creating a `bits.md` file at [**Bits AI SRE** > **Settings** > **Bits.md**][2]. +You can proactively guide how Bits investigates your environment by creating a `bits.md` file at [{{< ui >}}Bits AI SRE{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Bits.md{{< /ui >}}][2]. `bits.md` is a Markdown file that provides structured context about your environment to Bits. It serves as lightweight guidance to improve investigation accuracy, query construction, and terminology alignment. Add team-specific knowledge such as tagging conventions, architectural patterns, glossary terms, and investigation best practices. @@ -114,7 +114,7 @@ If the conclusion was inaccurate, provide Bits AI SRE with the correct root caus All positive feedback, as well as any negative feedback that includes details provided in the Bits' chat, creates a **memory**. Bits AI SRE dynamically selects which memories to use in future investigations to improve its performance. It applies past corrections in similar contexts, reuses effective queries, and refines how it prioritizes investigative steps. Over time, this enables Bits AI SRE to adapt to your environment, becoming more accurate and efficient with each investigation. -To manage memories, including viewing and deleting them, go to the **Memories** column of the [Monitor Management][1] page. +To manage memories, including viewing and deleting them, go to the {{< ui >}}Memories{{< /ui >}} column of the [Monitor Management][1] page. 
[1]: https://app.datadoghq.com/bits-ai/monitors/supported [2]: https://app.datadoghq.com/bits-ai/settings/bits-md diff --git a/content/en/bits_ai/bits_assistant.md b/content/en/bits_ai/bits_assistant.md index bfcf6b70d45..dcdb89e9b18 100644 --- a/content/en/bits_ai/bits_assistant.md +++ b/content/en/bits_ai/bits_assistant.md @@ -69,10 +69,10 @@ Example prompts: ### Web application There are multiple ways to open Bits Assistant in the Datadog web application: -- In the top-right of the navigation bar, click **Ask Bits** -- In a Datadog product integrated with Bits Assistant, click **Ask Bits** or {{< img src="bits_ai/dev_agent/twinkling_stars_icon.png" inline="true" style="width:24px">}} (the twinkling stars icon) +- In the top-right of the navigation bar, click {{< ui >}}Ask Bits{{< /ui >}} +- In a Datadog product integrated with Bits Assistant, click {{< ui >}}Ask Bits{{< /ui >}} or {{< img src="bits_ai/dev_agent/twinkling_stars_icon.png" inline="true" style="width:24px">}} (the twinkling stars icon) - Press Cmd/Ctrl + I -- In the left-side navigation panel, click **Bits AI** +- In the left-side navigation panel, click {{< ui >}}Bits AI{{< /ui >}} {{< img src="bits_ai/getting_started/bits_assistant_side_panel.png" alt="Bits Assistant side panel showing example prompts" style="width:40%;">}} @@ -82,7 +82,7 @@ Bits Assistant is available on iOS v5.8.4+. 1. [Download the mobile app and log in][2]. -2. On the home screen, tap **Bits Assistant**. +2. On the home screen, tap {{< ui >}}Bits Assistant{{< /ui >}}. 3. Start chatting with Bits Assistant in chat or voice mode. 
{{< img src="bits_ai/getting_started/bitsai_mobile_app.PNG" alt="View of the Mobile App Home dashboard with Bits AI" style="width:40%;" >}} diff --git a/content/en/bits_ai/mcp_server/setup.md b/content/en/bits_ai/mcp_server/setup.md index e7197bfe992..a2b886a2b4e 100644 --- a/content/en/bits_ai/mcp_server/setup.md +++ b/content/en/bits_ai/mcp_server/setup.md @@ -32,7 +32,7 @@ Datadog's [Cursor and VS Code extension][1] includes built-in access to the mana 1. Sign in to your Datadog account. {{< img src="bits_ai/mcp_server/ide_sign_in.png" alt="Sign in to Datadog from the IDE extension" style="width:70%;" >}} 1. **Restart the IDE.** -1. Confirm the Datadog MCP Server is available and the [tools][3] are listed: Go to **Cursor Settings** (`Shift` + `Cmd/Ctrl` + `J`), select the **Tools & MCP** tab, and expand the extension's tools list. +1. Confirm the Datadog MCP Server is available and the [tools][3] are listed: Go to {{< ui >}}Cursor Settings{{< /ui >}} (`Shift` + `Cmd/Ctrl` + `J`), select the {{< ui >}}Tools & MCP{{< /ui >}} tab, and expand the extension's tools list. 1. If you previously installed the Datadog MCP Server manually, remove it from the IDE's configuration to avoid conflicts. 1. Verify that you have the required [permissions](#required-permissions) for the Datadog resources you want to access. @@ -49,7 +49,7 @@ Datadog's [Cursor and VS Code extension][1] includes built-in access to the mana {{% tab "Claude Code" %}} -Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site. +Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site. 
{{< site-region region="us,us3,us5,eu,ap1,ap2" >}} Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}. @@ -89,12 +89,12 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa {{% tab "Claude" %}} -Connect Claude (including Claude Cowork) to the Datadog MCP Server by adding it as a **custom connector** with the remote MCP URL. +Connect Claude (including Claude Cowork) to the Datadog MCP Server by adding it as a {{< ui >}}custom connector{{< /ui >}} with the remote MCP URL. {{< site-region region="us,us3,us5,eu,ap1,ap2" >}} 1. Follow the Claude help center guide on [custom connectors][1] to add a new custom connector. -1. When prompted for a URL, enter the Datadog MCP Server endpoint for your [Datadog site][2] ({{< region-param key="dd_site_name" >}}). For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site. +1. When prompted for a URL, enter the Datadog MCP Server endpoint for your [Datadog site][2] ({{< region-param key="dd_site_name" >}}). For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site.
{{< region-param key="mcp_server_endpoint" >}}
To enable [product-specific tools](#toolsets), include the `toolsets` query parameter at the end of the endpoint URL. For example, this URL enables _only_ APM and LLM Observability tools (use `toolsets=all` to enable all generally available toolsets, best for clients that support tool filtering): @@ -117,7 +117,7 @@ Connect Claude (including Claude Cowork) to the Datadog MCP Server by adding it {{% tab "Codex" %}} -Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site. +Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site. {{< site-region region="us,us3,us5,eu,ap1,ap2" >}} Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}. @@ -153,12 +153,12 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa {{% tab "Warp" %}} -[Warp][1] is an agentic terminal with built-in MCP support. Point the Warp agent to the MCP Server endpoint for your regional [Datadog site][2]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site. +[Warp][1] is an agentic terminal with built-in MCP support. Point the Warp agent to the MCP Server endpoint for your regional [Datadog site][2]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site. {{< site-region region="us,us3,us5,eu,ap1,ap2" >}} Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}. -1. In the Warp app, go to **Settings** > **MCP Servers** and click **+ Add**. +1. 
In the Warp app, go to {{< ui >}}Settings{{< /ui >}} > {{< ui >}}MCP Servers{{< /ui >}} and click {{< ui >}}+ Add{{< /ui >}}. 1. Paste the following configuration: @@ -172,7 +172,7 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa
{{< region-param key="mcp_server_endpoint" >}}?toolsets=apm,llmobs
-1. Click **Start** on the Datadog server. Warp opens your browser to complete the OAuth login flow. Credentials are stored securely on your device and reused for future sessions. +1. Click {{< ui >}}Start{{< /ui >}} on the Datadog server. Warp opens your browser to complete the OAuth login flow. Credentials are stored securely on your device and reused for future sessions. 1. Verify that you have the required [permissions](#required-permissions) for the Datadog resources you want to access. @@ -198,7 +198,7 @@ Datadog's [Cursor and VS Code extension][1] includes built-in access to the mana Alternatively, install the [Datadog extension][2]. If you have the extension installed already, make sure it's the latest version. 1. Sign in to your Datadog account. 1. **Restart the IDE.** -1. Confirm the Datadog MCP Server is available and the [tools][3] are listed: Open the chat panel, select agent mode, and click the **Configure Tools** button. +1. Confirm the Datadog MCP Server is available and the [tools][3] are listed: Open the chat panel, select agent mode, and click the {{< ui >}}Configure Tools{{< /ui >}} button. {{< img src="bits_ai/mcp_server/vscode_configure_tools_button.png" alt="Configure Tools button in VS Code" style="width:70%;" >}} 1. If you previously installed the Datadog MCP Server manually, remove it from the IDE's configuration to avoid conflicts. Open the command palette (`Shift` + `Cmd/Ctrl` + `P`) and run `MCP: Open User Configuration`. 1. Verify that you have the required [permissions](#required-permissions) for the Datadog resources you want to access. @@ -218,13 +218,13 @@ Datadog's [Cursor and VS Code extension][1] includes built-in access to the mana JetBrains offers the [Junie][1] and [AI Assistant][2] plugins for their range of IDEs. GitHub offers the [Copilot][4] plugin. Alternatively, many developers use an agent CLI, such as Claude Code or Codex, alongside their IDE. 
-Point your plugin to the MCP Server endpoint for your regional [Datadog site][3]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site. +Point your plugin to the MCP Server endpoint for your regional [Datadog site][3]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site. {{< site-region region="us,us3,us5,eu,ap1,ap2" >}} Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}. {{% collapse-content title="Junie" level="h4" expanded=false id="jetbrains-junie" %}} -1. Go to **Tools** > **Junie** > **MCP Settings** and add the following block: +1. Go to {{< ui >}}Tools{{< /ui >}} > {{< ui >}}Junie{{< /ui >}} > {{< ui >}}MCP Settings{{< /ui >}} and add the following block:
{
       "mcpServers": {
@@ -247,7 +247,7 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa
 {{% /collapse-content %}}
 
 {{% collapse-content title="JetBrains AI Assistant" level="h4" expanded=false id="jetbrains-ai-assistant" %}}
-1. Go to **Tools** > **AI Assistant** > **Model Context Protocol (MCP)** and add the following block:
+1. Go to {{< ui >}}Tools{{< /ui >}} > {{< ui >}}AI Assistant{{< /ui >}} > {{< ui >}}Model Context Protocol (MCP){{< /ui >}} and add the following block:
 
     
{
       "mcpServers": {
@@ -273,7 +273,7 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa
 {{% /collapse-content %}}
 
 {{% collapse-content title="GitHub Copilot" level="h4" expanded=false id="github-copilot" %}}
-1. Go to **Tools** > **GitHub Copilot** > **Model Context Protocol (MCP)** and add the following block:
+1. Go to {{< ui >}}Tools{{< /ui >}} > {{< ui >}}GitHub Copilot{{< /ui >}} > {{< ui >}}Model Context Protocol (MCP){{< /ui >}} and add the following block:
 
     
{
       "servers": {
@@ -320,7 +320,7 @@ The [Datadog plugin for JetBrains IDEs][3] integrates with these agent CLIs. For
 
 {{% tab "Kiro" %}}
 
-Point your AI agent to the MCP Server endpoint for your regional [Datadog site][3]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site.
+Point your AI agent to the MCP Server endpoint for your regional [Datadog site][3]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site.
 
 {{< site-region region="us,us3,us5,eu,ap1,ap2" >}}
 Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}.
@@ -355,7 +355,7 @@ Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-pa
 
 For most other [supported clients](#supported-clients), use these instructions for remote authentication. For Cline or when remote authentication is unreliable or not available, use [local binary authentication](#local-binary-authentication).
 
-Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the **Datadog Site** selector on the right side of this documentation page to select your site.
+Point your AI agent to the MCP Server endpoint for your regional [Datadog site][1]. For the correct instructions, use the {{< ui >}}Datadog Site{{< /ui >}} selector on the right side of this documentation page to select your site.
 
 {{< site-region region="us,us3,us5,eu,ap1,ap2" >}}
 Selected endpoint ({{< region-param key="dd_site_name" >}}): {{< region-param key="mcp_server_endpoint" >}}.
@@ -457,7 +457,7 @@ These toolsets are in Preview. Sign up for a toolset by completing the Product P
 | [VS Code][7] | Microsoft | Datadog [Cursor & VS Code extension][16] recommended. |
 | [JetBrains IDEs][18] | JetBrains | [Datadog plugin][18] recommended. |
 | [Kiro][9], [Kiro CLI][10] | Amazon Web Services | |
-| [Goose][8], [Cline][11] | Various | See the **Other** tab above. Use local binary authentication for Cline if remote authentication is unreliable. |
+| [Goose][8], [Cline][11] | Various | See the {{< ui >}}Other{{< /ui >}} tab above. Use local binary authentication for Cline if remote authentication is unreliable. |
 
 
The Datadog MCP Server is under significant development, and additional supported clients may become available.
@@ -474,10 +474,10 @@ In addition to `mcp_read` or `mcp_write`, users need the standard Datadog permis Users with the **Datadog Standard Role** have both MCP Server permissions by default. If your organization uses [custom roles][23], add the permissions manually: 1. Go to [**Organization Settings > Roles**][26] as an administrator, and click the role you want to update. -1. Click **Edit Role** (pencil icon). -1. Under the permissions list, select the **MCP Read** and **MCP Write** checkboxes. +1. Click {{< ui >}}Edit Role{{< /ui >}} (pencil icon). +1. Under the permissions list, select the {{< ui >}}MCP Read{{< /ui >}} and {{< ui >}}MCP Write{{< /ui >}} checkboxes. 1. Select any other resource-level permissions you need for the role. -1. Click **Save**. +1. Click {{< ui >}}Save{{< /ui >}}. Organization administrators can manage global MCP access and write capabilities from [Organization Settings][27]. @@ -554,12 +554,12 @@ Local authentication is recommended for Cline and when remote authentication is ```bash npx @modelcontextprotocol/inspector ``` -2. In the inspector's web UI, for **Transport Type**, select **Streamable HTTP**. -3. For **URL**, enter the MCP Server endpoint for your regional Datadog site. +2. In the inspector's web UI, for {{< ui >}}Transport Type{{< /ui >}}, select {{< ui >}}Streamable HTTP{{< /ui >}}. +3. For {{< ui >}}URL{{< /ui >}}, enter the MCP Server endpoint for your regional Datadog site. {{< site-region region="us,us3,us5,eu,ap1,ap2" >}} For example, for {{< region-param key="dd_site_name" >}}: {{< region-param key="mcp_server_endpoint" >}} {{< /site-region >}} -4. Click **Connect**, then go to **Tools** > **List Tools**. +4. Click {{< ui >}}Connect{{< /ui >}}, then go to {{< ui >}}Tools{{< /ui >}} > {{< ui >}}List Tools{{< /ui >}}. 5. Check if the [available tools][12] appear. 
## Further reading diff --git a/content/en/change_tracking/_index.md b/content/en/change_tracking/_index.md index a582cd007ec..4d6be1af08a 100644 --- a/content/en/change_tracking/_index.md +++ b/content/en/change_tracking/_index.md @@ -97,10 +97,10 @@ View and analyze changes from the [service page][2]. #### To analyze changes from the service page: 1. Navigate to the service page you want to investigate. -1. Locate the changes timeline in the **Service Summary** section. +1. Locate the changes timeline in the {{< ui >}}Service Summary{{< /ui >}} section. 1. Use the service and dependencies tabs to view either: - - Changes limited to the specific service (**Changes by Service**) - - Changes to the specific service and dependent services that might impact this service (**Changes by Service + Dependencies**) + - Changes limited to the specific service ({{< ui >}}Changes by Service{{< /ui >}}) + - Changes to the specific service and dependent services that might impact this service ({{< ui >}}Changes by Service + Dependencies{{< /ui >}}) 1. Click the change indicator to view detailed information and take remediation actions. ### Dashboards @@ -115,7 +115,7 @@ To see relevant changes within the timeline and as overlays on your dashboard, e #### To analyze changes from dashboards: 1. Navigate to your dashboard. -2. Click **Show Overlays** at the top of the page to enable the change timeline and change overlays on supported widgets. +2. Click {{< ui >}}Show Overlays{{< /ui >}} at the top of the page to enable the change timeline and change overlays on supported widgets. 3. Hover over any change indicator or overlay to view a summary of the change. 4. Click the change indicator or overlay to view detailed information and take remediation actions. @@ -128,20 +128,20 @@ In addition to the out-of-the-box integrations, Change Tracking is available as To configure a widget using Change Tracking data: 1. 
In a dashboard or notebook, add or edit a supported widget type (Timeseries, Query Value, Table, Tree Map, Top List, Pie, Change, or Bar Chart). -3. From the **data source** dropdown, select `Change Tracking`. -4. Configure your filters (**Service** is required). -5. (Optional) For widgets that support grouping, use **Group by** to split results. +3. From the {{< ui >}}data source{{< /ui >}} dropdown, select {{< ui >}}Change Tracking{{< /ui >}}. +4. Configure your filters ({{< ui >}}Service{{< /ui >}} is required). +5. (Optional) For widgets that support grouping, use {{< ui >}}Group by{{< /ui >}} to split results. {{< img src="/change_tracking/change-tracking-datasource-edit-widget.png" alt="Change Tracking datasource widgets" style="width:100%;" >}} -For Timeseries widgets, you can also enable Change Tracking as an **Event Overlay**, which displays changes on top of the timeseries to help correlate them with metric behavior. +For Timeseries widgets, you can also enable Change Tracking as an {{< ui >}}Event Overlay{{< /ui >}}, which displays changes on top of the timeseries to help correlate them with metric behavior. {{< img src="/change_tracking/change-tracking-datasource-edit-overlay.png" alt="Change Tracking datasource as Event Overlay" style="width:100%;" >}} #### View change details -To view information about a change or set of changes, click a datapoint in the widget and select **View Changes**. This opens the Change Tracking side panel with additional details. +To view information about a change or set of changes, click a datapoint in the widget and select {{< ui >}}View Changes{{< /ui >}}. This opens the Change Tracking side panel with additional details. 
## Tracked changes Change Tracking follows these types of changes across your infrastructure: diff --git a/content/en/change_tracking/feature_flags.md b/content/en/change_tracking/feature_flags.md index 2c3c263e71f..21e4aded8eb 100644 --- a/content/en/change_tracking/feature_flags.md +++ b/content/en/change_tracking/feature_flags.md @@ -30,9 +30,9 @@ Datadog supports tracking LaunchDarkly flags using the [LaunchDarkly integration To track LaunchDarkly feature flags in your services' Change Tracking timeline: 1. Enable the [Datadog integration][1] in LaunchDarkly. -1. Go to **Flags > `` in LaunchDarkly. -1. In **Datadog tags**, add a tag with key `service` and value ``, matching your Datadog service name exactly. -1. Click **Save changes**. +1. Go to {{< ui >}}Flags{{< /ui >}} > `` in LaunchDarkly. +1. In {{< ui >}}Datadog tags{{< /ui >}}, add a tag with key `service` and value ``, matching your Datadog service name exactly. +1. Click {{< ui >}}Save changes{{< /ui >}}. For example, to link a flag to the `payments_api` service used in the examples below, you would set the tag value to `payments_api`. After you submit the event, you can navigate to the [Software Catalog][7], select the `payments_api` service, and see the `fallback_payments_test` feature flag event in the Change Tracking timeline. @@ -44,8 +44,8 @@ Send feature flag events from any provider using the [Events API][3]. Create a ` When sending custom feature flag change events, include the following fields to enable accurate filtering and cross-product correlation within Datadog: -- **impacted_resources** (with type `service`): Add the relevant service name to the `impacted_resources` array to associate the feature flag change with the affected service. -- **env tag**: Specify the environment where the change occurred (for example, production, staging, or development). 
+- `impacted_resources` (with type `service`): Add the relevant service name to the `impacted_resources` array to associate the feature flag change with the affected service. +- `env` tag: Specify the environment where the change occurred (for example, production, staging, or development). If these tags cannot be added at event creation time, see the next section for guidance on automatic enrichment. @@ -170,18 +170,18 @@ To set up feature flag toggles using Workflow Automation: 1. Go to [**Actions > Action Catalog > Connections**][6]. 1. Click **New Connection**. -1. Choose *LaunchDarkly*. -1. Complete the required information, then click **Next, Confirm Access**. +1. Choose {{< ui >}}LaunchDarkly{{< /ui >}}. +1. Complete the required information, then click {{< ui >}}Next, Confirm Access{{< /ui >}}. 1. Set access permissions for the connection. -1. Click **Create**. +1. Click {{< ui >}}Create{{< /ui >}}. ### Use feature flag toggles To toggle feature flags on or off from inside Datadog: 1. Click a LaunchDarkly feature flag change in the Change Tracking timeline. -1. Click the **Toggle Feature Flag** button. -1. Click **Run Action** to run the workflow and toggle the feature flag on or off. +1. Click the {{< ui >}}Toggle Feature Flag{{< /ui >}} button. +1. Click {{< ui >}}Run Action{{< /ui >}} to run the workflow and toggle the feature flag on or off. {{< img src="/change_tracking/toggle.png" alt="The details panel for a LaunchDarkly feature flag event, showing the 'Toggle Feature Flag' button." 
style="width:90%;" >}} diff --git a/content/en/cloud_cost_management/_index.md b/content/en/cloud_cost_management/_index.md index aaddf64a992..1e1dbf9d3a7 100644 --- a/content/en/cloud_cost_management/_index.md +++ b/content/en/cloud_cost_management/_index.md @@ -78,7 +78,7 @@ Datadog ingests your cloud cost data and transforms it into metrics you can use Visualize infrastructure spend alongside related utilization metrics with a retention period of 15 months to spot potential inefficiencies and savings opportunities. -When creating a dashboard, select **Cloud Cost** as the data source for your search query. +When creating a dashboard, select {{< ui >}}Cloud Cost{{< /ui >}} as the data source for your search query. {{< img src="cloud_cost/cloud_cost_data_source-1.png" alt="Cloud Cost available as a data source in dashboard widget creation" style="width:80%;" >}} @@ -88,7 +88,7 @@ Optionally, you can programmatically export a timeseries graph of your cloud cos Visualize daily Datadog spending alongside related utilization metrics with a retention period of 15 months to spot potential inefficiencies and savings opportunities. Learn more about [Datadog Costs][8]. -When creating a dashboard, select **Cloud Cost** as the data source, then choose **Datadog** from the available cost types. +When creating a dashboard, select {{< ui >}}Cloud Cost{{< /ui >}} as the data source, then choose {{< ui >}}Datadog{{< /ui >}} from the available cost types. {{< img src="cloud_cost/datadog_costs/dashboard-updated.png" alt="Datadog costs as an option for the Cloud Cost data source in a dashboard" style="width:80%;" >}} @@ -102,7 +102,7 @@ You can create tag rules to correct missing or incorrect tags, and add inferred ## Create a cost monitor -Proactively manage and optimize your cloud spending by creating a [Cloud Cost Monitor][3]. You can choose **Cost Changes** or **Cost Threshold** to monitor your cloud expenses. 
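For the custom feature flag change events described in the Change Tracking section above, a request body might look like the following sketch. The envelope shape and field nesting here are assumptions for illustration; only the `impacted_resources` entry with type `service` and the `env` tag are prescribed above, and the `payments_api` and `fallback_payments_test` names come from the earlier example. Consult the Events API reference for the exact schema.

```json
{
  "data": {
    "type": "event",
    "attributes": {
      "category": "change",
      "title": "Feature flag fallback_payments_test updated",
      "tags": ["env:production"],
      "attributes": {
        "impacted_resources": [
          { "type": "service", "name": "payments_api" }
        ]
      }
    }
  }
}
```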
+Proactively manage and optimize your cloud spending by creating a [Cloud Cost Monitor][3]. You can choose {{< ui >}}Cost Changes{{< /ui >}} or {{< ui >}}Cost Threshold{{< /ui >}} to monitor your cloud expenses. {{< img src="cloud_cost/monitor.png" alt="Create a Cloud Cost monitor that alerts on cost changes" style="width:100%;" >}} @@ -118,10 +118,10 @@ Cloud Cost Management uses two permissions to control access: `cloud_cost_manage {{< img src="cloud_cost/ccm-data-history.png" alt="View your Cloud Cost data history in Cloud Cost settings." style="width:100%;" >}} -Monitor the freshness and processing status of your cloud cost data on the **Cloud Cost > Settings > Data History** page. +Monitor the freshness and processing status of your cloud cost data on the {{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Data History{{< /ui >}} page. -- **Last Bill Received**: When your cloud or SaaS provider generated the billing data visible in CCM. -- **Last Processed**: When Datadog last processed billing data from your cloud provider, including: +- {{< ui >}}Last Bill Received{{< /ui >}}: When your cloud or SaaS provider generated the billing data visible in CCM. +- {{< ui >}}Last Processed{{< /ui >}}: When Datadog last processed billing data from your cloud provider, including: - Tag pipeline rules (retroactively processes up to 3 months of historical data by default) - Cost allocation rules (retroactively processes up to 1 month of historical data by default) diff --git a/content/en/cloud_cost_management/allocation/bigquery.md b/content/en/cloud_cost_management/allocation/bigquery.md index 6946b71d370..3ad8c8c31b9 100644 --- a/content/en/cloud_cost_management/allocation/bigquery.md +++ b/content/en/cloud_cost_management/allocation/bigquery.md @@ -88,9 +88,9 @@ You can identify BigQuery schedules to help connect costs to specific scheduled To identify which BigQuery schedule a `DTS_CONFIG_ID` refers to: -1. 
Go to **BigQuery** in the [**GCP Console**][8]. -2. Navigate to **Transfers > Schedules**. -3. Use the **search bar** or **Ctrl+F** to locate the `DTS_CONFIG_ID`. +1. Go to {{< ui >}}BigQuery{{< /ui >}} in the [GCP Console][8]. +2. Navigate to {{< ui >}}Transfers{{< /ui >}} > {{< ui >}}Schedules{{< /ui >}}. +3. Use the {{< ui >}}search bar{{< /ui >}} or Ctrl+F to locate the `DTS_CONFIG_ID`. 4. Click the matched entry to view details about the query schedule, including source, frequency, and target dataset. #### Additional cost analysis tags diff --git a/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md b/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md index 5fb56411b78..0f53f520185 100644 --- a/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md +++ b/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md @@ -23,7 +23,7 @@ Clouds Resources : CCM allocates costs for Kubernetes clusters and includes cost analysis for many associated resources such as Kubernetes persistent volumes used by your pods. -CCM displays costs for resources including CPU, memory, and more depending on the cloud and orchestrator you are using on the [**Containers** page][1]. +CCM displays costs for resources including CPU, memory, and more depending on the cloud and orchestrator you are using on the [{{< ui >}}Containers{{< /ui >}} page][1]. {% img src="cloud_cost/container_cost_allocation/container_allocation.png" alt="Cloud cost allocation table showing requests and idle costs over the past month on the Containers page" style="width:100%;" /%} @@ -44,8 +44,8 @@ The following table presents the list of collected features and the minimal Agen | Data Transfer Cost Allocation | 7.58.0 | 7.58.0 | 1. Configure the AWS Cloud Cost Management integration on the [Cloud Cost Setup page][2]. -1. 
For Kubernetes support, install the [**Datadog Agent**][3] in a Kubernetes environment and ensure that you enable the [**Orchestrator Explorer**][4] in your Agent configuration. -1. For Amazon ECS support, set up [**Datadog Container Monitoring**][5] in ECS tasks. +1. For Kubernetes support, install the [Datadog Agent][3] in a Kubernetes environment and ensure that you enable the [Orchestrator Explorer][4] in your Agent configuration. +1. For Amazon ECS support, set up [Datadog Container Monitoring][5] in ECS tasks. 1. Optionally, enable [AWS Split Cost Allocation][6] for usage-based ECS allocation. 1. To enable storage cost allocation, set up [EBS metric collection][7]. 1. To enable GPU container cost allocation, install the [Datadog DCGM integration][8]. @@ -68,7 +68,7 @@ The following table presents the list of collected features and the minimal Agen | GPU Container Cost Allocation | 7.54.0 | 7.54.0 | 1. Configure the Azure Cost Management integration on the [Cloud Cost Setup page][2]. -1. Install the [**Datadog Agent**][3] in a Kubernetes environment and ensure that you enable the [**Orchestrator Explorer**][4] in your Agent configuration. +1. Install the [Datadog Agent][3] in a Kubernetes environment and ensure that you enable the [Orchestrator Explorer][4] in your Agent configuration. 1. To enable GPU container cost allocation, install the [Datadog DCGM integration][10]. **Note**: GPU Container Cost Allocation only supports pod requests in the format `nvidia.com/gpu`. @@ -88,7 +88,7 @@ The following table presents the list of collected features and the minimal Agen | GPU Container Cost Allocation | 7.54.0 | 7.54.0 | 1. Configure the Google Cloud Cost Management integration on the [Cloud Cost Setup page][2]. -1. Install the [**Datadog Agent**][3] in a Kubernetes environment and ensure that you enable the [**Orchestrator Explorer**][4] in your Agent configuration. +1. 
Install the [Datadog Agent][3] in a Kubernetes environment and ensure that you enable the [Orchestrator Explorer][4] in your Agent configuration. 1. To enable GPU container cost allocation, install the [Datadog DCGM integration][10]. **Note**: GPU Container Cost Allocation only supports pod requests in the format `nvidia.com/gpu`. @@ -229,7 +229,7 @@ The cost of an EBS volume has three components: IOPS, throughput, and storage. E | Spend type | Description | | -----------| ----------- | | Usage | Cost of provisioned IOPS, throughput, or storage used by workloads. Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. | -| Workload idle | Cost of provisioned IOPS, throughput, or storage that are reserved and allocated but not used by workloads. Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. This is the difference between the total resources requested and the average usage. **Note:** This tag is only available if you have enabled `Resource Collection` in your [**AWS Integration**][21]. To prevent being charged for `Cloud Security Posture Management`, ensure that during the `Resource Collection` setup, the `Cloud Security Posture Management` box is unchecked. | +| Workload idle | Cost of provisioned IOPS, throughput, or storage that are reserved and allocated but not used by workloads. Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. This is the difference between the total resources requested and the average usage. **Note:** This tag is only available if you have enabled `Resource Collection` in your [AWS Integration][21]. 
To prevent being charged for `Cloud Security Posture Management`, ensure that during the `Resource Collection` setup, the {{< ui >}}Cloud Security Posture Management{{< /ui >}} box is unchecked. | | Cluster idle | Cost of provisioned IOPS, throughput, or storage that are not reserved by any pods that day. This is the difference between the total cost of the resources and what is allocated to workloads. | **Note**: Persistent volume allocation is only supported in Kubernetes clusters, and is only available for pods that are part of a Kubernetes StatefulSet. @@ -294,22 +294,22 @@ Cluster idle costs (identified by `allocated_spend_type:cluster_idle`) represent To configure cluster idle allocation, go to the [Cluster Idle Allocation settings][22] page and follow these steps: -1. Click **Enable cluster idle allocation**. +1. Click {{< ui >}}Enable cluster idle allocation{{< /ui >}}. 1. Select a redistribution level: - **Cluster** + {{< ui >}}Cluster{{< /ui >}} : Redistributes idle costs at the cluster level. - **Node** + {{< ui >}}Node{{< /ui >}} : Redistributes idle costs at the node level. Datadog also allocates to the `kube_node_name` tag. - **Nodepool** + {{< ui >}}Nodepool{{< /ui >}} : Redistributes idle costs at the nodepool level. Select a nodepool tag. 1. Optionally, select up to two additional destination tags. -1. Click **Save**. +1. Click {{< ui >}}Save{{< /ui >}}. -To disable cluster idle allocation, return to the [Cluster Idle Allocation settings][22] page and click **Disable**. +To disable cluster idle allocation, return to the [Cluster Idle Allocation settings][22] page and click {{< ui >}}Disable{{< /ui >}}. **Note**: Any settings change, including disabling, re-enabling, or modifying the redistribution level, re-backfills the last 3 months of data with the latest settings. 
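The spend types described above decompose a resource's daily cost into what workloads use, what they reserve but do not use, and what no pod reserves at all. As a simplified sketch of that arithmetic (it ignores the max-versus-average distinction for storage, and the helper function and numbers are illustrative assumptions, not a Datadog API):

```python
def decompose_cost(total_cost, capacity, requested, used):
    """Split a resource's daily cost into the three spend types described
    above, allocating cost proportionally to capacity. Names and numbers
    are illustrative only."""
    unit_cost = total_cost / capacity
    usage = used * unit_cost                           # consumed by workloads
    workload_idle = (requested - used) * unit_cost     # reserved but unused
    cluster_idle = (capacity - requested) * unit_cost  # never reserved by pods
    return usage, workload_idle, cluster_idle

# A node costing $24/day with 8 CPU cores: pods request 6 cores, use 4.5.
print(decompose_cost(24.0, 8.0, 6.0, 4.5))  # (13.5, 4.5, 6.0)
```

The three components always sum back to the total cost, which is why enabling or reconfiguring cluster idle allocation redistributes costs rather than changing the overall spend.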
diff --git a/content/en/cloud_cost_management/allocation/custom_allocation_rules.md b/content/en/cloud_cost_management/allocation/custom_allocation_rules.md index 5039e510dae..fb5432f6ac4 100644 --- a/content/en/cloud_cost_management/allocation/custom_allocation_rules.md +++ b/content/en/cloud_cost_management/allocation/custom_allocation_rules.md @@ -33,7 +33,7 @@ You can manage custom allocation rules using the [API][4], [Terraform][5], or di ### Step 1 - Define the source -1. Navigate to [Cloud Cost > Settings > Custom Allocation Rules][2] and click **Add New Rule** to start. +1. Navigate to [{{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Custom Allocation Rules{{< /ui >}}][2] and click {{< ui >}}Add New Rule{{< /ui >}} to start. 2. From the dropdown, select the shared costs you want to allocate. _Example: Untagged support costs, shared database costs._ @@ -74,11 +74,11 @@ In the preceding diagram, the pink bar represents a filter on the cost allocatio To create a rule for this allocation, you can: -- Define the costs to allocate (source): **EC2 support fees** (`aws_product:support`). -- Choose the allocation method: **Proportional by spend**. -- Choose the [destination tag](#step-3---define-the-destination) to split your costs by: **User** (`User A`, `User B`, `User C`). -- Refine the allocation by applying [filters](#step-4---optional-apply-filters): **EC2** (`aws_product:ec2`). -- Create suballocations by [partitioning](#step-5---optional-apply-a-partition) the allocation rule: **environment** (`env`). +- Define the costs to allocate (source): EC2 support fees (`aws_product:support`). +- Choose the allocation method: {{< ui >}}Proportional by spend{{< /ui >}}. +- Choose the [destination tag](#step-3---define-the-destination) to split your costs by: User (`User A`, `User B`, `User C`). +- Refine the allocation by applying [filters](#step-4---optional-apply-filters): EC2 (`aws_product:ec2`). 
+- Create suballocations by [partitioning](#step-5---optional-apply-a-partition) the allocation rule: environment (`env`). You can also specify how cost proportions should be partitioned to ensure segment-specific allocations. For example, if you partition your costs by environment using tags like `staging` and `production`, the proportions are calculated separately for each environment. This ensures allocations are based on the specific proportions within each partition. @@ -96,11 +96,11 @@ For example, this PostgreSQL metrics query `sum:postgresql.queries.time{*} by {u To create a rule for this allocation, you could: -- Define the costs to allocate (source): **PostgreSQL costs** (`azure_product_family:dbforpostgresql`). -- Choose the allocation method: **Dynamic by metric** -- Choose the [destination tag](#step-3---define-the-destination) to split your costs by: **User** (`User A`, `User B`, `User C`). -- Define the metric query used to split the source costs: **Query execution time per user** (`sum:postgresql.queries.time{*}` by `{user}.as_count`). -- Create suballocations by [partitioning](#step-5---optional-apply-a-partition) the allocation rule: **environment** (`env`). +- Define the costs to allocate (source): PostgreSQL costs (`azure_product_family:dbforpostgresql`). +- Choose the allocation method: {{< ui >}}Dynamic by metric{{< /ui >}} +- Choose the [destination tag](#step-3---define-the-destination) to split your costs by: User (`User A`, `User B`, `User C`). +- Define the metric query used to split the source costs: Query execution time per user (`sum:postgresql.queries.time{*}` by `{user}.as_count`). +- Create suballocations by [partitioning](#step-5---optional-apply-a-partition) the allocation rule: environment (`env`). {{< img src="cloud_cost/custom_allocation_rules/ui-dynamic-by-metric.png" alt="The dynamic by metric split strategy as seen in Datadog" style="width:90%;" >}} @@ -122,8 +122,8 @@ Apply a filter across the entire allocation rule. 
Filters help you target the al _Example: Only apply cost allocation where environment is production._ -- **Proportional by spend**: Let's say you allocate shared costs to the team tag, proportional to how much each team spends. You can add a filter, creating a cost allocation that is proportional to how much team spends on `aws_product` is `ec2`. -- **Dynamic by metric**: Let's say you allocate shared PostgreSQL costs to the service tag, proportional to the query execution time of each service. You can add a filter, creating a cost allocation that only applies where `environment` is `production`. +- {{< ui >}}Proportional by spend{{< /ui >}}: Let's say you allocate shared costs to the team tag, proportional to how much each team spends. You can add a filter, creating a cost allocation that is proportional to how much each team spends where `aws_product` is `ec2`. +- {{< ui >}}Dynamic by metric{{< /ui >}}: Let's say you allocate shared PostgreSQL costs to the service tag, proportional to the query execution time of each service. You can add a filter, creating a cost allocation that only applies where `environment` is `production`. ### Step 5 - (optional) Apply a partition diff --git a/content/en/cloud_cost_management/allocation/tag_pipelines.md b/content/en/cloud_cost_management/allocation/tag_pipelines.md index 289bbf62642..d520cb749b0 100644 --- a/content/en/cloud_cost_management/allocation/tag_pipelines.md +++ b/content/en/cloud_cost_management/allocation/tag_pipelines.md @@ -34,13 +34,13 @@ All new users have the recommended rule for [turning on tag normalization][6] en You can manage tag pipeline rulesets using the [API][7], [Terraform][8], or directly in Datadog by following the instructions below. -To create a ruleset, navigate to [**Cloud Cost > Settings > Tag Pipelines**][1]. +To create a ruleset, navigate to [{{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Tag Pipelines{{< /ui >}}][1].
You can create up to 100 rules. API-based Reference Tables are not supported.
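The Add tag rule semantics described in this section (the existing-tag actions and case-insensitive matching) can be sketched as follows. The function name, dict-of-tags shape, and option strings are assumptions for illustration, not Datadog's implementation:

```python
# Illustrative sketch of an "Add tag" rule: tag resources matching a
# key:value pair with a new tag, honoring the "action when tag exists"
# and case-insensitive matching options described in this section.

def apply_add_tag_rule(resource_tags, match, new_key, new_value,
                       on_exists="dont_apply", case_insensitive=False):
    """match: (key, value) the resource must carry for the rule to fire."""
    def norm(s):
        return s.lower() if case_insensitive else s

    mk, mv = match
    matched = any(norm(k) == norm(mk) and norm(v) == norm(mv)
                  for k, v in resource_tags.items())
    if not matched:
        return dict(resource_tags)

    tags = dict(resource_tags)
    if new_key in tags:
        if on_exists == "dont_apply":
            return tags                                  # keep original value
        if on_exists == "append":
            tags[new_key] = f"{tags[new_key]},{new_value}"
            return tags
        # "replace" falls through: overwrites existing data -- use with caution
    tags[new_key] = new_value
    return tags

# Tag resources with service:process-agent as business-unit:billing;
# with case-insensitive matching, Service:Process-Agent still matches.
print(apply_add_tag_rule({"Service": "Process-Agent"},
                         ("service", "process-agent"),
                         "business-unit", "billing",
                         case_insensitive=True))
```

The default `dont_apply` behavior mirrors the safest option in the UI: a resource that already carries the target tag keeps its original value.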
-Before creating individual rules, create a ruleset (a folder for your rules) by clicking **+ New Ruleset**. +Before creating individual rules, create a ruleset (a folder for your rules) by clicking {{< ui >}}+ New Ruleset{{< /ui >}}. -Within each ruleset, click **+ Add New Rule** and select a rule type: **Add tag**, **Alias tag keys**, or **Map multiple tags**. These rules execute in a sequential, deterministic order from top to bottom. +Within each ruleset, click {{< ui >}}+ Add New Rule{{< /ui >}} and select a rule type: {{< ui >}}Add tag{{< /ui >}}, {{< ui >}}Alias tag keys{{< /ui >}}, or {{< ui >}}Map multiple tags{{< /ui >}}. These rules execute in a sequential, deterministic order from top to bottom. {{< img src="cloud_cost/pipelines-create-ruleset-1.png" alt="A list of tag rules on the Tag Pipelines page displaying various categories such as team, account, service, department, business unit, and more" style="width:60%;" >}} @@ -54,13 +54,13 @@ For example, you can create a rule to tag all resources with their business unit {{< img src="cloud_cost/pipelines-add-tag-2.png" alt="Add new business unit tag to resources with service:process-agent or service:process-billing." style="width:60%;" >}} -Under the **Additional options** section, you have the following options: +Under the {{< ui >}}Additional options{{< /ui >}} section, you have the following options: -- **Action when tag `{tag}` exists** - Choose what to do if the specified tag (`business-unit` in the example above) already exists: - - **Don't apply the rule** - Skips the rule if the tag already exists, preserving the original value. - - **Append the tag** - Adds the new value to the existing tag without removing the original value. - - **Replace the tag** - Replaces the existing tag value with the new value.
Replacing tags can overwrite existing data. Use this option with caution.
-- **Apply case-insensitive matching to resource tags** - Enables tags defined in the `To resources with tag(s)` field and tags from the cost data to be case insensitive. For example, if resource tags from the UI are: `foo:bar` and the tag from the cost data is `Foo:bar`, then the two can be matched. +- {{< ui >}}Action when tag `{tag}` exists{{< /ui >}} - Choose what to do if the specified tag (`business-unit` in the example above) already exists: + - {{< ui >}}Don't apply the rule{{< /ui >}} - Skips the rule if the tag already exists, preserving the original value. + - {{< ui >}}Append the tag{{< /ui >}} - Adds the new value to the existing tag without removing the original value. + - {{< ui >}}Replace the tag{{< /ui >}} - Replaces the existing tag value with the new value.
Replacing tags can overwrite existing data. Use this option with caution.
+- {{< ui >}}Apply case-insensitive matching to resource tags{{< /ui >}} - Enables tags defined in the `To resources with tag(s)` field and tags from the cost data to be case insensitive. For example, if resource tags from the UI are: `foo:bar` and the tag from the cost data is `Foo:bar`, then the two can be matched. ### Alias tag keys @@ -72,13 +72,13 @@ For example, if your organization wants to use the standard `application` tag ke Add the application tag to resources with `app`, `webapp`, or `apps` tags. The rule stops executing for each resource after the first match is found. For example, if a resource already has an `app` tag, then the rule no longer attempts to identify a `webapp` or `apps` tag. -Under the **Additional options** section, you have the following options: +Under the {{< ui >}}Additional options{{< /ui >}} section, you have the following options: -- **Action when tag `{tag}` exists** - Choose what to do if the specified tag (`application` in the example above) already exists: - - **Don't apply the rule** - Skips the rule if the tag already exists, preserving the original value. - - **Append the tag** - Adds the new value to the existing tag without removing the original value. - - **Replace the tag** - Replaces the existing tag value with the new value.
Replacing tags can overwrite existing data. Use this option with caution.
-- **Apply case-insensitive matching to resource tags** - Enables tags defined in the alias tag keys and tags from the cost data to be case insensitive. For example, if resource tags from the UI are: `app:bar` and the tag from the cost data is `App:bar`, then the two can be matched. +- {{< ui >}}Action when tag `{tag}` exists{{< /ui >}} - Choose what to do if the specified tag (`application` in the example above) already exists: + - {{< ui >}}Don't apply the rule{{< /ui >}} - Skips the rule if the tag already exists, preserving the original value. + - {{< ui >}}Append the tag{{< /ui >}} - Adds the new value to the existing tag without removing the original value. + - {{< ui >}}Replace the tag{{< /ui >}} - Replaces the existing tag value with the new value.
Replacing tags can overwrite existing data. Use this option with caution.
+- {{< ui >}}Apply case-insensitive matching to resource tags{{< /ui >}} - Enables tags defined in the alias tag keys and tags from the cost data to be case insensitive. For example, if resource tags from the UI are: `app:bar` and the tag from the cost data is `App:bar`, then the two can be matched. ### Map multiple tags @@ -90,13 +90,13 @@ For example, if you want to add information about which VPs, organizations, and Similar to [Alias tag keys](#alias-tag-keys), the rule stops executing for each resource after the first match is found. For example, if an `aws_member_account_id` is found, then the rule no longer attempts to find a `subscriptionid`. -Under the **Additional options** section, you have the following options: +Under the {{< ui >}}Additional options{{< /ui >}} section, you have the following options: -- **Action when column exists** - Choose what to do if the specified columns already exist: - - **Don't apply the rule** - Skips the rule if the columns already exist, preserving the original values. - - **Append the column** - Adds the new values to the existing columns without removing the original values. - - **Replace the column** - Replaces the existing column values with the new values.
Replacing columns can overwrite existing data. Use this option with caution.
-- **Apply case-insensitive matching for primary key values** - Enables case-insensitive matching between the primary key value from the reference table and the value of the tag in the cost data where the tag key matches the primary key. For example, if the primary key value pair from the UI is `foo:Bar` and the tag from the cost data is `foo:bar`, then the two can be matched. +- {{< ui >}}Action when column exists{{< /ui >}} - Choose what to do if the specified columns already exist: + - {{< ui >}}Don't apply the rule{{< /ui >}} - Skips the rule if the columns already exist, preserving the original values. + - {{< ui >}}Append the column{{< /ui >}} - Adds the new values to the existing columns without removing the original values. + - {{< ui >}}Replace the column{{< /ui >}} - Replaces the existing column values with the new values.
Replacing columns can overwrite existing data. Use this option with caution.
+- {{< ui >}}Apply case-insensitive matching for primary key values{{< /ui >}} - Enables case-insensitive matching between the primary key value from the reference table and the value of the tag in the cost data where the tag key matches the primary key. For example, if the primary key value pair from the UI is `foo:Bar` and the tag from the cost data is `foo:bar`, then the two can be matched. ## Reserved tags diff --git a/content/en/cloud_cost_management/cost_changes/anomalies.md b/content/en/cloud_cost_management/cost_changes/anomalies.md index aee8e4af26b..a76f6319b0b 100644 --- a/content/en/cloud_cost_management/cost_changes/anomalies.md +++ b/content/en/cloud_cost_management/cost_changes/anomalies.md @@ -20,7 +20,7 @@ Datadog Cloud Cost Management (CCM) continuously monitors your environment to de A typical workflow could be the following: -1. **View** anomalies on the Anomalies tab +1. **View** anomalies on the {{< ui >}}Anomalies{{< /ui >}} tab 2. **Investigate** using Watchdog Explains to understand what's driving the cost changes 3. **Share with engineering teams** who can take action by reviewing details, investigating further, or setting up monitoring 4. **Resolve** anomalies that are expected or not significant @@ -38,9 +38,9 @@ To distinguish between true anomalies and expected fluctuations, Datadog's algor On the [Anomalies tab of the Cloud Cost page in Datadog][1], you can view and filter anomalies: -- **Active**: Anomalies from the last full day of cost data (typically 2-3 days prior). -- **Past**: Anomalies that lasted more than 7 days or are no longer detected as anomalous. Past anomalies can be useful to report on, but are often less urgent and actionable. -- **Resolved**: Anomalies that you've marked as resolved with context. +- {{< ui >}}Active{{< /ui >}}: Anomalies from the last full day of cost data (typically 2-3 days prior). +- {{< ui >}}Past{{< /ui >}}: Anomalies that lasted more than 7 days or are no longer detected as anomalous. 
Past anomalies can be useful to report on, but are often less urgent and actionable. +- {{< ui >}}Resolved{{< /ui >}}: Anomalies that you've marked as resolved with context. Each anomaly card shows: - Service name (`rds`, for example) @@ -69,11 +69,11 @@ where the anomaly happened, reducing manual investigation steps. When hovering o Follow these steps to investigate and resolve anomalies: -1. **Hover** over an anomaly to see anomaly drivers or click **See more** to open the side panel. +1. Hover over an anomaly to see anomaly drivers or click {{< ui >}}See more{{< /ui >}} to open the side panel. {{< img src="cloud_cost/anomalies/anomalies-watchdog.png" alt="Click See More to see side panel showing anomaly details, investigation options, and action buttons" style="width:80;" >}} -1. **Review the details** for services affected, teams involved, environments impacted, resource IDs, or how usage and unit price may be driving the cost anomaly. +1. Review the details for services affected, teams involved, environments impacted, resource IDs, or how usage and unit price may be driving the cost anomaly. 1. **Investigate further**: View the anomaly in Cost Explorer or a Datadog Notebook to further investigate anomalies by using additional dimensions. You can then send the anomaly, Explorer link, or Notebook to the service owners or teams identified by Watchdog Explains. This enables teams to resolve anomalies with context for why the anomaly occurred and whether it's expected. {{< img src="cloud_cost/anomalies/anomalies-take-action.png" alt="Click Take Action to view the anomaly in Cost Explorer or add it to a Notebook" style="width:80;" >}} @@ -86,13 +86,13 @@ As you investigate anomalies, you may find some that are not significant, were a To resolve an anomaly: -1. Click **Resolve Anomaly** to open the resolution popup. +1. Click {{< ui >}}Resolve Anomaly{{< /ui >}} to open the resolution popup. 1. 
Select one of the following resolutions to help improve the algorithm: - - The anomaly amount was too small - - This is an unexpected increase - - This is an expected increase -1. **Add context** about why it is or is not an anomaly. -1. Click **Resolve** to move it to the Resolved tab. + - {{< ui >}}The anomaly amount was too small{{< /ui >}} + - {{< ui >}}This is an unexpected increase{{< /ui >}} + - {{< ui >}}This is an expected increase{{< /ui >}} +1. Add context about why it is or is not an anomaly. +1. Click {{< ui >}}Resolve{{< /ui >}} to move it to the {{< ui >}}Resolved{{< /ui >}} tab. This is an example of how to mark a cost anomaly as significant and explain why it's an anomaly: diff --git a/content/en/cloud_cost_management/cost_changes/real_time_costs.md b/content/en/cloud_cost_management/cost_changes/real_time_costs.md index 7635f09106c..fd9cd641a64 100644 --- a/content/en/cloud_cost_management/cost_changes/real_time_costs.md +++ b/content/en/cloud_cost_management/cost_changes/real_time_costs.md @@ -35,7 +35,7 @@ Real-time costs are currently available in Preview for: ## How to query real-time costs -Real-time costs can be found under the standard "Metrics" source in Metrics Explorer and dashboards, and should be queried using `sum:aws.cost.net.amortized.realtime.estimated{*}.as_count().rollup(sum, 300)`: +Real-time costs can be found under the standard {{< ui >}}Metrics{{< /ui >}} source in Metrics Explorer and dashboards, and should be queried using `sum:aws.cost.net.amortized.realtime.estimated{*}.as_count().rollup(sum, 300)`: - the `sum` or `sum by` aggregation - as `count` (learn more about [rate vs count metrics][1]) - rollup `sum`, minimum of 5 minutes (or 300 seconds in the query above, since real-time costs are updated every 5 minutes) diff --git a/content/en/cloud_cost_management/datadog_costs.md b/content/en/cloud_cost_management/datadog_costs.md index 05c70b689de..1109cfd51eb 100644 --- a/content/en/cloud_cost_management/datadog_costs.md 
+++ b/content/en/cloud_cost_management/datadog_costs.md @@ -17,7 +17,7 @@ further_reading: Daily Datadog costs give you visibility into daily Datadog spending across dashboards, notebooks, [cost monitors][1], Cloud Cost Explorer, [reports][12], [budgets][11], and [anomalies][13], along with your entire organization's cloud provider and [SaaS costs][2]. -You can view daily Datadog costs in [Cloud Cost Management][3](CCM), and access additional Datadog cost capabilities like [Cost Summary][5] and [Cost Chargebacks][6] on the [**Plan & Usage** page][7]. +You can view daily Datadog costs in [Cloud Cost Management][3](CCM), and access additional Datadog cost capabilities like [Cost Summary][5] and [Cost Chargebacks][6] on the [{{< ui >}}Plan & Usage{{< /ui >}} page][7]. There is **no additional charge** for Datadog Costs, and it is available for both CCM and non-CCM customers with a direct contract through Datadog or an External Marketplace drawdown contract. @@ -43,7 +43,7 @@ After Datadog Costs is enabled, users need the following permission to view the ## Enabling Datadog Costs -To activate Datadog Costs, navigate to the [**Plan & Usage** page][7] and click **Get Started** in the modal to "View Datadog Costs in Cloud Cost Management". Alternatively, you can contact your account representative or [Datadog Support][8]. +To activate Datadog Costs, navigate to the [{{< ui >}}Plan & Usage{{< /ui >}} page][7] and click {{< ui >}}Get Started{{< /ui >}} in the modal to "View Datadog Costs in Cloud Cost Management". Alternatively, you can contact your account representative or [Datadog Support][8]. After opting in to Datadog Costs, a confirmation message appears and cost data starts populating in the CCM Explorer within 2-3 hours. 
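The real-time costs guidance earlier in this batch recommends a sum aggregation with `.as_count()` and a `rollup(sum, 300)`. The arithmetic behind that recommendation is simply that each point is an estimated cost for one 5-minute bucket, so summing a day's buckets reproduces the daily spend. A sketch with hypothetical numbers:

```python
# Why real-time cost points need sum aggregation and rollup(sum, 300):
# each point covers one 5-minute bucket, so the daily total is the plain
# sum over the day's buckets. The $0.25 per bucket is hypothetical.

FIVE_MIN_BUCKETS_PER_DAY = 24 * 60 // 5  # 288 buckets per day

points = [0.25] * FIVE_MIN_BUCKETS_PER_DAY

daily_total = sum(points)
print(FIVE_MIN_BUCKETS_PER_DAY, daily_total)  # 288 72.0
```

Using an average rollup instead would report the typical per-bucket estimate rather than total spend, which is why the sum rollup is the recommended form.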
@@ -53,11 +53,11 @@ Daily Datadog cost data is available to sub-organizations with the [Sub Organiza ## Visualize and break down costs -Costs in Cloud Cost Management may not match the estimated month-to-date (MTD) costs on the [**Plan & Usage** page][7] because Plan & Usage costs are cumulative and prorated monthly. Only Cloud Cost Management provides daily cost calculations. +Costs in Cloud Cost Management may not match the estimated month-to-date (MTD) costs on the [{{< ui >}}Plan & Usage{{< /ui >}} page][7] because Plan & Usage costs are cumulative and prorated monthly. Only Cloud Cost Management provides daily cost calculations. -Datadog cost data has an expected data delay of 48 hours and is available for the past 15 months. Prior month Datadog charges are finalized around the 16th of each month. Before costs are finalized, the **Usage Charges Only: Enabled** toggle represents estimated usage-based charges only. When charges are finalized, the **Usage Charges Only: Disabled** toggle also includes any adjustment records. These adjustments are applied to the prior month and reflect the finalized cost amounts. +Datadog cost data has an expected data delay of 48 hours and is available for the past 15 months. Prior month Datadog charges are finalized around the 16th of each month. Before costs are finalized, the {{< ui >}}Usage Charges Only: Enabled{{< /ui >}} toggle represents estimated usage-based charges only. When charges are finalized, the {{< ui >}}Usage Charges Only: Disabled{{< /ui >}} toggle also includes any adjustment records. These adjustments are applied to the prior month and reflect the finalized cost amounts. -Cost data can be used in dashboards and notebooks under the **Cloud Cost** data source. Create dashboards to monitor daily costs, identify trends, and optimize resource usage. +Cost data can be used in dashboards and notebooks under the {{< ui >}}Cloud Cost{{< /ui >}} data source. 
Create dashboards to monitor daily costs, identify trends, and optimize resource usage. {{< img src="cloud_cost/datadog_costs/dashboard.png" alt="Datadog costs as an option for the Cloud Cost data source in a dashboard" style="width:100%;" >}} diff --git a/content/en/cloud_cost_management/planning/budgets.md b/content/en/cloud_cost_management/planning/budgets.md index 15bc39bc015..bbaa68d5d30 100644 --- a/content/en/cloud_cost_management/planning/budgets.md +++ b/content/en/cloud_cost_management/planning/budgets.md @@ -17,8 +17,8 @@ Set up budgets and enable engineering teams to visualize how they are tracking a You can create two types of budgets: -- **Basic**: A flat, single-level budget for tracking your cloud costs. -- **Hierarchical**: A two-level, parent-child budget for tracking costs in a way that mirrors your organization's structure. For example, if your organization has departments made up of many teams, you can budget on the department (parent) and team (child) levels and track budget health at both levels. In addition, this option allows you to create a single budget instead of needing to create multiple budgets. +- {{< ui >}}Basic{{< /ui >}}: A flat, single-level budget for tracking your cloud costs. +- {{< ui >}}Hierarchical{{< /ui >}}: A two-level, parent-child budget for tracking costs in a way that mirrors your organization's structure. For example, if your organization has departments made up of many teams, you can budget on the department (parent) and team (child) levels and track budget health at both levels. In addition, this option allows you to create a single budget instead of needing to create multiple budgets. ## Set up budgets @@ -28,23 +28,23 @@ You can create two types of budgets: To create a basic budget: 1. Navigate to [**Cloud Cost > Plan > Budgets**][1], or create a budget through the [API][2] or [Terraform][3]. -1. Click the **Create a New Budget** button. -1. Click **Basic** to create a basic budget. -1. 
You can either add budget information by **uploading a CSV** using the provided template in the UI, or **enter your budget directly** using the details below. +1. Click the {{< ui >}}Create a New Budget{{< /ui >}} button. +1. Click {{< ui >}}Basic{{< /ui >}} to create a basic budget. +1. You can either add budget information by {{< ui >}}uploading a CSV{{< /ui >}} using the provided template in the UI, or {{< ui >}}enter your budget directly{{< /ui >}} using the details below. {{< img src="cloud_cost/budgets/budget-upload-your-csv.mp4" alt="Choose whether to add budget information by uploading a CSV or enter it directly within the UI" video="true">}} - - **Budget Name**: Enter a name for your budget. - - **Start Date**: Enter a start date for the budget (this can be a past month). Budgets are set at the month level. - - **End Date**: Set an end date for the budget (can be in the future). - - **Provider(s)**: Budget on any combination of AWS, Azure, Google Cloud, Oracle Cloud, or other SaaS (including Datadog or custom costs). - - **Dimension to budget by**: Specify a dimension to track the budget, along with its corresponding values. For example, if you wanted to create budgets for the top 4 teams, you would select "team" in the first dropdown, and the specific teams in the second dropdown. + - {{< ui >}}Budget Name{{< /ui >}}: Enter a name for your budget. + - {{< ui >}}Start Date{{< /ui >}}: Enter a start date for the budget (this can be a past month). Budgets are set at the month level. + - {{< ui >}}End Date{{< /ui >}}: Set an end date for the budget (can be in the future). + - {{< ui >}}Provider(s){{< /ui >}}: Budget on any combination of AWS, Azure, Google Cloud, Oracle Cloud, or other SaaS (including Datadog or custom costs). + - {{< ui >}}Dimension to budget by{{< /ui >}}: Specify a dimension to track the budget, along with its corresponding values. 
For example, if you wanted to create budgets for the top 4 teams, you would select "team" in the first dropdown, and the specific teams in the second dropdown. -1. Fill in all budgets in the table. To apply the same values from the first month to the rest of the months, enter a value in the first column of a row and click the **copy** button. +1. Fill in all budgets in the table. To apply the same values from the first month to the rest of the months, enter a value in the first column of a row and click the {{< ui >}}copy{{< /ui >}} button. {{< img src="cloud_cost/budgets/budget-copy-paste.png" alt="Budget Creation View: fill in budget details." style="width:100%;" >}} -1. Click **Save**. +1. Click {{< ui >}}Save{{< /ui >}}. [1]: https://app.datadoghq.com/cost/plan/budgets [2]: /api/latest/cloud-cost-management/#create-or-update-a-budget @@ -57,23 +57,23 @@ To create a basic budget: To create a hierarchical budget: 1. Navigate to [**Cloud Cost > Plan > Budgets**][1], or create a budget through the [API][2]. -1. Click the **Create a New Budget** button. -1. Click **Hierarchical** to create a hierarchical budget. +1. Click the {{< ui >}}Create a New Budget{{< /ui >}} button. +1. Click {{< ui >}}Hierarchical{{< /ui >}} to create a hierarchical budget. 1. Enter your budget information using the details below. - - **Budget Name**: Enter a name for your budget. - - **Start Date**: Enter a start date for the budget (this can be a past month). Budgets are set at the month level. - - **End Date**: Set an end date for the budget (can be in the future). - - **Scope to Provider(s)**: Budget on any combination of AWS, Azure, Google Cloud, Oracle Cloud, or other SaaS (including Datadog or custom costs). - - **Parent Level**: Select the parent-level tag. - - **Child Level**: Select child-level tag. - - **Dimension to budget by**: Specify a dimension to track the budget, along with its corresponding values. 
For example, if you wanted to create budgets for the top 4 teams, you would select "team" in the first dropdown, and the specific teams in the second dropdown. + - {{< ui >}}Budget Name{{< /ui >}}: Enter a name for your budget. + - {{< ui >}}Start Date{{< /ui >}}: Enter a start date for the budget (this can be a past month). Budgets are set at the month level. + - {{< ui >}}End Date{{< /ui >}}: Set an end date for the budget (can be in the future). + - {{< ui >}}Scope to Provider(s){{< /ui >}}: Budget on any combination of AWS, Azure, Google Cloud, Oracle Cloud, or other SaaS (including Datadog or custom costs). + - {{< ui >}}Parent Level{{< /ui >}}: Select the parent-level tag. + - {{< ui >}}Child Level{{< /ui >}}: Select child-level tag. + - {{< ui >}}Dimension to budget by{{< /ui >}}: Specify a dimension to track the budget, along with its corresponding values. For example, if you wanted to create budgets for the top 4 teams, you would select "team" in the first dropdown, and the specific teams in the second dropdown. -1. Fill in all budgets in the table. To apply the same values from the first month to the rest of the months, enter a value in the first column of a row and click the **copy** button. +1. Fill in all budgets in the table. To apply the same values from the first month to the rest of the months, enter a value in the first column of a row and click the {{< ui >}}copy{{< /ui >}} button. {{< img src="cloud_cost/budgets/budget-copy-paste.png" alt="Budget Creation View: fill in budget details." style="width:100%;" >}} -1. Click **Save**. +1. Click {{< ui >}}Save{{< /ui >}}. [1]: https://app.datadoghq.com/cost/plan/budgets [2]: /api/latest/cloud-cost-management/#create-or-update-a-budget @@ -83,28 +83,28 @@ To create a hierarchical budget: ## View budget status The [Budgets page][1] lists all of your organization's budgets, highlighting the budget creator, any budgets that have gone over, -and other relevant details. 
Click on **View Performance** to investigate the budget, and understand what might be causing you to go over budget.
+and other relevant details. Click {{< ui >}}View Performance{{< /ui >}} to investigate the budget and understand what might be causing you to go over budget.

 {{< img src="cloud_cost/budgets/budget-list-1.png" alt="List all budgets">}}

-From a **View Performance** page of an individual budget, you can toggle the view option from the top left:
+From the {{< ui >}}View Performance{{< /ui >}} page of an individual budget, you can toggle the view option from the top left:
You cannot view budget versus actuals more than 15 months in the past, because cost metrics are retained for 15 months.
-- You can view the budget status for the **current month**:
+- You can view the budget status for the {{< ui >}}current month{{< /ui >}}:

 {{< img src="cloud_cost/budgets/budget-status-month-2.png" alt="Budget Status View: view current month">}}

-- Or you can view the budget status for the **entire duration (all)**:
+- Or you can view the budget status for the {{< ui >}}entire duration (all){{< /ui >}}:

 {{< img src="cloud_cost/budgets/budget-status-all-2.png" alt="Budget Status View: view total budget">}}

 To investigate budgets:

-1. From the individual budget page, filter budgets using the dropdown at the top, or "Apply filter" in the table to investigate the dimensions that are over budget.
+1. From the individual budget page, filter budgets using the dropdown at the top, or {{< ui >}}Apply filter{{< /ui >}} in the table to investigate the dimensions that are over budget.

 {{< img src="cloud_cost/budgets/budget-investigate-3.png" alt="Use the dropdown filter or Apply Filter option in the table to investigate over-budget dimensions.">}}

-2. Click **Copy Link** to share the budget with others to help understand why budgets are going over. Or, share budgets with finance so that they can understand how you're tracking against budgets.
+2. Click {{< ui >}}Copy Link{{< /ui >}} to share the budget with others to help them understand why budgets are going over. Or, share budgets with finance so that they can see how you're tracking against budgets.

 ## Modify or delete a budget

 To modify a budget, click the edit icon on the Budgets page.
@@ -119,11 +119,11 @@ To delete a budget, click the trash icon on the Budgets page.

 You can add a budget to dashboards in two ways:

-- Create a budget report and click **Share > Save to dashboard**.
+- Create a budget report and click {{< ui >}}Share{{< /ui >}} > {{< ui >}}Save to dashboard{{< /ui >}}.
 {{< img src="cloud_cost/budgets/budget-share-from-dashboard.png" alt="Click Share and Save to dashboard to add a budget report to a dashboard" style="width:100%;">}}

-- From a dashboard, add the **Budget Summary** widget.
+- From a dashboard, add the {{< ui >}}Budget Summary{{< /ui >}} widget.

 {{< img src="cloud_cost/budgets/budgets-widgets.png" alt="Search and add the Budget Summary widget from any dashboard" style="width:100%;">}}

@@ -133,9 +133,9 @@ Learn how to [create a budget-based monitor][2].

 ## View forecasts in budgets

-Budget cards automatically display forecast information when available, showing projected costs for each budget period. If forecasted costs are projected to exceed your budget, the budget status indicates **Projected Over** to help you take action before going over budget.
+Budget cards automatically display forecast information when available, showing projected costs for each budget period. If costs are projected to exceed your budget, the budget status indicates {{< ui >}}Projected Over{{< /ui >}} to help you take action before going over budget.

-To view detailed forecast information in a budget, click **View Performance** and toggle **Show Forecast** to visualize predicted costs alongside actual spending.
+To view detailed forecast information in a budget, click {{< ui >}}View Performance{{< /ui >}} and toggle {{< ui >}}Show Forecast{{< /ui >}} to visualize predicted costs alongside actual spending.

 Learn more about how [forecasting][3] works and data requirements.
diff --git a/content/en/cloud_cost_management/planning/commitment_programs.md b/content/en/cloud_cost_management/planning/commitment_programs.md index 6ee634d3d1e..08e2528847d 100644 --- a/content/en/cloud_cost_management/planning/commitment_programs.md +++ b/content/en/cloud_cost_management/planning/commitment_programs.md @@ -41,9 +41,9 @@ Review these Key Performance Indicators (KPIs) for your cloud providers and serv {{< img src="cloud_cost/planning/commitments-inventory.png" alt="Commitments Overview dashboard showing key savings metrics and a bar chart comparing commitment costs to equivalent on-demand costs over time." style="width:100%;" >}} -- **Effective Savings Rate (ESR)**: Percentage of cost savings achieved by your discount programs compared to on-demand prices, factoring in both utilized and underutilized commitments. +- {{< ui >}}Effective Savings Rate (ESR){{< /ui >}}: Percentage of cost savings achieved by your discount programs compared to on-demand prices, factoring in both utilized and underutilized commitments. - _Example: Your RIs may offer a 62% discount, but if your ESR is only 45%, underutilized commitments are reducing your actual savings._ -- **Realized Savings**: Total dollar amount saved by using commitment programs versus on-demand rates. +- {{< ui >}}Realized Savings{{< /ui >}}: Total dollar amount saved by using commitment programs versus on-demand rates. - _Example: You spent $10,000 on cloud services last month, but would have spent $14,000 at on-demand rates, so your absolute savings is $4,000._ ## On-demand hot-spots @@ -52,9 +52,9 @@ On-demand hot-spots highlight areas with high on-demand costs, which may indicat {{< img src="cloud_cost/planning/commitments-on-demand-2.png" alt="On-Demand Hot-Spots table for AWS RDS showing region, instance family, DB engine, coverage percentage, and on-demand cost." style="width:100%;" >}} -Use the **Cost** and **Hours** tabs to toggle between on-demand spend in dollars or usage in hours. 
Use the available filters to narrow results—filters vary based on the selected product.
+Use the {{< ui >}}Cost{{< /ui >}} and {{< ui >}}Hours{{< /ui >}} tabs to toggle between on-demand spend in dollars and usage in hours. Use the available filters to narrow results; filters vary based on the selected product.

-The table columns correspond to the filters for the selected product, showing the dimensions that characterize the on-demand usage (such as region, instance family, or database engine), along with **Coverage** (percentage of usage covered by commitments) and **On-Demand Cost** (sorted in descending order to surface the highest-spend hot-spots first).
+The table columns correspond to the filters for the selected product, showing the dimensions that characterize the on-demand usage (such as region, instance family, or database engine), along with {{< ui >}}Coverage{{< /ui >}} (percentage of usage covered by commitments) and {{< ui >}}On-Demand Cost{{< /ui >}} (sorted in descending order to surface the highest-spend hot-spots first).

 ## Commitments inventory

@@ -62,13 +62,13 @@ Commitments Inventory provides a detailed view of commitments active during the
 {{< img src="cloud_cost/planning/commitments-inventory-1.png" alt="Commitments Inventory section showing the Savings Plans tab with a utilization chart and a table of EC2 savings plan commitments." style="width:100%;" >}}

-Use the **Savings Plans** and **Reserved Instances** tabs to switch between commitment types. Each tab shows:
+Use the {{< ui >}}Savings Plans{{< /ui >}} and {{< ui >}}Reserved Instances{{< /ui >}} tabs to switch between commitment types. Each tab shows:

-- **Utilization**: Percentage of the commitment type being used during the selected period.
-- **Unused spend**: Total spend on unused commitments.
-- **Daily chart**: Tracks used and unused commitment spend alongside the utilization rate over time.
+- {{< ui >}}Utilization{{< /ui >}}: Percentage of the commitment type being used during the selected period. +- {{< ui >}}Unused spend{{< /ui >}}: Total spend on unused commitments. +- {{< ui >}}Daily chart{{< /ui >}}: Tracks used and unused commitment spend alongside the utilization rate over time. -Use the **Only show Expiring** checkbox to filter the table to commitments nearing their end date. +Use the {{< ui >}}Only show Expiring{{< /ui >}} checkbox to filter the table to commitments nearing their end date. The table lists your active commitments. Columns vary depending on the product and commitment type, but common columns include: @@ -82,7 +82,7 @@ The table lists your active commitments. Columns vary depending on the product a | End Date | Date the commitment expires. | | Utilization | Percentage of the commitment used during the selected period. | -Use the **Columns** button to show or hide additional columns. +Use the {{< ui >}}Columns{{< /ui >}} button to show or hide additional columns. ## Least used savings plans @@ -90,15 +90,15 @@ Least Used Savings Plans helps you identify which savings plans are generating t {{< img src="cloud_cost/planning/commitment-programs-least-used-savings-plans-1.png" alt="Least Used Savings Plans section showing a bar chart of daily average unused savings plan spend by day of week, a table of the most wasteful savings plans with waste amount, utilization, and ARN, and a heat map of hourly unused committed spend percentage by day of week." style="width:100%;" >}} -**Daily average unused Savings Plans**: A bar chart showing the average daily cost of unused savings plan spend for each day of the week. Use this to spot patterns, such as higher waste on weekends when workloads may be lower. +{{< ui >}}Daily average unused Savings Plans{{< /ui >}}: A bar chart showing the average daily cost of unused savings plan spend for each day of the week. 
Use this to spot patterns, such as higher waste on weekends when workloads may be lower. -**Savings Plans with most waste**: A table listing underutilized savings plans, sorted by total waste. Columns include: +{{< ui >}}Savings Plans with most waste{{< /ui >}}: A table listing underutilized savings plans, sorted by total waste. Columns include: -- **Waste**: Total dollar amount of unused committed spend during the selected period. -- **Utilization**: Percentage of the savings plan being used, shown as a percentage and progress bar. -- **Savings Plan ARN**: Unique identifier for the savings plan. +- {{< ui >}}Waste{{< /ui >}}: Total dollar amount of unused committed spend during the selected period. +- {{< ui >}}Utilization{{< /ui >}}: Percentage of the savings plan being used, shown as a percentage and progress bar. +- {{< ui >}}Savings Plan ARN{{< /ui >}}: Unique identifier for the savings plan. -**Hourly unused committed spend percentage**: A heat map showing the percentage of committed spend that went unused, broken down by hour (UTC) and day of week. Darker cells indicate higher unused percentages, making it possible to identify specific time windows where commitments are consistently underused. +{{< ui >}}Hourly unused committed spend percentage{{< /ui >}}: A heat map showing the percentage of committed spend that went unused, broken down by hour (UTC) and day of week. Darker cells indicate higher unused percentages, making it possible to identify specific time windows where commitments are consistently underused. ## Example use cases @@ -107,8 +107,8 @@ Least Used Savings Plans helps you identify which savings plans are generating t **Scenario**: Your Effective Savings Rate (ESR) is lower than expected, even though your coverage is high. **How to use commitment programs**: -1. Go to the **Commitments Overview** and check the utilization KPI. -2. In the **Commitments inventory**, sort by utilization in ascending order to identify the least-used commitments. 
For savings plans, also check the **Savings Plans with most waste** table in the [Least used savings plans](#least-used-savings-plans) section. +1. Go to the {{< ui >}}Commitments Overview{{< /ui >}} and check the utilization KPI. +2. In the {{< ui >}}Commitments inventory{{< /ui >}}, sort by utilization in ascending order to identify the least-used commitments. For savings plans, also check the {{< ui >}}Savings Plans with most waste{{< /ui >}} table in the [Least used savings plans](#least-used-savings-plans) section. 3. Reallocate workloads to use these commitments more effectively, or consider modifying or selling unused commitments if your cloud provider allows it. ### Plan for expiring commitments @@ -116,7 +116,7 @@ Least Used Savings Plans helps you identify which savings plans are generating t **Scenario**: Several Reserved Instances are expiring soon, and you want to avoid unexpected on-demand charges. **How to use commitment programs**: -1. In the **Commitments Explorer**, review the list of commitments and their expiration dates. +1. In the {{< ui >}}Commitments Explorer{{< /ui >}}, review the list of commitments and their expiration dates. 2. Use the filters to focus on soon-to-expire commitments. 3. Plan renewals or replacements in advance to maintain coverage and maximize savings. @@ -125,7 +125,7 @@ Least Used Savings Plans helps you identify which savings plans are generating t **Scenario**: Your cloud costs show consistently high on-demand usage for a particular service or region. **How to use commitment programs**: -1. Use **On-demand hot-spots** to identify which services, regions, or accounts have significant and steady on-demand costs. +1. Use {{< ui >}}On-demand hot-spots{{< /ui >}} to identify which services, regions, or accounts have significant and steady on-demand costs. 2. Analyze usage patterns to confirm they are predictable. 3. Purchase new commitments to cover the consistent usage and reduce costs. 
@@ -134,7 +134,7 @@ Least Used Savings Plans helps you identify which savings plans are generating t **Scenario**: You have underutilized savings plans and high on-demand costs running in parallel. **How to use commitment programs**: -1. Use the **Least used savings plans** section to identify recurring patterns of low utilization—for example, consistently unused capacity on certain days or hours. +1. Use the {{< ui >}}Least used savings plans{{< /ui >}} section to identify recurring patterns of low utilization—for example, consistently unused capacity on certain days or hours. 2. Identify on-demand workloads that could be scheduled during those low-utilization windows to take advantage of unused savings plan coverage. 3. Shift or reschedule those workloads to reduce on-demand spend and improve savings plan utilization. diff --git a/content/en/cloud_cost_management/planning/forecasting.md b/content/en/cloud_cost_management/planning/forecasting.md index ced999468e1..bcbe7ae6dc6 100644 --- a/content/en/cloud_cost_management/planning/forecasting.md +++ b/content/en/cloud_cost_management/planning/forecasting.md @@ -40,8 +40,8 @@ Cloud Cost Management uses forecasting algorithms to generate cost to generate c You can generate forecasts for various time horizons and rollup intervals to match your planning needs: -- **Forecast periods**: Predict costs for the next billing period, current month, current year, or a custom date range based on your historical spending data. -- **Rollup intervals**: View forecasts at daily or monthly intervals depending on your analysis requirements. +- {{< ui >}}Forecast periods{{< /ui >}}: Predict costs for the next billing period, current month, current year, or a custom date range based on your historical spending data. +- {{< ui >}}Rollup intervals{{< /ui >}}: View forecasts at daily or monthly intervals depending on your analysis requirements. 
### Data requirements @@ -54,17 +54,17 @@ To generate accurate forecasts, CCM requires: Navigate to [**Cloud Cost > Analyze > Reports**][1] in Datadog to enable forecasts in your budget reports. -1. Create a report or open an existing **Budget** report. -2. In the left panel, toggle **Show forecast** to enable forecasting. -3. Select the forecast period from the **Until end of** dropdown (next period, current month, current year, or a custom range). +1. Create a report or open an existing {{< ui >}}Budget{{< /ui >}} report. +2. In the left panel, toggle {{< ui >}}Show forecast{{< /ui >}} to enable forecasting. +3. Select the forecast period from the {{< ui >}}Until end of{{< /ui >}} dropdown (next period, current month, current year, or a custom range). {{< img src="cloud_cost/forecasts/budget_report_forecast-2.png" alt="Budget report showing the forecast toggle in the left panel and forecasted costs displayed with historical data" style="width:100%;" >}} The report displays: -- **Forecast toggle and controls**: Located in the left panel to enable forecasting and select the time period. -- **Historical costs**: Your actual spending shown in solid colors. -- **Forecasted costs**: Predicted costs shown with a hatched pattern. -- **Forecast summary card**: Shows the total forecasted cost for the selected period. +- {{< ui >}}Forecast toggle and controls{{< /ui >}}: Located in the left panel to enable forecasting and select the time period. +- {{< ui >}}Historical costs{{< /ui >}}: Your actual spending shown in solid colors. +- {{< ui >}}Forecasted costs{{< /ui >}}: Predicted costs shown with a hatched pattern. +- {{< ui >}}Forecast summary card{{< /ui >}}: Shows the total forecasted cost for the selected period. ## View forecasts in budgets @@ -72,18 +72,18 @@ Navigate to [**Cloud Cost > Plan > Budgets**][2] in Datadog to view forecasts in Budget cards automatically display forecast information when available, showing projected costs for each budget period. 
-If forecasted costs are projected to exceed your budget, the budget status indicates **Projected Over** to help you take action before going over budget.
+If costs are projected to exceed your budget, the budget status indicates {{< ui >}}Projected Over{{< /ui >}} to help you take action before going over budget.

 {{< img src="cloud_cost/forecasts/budget-list-with-forecast.png" alt="Budget list showing forecast values on budget cards" style="width:100%;" >}}

 To view detailed forecast information:

-1. From the Budgets page, click **View Performance** on any budget to open the detailed budget view.
-2. In the budget performance view, toggle **Show Forecast** to enable forecasting.
+1. From the Budgets page, click {{< ui >}}View Performance{{< /ui >}} on any budget to open the detailed budget view.
+2. In the budget performance view, toggle {{< ui >}}Show Forecast{{< /ui >}} to enable forecasting.
 3. The budget performance chart displays:
- - **Actual costs**: Your current spending shown in solid colors.
- - **Forecasted costs**: Predicted costs shown with a hatched pattern extending beyond your actual costs.
- - **Forecasted Past**: A vertical line indicating where the forecast begins.
+ - {{< ui >}}Actual costs{{< /ui >}}: Your current spending shown in solid colors.
+ - {{< ui >}}Forecasted costs{{< /ui >}}: Predicted costs shown with a hatched pattern extending beyond your actual costs.
+ - {{< ui >}}Forecasted Past{{< /ui >}}: A vertical line indicating where the forecast begins.
{{< img src="cloud_cost/forecasts/updated_budget_status_forecast-1.png" alt="Budget performance view showing the forecast toggle and forecasted costs displayed with a hatched pattern" style="width:100%;" >}} diff --git a/content/en/cloud_cost_management/recommendations/_index.md b/content/en/cloud_cost_management/recommendations/_index.md index 326047e662a..4d1e9e64c1d 100644 --- a/content/en/cloud_cost_management/recommendations/_index.md +++ b/content/en/cloud_cost_management/recommendations/_index.md @@ -692,9 +692,9 @@ For each cloud account that you would like to receive recommendations for: 1. Configure [Cloud Cost Management][2] to send billing data to Datadog. - For Azure, this requires using the App Registration method to collect billing data. 1. Enable [resource collection][3] for recommendations. - - For AWS, enable resource collection in the **Resource Collection** tab on the [AWS integration tile][4]. + - For AWS, enable resource collection in the {{< ui >}}Resource Collection{{< /ui >}} tab on the [AWS integration tile][4]. - For Azure, enable resource collection with the appropriate integration. If your organization is on the Datadog US3 site, the [Azure Native Integration][9] enables this automatically through metrics collection. For all other sites, enabling resource collection within the [Azure integration tile][8] is required. - - For GCP, enable resource collection in the **Resource Collection** tab on the [Google Cloud Platform integration tile][10]. + - For GCP, enable resource collection in the {{< ui >}}Resource Collection{{< /ui >}} tab on the [Google Cloud Platform integration tile][10]. 1. Install the [Datadog Agent][5] (required for Downsize recommendations). **Note**: Cloud Cost Recommendations supports billing in customers' non-USD currencies. 
@@ -705,20 +705,20 @@ Assign a status to each recommendation to track cost optimization progress acros | Status | Description | |--------|-------------| -| Open | (Default) The recommendation has not been triaged. | -| In Progress | Work is underway to address this recommendation. | -| Completed | The recommended action has been taken or is no longer relevant. | -| Dismissed | No work is planned for this recommendation over the time frame specified when dismissing. | +| {{< ui >}}Open{{< /ui >}} | (Default) The recommendation has not been triaged. | +| {{< ui >}}In Progress{{< /ui >}} | Work is underway to address this recommendation. | +| {{< ui >}}Completed{{< /ui >}} | The recommended action has been taken or is no longer relevant. | +| {{< ui >}}Dismissed{{< /ui >}} | No work is planned for this recommendation over the time frame specified when dismissing. | ### Change a recommendation status -1. Click a recommendation in the [**Cloud Cost Recommendations**][1] list to open the side panel. +1. Click a recommendation in the [{{< ui >}}Cloud Cost Recommendations{{< /ui >}}][1] list to open the side panel. 1. Use the status dropdown to select a new status. ## Recommendation action-taking You can act on recommendations to save money and optimize costs. Cloud Cost Recommendations support Jira, 1-click Workflow Automation, and Datadog Case Management. Unused EBS and GP2 EBS volume recommendations also support 1-click Workflow Automation. See the following details for each action-taking option: -- **Jira**: Create Jira issues directly from the recommendation side panel or by selecting multiple recommendations in the "Active Recommendations" list and clicking "Create Jira issue." Created issues are tagged and link back to the recommendation in Datadog. 
+- **Jira**: Create Jira issues directly from the recommendation side panel or by selecting multiple recommendations in the {{< ui >}}Active Recommendations{{< /ui >}} list and clicking {{< ui >}}Create Jira issue{{< /ui >}}. Created issues are tagged and link back to the recommendation in Datadog. To filter recommendations by Jira status, use the following query options: - `@jira_issues.issue_key:*` - Show only recommendations with a Jira issue @@ -731,9 +731,9 @@ You can act on recommendations to save money and optimize costs. Cloud Cost Reco Bits AI Dev Agent is in Preview. To sign up, click Request Access and complete the form. {{< /callout >}} -- **1-click Workflow Automation actions**: Actions are available for a limited set of recommendations, allowing users to execute suggested actions, such as clicking "Delete EBS Volume", directly within Cloud Cost Management. -- **Datadog Case Management**: Users can go to the recommendation side panel and click "Create Case" to generate a case to manage and take action on recommendations. -- **Dismiss**: Use "Dismiss" in the recommendation side panel to hide a recommendation for a chosen time frame and provide a reason. Dismissed recommendations move to the "Dismissed" tab. +- **1-click Workflow Automation actions**: Actions are available for a limited set of recommendations, allowing users to execute suggested actions, such as clicking {{< ui >}}Delete EBS Volume{{< /ui >}}, directly within Cloud Cost Management. +- **Datadog Case Management**: Users can go to the recommendation side panel and click {{< ui >}}Create Case{{< /ui >}} to generate a case to manage and take action on recommendations. +- **Dismiss**: Use {{< ui >}}Dismiss{{< /ui >}} in the recommendation side panel to hide a recommendation for a chosen time frame and provide a reason. Dismissed recommendations move to the {{< ui >}}Dismissed{{< /ui >}} tab. 
## Recommendation and resource descriptions diff --git a/content/en/cloud_cost_management/recommendations/custom_recommendations.md b/content/en/cloud_cost_management/recommendations/custom_recommendations.md index 486531346a1..9873916d63d 100644 --- a/content/en/cloud_cost_management/recommendations/custom_recommendations.md +++ b/content/en/cloud_cost_management/recommendations/custom_recommendations.md @@ -28,11 +28,11 @@ With custom recommendations, you can:
Customizations take effect within 24 hours, the next time recommendations are generated.
-To access custom recommendations, go to [**Cloud Cost > Settings > Configure Recommendations**][2]. +To access custom recommendations, go to [{{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Configure Recommendations{{< /ui >}}][2]. On this page, you can see a list of out-of-the-box recommendations that can be customized. -Click a recommendation, then click **Create New Configuration** to get started. +Click a recommendation, then click {{< ui >}}Create New Configuration{{< /ui >}} to get started. ### Step 1: Set custom metric thresholds @@ -46,20 +46,20 @@ Adjust the evaluation time frame to match your business's seasonality or operati ### Step 3: Apply this rule to all resources or add a filter -You can select whether to apply the rule to **All Resources** or **Some Resources** in your environment. +You can select whether to apply the rule to {{< ui >}}All Resources{{< /ui >}} or {{< ui >}}Some Resources{{< /ui >}} in your environment. -If you select **Some Resources**, you can filter resources by tag (for example, `team`, `service`, or `environment`) to target specific parts of your business. +If you select {{< ui >}}Some Resources{{< /ui >}}, you can filter resources by tag (for example, `team`, `service`, or `environment`) to target specific parts of your business. ### Step 4: (optional) Label and document the customization Use this step to add a reason and unique name to your configuration so you can audit and reference this recommendation later. -- **Reason:** Provide a reason for your customization to support future audits and maintain a clear record of changes. -- **Name:** Enter a descriptive name for the configuration to identify and locate this recommendation in the future. +- {{< ui >}}Reason{{< /ui >}}: Provide a reason for your customization to support future audits and maintain a clear record of changes. 
+- {{< ui >}}Name{{< /ui >}}: Enter a descriptive name for the configuration to identify and locate this recommendation in the future.

 ### Step 5: Save the recommendation

-Click **Save** to save your customized recommendation. Recommendations that have already been customized **once** are labeled **Configured**.
+Click {{< ui >}}Save{{< /ui >}} to save your customized recommendation. Recommendations that have been customized at least once are labeled {{< ui >}}Configured{{< /ui >}}.

 ## Updating custom recommendations

@@ -67,11 +67,11 @@ You can update a custom recommendation at any time to reflect changes in your bu
 To update a custom recommendation:

-1. Navigate to [**Cloud Cost > Settings > Configure Recommendations**][2].
+1. Navigate to [{{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Configure Recommendations{{< /ui >}}][2].
 2. Go to the customized recommendation.
 3. Modify the parameters as needed.
-4. Click **Save**.
-5. In the confirmation popup, click **Yes, save custom parameters** to apply your changes.
+4. Click {{< ui >}}Save{{< /ui >}}.
+5. In the confirmation popup, click {{< ui >}}Yes, save custom parameters{{< /ui >}} to apply your changes.

 ## Further reading
 {{< partial name="whats-next/whats-next.html" >}}
diff --git a/content/en/cloud_cost_management/reporting/_index.md b/content/en/cloud_cost_management/reporting/_index.md
index 9882b697f0d..43f161ee875 100644
--- a/content/en/cloud_cost_management/reporting/_index.md
+++ b/content/en/cloud_cost_management/reporting/_index.md
@@ -45,16 +45,16 @@ Use **[Cost Explorer][13]** for flexible investigation and **Cost Reports** for
 ## Create a CCM report

 1. Go to [**Cloud Cost > Analyze > Reports**][5] in Datadog.
-1. Click **New Report** to start from scratch, or select a template from the gallery to accelerate your workflow.
+1. Click {{< ui >}}New Report{{< /ui >}} to start from scratch, or select a template from the gallery to accelerate your workflow.
{{< img src="cloud_cost/cost_reports/create-new-report-1.png" alt="Create a new report or from a template" style="width:100%;" >}} **Available Templates:** - - **AWS Spend by Service Name**: Understand your EC2, S3, and Lambda costs. - - **Azure Spend by Service Name**: Break down costs by Azure services like Virtual Machines and Azure Monitor. - - **GCP Spend by Service Name**: Break down costs by GCP services like Compute Engine, BigQuery, and Kubernetes Engine. - - **Datadog Spend by Product**: Break down costs by Datadog products like Infrastructure Hosts, Custom Metrics, and Indexed Logs. - - **Spend by Provider**: Compare costs across AWS, Azure, Google Cloud, Oracle Cloud, and more. + - {{< ui >}}AWS Spend by Service Name{{< /ui >}}: Understand your EC2, S3, and Lambda costs. + - {{< ui >}}Azure Spend by Service Name{{< /ui >}}: Break down costs by Azure services like Virtual Machines and Azure Monitor. + - {{< ui >}}GCP Spend by Service Name{{< /ui >}}: Break down costs by GCP services like Compute Engine, BigQuery, and Kubernetes Engine. + - {{< ui >}}Datadog Spend by Product{{< /ui >}}: Break down costs by Datadog products like Infrastructure Hosts, Custom Metrics, and Indexed Logs. + - {{< ui >}}Spend by Provider{{< /ui >}}: Compare costs across AWS, Azure, Google Cloud, Oracle Cloud, and more. ## Customizing your report @@ -64,8 +64,8 @@ Use **[Cost Explorer][13]** for flexible investigation and **Cost Reports** for Select the type of report you want to build: -- **Cost**: Understand where your money is being spent across services, regions, teams, and so on. -- **Budget**: Track spending against predefined budget targets and forecast future costs. +- {{< ui >}}Cost{{< /ui >}}: Understand where your money is being spent across services, regions, teams, and so on. +- {{< ui >}}Budget{{< /ui >}}: Track spending against predefined budget targets and forecast future costs. 
### Apply filters @@ -85,23 +85,23 @@ Use filters to include only the specific costs you want to allocate, such as by ### Change how you see your data - Select a **visualization option**: - - **Bar chart**: Compare costs across multiple categories side by side, so you can identify top cost drivers. - - **Pie chart**: Shows the percentage share of each segment, ideal for understanding the relative proportion of costs among a small number of categories. - - **Treemap**: Displays hierarchical data and the relative size of many categories at once, so you can see both the overall structure and the largest contributors in a single view. + - {{< ui >}}Bar chart{{< /ui >}}: Compare costs across multiple categories side by side, so you can identify top cost drivers. + - {{< ui >}}Pie chart{{< /ui >}}: Shows the percentage share of each segment, ideal for understanding the relative proportion of costs among a small number of categories. + - {{< ui >}}Treemap{{< /ui >}}: Displays hierarchical data and the relative size of many categories at once, so you can see both the overall structure and the largest contributors in a single view. - Change the **table view**: - - **Summary**: A consolidated, overall picture of your costs. - - **Day over day**, **week over week** or **month over month**: Analyze how your costs change on a day to day, week to week, or month to month basis and identify trends or unusual fluctuations. + - {{< ui >}}Summary{{< /ui >}}: A consolidated, overall picture of your costs. + - {{< ui >}}Day over day{{< /ui >}}, {{< ui >}}week over week{{< /ui >}} or {{< ui >}}month over month{{< /ui >}}: Analyze how your costs change on a day to day, week to week, or month to month basis and identify trends or unusual fluctuations. - Update the **time frame** and **comparison time frame**: - Choose your time frame to set the reporting period you want to analyze. 
 - Add a comparison period to spot cost changes:
-  - **Default comparison**: Automatically compares to the immediately preceding period (for example, this week vs. last week).
-  - **Flexible comparison**: Select any arbitrary period—like a year ago or a custom date range—to identify seasonal patterns. Both periods must be the same type (for example, week to week, month to month), though the actual number of days may vary when comparing months of different lengths.
+  - {{< ui >}}Default comparison{{< /ui >}}: Automatically compares to the immediately preceding period (for example, this week vs. last week).
+  - {{< ui >}}Flexible comparison{{< /ui >}}: Select any period, such as a year ago or a custom date range, to identify seasonal patterns. Both periods must be the same type (for example, week to week, month to month), though the actual number of days may vary when comparing months of different lengths.

 ### Advanced options (optional)

-- **Show usage charges only**: Choose to include all spend (fees, taxes, refunds) or focus on usage charges only.
-- **Cost type**: Choose a cost type that best matches your reporting, analysis, or financial management needs. Review the definitions for each cost type based on your provider: [AWS][7], [Azure][8], [Google Cloud][9], [Custom][10].
+- {{< ui >}}Show usage charges only{{< /ui >}}: Choose to include all spend (fees, taxes, refunds) or focus on usage charges only.
+- {{< ui >}}Cost type{{< /ui >}}: Choose a cost type that best matches your reporting, analysis, or financial management needs. Review the definitions for each cost type based on your provider: [AWS][7], [Azure][8], [Google Cloud][9], [Custom][10].

 **Note**: The availability of these options varies depending on the provider(s) selected.
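The comparison math described above is simple enough to sketch. The following is a minimal illustration of how a default comparison window and the resulting dollar and percent changes can be derived; it is an editorial example, not part of Datadog's documentation or tooling.

```python
from datetime import date, timedelta


def compare_periods(current_cost: float, prior_cost: float) -> dict:
    """Return the dollar and percent change between two reporting periods."""
    dollar_change = current_cost - prior_cost
    percent_change = (dollar_change / prior_cost * 100) if prior_cost else None
    return {"dollar_change": dollar_change, "percent_change": percent_change}


def default_comparison_window(start: date, end: date) -> tuple[date, date]:
    """Default comparison: the immediately preceding period of the same length."""
    length = (end - start).days + 1
    return (start - timedelta(days=length), start - timedelta(days=1))


# This week (Apr 13 to Apr 19) compares against the immediately preceding week.
this_week = (date(2026, 4, 13), date(2026, 4, 19))
print(default_comparison_window(*this_week))  # the week of Apr 6 to Apr 12
print(compare_periods(1200.0, 1000.0))        # a $200 increase, up 20%
```

A flexible comparison would simply substitute any other window of the same period type in place of the computed default window.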
diff --git a/content/en/cloud_cost_management/reporting/dashboards.md b/content/en/cloud_cost_management/reporting/dashboards.md
index 4ccc569d614..bd5edf40508 100644
--- a/content/en/cloud_cost_management/reporting/dashboards.md
+++ b/content/en/cloud_cost_management/reporting/dashboards.md
@@ -44,7 +44,7 @@ You can add cost visualizations to your dashboards using several methods:
 1. Navigate to [**Cloud Cost > Analyze > Explorer**][7] in Datadog.
 2. Build your cost query using filters, groupings, and time ranges.
-3. Click **More** and select **Export to Dashboard**.
+3. Click {{< ui >}}More{{< /ui >}} and select {{< ui >}}Export to Dashboard{{< /ui >}}.
 4. Choose an existing dashboard or create one.
 5. Customize the widget title and settings.

@@ -56,7 +56,7 @@ The cost widget appears on your dashboard and updates automatically based on you
 1. Go to [**Cloud Cost > Analyze > Reports**][8] in Datadog.
 2. Open an existing saved report or create one.
-3. Click **More** and select **Export to Dashboard**.
+3. Click {{< ui >}}More{{< /ui >}} and select {{< ui >}}Export to Dashboard{{< /ui >}}.
 4. Choose an existing dashboard or create one.
 5. Customize the widget title and settings.

@@ -65,11 +65,11 @@ The cost widget appears on your dashboard and updates automatically based on you
 ### Create a cost widget directly on a dashboard

 1. Open any dashboard or create one.
-2. Click **Add Widgets** or **Edit Dashboard**.
+2. Click {{< ui >}}Add Widgets{{< /ui >}} or {{< ui >}}Edit Dashboard{{< /ui >}}.
 3. Search for and select a cost widget from the widget tray:
-   - **Cost Summary**: Visualize cost trends over time with customizable filters and groupings
-   - **Cost Budget**: Track spending against budget targets and forecast future costs
-   - **Cloud Cost**: Create custom cost queries with advanced filtering options
+   - {{< ui >}}Cost Summary{{< /ui >}}: Visualize cost trends over time with customizable filters and groupings
+   - {{< ui >}}Cost Budget{{< /ui >}}: Track spending against budget targets and forecast future costs
+   - {{< ui >}}Cloud Cost{{< /ui >}}: Create custom cost queries with advanced filtering options

 {{< img src="cloud_cost/reporting/dashboards/create-cost-widget-from-dashboard.png" alt="Dashboard widget tray displaying Cost Summary, Cost Budget, and Cloud Cost widget options" style="width:100%;" >}}

 4. Configure your cost query, filters, and visualization options.

diff --git a/content/en/cloud_cost_management/reporting/explorer.md b/content/en/cloud_cost_management/reporting/explorer.md
index 3516f51d99d..7d73d95ddbd 100644
--- a/content/en/cloud_cost_management/reporting/explorer.md
+++ b/content/en/cloud_cost_management/reporting/explorer.md
@@ -29,13 +29,13 @@ Use the Cost Explorer to:
 1. Navigate to [**Cloud Cost > Analyze > Explorer**][1] in Datadog.
 2. Build a search query using the query editor or dropdown filters:
-   - Use the **Provider** dropdown to select one or more cloud providers
-   - Click **+ Filter** to add filters for services, tags, regions, teams, and other attributes
+   - Use the {{< ui >}}Provider{{< /ui >}} dropdown to select one or more cloud providers
+   - Click {{< ui >}}+ Filter{{< /ui >}} to add filters for services, tags, regions, teams, and other attributes
   - Type directly in the search bar for more advanced queries

 {{< img src="cloud_cost/reporting/reporting-overview-1.png" alt="The Cloud Cost Explorer query builder showing provider selection, cost type filters, tag search, service filters, and group by options" style="width:100%;" >}}

-3. Group your cost data by clicking **Group by** and selecting dimensions like:
+3. Group your cost data by clicking {{< ui >}}Group by{{< /ui >}} and selecting dimensions like:
   - Provider name
   - Service name
   - Resource tags (such as `team`, `env`, `project`)

@@ -48,7 +48,7 @@ Use the Cost Explorer to:
 ## Cost Change Summary side panel

-Click any row in the table at the bottom of the Explorer to open the **Cost Change Summary panel** for that specific provider, service, or resource. The panel highlights what and who may be driving cost changes for the current period versus the prior period.
+Click any row in the table at the bottom of the Explorer to open the {{< ui >}}Cost Change Summary panel{{< /ui >}} for that specific provider, service, or resource. The panel highlights what and who may be driving cost changes for the current period versus the prior period.
 The panel contains four general sections:

 - Cost change summary

@@ -62,7 +62,7 @@ At the top, you can see the **total cost** for the current period and the dollar
 ### Investigate the change

-Use the **Change Details** and **Investigate Further** sections to:
+Use the {{< ui >}}Change Details{{< /ui >}} and {{< ui >}}Investigate Further{{< /ui >}} sections to:

 - **Instantly identify cost anomalies**: Unexpected deviations in cost, calculated against historical data, are automatically highlighted in red, allowing you to focus your investigation on critical trends.

@@ -73,11 +73,11 @@ Use the **Change Details** and **Investigate Further** sections to:
 ### Collaborate and monitor

 - **Contact the responsible team**:
-  - Review the **Associated Team(s)** section to identify which teams own the resources driving the cost change (inferred from tags like `team:shopist`). Follow up with the listed teams (for example, Shopist, Platform, Cloud-Networks) to gain full context for the change.
-  - Click **Send Notebook** to share the full cost investigation context directly with the team, allowing them to capture findings, add annotations, and track the investigation thread.
+  - Review the {{< ui >}}Associated Team(s){{< /ui >}} section to identify which teams own the resources driving the cost change (inferred from tags like `team:shopist`). Follow up with the listed teams (for example, Shopist, Platform, Cloud-Networks) to gain full context for the change.
+  - Click {{< ui >}}Send Notebook{{< /ui >}} to share the full cost investigation context directly with the team, allowing them to capture findings, add annotations, and track the investigation thread.
 - **Filter by tags**:
-  - Use **Associated Tags** to see all tags contributing to the cost line item.
+  - Use {{< ui >}}Associated Tags{{< /ui >}} to see all tags contributing to the cost line item.
   - Click any tag value (like `account:demo` or a specific `aws_account`) to refine your search and filter the entire Explorer to show only resources with that tag.
 - **Create a monitor**:

@@ -85,29 +85,29 @@ Use the **Change Details** and **Investigate Further** sections to:
 ## Refine your results

-Click **Refine Results** to access advanced filtering options that help you focus on specific cost patterns.
+Click {{< ui >}}Refine Results{{< /ui >}} to access advanced filtering options that help you focus on specific cost patterns.

 {{< img src="cloud_cost/reporting/refine-results.png" alt="The Refine Results panel shows filtering options including Usage Charges Only, Complete Days Only, Total Cost, Dollar Change, and Percent Change" style="width:100%;" >}}

-**Complete Days Only**
+{{< ui >}}Complete Days Only{{< /ui >}}
 : Exclude the past two days of cost data, which may be incomplete. Use this option for accurate historical analysis.

-**Total Cost**
+{{< ui >}}Total Cost{{< /ui >}}
 : Filter the data to view costs within a specific dollar range (for example, show only resources costing more than $1,000).

-**Dollar Change**
+{{< ui >}}Dollar Change{{< /ui >}}
 : Display only cost changes within a specified dollar change range (for example, show services with a $500+ increase).

-**Percent Change**
+{{< ui >}}Percent Change{{< /ui >}}
 : Display only cost changes within a specified percentage range (for example, show resources with a 20%+ cost increase).

 ## Change data views

 The Cost Explorer displays your cost data as a timeseries graph with a table breakdown.
 You can change how the graph displays data by selecting from the following views:

-- **Costs ($)**: View total costs in dollars over time
-- **Change trends (%)**: View cost changes as percentage increases or decreases
-- **Change trends ($)**: View cost changes in dollar amounts
+- {{< ui >}}Costs ($){{< /ui >}}: View total costs in dollars over time
+- {{< ui >}}Change trends (%){{< /ui >}}: View cost changes as percentage increases or decreases
+- {{< ui >}}Change trends ($){{< /ui >}}: View cost changes in dollar amounts

 {{< img src="cloud_cost/reporting/change-view.png" alt="Dropdown menu showing three view options: Costs in $, Change trends in %, and Change trends in $" style="width:100%;" >}}

@@ -120,35 +120,35 @@ Below the graph, the table displays costs broken down by your selected grouping

 {{< img src="cloud_cost/reporting/table-display-options.png" alt="Table display options showing Summary and Breakdown view modes, column visibility toggles, and Top changes only filter" style="width:100%;" >}}

 **View modes**
-- **Summary**: View aggregated costs across all time periods for a high-level overview
-- **Breakdown**: See costs broken down by time period (daily, weekly, or monthly depending on your selected time range)
+- {{< ui >}}Summary{{< /ui >}}: View aggregated costs across all time periods for a high-level overview
+- {{< ui >}}Breakdown{{< /ui >}}: See costs broken down by time period (daily, weekly, or monthly depending on your selected time range)

 **Filters**
-- **Top changes only**: Enable this checkbox to filter the table and show only the resources or services with the largest cost increases or decreases
+- {{< ui >}}Top changes only{{< /ui >}}: Enable this checkbox to filter the table and show only the resources or services with the largest cost increases or decreases

 **Column visibility**

 Show or hide columns in the table to focus on the metrics that matter:
-- **Total**: Total aggregated costs for each resource or service
-- **Dollar change trends**: Cost changes in dollar amounts over time
-- **Change trends**: Percentage-based cost changes over time
+- {{< ui >}}Total{{< /ui >}}: Total aggregated costs for each resource or service
+- {{< ui >}}Dollar change trends{{< /ui >}}: Cost changes in dollar amounts over time
+- {{< ui >}}Change trends{{< /ui >}}: Percentage-based cost changes over time

 ## Export and share

 After analyzing costs in the Explorer, you can:

 ### Export to CSV

-Download your cost data for offline analysis, reporting, or sharing with stakeholders. Click the **Export** button and select **Download as CSV**.
+Download your cost data for offline analysis, reporting, or sharing with stakeholders. Click the {{< ui >}}Export{{< /ui >}} button and select {{< ui >}}Download as CSV{{< /ui >}}.

 ### Create a dashboard widget

 Save your current query as a dashboard widget to monitor costs alongside other metrics:

-1. Click **Export** and select **Export to Dashboard**.
+1. Click {{< ui >}}Export{{< /ui >}} and select {{< ui >}}Export to Dashboard{{< /ui >}}.
 2. Choose an existing dashboard or create one.
 3. Customize the widget title and settings.

 ### Create a cost monitor

 Set up alerts based on your current query to get notified when costs exceed thresholds or change unexpectedly:

-1. Click **Export** and select **Create Monitor**.
+1. Click {{< ui >}}Export{{< /ui >}} and select {{< ui >}}Create Monitor{{< /ui >}}.
 2. Configure alert conditions (for example, when costs exceed $10,000 or increase by 20%).
 3. Set notification channels (email, Slack, PagerDuty).
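As a rough illustration of what a downloaded CSV enables, the sketch below computes per-row dollar and percent changes and surfaces the largest movers, similar in spirit to the table's "Top changes only" filter. The column names (`service`, `current_cost`, `previous_cost`) are invented for this example; a real export's columns depend on your query and groupings.

```python
import csv
import io

# Hypothetical export contents; stand-in for a file downloaded via "Download as CSV".
sample = io.StringIO(
    "service,current_cost,previous_cost\n"
    "ec2,12000,9500\n"
    "s3,800,790\n"
    "lambda,3100,1500\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    current = float(row["current_cost"])
    previous = float(row["previous_cost"])
    row["dollar_change"] = current - previous
    row["percent_change"] = (current - previous) / previous * 100

# Keep only the largest absolute movers, whether increases or decreases.
top_changes = sorted(rows, key=lambda r: abs(r["dollar_change"]), reverse=True)[:2]
print([r["service"] for r in top_changes])  # ['ec2', 'lambda']
```

For a real export, you would open the downloaded file with `open(path, newline="")` instead of the in-memory `StringIO` used here.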
diff --git a/content/en/cloud_cost_management/reporting/scheduled_reports.md b/content/en/cloud_cost_management/reporting/scheduled_reports.md
index 1a92cdcf9f6..b69a307f82d 100644
--- a/content/en/cloud_cost_management/reporting/scheduled_reports.md
+++ b/content/en/cloud_cost_management/reporting/scheduled_reports.md
@@ -27,23 +27,23 @@ For emails, the report PDF is included as an email attachment and/or as a link,

 ## Schedule a CCM report

 1. Go to [**Cloud Cost > Analyze > Reports**][1] in Datadog.
 2. [Create a report][2] or select an existing report.
-3. Click **Share**, then **Schedule Report**.
+3. Click {{< ui >}}Share{{< /ui >}}, then {{< ui >}}Schedule Report{{< /ui >}}.

 {{< img src="cloud_cost/cost_reports/share_scheduled_report-1.png" alt="Click the Share button and Schedule Report on an individual report page." style="width:90%;" >}}

 4. In the configuration modal that opens:
    - Set your schedule (when and how often the report should be sent)
    - Enter a title for your schedule
 5. Add recipients:
-   - **Email recipients**: Enter email addresses. Your Datadog account is automatically added, but you can remove it by hovering over it and clicking the trash icon.
+   - {{< ui >}}Email recipients{{< /ui >}}: Enter email addresses. Your Datadog account is automatically added, but you can remove it by hovering over it and clicking the trash icon.

     **Note:** Enterprise and Pro accounts can send reports to recipients outside of their organizations. You can control which email domains are able to receive reports by configuring your [domain allowlist][4].

-   - **Slack recipients**: Select your Slack workspace and channel from the dropdowns. If no workspaces appear, make sure you have the Datadog [Slack Integration][2] installed. All public channels within the Slack workspace are listed automatically. For private channels, invite the Datadog Slack bot first. You can test the connection by clicking the **Send Test Message** button.
+   - {{< ui >}}Slack recipients{{< /ui >}}: Select your Slack workspace and channel from the dropdowns. If no workspaces appear, make sure you have the Datadog [Slack Integration][2] installed. All public channels within the Slack workspace are listed automatically. For private channels, invite the Datadog Slack bot first. You can test the connection by clicking the {{< ui >}}Send Test Message{{< /ui >}} button.

-   - **Microsoft Teams recipients**: Select the **Microsoft Teams** tab, then choose a **Tenant**, **Team**, and **Channel** from the available dropdowns. Ensure the [Microsoft Teams integration][5] is installed in your Datadog organization and the Datadog app is added to the target Team in Microsoft Teams. To send a test message, add a channel recipient and click **Send Test Message**.
+   - {{< ui >}}Microsoft Teams recipients{{< /ui >}}: Select the {{< ui >}}Microsoft Teams{{< /ui >}} tab, then choose a {{< ui >}}Tenant{{< /ui >}}, {{< ui >}}Team{{< /ui >}}, and {{< ui >}}Channel{{< /ui >}} from the available dropdowns. Ensure the [Microsoft Teams integration][5] is installed in your Datadog organization and the Datadog app is added to the target Team in Microsoft Teams. To send a test message, add a channel recipient and click {{< ui >}}Send Test Message{{< /ui >}}.

 ## Managing schedules

-A single Cloud Cost (CCM) Report can have multiple schedules with different settings, allowing you to inform different stakeholder groups interested in the same cost data. To view existing schedules, click **Share** and select **Manage Schedules**.
+A single Cloud Cost (CCM) Report can have multiple schedules with different settings, allowing you to inform different stakeholder groups interested in the same cost data. To view existing schedules, click {{< ui >}}Share{{< /ui >}} and select {{< ui >}}Manage Schedules{{< /ui >}}.
 From the configuration modal that opens, you can:

 - Pause existing schedules

@@ -56,8 +56,8 @@ From the configuration modal that opens, you can:

 ## Viewing schedules

 To see all Cloud Cost (CCM) Report schedules across your organization:
-1. Navigate to [**Cloud Cost > Analyze > Reports**][1] and click the **Report Schedules** tab.
-2. Use the "My schedules" toggle to switch between your personal schedules and all organization schedules.
+1. Navigate to [**Cloud Cost > Analyze > Reports**][1] and click the {{< ui >}}Report Schedules{{< /ui >}} tab.
+2. Use the {{< ui >}}My schedules{{< /ui >}} toggle to switch between your personal schedules and all organization schedules.

 {{< img src="cloud_cost/cost_reports/cost-report-schedules-view-4.png" alt="View all Cost Report Schedules." style="width:100%;" >}}

diff --git a/content/en/cloud_cost_management/setup/aws.md b/content/en/cloud_cost_management/setup/aws.md
index 866148c75f2..7ca5d1077ae 100644
--- a/content/en/cloud_cost_management/setup/aws.md
+++ b/content/en/cloud_cost_management/setup/aws.md
@@ -37,9 +37,9 @@ Navigate to [Setup & Configuration][7], add an AWS account and follow the steps

 **Note**: Datadog recommends configuring a Cost and Usage Report from an [AWS **management account**][2] for cost visibility into related **member accounts**. If you send a Cost and Usage Report from an AWS **member account**, ensure that you have selected the following options in your **management account's** [preferences][3]:

-- **Linked Account Access**
-- **Linked Account Refunds and Credits**
-- **Linked Account Discounts**
+- {{< ui >}}Linked Account Access{{< /ui >}}
+- {{< ui >}}Linked Account Refunds and Credits{{< /ui >}}
+- {{< ui >}}Linked Account Discounts{{< /ui >}}

 These settings ensure complete cost accuracy by allowing periodic cost calculations against the AWS Cost Explorer.
@@ -53,19 +53,19 @@ These settings ensure complete cost accuracy by allowing periodic cost calculati

 The CloudFormation stack can be configured in three ways depending on your existing AWS resources:

-* **New setup**: Select **Create Cost and Usage Report** to create both the report and its S3 bucket
-* **Existing bucket**: Select **Create Cost and Usage Report** and unselect **Create S3 Bucket** to use an existing S3 bucket
-* **Existing report**: Unselect **Create Cost and Usage Report** to import an existing Cost and Usage Report
+* **New setup**: Select {{< ui >}}Create Cost and Usage Report{{< /ui >}} to create both the report and its S3 bucket
+* **Existing bucket**: Select {{< ui >}}Create Cost and Usage Report{{< /ui >}} and unselect {{< ui >}}Create S3 Bucket{{< /ui >}} to use an existing S3 bucket
+* **Existing report**: Unselect {{< ui >}}Create Cost and Usage Report{{< /ui >}} to import an existing Cost and Usage Report

 ### Configure the Cost and Usage Report settings

 Enter the following details for your Cost and Usage Report:

-* **Bucket Name**: The S3 bucket name where the report files are stored.
-* **Bucket Region**: The AWS [region code][100] of the region containing your S3 bucket. For example, `us-east-1`.
-* **Export Path Prefix**: The S3 path prefix where report files are stored.
+* {{< ui >}}Bucket Name{{< /ui >}}: The S3 bucket name where the report files are stored.
+* {{< ui >}}Bucket Region{{< /ui >}}: The AWS [region code][100] of the region containing your S3 bucket. For example, `us-east-1`.
+* {{< ui >}}Export Path Prefix{{< /ui >}}: The S3 path prefix where report files are stored.
   * **Note:** The following prefix formats are not supported: empty, starting with `/` (such as `/` or `/cost`), or ending with `/` (such as `cost/`). Prefixes containing `/` in the middle are supported (such as `cost/hourly`).
-* **Export Name**: The name of your Cost and Usage Report.
+* {{< ui >}}Export Name{{< /ui >}}: The name of your Cost and Usage Report.

 **Note**:
 - These values either locate your existing Cost and Usage Report, or define the settings for newly created resources.

@@ -84,9 +84,9 @@ Enter the following details for your Cost and Usage Report:

 The Terraform configuration supports three setups depending on your existing AWS resources:

-* **New setup**: Select **Create Cost and Usage Report** to create both the report and its S3 bucket
-* **Existing bucket**: Select **Create Cost and Usage Report** and unselect **Create S3 Bucket** to use an existing S3 bucket
-* **Existing bucket and report**: Unselect **Create Cost and Usage Report** and **Create S3 Bucket** to use an existing report and S3 bucket
+* **New setup**: Select {{< ui >}}Create Cost and Usage Report{{< /ui >}} to create both the report and its S3 bucket
+* **Existing bucket**: Select {{< ui >}}Create Cost and Usage Report{{< /ui >}} and unselect {{< ui >}}Create S3 Bucket{{< /ui >}} to use an existing S3 bucket
+* **Existing bucket and report**: Unselect {{< ui >}}Create Cost and Usage Report{{< /ui >}} and {{< ui >}}Create S3 Bucket{{< /ui >}} to use an existing report and S3 bucket

 **Note**: If using an existing bucket, verify that AWS has permission to write CURs to it. If not, you may need to update your bucket's policy.

@@ -94,11 +94,11 @@ The Terraform configuration supports three setups depending on your existing AWS

 Enter the following details for your Cost and Usage Report:

-* **Bucket Name**: The S3 bucket name where the report files are stored.
-* **Bucket Region**: The AWS [region code][100] of the region containing your S3 bucket. For example, `us-east-1`.
-* **Export Path Prefix**: The S3 path prefix where report files are stored.
+* {{< ui >}}Bucket Name{{< /ui >}}: The S3 bucket name where the report files are stored.
+* {{< ui >}}Bucket Region{{< /ui >}}: The AWS [region code][100] of the region containing your S3 bucket. For example, `us-east-1`.
+* {{< ui >}}Export Path Prefix{{< /ui >}}: The S3 path prefix where report files are stored.
   * **Note:** The following prefix formats are not supported: empty, starting with `/` (such as `/` or `/cost`), or ending with `/` (such as `cost/`). Prefixes containing `/` in the middle are supported (such as `cost/hourly`).
-* **Export Name**: The name of your Cost and Usage Report.
+* {{< ui >}}Export Name{{< /ui >}}: The name of your Cost and Usage Report.

 **Note**:
 - These values either locate your existing Cost and Usage Report, or define the settings for newly created resources.

@@ -109,7 +109,7 @@ Enter the following details for your Cost and Usage Report:

 ### Copy generated Terraform HCL and apply changes

-In the CCM Terraform setup UI, follow the instructions in the **Apply Terraform Configuration** step. Resolve any issues that appear while running `terraform plan` or `terraform apply` before returning to CCM to confirm account creation.
+In the CCM Terraform setup UI, follow the instructions in the {{< ui >}}Apply Terraform Configuration{{< /ui >}} step. Resolve any issues that appear while running `terraform plan` or `terraform apply` before returning to CCM to confirm account creation.

 {{% /tab %}}

@@ -119,34 +119,34 @@ In the CCM Terraform setup UI, follow the instructions in the **Apply Terraform

 ### Prerequisite: generate a Cost and Usage Report

-[Create a Legacy Cost and Usage Report][201] in AWS under the **Data Exports** section.
+[Create a Legacy Cost and Usage Report][201] in AWS under the {{< ui >}}Data Exports{{< /ui >}} section.

-Select the Export type **Legacy CUR export**.
+Select the Export type {{< ui >}}Legacy CUR export{{< /ui >}}.

 Select the following content options:

-* Export type: **Legacy CUR export**
-* **Include resource IDs**
-* **Split cost allocation data** (Enables ECS Cost Allocation. You must also opt in to [AWS Split Cost Allocation][210] in Cost Explorer preferences).
-* **"Refresh automatically"**
+* Export type: {{< ui >}}Legacy CUR export{{< /ui >}}
+* {{< ui >}}Include resource IDs{{< /ui >}}
+* {{< ui >}}Split cost allocation data{{< /ui >}} (Enables ECS Cost Allocation. You must also opt in to [AWS Split Cost Allocation][210] in Cost Explorer preferences).
+* {{< ui >}}Refresh automatically{{< /ui >}}

 Select the following delivery options:

-* Time granularity: **Hourly**
-* Report versioning: **Create new report version**
-* Compression type: **GZIP** or **Parquet**
+* Time granularity: {{< ui >}}Hourly{{< /ui >}}
+* Report versioning: {{< ui >}}Create new report version{{< /ui >}}
+* Compression type: {{< ui >}}GZIP{{< /ui >}} or {{< ui >}}Parquet{{< /ui >}}

 ### Locate the Cost and Usage Report

-If you have navigated away from the report that you created in the prerequisites section, follow AWS documentation to [view your Data Exports][204]. Select the legacy CUR export that you created, then select **Edit** to see the details of the export.
+If you have navigated away from the report that you created in the prerequisites section, follow AWS documentation to [view your Data Exports][204]. Select the legacy CUR export that you created, then select {{< ui >}}Edit{{< /ui >}} to see the details of the export.

 To enable Datadog to locate the Cost and Usage Report, complete the fields with their corresponding details:

-* **Bucket Name**: This is the name of the **S3 bucket** in the Data export storage settings section.
-* **Bucket Region**: This is the region your bucket is located. For example, `us-east-1`.
-* **Export Path Prefix**: This is the **S3 path prefix** in the Data export storage settings section.
+* {{< ui >}}Bucket Name{{< /ui >}}: This is the name of the S3 bucket in the Data export storage settings section.
+* {{< ui >}}Bucket Region{{< /ui >}}: This is the region where your bucket is located. For example, `us-east-1`.
+* {{< ui >}}Export Path Prefix{{< /ui >}}: This is the S3 path prefix in the Data export storage settings section.
   * **Note:** The following prefix formats are not supported: empty, starting with `/` (such as `/` or `/cost`), or ending with `/` (such as `cost/`). Prefixes containing `/` in the middle are supported (such as `cost/hourly`).
-* **Export Name**: This is the **Export name** in the Export name section.
+* {{< ui >}}Export Name{{< /ui >}}: This is the Export name in the Export name section.

 **Note**: Datadog only supports legacy Cost and Usage Reports (CURs) generated by AWS. Do not modify or move the files generated by AWS, or attempt to provide access to files generated by a third party.

@@ -213,11 +213,11 @@ To enable Datadog to locate the Cost and Usage Report, complete the fields with

 Attach the new S3 policy to the Datadog integration role.

-1. Navigate to **Roles** in the AWS IAM console.
+1. Navigate to {{< ui >}}Roles{{< /ui >}} in the AWS IAM console.
 2. Locate the role used by the Datadog integration. By default it is named **DatadogIntegrationRole**, but the name may vary if your organization has renamed it. Click the role name to open the role summary page.
-3. Click **Attach policies**.
+3. Click {{< ui >}}Attach policies{{< /ui >}}.
 4. Enter the name of the S3 bucket policy created above.
-5. Click **Attach policy**.
+5. Click {{< ui >}}Attach policy{{< /ui >}}.

 **Note**: It may take between 48 and 72 hours for all available data to populate in your Datadog organization after a complete Cost and Usage Report is generated. If 72 hours have passed and the data has still not yet populated, contact [Datadog Support][18].

@@ -240,11 +240,11 @@ Using Account Filtering requires an AWS management account. You can configure ac

 #### Configure account filters for an existing account

-Navigate to [**Cloud Cost** > **Settings**, select **Accounts**][17], and then click **Manage Account** for the management account you want to filter.
+Navigate to [**Cloud Cost** > **Settings**, select **Accounts**][17], and then click {{< ui >}}Manage Account{{< /ui >}} for the management account you want to filter.

 {{< img src="cloud_cost/account_filtering/manage_account.png" alt="Manage Account button on account card" style="width:100%;" >}}

-Click **Billing dataset** to access the Account Filtering UI.
+Click {{< ui >}}Billing dataset{{< /ui >}} to access the Account Filtering UI.

 {{< img src="cloud_cost/account_filtering/account_filtering.png" alt="Account Filtering UI to filter AWS member accounts" style="width:100%;" >}}

diff --git a/content/en/cloud_cost_management/setup/azure.md b/content/en/cloud_cost_management/setup/azure.md
index 792906f0aee..5ca7e9440ba 100644
--- a/content/en/cloud_cost_management/setup/azure.md
+++ b/content/en/cloud_cost_management/setup/azure.md
@@ -55,20 +55,20 @@ Use the dropdown to select the scope type for your account. CCM supports the bil

 The Terraform configuration supports three setups depending on your existing Azure resources:

-* **New setup**: Select **Create storage account and container** to create a storage account, container, and cost exports.
-* **Existing storage account and container**: Unselect **Create storage account and container** and select **Create cost exports** to use existing storage but create new cost exports.
+* **New setup**: Select {{< ui >}}Create storage account and container{{< /ui >}} to create a storage account, container, and cost exports.
+* **Existing storage account and container**: Unselect {{< ui >}}Create storage account and container{{< /ui >}} and select {{< ui >}}Create cost exports{{< /ui >}} to use existing storage but create new cost exports.
 * **Existing storage account, container, and cost exports**: Unselect both options to use existing storage and cost exports.
 ### Configure the scope and export details

 Enter the following details for your configuration:

-* **Billing account or Subscription ID**: Depending on the scope selected in Step 1, the relevant billing account ID or subscription ID.
-* **Resource group name**: The name of your existing resource group in the selected scope. A pre-existing resource group is required for the Terraform setup.
-* **Location**: The Azure location of your resource group. For example, `East US 2`.
-* **Storage account and container name**: Depending on the resources you have selected to create, the names of your new or pre-existing storage account and container.
-* **Actual cost export name and path**: The name and path of your actual cost export.
-* **Amortized cost export name and path**: The name and path of your amortized cost export.
+* {{< ui >}}Billing account or Subscription ID{{< /ui >}}: Depending on the scope selected in Step 1, the relevant billing account ID or subscription ID.
+* {{< ui >}}Resource group name{{< /ui >}}: The name of your existing resource group in the selected scope. A pre-existing resource group is required for the Terraform setup.
+* {{< ui >}}Location{{< /ui >}}: The Azure location of your resource group. For example, `East US 2`.
+* {{< ui >}}Storage account and container name{{< /ui >}}: Depending on the resources you have selected to create, the names of your new or pre-existing storage account and container.
+* {{< ui >}}Actual cost export name and path{{< /ui >}}: The name and path of your actual cost export.
+* {{< ui >}}Amortized cost export name and path{{< /ui >}}: The name and path of your amortized cost export.
   * **Note:** The following prefix formats are not supported: empty, starting with `/` (such as `/` or `/cost`), or ending with `/` (such as `cost/`). Prefixes containing `/` in the middle are supported (such as `cost/hourly`).
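The path-prefix constraints repeated across the AWS and Azure setup pages are easy to check up front. The following is an illustrative helper, not part of Datadog's tooling, that validates a prefix against the documented rules (not empty, no leading or trailing `/`, `/` in the middle allowed):

```python
def is_valid_export_prefix(prefix: str) -> bool:
    """Validate an export path prefix against the documented constraints."""
    if not prefix:
        return False  # empty prefixes are not supported
    if prefix.startswith("/") or prefix.endswith("/"):
        return False  # leading or trailing slashes are not supported
    return True       # slashes in the middle (such as cost/hourly) are fine


# Examples taken from the documented supported/unsupported cases:
assert not is_valid_export_prefix("")             # empty
assert not is_valid_export_prefix("/")            # bare slash
assert not is_valid_export_prefix("/cost")        # starts with /
assert not is_valid_export_prefix("cost/")        # ends with /
assert is_valid_export_prefix("cost")             # plain prefix
assert is_valid_export_prefix("cost/hourly")      # / in the middle is supported
```

Running a check like this before applying the CloudFormation or Terraform configuration avoids a round trip through a failed setup.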
### Copy generated Azure resource Terraform HCL and apply changes @@ -79,15 +79,15 @@ After the fields in Step 2 are complete, Step 3 enables and displays the generat {{< img src="cloud_cost/setup/azure_toggle_file_partitioning.png" alt="Toggle on file partitioning for both exports" style="width:50%" >}} -Open the Azure console link to locate your cost exports. If needed, change the current scope to the correct one for your exports. For both actual and amortized exports, select them and click **Edit** to toggle on File Partitioning if not already enabled. +Open the Azure console link to locate your cost exports. If needed, change the current scope to the correct one for your exports. For both actual and amortized exports, select them and click {{< ui >}}Edit{{< /ui >}} to toggle on File Partitioning if not already enabled. {{< img src="cloud_cost/run_now.png" alt="Click Run Now button in export side panel to generate exports" style="width:50%" >}} -Save the File Partitioning changes and click **Run Now**. Return to CCM once both export runs have succeeded. +Save the File Partitioning changes and click {{< ui >}}Run Now{{< /ui >}}. Return to CCM once both export runs have succeeded. ### Copy generated Datadog HCL and apply changes -Follow the instructions in the **Apply Datadog Terraform HCL** step. Resolve any issues that appear while running `terraform plan` or `terraform apply` before returning to CCM to confirm account creation. +Follow the instructions in the {{< ui >}}Apply Datadog Terraform HCL{{< /ui >}} step. Resolve any issues that appear while running `terraform plan` or `terraform apply` before returning to CCM to confirm account creation. {{% /tab %}} @@ -99,20 +99,20 @@ Follow the instructions in the **Apply Datadog Terraform HCL** step. Resolve any You need to generate exports for two data types: **actual** and **amortized**. Datadog recommends using the same storage container for both exports. -1. 
Navigate to [Cost Management | Configuration][5] under Azure portal's **Tools** > **Cost Management** > **Settings** > **Configuration** and click **Exports**.
+1. Navigate to [Cost Management | Configuration][5] under Azure portal's {{< ui >}}Tools{{< /ui >}} > {{< ui >}}Cost Management{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Configuration{{< /ui >}} and click {{< ui >}}Exports{{< /ui >}}.

   {{< img src="cloud_cost/azure_export_path.png" alt="In Azure portal highlighting Exports option in navigation" style="width:100%" >}}

2. Select the export scope located next to the search filter.
-   **Note:** The scope must be **billing account**, **subscription**, or **resource group**.
-3. After the scope is selected, click **Schedule export**.
+   **Note:** The scope must be {{< ui >}}billing account{{< /ui >}}, {{< ui >}}subscription{{< /ui >}}, or {{< ui >}}resource group{{< /ui >}}.
+3. After the scope is selected, click {{< ui >}}Schedule export{{< /ui >}}.

   {{< img src="cloud_cost/azure_exports_page.png" alt="In Azure portal highlighting the export scope and schedule button" style="width:100%" >}}

-4. Select the **Cost and usage (actual + amortized)** template
+4. Select the {{< ui >}}Cost and usage (actual + amortized){{< /ui >}} template.

   {{< img src="cloud_cost/azure_new_export.png" alt="New export page with template and manual options highlighted" style="width:100%" >}}

-5. Click **Edit** on each export and confirm the following details:
-   - Frequency: **Daily export of month-to-date costs**
+5. Click {{< ui >}}Edit{{< /ui >}} on each export and confirm the following details:
+   - Frequency: {{< ui >}}Daily export of month-to-date costs{{< /ui >}}
   - Dataset version:
     - Supported versions: `2021-10-01`, `2021-01-01`, `2020-01-01`
     - Unsupported versions: `2019-10-01`
@@ -120,21 +120,21 @@ You need to generate exports for two data types: **actual** and **amortized**. D
6. Enter an "Export prefix" for the new exports.
For example, enter `datadog` to avoid conflicts with existing exports. -7. In the **Destination** tab, select the following details: - - Choose **Azure blob storage** as the storage type. +7. In the {{< ui >}}Destination{{< /ui >}} tab, select the following details: + - Choose {{< ui >}}Azure blob storage{{< /ui >}} as the storage type. - Choose a storage account, container, and directory for the exports. - **Note:** Do not use special characters like `.` in these fields. - **Note:** Billing exports can be stored in any subscription. If you are creating exports for multiple subscriptions, Datadog recommends storing them in the same storage account. Export names must be unique. - - Choose **CSV** or **Parquet** as the format. - - Choose the compression type. For **CSV**: **Gzip** and **None** are supported. For **Parquet**: **Snappy** and **None** are supported. - - Ensure that **File partitioning** is checked. - - Ensure that **Overwrite data** is not checked. - - **Note:** Datadog does not support the Overwrite Data setting. If the setting was previously checked, make sure to clean the files in the directory or move them to another one. + - Choose {{< ui >}}CSV{{< /ui >}} or {{< ui >}}Parquet{{< /ui >}} as the format. + - Choose the compression type. For {{< ui >}}CSV{{< /ui >}}: {{< ui >}}Gzip{{< /ui >}} and {{< ui >}}None{{< /ui >}} are supported. For {{< ui >}}Parquet{{< /ui >}}: {{< ui >}}Snappy{{< /ui >}} and {{< ui >}}None{{< /ui >}} are supported. + - Ensure that {{< ui >}}File partitioning{{< /ui >}} is checked. + - Ensure that {{< ui >}}Overwrite data{{< /ui >}} is not checked. + - **Note:** Datadog does not support the {{< ui >}}Overwrite data{{< /ui >}} setting. If the setting was previously checked, make sure to clean the files in the directory or move them to another one. {{< img src="cloud_cost/improved_export_destination_2.png" alt="Export Destination with File partitioning and Overwrite data settings" >}} -8. 
On the **Review + create** tab, select **Create**. -9. Generate the first exports manually by clicking **Run Now**. Wait for successful completion before continuing. +8. On the {{< ui >}}Review + create{{< /ui >}} tab, select {{< ui >}}Create{{< /ui >}}. +9. Generate the first exports manually by clicking {{< ui >}}Run Now{{< /ui >}}. Wait for successful completion before continuing. {{< img src="cloud_cost/run_now.png" alt="Click Run Now button in export side panel to generate exports" style="width:50%" >}} @@ -146,12 +146,12 @@ Grant Datadog read access to the storage account where your exports are saved. 1. In the Exports tab, click on the export's Storage Account to navigate to it. 2. Click the Containers tab. 3. Choose the storage container your bills are in. -4. Select the Access Control (IAM) tab, and click **Add**. -5. Choose **Add role assignment**. -6. Choose **Storage Blob Data Reader**, then click Next. +4. Select the {{< ui >}}Access Control (IAM){{< /ui >}} tab, and click {{< ui >}}Add{{< /ui >}}. +5. Choose {{< ui >}}Add role assignment{{< /ui >}}. +6. Choose {{< ui >}}Storage Blob Data Reader{{< /ui >}}, then click {{< ui >}}Next{{< /ui >}}. 7. Assign these permissions to one of the App Registrations you have connected with Datadog. - - Click **Select members**, pick the name of the App Registration, and click **Select**. **Note**: If you do not see your App Registration listed, start typing the name for the UI to update and show it, if it is available. - - Select **Review + assign**. + - Click {{< ui >}}Select members{{< /ui >}}, pick the name of the App Registration, and click {{< ui >}}Select{{< /ui >}}. **Note**: If you do not see your App Registration listed, start typing the name for the UI to update and show it, if it is available. + - Select {{< ui >}}Review + assign{{< /ui >}}. If your exports are in different storage containers, repeat steps one to seven for the other storage container. 
@@ -160,23 +160,23 @@ If your exports are in different storage containers, repeat steps one to seven f 1. In the Exports tab, click on the export's Storage Account to navigate to it. 2. Click the Containers tab. 3. Choose the storage container your bills are in. -4. Select the Access Control (IAM) tab, and click **Add**. -5. Choose **Add role assignment**. -6. Choose **Storage Blob Data Reader**, then click Next. +4. Select the {{< ui >}}Access Control (IAM){{< /ui >}} tab, and click {{< ui >}}Add{{< /ui >}}. +5. Choose {{< ui >}}Add role assignment{{< /ui >}}. +6. Choose {{< ui >}}Storage Blob Data Reader{{< /ui >}}, then click {{< ui >}}Next{{< /ui >}}. 7. Assign these permissions to one of the App Registrations you have connected with Datadog. - - Click **Select members**, pick the name of the App Registration, and click **Select**. - - Select **Review + assign**. + - Click {{< ui >}}Select members{{< /ui >}}, pick the name of the App Registration, and click {{< ui >}}Select{{< /ui >}}. + - Select {{< ui >}}Review + assign{{< /ui >}}. If your exports are in different storage containers, repeat steps one to seven for the other storage container. {{% /collapse-content %}} ### Configure Cost Management Reader access -**Note:** You do not need to configure this access if your scope is **Billing Account**. +**Note:** You do not need to configure this access if your scope is {{< ui >}}Billing Account{{< /ui >}}. 1. Navigate to your [subscriptions][1] and click your subscription's name. -2. Select the Access Control (IAM) tab. -3. Click **Add**, then **Add role assignment**. -4. Choose **Cost Management Reader**, then click Next. +2. Select the {{< ui >}}Access Control (IAM){{< /ui >}} tab. +3. Click {{< ui >}}Add{{< /ui >}}, then {{< ui >}}Add role assignment{{< /ui >}}. +4. Choose {{< ui >}}Cost Management Reader{{< /ui >}}, then click {{< ui >}}Next{{< /ui >}}. 5. Assign these permissions to the app registration. 
This helps ensure complete cost accuracy by allowing periodic cost calculations against Microsoft Cost Management. @@ -205,14 +205,14 @@ Azure exports cost data starting from the month you created the export. Datadog 1. Wait up to 24 hours for cost data to appear in Datadog to ensure the integration is working end-to-end before beginning the backfill process. **Note:** If you have already completed setup, and cost data is appearing in Datadog, you can proceed directly to the backfill steps below. 1. Manually export an **actual** and **amortized** report for each calendar month. For example, for June 2025: 1. Edit the Export - 2. Change Export Type to "One-time export" - 3. Set From to 06-01-2025 **Note:** This must be the first day of the month. - 4. Set End to 06-30-2025 **Note:** This must be the last day of the month. + 2. Change Export Type to {{< ui >}}One-time export{{< /ui >}} + 3. Set {{< ui >}}From{{< /ui >}} to 06-01-2025 **Note:** This must be the first day of the month. + 4. Set {{< ui >}}End{{< /ui >}} to 06-30-2025 **Note:** This must be the last day of the month. 5. Save the export **Note:** This automatically runs the export 6. Wait for the export to finish running 1. Revert both the **actual** and **amortized** exports to their original state to resume daily exports: 1. Edit the Export - 2. Change Export Type to "Daily export of month-to-date costs" + 2. Change Export Type to {{< ui >}}Daily export of month-to-date costs{{< /ui >}} 3. Save the export Datadog automatically discovers and ingests this data, and it should appear in Datadog within 24 hours. diff --git a/content/en/cloud_cost_management/setup/custom.md b/content/en/cloud_cost_management/setup/custom.md index d98d5caad89..8ca8d4f9030 100644 --- a/content/en/cloud_cost_management/setup/custom.md +++ b/content/en/cloud_cost_management/setup/custom.md @@ -237,7 +237,7 @@ After your data is formatted to the requirements above, upload your CSV and JSON In Datadog: 1. 
Navigate to [**Cloud Cost > Settings > Custom Costs**][3]. -1. Click the **+ Upload Costs** button. +1. Click the {{< ui >}}+ Upload Costs{{< /ui >}} button. {{< img src="cloud_cost/upload_file.png" alt="Upload a CSV or JSON file to Datadog" style="width:80%" >}} diff --git a/content/en/cloud_cost_management/setup/google_cloud.md b/content/en/cloud_cost_management/setup/google_cloud.md index 73856d9d598..c21312b9ca2 100644 --- a/content/en/cloud_cost_management/setup/google_cloud.md +++ b/content/en/cloud_cost_management/setup/google_cloud.md @@ -35,7 +35,7 @@ Navigate to [Setup & Configuration][3], add a Google Cloud Platform account and
The Datadog Google Cloud Platform integration allows Cloud Costs to automatically monitor all projects this service account has access to. -To limit infrastructure monitoring hosts for these projects, apply tags to the hosts. Then define whether the tags should be included or excluded from monitoring in the Limit Metric Collection Filters section of the integration page. +To limit infrastructure monitoring hosts for these projects, apply tags to the hosts. Then define whether the tags should be included or excluded from monitoring in the {{< ui >}}Limit Metric Collection Filters{{< /ui >}} section of the integration page.
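As a rough sketch of how tag-based include/exclude filters of this kind behave: a host matching an exclude tag is dropped, and otherwise it is kept when no include filter is set or when it matches an include tag. These matching semantics are an assumption for illustration, not the integration's exact logic.

```python
def host_monitored(host_tags, include, exclude):
    """Assumed semantics, for illustration only:
    exclude tags win; otherwise monitor when the include list is
    empty or the host matches at least one include tag."""
    tags = set(host_tags)
    if tags & set(exclude):
        return False
    return not include or bool(tags & set(include))

# A host tagged for exclusion is not monitored:
print(host_monitored(["env:prod", "monitor:off"], [], ["monitor:off"]))
```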
{{< img src="cloud_cost/gcp_integration_limit_metric_collection.png" alt="Limit metric collection filters section configured in the Google Cloud Platform integration page" >}} @@ -47,7 +47,7 @@ The }} @@ -63,7 +63,7 @@ The following permissions allow Datadog to access and transfer the billing expor - Enable the [BigQuery Data Transfer Service][5]. 1. Open the BigQuery Data Transfer API page in the API library. 2. From the dropdown menu, select the project that contains the service account. - 3. Click the ENABLE button. + 3. Click the {{< ui >}}ENABLE{{< /ui >}} button. **Note:** BigQuery Data Transfer API needs to be enabled on the Google Project that contains the service account. @@ -82,7 +82,7 @@ The following permissions allow Datadog to access and transfer the billing expor #### Configure export BigQuery dataset access [Add the service account as a principal on the export BigQuery dataset resource][8]: 1. In the Explorer pane on the BigQuery page, expand your project and select the export BigQuery dataset. -2. Click **Sharing > Permissions** and then **add principal**. +2. Click {{< ui >}}Sharing{{< /ui >}} > {{< ui >}}Permissions{{< /ui >}} and then {{< ui >}}add principal{{< /ui >}}. 3. In the new principals field, enter the service account. 4. Using the select a role list, assign a role with the following permissions: * `bigquery.datasets.get` @@ -106,7 +106,7 @@ Data is extracted regularly from your Detailed Usage Cost BigQuery dataset to th #### Configure bucket access [Add the service account as a principal on the GCS bucket resource][6]: 1. Navigate to the Cloud Storage Buckets page in the Google Cloud console, and select your bucket. -2. Select the permissions tab and click the **grant access** button. +2. Select the permissions tab and click the {{< ui >}}grant access{{< /ui >}} button. 3. In the new principals field, enter the service account. 4. 
Assign a role with the following permissions: * `storage.buckets.get` diff --git a/content/en/cloud_cost_management/setup/saas_costs.md b/content/en/cloud_cost_management/setup/saas_costs.md index bc141beccad..da38d864525 100644 --- a/content/en/cloud_cost_management/setup/saas_costs.md +++ b/content/en/cloud_cost_management/setup/saas_costs.md @@ -55,7 +55,7 @@ See the respective documentation for your cloud provider: ### Configure your SaaS accounts -Navigate to [**Cloud Cost** > **Settings**, select **Accounts**][8] and then click **Configure** on a provider to collect cost data. +Navigate to [**Cloud Cost** > **Settings**, select **Accounts**][8] and then click {{< ui >}}Configure{{< /ui >}} on a provider to collect cost data. {{< img src="cloud_cost/saas_costs/all_accounts.png" alt="Add your accounts with AWS, Azure, Google Cloud to collect cost data. You can also add your accounts for Fastly, Snowflake, Confluent Cloud, MongoDB, Databricks, OpenAI, Twilio, and GitHub" style="width:100%" >}} @@ -65,13 +65,13 @@ Navigate to [**Cloud Cost** > **Settings**, select **Accounts**][8] and then cli 1. Find your [Snowflake account URL][102]. {{< img src="integrations/snowflake/snowflake_account_url.png" alt="The account menu with the copy account URL option selected in the Snowflake UI" style="width:100%;" >}} -2. Navigate to the [Snowflake integration tile][101] in Datadog and click **Add Snowflake Account**. -3. Enter your Snowflake account URL in the `Account URL` field. For example: `https://xyz12345.us-east-1.snowflakecomputing.com`. -4. Under the **Connect your Snowflake account** section, click the toggle to enable Snowflake in Cloud Cost Management. -5. Enter your Snowflake user name in the `User Name` field. +2. Navigate to the [Snowflake integration tile][101] in Datadog and click {{< ui >}}Add Snowflake Account{{< /ui >}}. +3. Enter your Snowflake account URL in the {{< ui >}}Account URL{{< /ui >}} field. 
For example: `https://xyz12345.us-east-1.snowflakecomputing.com`. +4. Under the {{< ui >}}Connect your Snowflake account{{< /ui >}} section, click the toggle to enable Snowflake in Cloud Cost Management. +5. Enter your Snowflake user name in the {{< ui >}}User Name{{< /ui >}} field. 6. Follow step 4 of the [Snowflake integration][103] page to create a Datadog-specific role and user to monitor Snowflake. 7. Follow step 5 of the [Snowflake integration][103] page to configure the key-value pair authentication. -8. Click **Save**. +8. Click {{< ui >}}Save{{< /ui >}}. Snowflake cost data from the past 6 months is available in Cloud Cost Management within 24 hours. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -91,7 +91,7 @@ To use query tags within cost management, ensure the following: - The `query_tag` string must be JSON parsable. Specifically, this means that the string is processable by the native `PARSE_JSON` function. -- An allowlist of keys must be provided in the Snowflake integration tile. These keys map to the first layer of the JSON-formatted `query_tag` field. This allowlist appears in the form of a comma-separated list of strings for example: `tag_1,tag_2,tag_3`. Ensure that strings contain only alphanumeric characters, underscores, hyphens, and periods. You can enter this information into the Snowflake tile, under **Resources Collected -> Cloud Cost Management -> Collected Query Tags**. +- An allowlist of keys must be provided in the Snowflake integration tile. These keys map to the first layer of the JSON-formatted `query_tag` field. This allowlist appears in the form of a comma-separated list of strings for example: `tag_1,tag_2,tag_3`. Ensure that strings contain only alphanumeric characters, underscores, hyphens, and periods. 
You can enter this information into the Snowflake tile, under {{< ui >}}Resources Collected{{< /ui >}} > {{< ui >}}Cloud Cost Management{{< /ui >}} > {{< ui >}}Collected Query Tags{{< /ui >}}. **Note**: Select your query tags with data magnitude in mind. Appropriate query tags are ones that have low to medium group cardinality (for example: team, user, service). Selecting a query tag with high group cardinality (such as unique UUID associated with job executions) can result in bottlenecking issues for both data ingestion and frontend rendering. @@ -117,11 +117,11 @@ Notes: {{% tab "Databricks" %}} -1. Navigate to the [Databricks integration tile][101] in Datadog and click **Configure**. +1. Navigate to the [Databricks integration tile][101] in Datadog and click {{< ui >}}Configure{{< /ui >}}. 2. Enter the workspace name, url, client ID, and client secret corresponding to your Databricks service principal. -3. Under the **Select products to set up integration** section, click the toggle for each account to enable Databricks `Cloud Cost Management`. +3. Under the {{< ui >}}Select products to set up integration{{< /ui >}} section, click the toggle for each account to enable Databricks `Cloud Cost Management`. 4. Enter a `System Tables SQL Warehouse ID` corresponding to your Databricks instance's warehouse to query for system table billing data. -5. Click **Save Databricks Workspace**. +5. Click {{< ui >}}Save Databricks Workspace{{< /ui >}}. Your service principal requires read access to the [system tables](https://docs.databricks.com/aws/en/admin/system-tables/) within Unity Catalog. ```sql @@ -145,12 +145,12 @@ Your Databricks cost data for the past 15 months can be accessed in Cloud Cost M 1. Create an [admin API key][103] in your OpenAI account settings: - Log in to your [OpenAI Account][104]. - - Navigate to the [Admin Keys page][105] or go to **API keys** under **Organization settings** and select the **Admin keys** tab. 
- - Click **Create a new secret key** and copy the created admin API key. -2. Navigate to the [OpenAI integration tile][102] in Datadog and click **Add Account**. + - Navigate to the [Admin Keys page][105] or go to {{< ui >}}API keys{{< /ui >}} under {{< ui >}}Organization settings{{< /ui >}} and select the {{< ui >}}Admin keys{{< /ui >}} tab. + - Click {{< ui >}}Create a new secret key{{< /ui >}} and copy the created admin API key. +2. Navigate to the [OpenAI integration tile][102] in Datadog and click {{< ui >}}Add Account{{< /ui >}}. 3. Enter your OpenAI account name, input your admin API key, and optionally, specify tags. -4. Under the **Resources** section, click the toggle for each account to enable `OpenAI Billing Usage Data Collection`. -5. Click **Save**. +4. Under the {{< ui >}}Resources{{< /ui >}} section, click the toggle for each account to enable `OpenAI Billing Usage Data Collection`. +5. Click {{< ui >}}Save{{< /ui >}}. Your OpenAI cost data for the past 15 months can be accessed in Cloud Cost Management after 24 hours. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -175,8 +175,8 @@ Begin by getting an [Admin API key](https://docs.anthropic.com/en/api/administra ### 2. Configure the Datadog integration 1. In Datadog, go to [**Integrations > Anthropic Usage and Costs**](https://app.datadoghq.com/integrations?integrationId=anthropic-usage-and-costs). -2. On the **Configure** tab, under **Account details**, paste in the **Admin API Key** from Anthropic. -3. Click **Save**. +2. On the {{< ui >}}Configure{{< /ui >}} tab, under {{< ui >}}Account details{{< /ui >}}, paste in the {{< ui >}}Admin API Key{{< /ui >}} from Anthropic. +3. Click {{< ui >}}Save{{< /ui >}}. After you save your configuration, Datadog begins polling Anthropic usage and cost endpoints using this key, and populates metrics in your environment. 
@@ -186,7 +186,7 @@ After you save your configuration, Datadog begins polling Anthropic usage and co 1. Create a personal authorization token (classic), with the `manage_billing:enterprise` and `read:org` scopes on the [Personal Access Tokens][109] page in GitHub. 2. Navigate to the Datadog [GitHub Costs tile][108]. -3. Click **Add New**. +3. Click {{< ui >}}Add New{{< /ui >}}. 4. Enter an account name, your personal access token, and your enterprise name (in `enterprise-name` format), as well as any appropriate tags. 5. Click the checkmark button to save this account. @@ -202,10 +202,10 @@ Your GitHub cost data for the past 15 months can be accessed in Cloud Cost Manag {{% tab "Confluent Cloud" %}} 1. Create or acquire an API key with the [billing admin][102] role in Confluent Cloud. -2. Navigate to the [Confluent Cloud integration tile][101] in Datadog and click **Add Account**. +2. Navigate to the [Confluent Cloud integration tile][101] in Datadog and click {{< ui >}}Add Account{{< /ui >}}. 3. Enter your Confluent Cloud account name, API key, API secret, and optionally, specify tags. -4. Under the **Resources** section, click the toggle for `Collect cost data to view in Cloud Cost Management`. -5. Click **Save**. +4. Under the {{< ui >}}Resources{{< /ui >}} section, click the toggle for {{< ui >}}Collect cost data to view in Cloud Cost Management{{< /ui >}}. +5. Click {{< ui >}}Save{{< /ui >}}. Your Confluent Cloud cost data becomes available in Cloud Cost Management 24 hours after setup. This data automatically includes 12 months of history, the maximum provided by the Confluent billing API. Over the next three months, the data gradually expands to cover 15 months of history. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -221,9 +221,9 @@ If you wish to collect cluster-level tags or business metadata tags for your cos {{% tab "MongoDB" %}} 1. 
[Create an API token][101] in MongoDB with `Organizational Billing Viewer` permissions, and add `Organizational Read Only` permissions for cluster resource tags. -2. Navigate to the [MongoDB Cost Management integration tile][102] in Datadog and click **Add New**. +2. Navigate to the [MongoDB Cost Management integration tile][102] in Datadog and click {{< ui >}}Add New{{< /ui >}}. 3. Enter your MongoDB account name, public key, private key, organizational ID, and optionally, specify tags. -4. Click **Save**. +4. Click {{< ui >}}Save{{< /ui >}}. Your MongoDB cost data for the past 15 months can be accessed in Cloud Cost Management after 24 hours. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -237,13 +237,13 @@ Your MongoDB cost data for the past 15 months can be accessed in Cloud Cost Mana {{% tab "Elastic Cloud" %}} 1. Go to the [API Key][102] section in your Elastic Cloud organization's settings. -2. Click **Create New Key**. -3. Choose a **Name** and **Expiration Date** for your API key. -4. Select the **Billing Admin** role. -5. Click **Create Key** to generate the key. +2. Click {{< ui >}}Create New Key{{< /ui >}}. +3. Choose a {{< ui >}}Name{{< /ui >}} and {{< ui >}}Expiration Date{{< /ui >}} for your API key. +4. Select the {{< ui >}}Billing Admin{{< /ui >}} role. +5. Click {{< ui >}}Create Key{{< /ui >}} to generate the key. 6. Go to the [Elastic Cloud integration tile][101] in Datadog -7. Click **Add Account**. -8. Enter your **Elastic Cloud Organization ID** and **Billing API Key** in the account table. +7. Click {{< ui >}}Add Account{{< /ui >}}. +8. Enter your {{< ui >}}Elastic Cloud Organization ID{{< /ui >}} and {{< ui >}}Billing API Key{{< /ui >}} in the account table. Your Elastic Cloud cost data for the past 15 months can be accessed in Cloud Cost Management after 24 hours. 
To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -257,9 +257,9 @@ Your Elastic Cloud cost data for the past 15 months can be accessed in Cloud Cos {{% tab "Fastly" %}} 1. Create an API token with at least the `"global:read"` scope and `"Billing"` role on the [Personal API tokens][101] page in Fastly. -2. Navigate to the [Fastly cost management integration tile][102] in Datadog and click **Add New**. +2. Navigate to the [Fastly cost management integration tile][102] in Datadog and click {{< ui >}}Add New{{< /ui >}}. 3. Enter your Fastly account name and API token. -4. Click **Save**. +4. Click {{< ui >}}Save{{< /ui >}}. Your Fastly cost data for the past 15 months can be accessed in Cloud Cost Management after 24 hours. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). @@ -271,10 +271,10 @@ Your Fastly cost data for the past 15 months can be accessed in Cloud Cost Manag {{% /tab %}} {{% tab "Twilio" %}} -1. Navigate to the [Twilio integration tile][101] in Datadog and click **Add Account**. -2. Under the **Resources** section, click the toggle for each account to enable `Twilio in Cloud Cost Management`. -3. Enter an `Account SID` for your Twilio account. -4. Click **Save**. +1. Navigate to the [Twilio integration tile][101] in Datadog and click {{< ui >}}Add Account{{< /ui >}}. +2. Under the {{< ui >}}Resources{{< /ui >}} section, click the toggle for each account to enable `Twilio in Cloud Cost Management`. +3. Enter an {{< ui >}}Account SID{{< /ui >}} for your Twilio account. +4. Click {{< ui >}}Save{{< /ui >}}. Your Twilio cost data for the past 15 months can be accessed in Cloud Cost Management after 24 hours. To access the available data collected by each SaaS Cost Integration, see the [Data Collected section](#data-collected). 
diff --git a/content/en/cloud_cost_management/tags/_index.md b/content/en/cloud_cost_management/tags/_index.md index 1bdbcdea25d..af6b46d3fbf 100644 --- a/content/en/cloud_cost_management/tags/_index.md +++ b/content/en/cloud_cost_management/tags/_index.md @@ -88,7 +88,7 @@ For example, a tag `Team:Engineering-Services` appears as `team:engineering-serv ## Override tag value normalization -Turn on **Tag Normalization** in the Tag Pipelines page to normalize all cost tag values to match the Metrics normalization. From the example above, you would see `team:engineering-services` everywhere. Tag normalization applies to user-defined tags from cloud costs. Tags created by Tag Pipelines are not normalized. For Azure, the `consumedservice` out-of-the-box tag is also normalized to lowercase. For all new users, the Tag Normalization toggle is enabled by default, with normalized tag values backfilled for the past 3 months automatically. To backfill normalized tags for a longer period up to 15 months, contact [Datadog support][13]. +Turn on {{< ui >}}Tag Normalization{{< /ui >}} in the Tag Pipelines page to normalize all cost tag values to match the Metrics normalization. From the example above, you would see `team:engineering-services` everywhere. Tag normalization applies to user-defined tags from cloud costs. Tags created by Tag Pipelines are not normalized. For Azure, the `consumedservice` out-of-the-box tag is also normalized to lowercase. For all new users, the Tag Normalization toggle is enabled by default, with normalized tag values backfilled for the past 3 months automatically. To backfill normalized tags for a longer period up to 15 months, contact [Datadog support][13]. Tag normalization allows you to: - View, filter and group Cost Recommendations and cost data with the same tag values @@ -119,7 +119,7 @@ Other tag sources (such as AWS Organization tags, integration tile tags, and sim ## Improving tagging 1. 
**Understand what tags exist** - Use the [Tag Explorer][5] to discover which tags are already available in your cost data. -2. **Identify gaps in cost allocation** - In the Explorer, group by any tag to see the cost allocated to that tag, or unallocated (which is displayed as `N/A`). Make sure to have "Container Allocated" enabled so that you are looking at a cost allocation that includes tags on pods. +2. **Identify gaps in cost allocation** - In the Explorer, group by any tag to see the cost allocated to that tag, or unallocated (which is displayed as `N/A`). Make sure to have {{< ui >}}Container Allocated{{< /ui >}} enabled so that you are looking at a cost allocation that includes tags on pods. 3. **Split up shared costs** - Use [Custom Allocation Rules][6] to split and assign shared costs back to teams, services, and more. You can use observability data to split costs accurately based on infrastructure usage. 4. **Address missing or incorrect tags** - Use [Tag Pipelines][4] to alias tags, or create a new tag, for incorrect tagging. For example, if your organization wants to use the standard `application` tag key, but teams use variations (like app, webapp, or apps), you can consolidate those tags to become `application` for more accurate cost reporting. 5. **Add new tags** - Use [Tag Pipelines][4] to automatically create new inferred tags that align with specific business logic, such as a `business-unit` tag based on team structure. diff --git a/content/en/cloud_cost_management/tags/multisource_querying.md b/content/en/cloud_cost_management/tags/multisource_querying.md index 9d0fa2c87ba..2f92cb08417 100644 --- a/content/en/cloud_cost_management/tags/multisource_querying.md +++ b/content/en/cloud_cost_management/tags/multisource_querying.md @@ -28,29 +28,29 @@ To use Multisource Querying, ensure you have configured [Cloud Cost Management][ ## Query your cost data -You can select multiple providers in the **Provider** field on the [**Explorer** page][6]. 
+You can select multiple providers in the {{< ui >}}Provider{{< /ui >}} field on the [**Explorer** page][6]. {{< img src="cloud_cost/multisource_querying/provider.png" alt="The Provider field below the search query on the Cloud Cost Explorer page" style="width:40%;" >}} -Dropdown filters like **Provider** and **Team** maintain flexibility and streamline the process of creating a search query so you can refine your cost data. To add a filter, click **+ Filter**. +Dropdown filters like {{< ui >}}Provider{{< /ui >}} and {{< ui >}}Team{{< /ui >}} streamline building a search query so you can refine your cost data. To add a filter, click {{< ui >}}+ Filter{{< /ui >}}. {{< img src="cloud_cost/multisource_querying/filters_2.png" alt="A search query that uses the Team filter and groups reports by service on the Cloud Cost Explorer page" style="width:100%;" >}} -Click **Refine Results** to access the following options and filter your cost data. +Click {{< ui >}}Refine Results{{< /ui >}} to access the following options and filter your cost data. -Usage Charges Only +{{< ui >}}Usage Charges Only{{< /ui >}} : Examine costs impacted by engineering teams, excluding credits, fees, and taxes. -Complete Days Only +{{< ui >}}Complete Days Only{{< /ui >}} : Exclude the past two days of cost data, which are incomplete. -Total Cost +{{< ui >}}Total Cost{{< /ui >}} : Filter the data to view costs within a specific cost range. -Dollar Change +{{< ui >}}Dollar Change{{< /ui >}} : Only display cost changes within a specified dollar change range. -Percent Change +{{< ui >}}Percent Change{{< /ui >}} : Only display cost changes within a specified percentage change range.
{{< img src="cloud_cost/multisource_querying/refine_results_1.png" alt="Additional options to refine your cost data on the Cloud Cost Explorer page" style="width:100%;" >}} @@ -65,7 +65,7 @@ With Multisource Querying, you can create visualizations using cost data across ### Cost metric -Multisource Querying uses the `all.cost` metric, which combines all individual cloud and SaaS cost metrics into a unified view on the **Analytics** page. +Multisource Querying uses the `all.cost` metric, which combines all individual cloud and SaaS cost metrics into a unified view on the {{< ui >}}Analytics{{< /ui >}} page. **Note:** The `all.cost` metric does not include resource-level tags. To view costs by resource, use the specific cost metrics for each provider (such as `aws.cost.amortized`). When you filter to a specific provider in the search query, Datadog automatically switches to the corresponding provider-specific metric, enabling more granular querying of your cost data. diff --git a/content/en/cloud_cost_management/tags/tag_explorer.md b/content/en/cloud_cost_management/tags/tag_explorer.md index ffa2f7bdae3..836ef18df49 100644 --- a/content/en/cloud_cost_management/tags/tag_explorer.md +++ b/content/en/cloud_cost_management/tags/tag_explorer.md @@ -41,26 +41,26 @@ See the respective documentation for your cloud provider: ## Search and manage tags -Navigate to [**Cloud Cost** > **Settings** > **Tag Explorer**][2] to search for tags related to your cloud provider bills, custom costs, Datadog costs, SaaS cost integrations, and tag pipelines. +Navigate to [{{< ui >}}Cloud Cost{{< /ui >}} > {{< ui >}}Settings{{< /ui >}} > {{< ui >}}Tag Explorer{{< /ui >}}][2] to search for tags related to your cloud provider bills, custom costs, Datadog costs, SaaS cost integrations, and tag pipelines. {{< tabs >}} {{% tab "AWS" %}} -For AWS tags, select **AWS** from the dropdown menu on the top right corner. 
+For AWS tags, select {{< ui >}}AWS{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/aws_1.png" alt="Search through the list of AWS cost-related tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} {{% /tab %}} {{% tab "Azure" %}} -For Azure tags, select **Azure** from the dropdown menu on the top right corner. +For Azure tags, select {{< ui >}}Azure{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/azure_1.png" alt="Search through the list of Azure cost-related tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} {{% /tab %}} {{% tab "Google" %}} -For Google Cloud tags, select **Google** from the dropdown menu on the top right corner. +For Google Cloud tags, select {{< ui >}}Google{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/google_1.png" alt="Search through the list of Google Cloud cost-related tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -69,7 +69,7 @@ For Google Cloud tags, select **Google** from the dropdown menu on the top right
Daily Datadog costs are in Preview.
-For Datadog tags, select **Datadog** from the dropdown menu on the top right corner. +For Datadog tags, select {{< ui >}}Datadog{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/datadog_1.png" alt="Search through the list of your Datadog cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -78,7 +78,7 @@ For Datadog tags, select **Datadog** from the dropdown menu on the top right cor
Confluent Cloud costs are in Preview.
-For Confluent Cloud tags, select **Confluent Cloud** from the dropdown menu on the top right corner. +For Confluent Cloud tags, select {{< ui >}}Confluent Cloud{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/confluent_cloud_1.png" alt="Search through the list of your Confluent Cloud cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -87,7 +87,7 @@ For Confluent Cloud tags, select **Confluent Cloud** from the dropdown menu on t
Databricks costs are in Preview.
-For Databricks tags, select **Databricks** from the dropdown menu on the top right corner. +For Databricks tags, select {{< ui >}}Databricks{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/databricks_1.png" alt="Search through the list of your Databricks cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -96,7 +96,7 @@ For Databricks tags, select **Databricks** from the dropdown menu on the top rig
Fastly costs are in Preview.
-For Fastly tags, select **Fastly** from the dropdown menu on the top right corner. +For Fastly tags, select {{< ui >}}Fastly{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/fastly_1.png" alt="Search through the list of your Fastly cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -105,7 +105,7 @@ For Fastly tags, select **Fastly** from the dropdown menu on the top right corne
Elastic Cloud costs are in Preview.
-For Elastic Cloud tags, select **Elastic Cloud** from the dropdown menu on the top right corner. +For Elastic Cloud tags, select {{< ui >}}Elastic Cloud{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/elastic_cloud.png" alt="Search through the list of your Elastic Cloud cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -114,7 +114,7 @@ For Elastic Cloud tags, select **Elastic Cloud** from the dropdown menu on the t
MongoDB costs are in Preview.
-For MongoDB tags, select **MongoDB** from the dropdown menu on the top right corner. +For MongoDB tags, select {{< ui >}}MongoDB{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/mongodb_1.png" alt="Search through the list of your MongoDB cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -123,7 +123,7 @@ For MongoDB tags, select **MongoDB** from the dropdown menu on the top right cor
OpenAI costs are in Preview.
-For OpenAI tags, select **OpenAI** from the dropdown menu on the top right corner. +For OpenAI tags, select {{< ui >}}OpenAI{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/openai_1.png" alt="Search through the list of your OpenAI cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -132,7 +132,7 @@ For OpenAI tags, select **OpenAI** from the dropdown menu on the top right corne
Snowflake costs are in Preview.
-For Snowflake tags, select **Snowflake** from the dropdown menu on the top right corner. +For Snowflake tags, select {{< ui >}}Snowflake{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/snowflake_1.png" alt="Search through the list of your Snowflake cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -141,7 +141,7 @@ For Snowflake tags, select **Snowflake** from the dropdown menu on the top right
Twilio costs are in Preview.
-For Twilio tags, select **Twilio** from the dropdown menu on the top right corner. +For Twilio tags, select {{< ui >}}Twilio{{< /ui >}} from the dropdown menu in the top-right corner. {{< img src="cloud_cost/tag_explorer/twilio_1.png" alt="Search through the list of your Twilio cost tags in the Tag Explorer and understand where the costs are coming from" style="width:100%;" >}} @@ -165,8 +165,8 @@ You can add or edit descriptions for any tag in the Tag Explorer to provide cont Tag descriptions are visible to all members of your organization and appear in the following locations: -- **Tag Explorer**: Descriptions are displayed in the tag table alongside each tag key. -- **Group-by selectors**: When selecting tags to group by across Cloud Cost Management, descriptions appear in the dropdown menu to help users choose the right tag. +- {{< ui >}}Tag Explorer{{< /ui >}}: Descriptions are displayed in the tag table alongside each tag key. +- {{< ui >}}Group-by selectors{{< /ui >}}: When selecting tags to group by across Cloud Cost Management, descriptions appear in the dropdown menu to help users choose the right tag.
## Further reading From 01c5e071009b2ed0d2ba8d921ac8655642a63ed0 Mon Sep 17 00:00:00 2001 From: cswatt Date: Mon, 20 Apr 2026 12:46:20 -0700 Subject: [PATCH 2/2] format for mdoc --- .../allocation/container_cost_allocation.mdoc.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md b/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md index 0f53f520185..379926372e9 100644 --- a/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md +++ b/content/en/cloud_cost_management/allocation/container_cost_allocation.mdoc.md @@ -23,7 +23,7 @@ Clouds Resources : CCM allocates costs for Kubernetes clusters and includes cost analysis for many associated resources such as Kubernetes persistent volumes used by your pods. -CCM displays costs for resources including CPU, memory, and more depending on the cloud and orchestrator you are using on the [{{< ui >}}Containers{{< /ui >}} page][1]. +CCM displays costs for resources including CPU, memory, and more depending on the cloud and orchestrator you are using on the [{% ui %}Containers{% /ui %} page][1]. {% img src="cloud_cost/container_cost_allocation/container_allocation.png" alt="Cloud cost allocation table showing requests and idle costs over the past month on the Containers page" style="width:100%;" /%} @@ -229,7 +229,7 @@ The cost of an EBS volume has three components: IOPS, throughput, and storage. E | Spend type | Description | | -----------| ----------- | | Usage | Cost of provisioned IOPS, throughput, or storage used by workloads. Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. | -| Workload idle | Cost of provisioned IOPS, throughput, or storage that are reserved and allocated but not used by workloads. 
Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. This is the difference between the total resources requested and the average usage. **Note:** This tag is only available if you have enabled `Resource Collection` in your [AWS Integration][21]. To prevent being charged for `Cloud Security Posture Management`, ensure that during the `Resource Collection` setup, the {{< ui >}}Cloud Security Posture Management{{< /ui >}} box is unchecked. | +| Workload idle | Cost of provisioned IOPS, throughput, or storage that are reserved and allocated but not used by workloads. Storage cost is based on the maximum amount of volume storage used that day, while IOPS and throughput costs are based on the average amount of volume storage used that day. This is the difference between the total resources requested and the average usage. **Note:** This tag is only available if you have enabled {% ui %}Resource Collection{% /ui %} in your [AWS Integration][21]. To prevent being charged for {% ui %}Cloud Security Posture Management{% /ui %}, ensure that during the {% ui %}Resource Collection{% /ui %} setup, the {% ui %}Cloud Security Posture Management{% /ui %} box is unchecked. | | Cluster idle | Cost of provisioned IOPS, throughput, or storage that are not reserved by any pods that day. This is the difference between the total cost of the resources and what is allocated to workloads. | **Note**: Persistent volume allocation is only supported in Kubernetes clusters, and is only available for pods that are part of a Kubernetes StatefulSet. @@ -294,22 +294,22 @@ Cluster idle costs (identified by `allocated_spend_type:cluster_idle`) represent To configure cluster idle allocation, go to the [Cluster Idle Allocation settings][22] page and follow these steps: -1. Click {{< ui >}}Enable cluster idle allocation{{< /ui >}}. +1. 
Click {% ui %}Enable cluster idle allocation{% /ui %}. 1. Select a redistribution level: - {{< ui >}}Cluster{{< /ui >}} + {% ui %}Cluster{% /ui %} : Redistributes idle costs at the cluster level. - {{< ui >}}Node{{< /ui >}} + {% ui %}Node{% /ui %} : Redistributes idle costs at the node level. Datadog also allocates to the `kube_node_name` tag. - {{< ui >}}Nodepool{{< /ui >}} + {% ui %}Nodepool{% /ui %} : Redistributes idle costs at the nodepool level. Select a nodepool tag. 1. Optionally, select up to two additional destination tags. -1. Click {{< ui >}}Save{{< /ui >}}. +1. Click {% ui %}Save{% /ui %}. -To disable cluster idle allocation, return to the [Cluster Idle Allocation settings][22] page and click {{< ui >}}Disable{{< /ui >}}. +To disable cluster idle allocation, return to the [Cluster Idle Allocation settings][22] page and click {% ui %}Disable{% /ui %}. **Note**: Any settings change, including disabling, re-enabling, or modifying the redistribution level, re-backfills the last 3 months of data with the latest settings.
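Both commits above boil down to the same mechanical rewrite: bolded UI labels become `ui` shortcodes, with Hugo syntax (`{{< ui >}}…{{< /ui >}}`) in regular `.md` pages and Markdoc syntax (`{% ui %}…{% /ui %}`) in `.mdoc.md` pages. A minimal sketch of that transformation, assuming a plain regex find-and-replace; the real patch was curated by hand, so treat these helpers as illustrative, not as the docs team's actual tooling:

```python
import re

# Matches **Label** spans (no newlines or nested asterisks inside).
BOLD = re.compile(r"\*\*([^*\n]+)\*\*")

def to_hugo_ui(text: str) -> str:
    """Rewrite **Label** as the Hugo ui shortcode used in .md pages."""
    return BOLD.sub(r"{{< ui >}}\1{{< /ui >}}", text)

def hugo_ui_to_markdoc(text: str) -> str:
    """Rewrite Hugo ui shortcodes as Markdoc tags for .mdoc.md pages."""
    return text.replace("{{< ui >}}", "{% ui %}").replace("{{< /ui >}}", "{% /ui %}")

print(to_hugo_ui("Click **Save**."))
# Click {{< ui >}}Save{{< /ui >}}.
print(hugo_ui_to_markdoc("Click {{< ui >}}Save{{< /ui >}}."))
# Click {% ui %}Save{% /ui %}.
```

Note the caveat a blanket regex would hit: it converts every bold span, including ones that are not UI labels (such as `**Note:**` or bold server names), which is why the diffs above change only selected occurrences and leave other bold text intact.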