
[Bug]: Infinite Loading and TypeError in Jaeger UI with No Traces #7423

@AdhamMGaber9


What happened?

Issue Brief: Infinite Loading and TypeError in Jaeger UI with No Traces
Description
When querying the Jaeger UI with a time range containing no trace data (e.g., for service service-1), the UI enters an infinite loading state and throws a TypeError: Cannot read properties of undefined (reading 'forEach') in the console. This issue appears to be related to the handling of operations metrics (e.g., service_latencies) when no traces are available, potentially due to NaN values or undefined responses from the metrics backend.
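
One way to narrow down whether the empty response comes from the metrics backend or is mishandled by the UI is to call the metrics query endpoint directly for the same window. A rough TypeScript sketch follows; the /api/metrics/latencies path and its query parameters are assumptions based on the SPM documentation and may need adjusting for your Jaeger version:

// Hedged sketch: fetch the latency metrics the UI would request for a window
// with no traces and inspect what comes back. Endpoint path and parameter
// names are assumptions from the SPM docs; "service-1" is the example service.
const JAEGER_QUERY_URL = "http://localhost:16686"; // e.g. a port-forwarded jaeger-query pod

async function checkLatencies(service: string): Promise<void> {
  const params = new URLSearchParams({
    service,
    quantile: "0.95",
    lookback: String(10 * 60 * 1000), // last 10 minutes, in ms
    step: String(60 * 1000),          // 1-minute resolution, in ms
    ratePer: String(10 * 60 * 1000),
  });
  const res = await fetch(`${JAEGER_QUERY_URL}/api/metrics/latencies?${params}`);
  const body = await res.json();
  // If the metrics array here is missing or empty, the UI handler
  // (fetchOpsMetricsDone) ends up iterating over undefined, which matches
  // the stack trace below.
  console.log(res.status, JSON.stringify(body, null, 2));
}

checkLatencies("service-1").catch(console.error);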

Actual Behavior

  • The UI enters an infinite loading state, failing to respond.
  • The browser console logs the following error:
index-KA9E3UZq.js:123 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'forEach')
    at index-KA9E3UZq.js:123:170602
    at Array.forEach (<anonymous>)
    at fetchOpsMetricsDone (index-KA9E3UZq.js:123:170533)
    at index-KA9E3UZq.js:98:1725309
    at index-KA9E3UZq.js:98:1725820
    at Array.reduce (<anonymous>)
    at index-KA9E3UZq.js:98:1725790
    at index-KA9E3UZq.js:98:1726443
    at index-KA9E3UZq.js:98:205551
    at F (index-KA9E3UZq.js:98:204406)

Steps to reproduce

  1. Deploy Jaeger using the image jaegertracing/jaeger:latest with an OpenTelemetry Collector forwarding traces to jaeger.xx.svc.cluster.local:4317.
  2. Ensure no trace data is generated for a specific service (e.g., mx-instant-api-catalog-alt) during a given time range (e.g., 02:30 AM to 02:40 AM EEST, August 08, 2025).
  3. Access the Jaeger UI.
  4. Select the service service-1 and set the time range to a period with no traces.
  5. Observe the UI behavior and check the browser console for errors.

Expected behavior

  • The Jaeger UI should load within a reasonable timeout (e.g., 10-15 seconds) and display a message such as "No trace results. Try another query." without throwing JavaScript errors.
  • Operations metrics (e.g., service_latencies) should either be omitted or handled gracefully when no trace data is available.

Relevant log output

index-KA9E3UZq.js:123 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'forEach')
    at index-KA9E3UZq.js:123:170602
    at Array.forEach (<anonymous>)
    at fetchOpsMetricsDone (index-KA9E3UZq.js:123:170533)
    at index-KA9E3UZq.js:98:1725309
    at index-KA9E3UZq.js:98:1725820
    at Array.reduce (<anonymous>)
    at index-KA9E3UZq.js:98:1725790
    at index-KA9E3UZq.js:98:1726443
    at index-KA9E3UZq.js:98:205551
    at F (index-KA9E3UZq.js:98:204406)

Screenshot

No response

Additional context

Possible Solutions:

  • Add a check in the Jaeger UI (fetchOpsMetricsDone) to handle undefined data before calling forEach (e.g., if (data && Array.isArray(data)) data.forEach(...)); see the sketch after this list.

  • Modify Jaeger’s metric aggregation to return a default value (e.g., 0) or null when no traces are available.

  • Update the OpenTelemetry Collector to filter out spans contributing to NaN metrics (e.g., via filterprocessor on error attributes).
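
A minimal sketch of that guard in TypeScript (the payload shape and type names below are assumptions for illustration, not the actual jaeger-ui reducer code; only the defensive check matters):

// Illustrative sketch only: the payload shape is an assumption, not the real
// jaeger-ui types. The point is guarding before forEach and skipping NaN.
interface MetricPoint {
  gaugeValue: { doubleValue: number };
  timestamp: string;
}

interface OpsMetricsPayload {
  // May be missing entirely when the backend has no data for the time range.
  metricPoints?: MetricPoint[];
}

function extractLatencies(payload: OpsMetricsPayload | undefined): number[] {
  const values: number[] = [];
  const points = payload?.metricPoints;
  // Guard against an undefined payload or missing array instead of throwing,
  // so the promise resolves and the UI can render "No trace results".
  if (Array.isArray(points)) {
    points.forEach((p) => {
      // Drop NaN values produced by aggregating over zero spans.
      if (Number.isFinite(p.gaugeValue.doubleValue)) {
        values.push(p.gaugeValue.doubleValue);
      }
    });
  }
  return values;
}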

Jaeger backend version

v2.9.0 (latest)

SDK

otel/opentelemetry-collector-contrib:latest

Pipeline

No response

Storage backend

Elasticsearch

Operating system

linux

Deployment model

k8s deployment

Deployment configs

---
# ConfigMap for Jaeger SPM configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: jaeger-spm-config
  labels:
    app: jaeger
    repo: tracing
data:
  config-spm-elasticsearch.yaml: |
    service:
      extensions: [jaeger_storage, jaeger_query]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger_storage_exporter]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [jaeger_storage_exporter]
      telemetry:
        logs:
          level: debug
    extensions:
      jaeger_query:
        storage:
          traces: elasticsearch_trace_storage
          metrics: elasticsearch_trace_storage
      jaeger_storage:
        backends:
          elasticsearch_trace_storage: &elasticsearch_config
            elasticsearch:
              server_urls:
                - http://es-telemetry:9200
        metric_backends:
          elasticsearch_trace_storage: *elasticsearch_config
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
    processors:
      batch: {}
    exporters:
      jaeger_storage_exporter:
        trace_storage: elasticsearch_trace_storage
        metric_storage: elasticsearch_trace_storage
---
# ConfigMap for OpenTelemetry Collector configuration
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    noon/deploy-groups: all
    pulumi.com/patchForce: 'true'
    pulumi.com/skipAwait: 'true'
  labels:
    name: opentelemetry-collector-conf
    repo: tracing
  name: opentelemetry-collector-conf
data:
  opentelemetry-collector-config: |
    exporters:
      debug:
        sampling_initial: 5
        sampling_thereafter: 200
        verbosity: detailed
      otlp/jaeger:
        endpoint: jaeger.xxx.svc.cluster.local:4317
        tls:
          insecure: true
    extensions:
      health_check:
        endpoint: ${env:MY_POD_IP}:13133
    processors:
      batch:
        send_batch_max_size: 200
        send_batch_size: 200
        timeout: 5s
      groupbytrace:
        num_traces: 1000000
        wait_duration: 30s
      tail_sampling:
        decision_wait: 30s
        num_traces: 1000000
        policies:
        - name: keep_all_spans
          type: always_sample
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            cors:
              allowed_origins:
              - http://*
              - https://*
            endpoint: ${env:MY_POD_IP}:4318
    service:
      extensions:
      - health_check
      pipelines:
        traces:
          exporters:
          - otlp/jaeger
          processors:
          - groupbytrace
          - tail_sampling
          - batch
          receivers:
          - otlp
      telemetry:
        logs:
          level: debug
---
# OpenTelemetry Collector Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: opentelemetry-collector
  name: opentelemetry-collector
spec:
  minReadySeconds: 30
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: opentelemetry-collector
      repo: opentelemetry-collector
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opentelemetry-collector
        repo: opentelemetry-collector
    spec:
      affinity: null
      containers:
      - command:
        - /otelcol-contrib
        - --config=/conf/opentelemetry-collector-config.yaml
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: K8S_SA_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        envFrom: []
        image: otel/opentelemetry-collector-contrib:latest
        imagePullPolicy: Always
        name: opentelemetry-collector
        ports:
        - containerPort: 4317
          protocol: TCP
        - containerPort: 4318
          protocol: TCP
        - containerPort: 55678
          name: grpc-opencensus
          protocol: TCP
        resources:
          requests:
            cpu: 1
            memory: 2Gi
        volumeMounts:
        - mountPath: /conf
          name: opentelemetry-collector-config-vol
      nodeSelector:
        pool: default
      serviceAccountName: tracing
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: opentelemetry-collector-config
            path: opentelemetry-collector-config.yaml
          name: opentelemetry-collector-conf
        name: opentelemetry-collector-config-vol
---
# OpenTelemetry Collector Service
apiVersion: v1
kind: Service
metadata:
  labels:
    name: opentelemetry-collector
    repo: tracing
  name: opentelemetry-collector
spec:
  ports:
  - name: otlp-grpc
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http
    port: 4318
    protocol: TCP
    targetPort: 4318
  selector:
    app: opentelemetry-collector
  type: ClusterIP
---
# Jaeger Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jaeger
  name: jaeger
spec:
  minReadySeconds: 30
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: jaeger
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
      labels:
        app: jaeger
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: name
                  operator: In
                  values:
                  - jaeger
              topologyKey: topology.kubernetes.io/zone
            weight: 1
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: name
                  operator: In
                  values:
                  - jaeger
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - env:
        - name: INFRA_APP_REPO
          value: tracing
        - name: K8S_SA_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: BACKEND
          value: http://127.0.0.1:16686
        - name: RATELIMIT_ENABLED
          value: '0'
        envFrom:
        - configMapRef:
            name: team-config-authproxy
        imagePullPolicy: Always
        name: authproxy-team
        ports:
        - containerPort: 8082
          name: http
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        startupProbe:
          failureThreshold: 3
          httpGet:
            path: /auth/hc
            port: 8082
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        volumeMounts:
        - mountPath: /credentials
          name: team-authproxy-credentials
      - env:
        - name: SPAN_STORAGE_TYPE
          value: elasticsearch
        - name: ES_SERVER_URLS
          value: http://es-telemetry:9200
        - name: METRICS_STORAGE_TYPE
          value: elasticsearch
        - name: ES_TAGS_AS_FIELDS_ALL
          value: 'true'
        - name: ES_INDEX_PREFIX
          value: jaeger
        - name: ES_NUM_SHARDS
          value: '5'
        - name: ES_NUM_REPLICAS
          value: '1'
        - name: ES_TIMEOUT
          value: 10s  # Elasticsearch query timeout
        - name: QUERY_MAX_DURATION
          value: 10s  # Query service timeout
        - name: JAEGER_QUERY_LOG_LEVEL
          value: debug  # Enable detailed logging
        - name: ES_MAX_RETRIES
          value: "1"
        - name: DEPENDENCIES_ENABLED
          value: 'true'
        - name: DEPENDENCIES_CRON_SCHEDULE
          value: 0 */1 * * *
        - name: INFRA_APP_REPO
          value: tracing
        - name: K8S_SA_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        args:
          - "--config=/conf/config-spm-elasticsearch.yaml"
        envFrom: []
        image: jaegertracing/jaeger:2.2.0
        imagePullPolicy: Always
        name: jaeger
        ports:
        - containerPort: 16686
          protocol: TCP
        - containerPort: 14268
          protocol: TCP
        - containerPort: 14250
          protocol: TCP
        - containerPort: 9411
          protocol: TCP
        - containerPort: 4317
          protocol: TCP
        - containerPort: 4318
          protocol: TCP
        resources:
          limits:
            cpu: 5
            memory: 12Gi
          requests:
            cpu: 1
            memory: 2Gi
        volumeMounts:
          - mountPath: /conf
            name: jaeger-config-vol
      nodeSelector:
        pool: default
      serviceAccountName: tracing
      terminationGracePeriodSeconds: 30
      volumes:
      - name: jaeger-config-vol
        configMap:
          name: jaeger-spm-config
          items:
            - key: config-spm-elasticsearch.yaml
              path: config-spm-elasticsearch.yaml
      - name: team-authproxy-credentials
        secret:
          defaultMode: 420
          secretName: team-authproxy-credentials
---
# Jaeger Service
apiVersion: v1
kind: Service
metadata:
  labels:
    name: jaeger
    repo: tracing
  name: jaeger
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8082
  - name: collector-grpc
    port: 14250
    protocol: TCP
    targetPort: 14250
  - name: collector-http
    port: 14268
    protocol: TCP
    targetPort: 14268
  - name: otlp-grpc
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http
    port: 4318
    protocol: TCP
    targetPort: 4318
  selector:
    app: jaeger
  type: ClusterIP
