Kubernetes SigNoz

Deploys the SigNoz observability platform on Kubernetes using the official SigNoz Helm chart. SigNoz provides unified logs, metrics, and traces through an OpenTelemetry-native stack. This component supports a configurable SigNoz UI and OpenTelemetry Collector, a self-managed or external ClickHouse database, optional Kubernetes Gateway API ingress for both the UI and OTel Collector endpoints, and custom Helm value overrides.

What Gets Created

When you deploy a KubernetesSignoz resource, OpenMCF provisions:

  • Namespace — created only when createNamespace is true
  • SigNoz Helm Release — the full SigNoz stack (UI, API server, Ruler, Alertmanager, and Frontend) deployed via the signoz Helm chart from https://charts.signoz.io (chart version 0.52.0)
  • OpenTelemetry Collector — a multi-replica data ingestion gateway accepting traces, metrics, and logs over gRPC (port 4317) and HTTP (port 4318)
  • Self-Managed ClickHouse — an in-cluster ClickHouse deployment with configurable persistence, clustering (sharding and replication), and optional Zookeeper coordination; created only when database.isExternal is false
  • Zookeeper — coordination service for distributed ClickHouse clusters; created only when database.managedDatabase.zookeeper.isEnabled is true
  • SigNoz UI Gateway and Routes — a Kubernetes Gateway API Gateway, TLS Certificate (via cert-manager), HTTPS HTTPRoute, and HTTP-to-HTTPS redirect HTTPRoute for the SigNoz UI; created only when ingress.ui.enabled is true
  • OTel Collector Gateway and Routes — a separate Gateway API Gateway, TLS Certificate, HTTPS HTTPRoute, and HTTP-to-HTTPS redirect HTTPRoute for the OpenTelemetry Collector HTTP endpoint; created only when ingress.otelCollector.enabled is true

Prerequisites

  • Kubernetes credentials configured via environment variables or OpenMCF provider config
  • A Kubernetes namespace that already exists, or set createNamespace to true
  • A StorageClass available in the cluster if enabling ClickHouse persistence (most managed Kubernetes clusters provide a default)
  • Istio ingress gateway installed in the istio-ingress namespace if enabling ingress for the UI or OTel Collector
  • cert-manager with a ClusterIssuer matching your ingress domain if enabling ingress
  • Gateway API CRDs installed on the cluster if enabling ingress
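The cluster-side prerequisites can be verified up front. A quick sketch using standard kubectl commands (assuming kubectl is already configured against the target cluster):

```shell
# Verify the Gateway API CRDs are present (needed only when ingress is enabled)
kubectl get crd gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io

# Verify a StorageClass exists for ClickHouse persistence
kubectl get storageclass

# Verify the Istio ingress gateway is running (needed only when ingress is enabled)
kubectl get pods -n istio-ingress
```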

Quick Start

Create a file signoz.yaml:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: my-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: dev.KubernetesSignoz.my-signoz
spec:
  namespace: observability
  createNamespace: true
  database:
    isExternal: false

Deploy:

openmcf apply -f signoz.yaml

This creates a SigNoz instance with a single SigNoz replica (1000m CPU / 2Gi memory limits), two OTel Collector replicas (2000m CPU / 4Gi memory limits), and a self-managed single-node ClickHouse with 20Gi persistent storage. No ingress is configured; access the UI via port-forward using the portForwardCommand stack output.
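The exact command is emitted in the portForwardCommand stack output; based on the {name}-signoz service naming and the my-signoz metadata name from the Quick Start, it looks roughly like this:

```shell
# Forward the SigNoz UI service to localhost (service name assumes the {name}-signoz format)
kubectl port-forward -n observability svc/my-signoz-signoz 8080:8080
# then browse to http://localhost:8080
```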

Configuration Reference

Required Fields

| Field | Type | Description | Validation |
|---|---|---|---|
| namespace | StringValueOrRef | Kubernetes namespace for the SigNoz deployment. Can reference a KubernetesNamespace resource via valueFrom. | Required |
| database | object | ClickHouse database configuration. Must specify either self-managed or external mode. | Required |

Optional Fields

| Field | Type | Default | Description |
|---|---|---|---|
| targetCluster.clusterKind | enum | — | Kubernetes cluster kind. Valid values: AwsEksCluster, GcpGkeCluster, AzureAksCluster, DigitalOceanKubernetesCluster, CivoKubernetesCluster. |
| targetCluster.clusterName | string | — | Name of the target Kubernetes cluster in the same environment. |
| createNamespace | bool | false | When true, creates the namespace before deploying resources. |
| signozContainer.replicas | int32 | 1 | Number of SigNoz (UI/API/Ruler/Alertmanager) pods. Must be at least 1. |
| signozContainer.resources.limits.cpu | string | 1000m | Maximum CPU allocation for each SigNoz pod. |
| signozContainer.resources.limits.memory | string | 2Gi | Maximum memory allocation for each SigNoz pod. |
| signozContainer.resources.requests.cpu | string | 200m | Minimum guaranteed CPU for each SigNoz pod. |
| signozContainer.resources.requests.memory | string | 512Mi | Minimum guaranteed memory for each SigNoz pod. |
| signozContainer.image.repo | string | — | Custom container image repository for the SigNoz binary. |
| signozContainer.image.tag | string | — | Custom container image tag for the SigNoz binary. |
| otelCollectorContainer.replicas | int32 | 2 | Number of OpenTelemetry Collector pods. Must be at least 1. |
| otelCollectorContainer.resources.limits.cpu | string | 2000m | Maximum CPU allocation for each OTel Collector pod. |
| otelCollectorContainer.resources.limits.memory | string | 4Gi | Maximum memory allocation for each OTel Collector pod. |
| otelCollectorContainer.resources.requests.cpu | string | 500m | Minimum guaranteed CPU for each OTel Collector pod. |
| otelCollectorContainer.resources.requests.memory | string | 1Gi | Minimum guaranteed memory for each OTel Collector pod. |
| otelCollectorContainer.image.repo | string | — | Custom container image repository for the OTel Collector. |
| otelCollectorContainer.image.tag | string | — | Custom container image tag for the OTel Collector. |
| database.isExternal | bool | false | When true, connects to an existing external ClickHouse instance instead of deploying one in-cluster. |
| database.externalDatabase.host | string | — | Hostname of the external ClickHouse instance. Required when database.isExternal is true. |
| database.externalDatabase.httpPort | int32 | 8123 | HTTP port for the external ClickHouse instance. |
| database.externalDatabase.tcpPort | int32 | 9000 | TCP port for the external ClickHouse native protocol. |
| database.externalDatabase.clusterName | string | cluster | Name of the distributed cluster in ClickHouse configuration. |
| database.externalDatabase.isSecure | bool | false | Whether to use TLS when connecting to the external ClickHouse instance. |
| database.externalDatabase.username | string | — | Username for authenticating to the external ClickHouse. Required when database.isExternal is true. |
| database.externalDatabase.password | KubernetesSensitiveValue | — | Password for the external ClickHouse. Supports value for a plain string or secretRef with name and key to reference an existing Kubernetes Secret. Required when database.isExternal is true. |
| database.managedDatabase.container.replicas | int32 | 1 | Number of self-managed ClickHouse pods. Must be at least 1. |
| database.managedDatabase.container.resources.limits.cpu | string | 2000m | Maximum CPU for each ClickHouse pod. |
| database.managedDatabase.container.resources.limits.memory | string | 4Gi | Maximum memory for each ClickHouse pod. |
| database.managedDatabase.container.resources.requests.cpu | string | 500m | Minimum guaranteed CPU for each ClickHouse pod. |
| database.managedDatabase.container.resources.requests.memory | string | 1Gi | Minimum guaranteed memory for each ClickHouse pod. |
| database.managedDatabase.container.persistenceEnabled | bool | true | Enables persistent storage for ClickHouse data. |
| database.managedDatabase.container.diskSize | string | 20Gi | Size of the PersistentVolumeClaim per ClickHouse pod. Required when persistenceEnabled is true. Must be a valid Kubernetes quantity (e.g., 20Gi). Cannot be modified after creation. |
| database.managedDatabase.container.image.repo | string | — | Custom container image repository for ClickHouse. |
| database.managedDatabase.container.image.tag | string | — | Custom container image tag for ClickHouse. |
| database.managedDatabase.cluster.isEnabled | bool | false | Enables distributed cluster mode with sharding and replication for ClickHouse. |
| database.managedDatabase.cluster.shardCount | int32 | — | Number of shards for distributed data storage. Must be at least 1 when clustering is enabled. |
| database.managedDatabase.cluster.replicaCount | int32 | — | Number of replicas per shard for data redundancy. Must be at least 1 when clustering is enabled. |
| database.managedDatabase.zookeeper.isEnabled | bool | false | Enables Zookeeper deployment for distributed ClickHouse coordination. Must be true when clustering is enabled. |
| database.managedDatabase.zookeeper.container.replicas | int32 | 1 | Number of Zookeeper pods. Use an odd number (3 or 5) for production quorum. |
| database.managedDatabase.zookeeper.container.resources.limits.cpu | string | 500m | Maximum CPU for each Zookeeper pod. |
| database.managedDatabase.zookeeper.container.resources.limits.memory | string | 512Mi | Maximum memory for each Zookeeper pod. |
| database.managedDatabase.zookeeper.container.resources.requests.cpu | string | 100m | Minimum guaranteed CPU for each Zookeeper pod. |
| database.managedDatabase.zookeeper.container.resources.requests.memory | string | 256Mi | Minimum guaranteed memory for each Zookeeper pod. |
| database.managedDatabase.zookeeper.container.diskSize | string | 8Gi | Persistent volume size per Zookeeper pod. Must be a valid Kubernetes quantity. |
| database.managedDatabase.zookeeper.container.image.repo | string | — | Custom container image repository for Zookeeper. |
| database.managedDatabase.zookeeper.container.image.tag | string | — | Custom container image tag for Zookeeper. |
| ingress.ui.enabled | bool | false | Creates Gateway API resources for external SigNoz UI access with TLS termination and HTTP-to-HTTPS redirect. |
| ingress.ui.hostname | string | — | Hostname for external SigNoz UI access (e.g., signoz.example.com). Required when ingress.ui.enabled is true. |
| ingress.otelCollector.enabled | bool | false | Creates Gateway API resources for external OTel Collector HTTP endpoint access with TLS termination. |
| ingress.otelCollector.hostname | string | — | Hostname for external OTel Collector HTTP endpoint (e.g., otel-ingest.example.com). Required when ingress.otelCollector.enabled is true. |
| helmValues | map<string, string> | — | Additional key-value pairs passed to the SigNoz Helm chart for advanced customization. See SigNoz Helm chart documentation for available options. |

Note on namespace: The namespace field is a StringValueOrRef. You can provide a plain string value directly, or use valueFrom to reference the output of another OpenMCF resource (e.g., a KubernetesNamespace).
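The two forms side by side, as a sketch (the reference form assumes a KubernetesNamespace resource named observability-namespace exists in the same environment):

```yaml
# Plain string form
namespace: observability
---
# Reference form, pulling the name from a KubernetesNamespace resource
namespace:
  valueFrom:
    kind: KubernetesNamespace
    name: observability-namespace
    field: spec.name
```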

Examples

Development SigNoz with Reduced Resources

A lightweight SigNoz instance for development and testing with smaller resource allocations and a single OTel Collector replica:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: dev-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: dev.KubernetesSignoz.dev-signoz
spec:
  namespace: dev-observability
  createNamespace: true
  signozContainer:
    replicas: 1
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
      requests:
        cpu: "100m"
        memory: "256Mi"
  otelCollectorContainer:
    replicas: 1
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
      requests:
        cpu: "100m"
        memory: "256Mi"
  database:
    isExternal: false
    managedDatabase:
      container:
        replicas: 1
        resources:
          limits:
            cpu: "1000m"
            memory: "2Gi"
          requests:
            cpu: "250m"
            memory: "512Mi"
        persistenceEnabled: true
        diskSize: "10Gi"

Production SigNoz with Clustered ClickHouse and Ingress

A production-grade deployment with ClickHouse clustering (2 shards, 2 replicas), Zookeeper quorum, and external access for both the UI and OTel Collector:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: prod-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: prod.KubernetesSignoz.prod-signoz
spec:
  namespace: observability
  signozContainer:
    replicas: 2
    resources:
      limits:
        cpu: "2000m"
        memory: "4Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
  otelCollectorContainer:
    replicas: 4
    resources:
      limits:
        cpu: "4000m"
        memory: "8Gi"
      requests:
        cpu: "1000m"
        memory: "2Gi"
  database:
    isExternal: false
    managedDatabase:
      container:
        replicas: 2
        resources:
          limits:
            cpu: "4000m"
            memory: "16Gi"
          requests:
            cpu: "1000m"
            memory: "4Gi"
        persistenceEnabled: true
        diskSize: "200Gi"
      cluster:
        isEnabled: true
        shardCount: 2
        replicaCount: 2
      zookeeper:
        isEnabled: true
        container:
          replicas: 3
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "100m"
              memory: "256Mi"
          diskSize: "10Gi"
  ingress:
    ui:
      enabled: true
      hostname: signoz.example.com
    otelCollector:
      enabled: true
      hostname: otel-ingest.example.com

SigNoz with External ClickHouse

Connect SigNoz to an existing external ClickHouse instance instead of deploying one in-cluster. The password is referenced from a pre-existing Kubernetes Secret:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: shared-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: prod.KubernetesSignoz.shared-signoz
spec:
  namespace: observability
  signozContainer:
    replicas: 2
    resources:
      limits:
        cpu: "2000m"
        memory: "4Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
  otelCollectorContainer:
    replicas: 3
    resources:
      limits:
        cpu: "2000m"
        memory: "4Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
  database:
    isExternal: true
    externalDatabase:
      host: clickhouse.shared-infra.svc.cluster.local
      httpPort: 8123
      tcpPort: 9000
      clusterName: cluster
      isSecure: false
      username: signoz
      password:
        secretRef:
          name: clickhouse-credentials
          key: password
  ingress:
    ui:
      enabled: true
      hostname: signoz.example.com
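The clickhouse-credentials Secret referenced above must exist in the target namespace before deployment. It can be created with a standard kubectl command (the password value here is a placeholder):

```shell
# Create the Secret that spec.database.externalDatabase.password.secretRef points at
kubectl create secret generic clickhouse-credentials \
  --namespace observability \
  --from-literal=password='REPLACE_WITH_ACTUAL_PASSWORD'
```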

Using Foreign Key References

Reference an OpenMCF-managed namespace instead of hardcoding the name:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: team-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: prod.KubernetesSignoz.team-signoz
spec:
  namespace:
    valueFrom:
      kind: KubernetesNamespace
      name: observability-namespace
      field: spec.name
  database:
    isExternal: false
    managedDatabase:
      container:
        persistenceEnabled: true
        diskSize: "50Gi"

SigNoz with Custom Helm Values

Override additional Helm chart values for advanced customization, such as configuring retention policies:

apiVersion: kubernetes.openmcf.org/v1
kind: KubernetesSignoz
metadata:
  name: custom-signoz
  labels:
    openmcf.org/provisioner: pulumi
    pulumi.openmcf.org/organization: my-org
    pulumi.openmcf.org/project: my-project
    pulumi.openmcf.org/stack.name: staging.KubernetesSignoz.custom-signoz
spec:
  namespace: observability
  createNamespace: true
  database:
    isExternal: false
  helmValues:
    "signoz.alertmanager.enabled": "true"
    "queryService.replicaCount": "2"

Stack Outputs

After deployment, the following outputs are available in status.outputs:

| Output | Type | Description |
|---|---|---|
| namespace | string | Kubernetes namespace where SigNoz is deployed |
| signozService | string | Kubernetes Service name for the SigNoz UI and API (format: {name}-signoz) |
| otelCollectorService | string | Kubernetes Service name for the OpenTelemetry Collector (format: {name}-otel-collector) |
| portForwardCommand | string | kubectl port-forward command for local access to the SigNoz UI on port 8080 |
| kubeEndpoint | string | Cluster-internal FQDN for the SigNoz UI (e.g., my-signoz-signoz.observability.svc.cluster.local:8080) |
| externalHostname | string | Public hostname for external SigNoz UI access; only set when ingress.ui.enabled is true |
| internalHostname | string | Internal hostname for VPC-internal SigNoz access |
| otelCollectorGrpcEndpoint | string | Cluster-internal FQDN for OTel Collector gRPC ingestion (e.g., my-signoz-otel-collector.observability.svc.cluster.local:4317) |
| otelCollectorHttpEndpoint | string | Cluster-internal FQDN for OTel Collector HTTP ingestion (e.g., my-signoz-otel-collector.observability.svc.cluster.local:4318) |
| otelCollectorExternalGrpcHostname | string | Public hostname for the OTel Collector gRPC endpoint; only set when OTel Collector ingress is configured |
| otelCollectorExternalHttpHostname | string | Public hostname for the OTel Collector HTTP endpoint; only set when ingress.otelCollector.enabled is true |
| clickhouseEndpoint | string | Cluster-internal ClickHouse endpoint (e.g., my-signoz-clickhouse.observability.svc.cluster.local:8123); only set when using self-managed ClickHouse |
| clickhouseUsername | string | ClickHouse username for authentication (always admin); only set when using self-managed ClickHouse |
| clickhousePasswordSecret.name | string | Name of the Kubernetes Secret containing the ClickHouse password (format: {name}-clickhouse); only set when using self-managed ClickHouse |
| clickhousePasswordSecret.key | string | Key within the ClickHouse password Secret (always admin-password); only set when using self-managed ClickHouse |
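Applications can point an OpenTelemetry SDK at the cluster-internal collector endpoints using the standard OTLP environment variables. A sketch of container env wiring (the service FQDN assumes a resource named my-signoz in the observability namespace; OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_PROTOCOL, and OTEL_SERVICE_NAME are standard OpenTelemetry SDK variables):

```yaml
# Container env for a workload exporting telemetry to SigNoz over OTLP/HTTP
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://my-signoz-otel-collector.observability.svc.cluster.local:4318"
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: "http/protobuf"
  - name: OTEL_SERVICE_NAME
    value: "my-app"
```

For gRPC ingestion, swap the endpoint to port 4317 and the protocol to grpc.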

Related Components

  • KubernetesNamespace — provides the target namespace via valueFrom reference
  • KubernetesClickHouse — standalone ClickHouse deployment that can be used as an external database for SigNoz
  • KubernetesDeployment — application deployments instrumented with OpenTelemetry SDKs that send telemetry to SigNoz
  • KubernetesIstio — provides the Istio ingress gateway used by SigNoz Gateway API resources
  • KubernetesGatewayApiCrds — installs the Gateway API CRDs required for SigNoz ingress configuration
