Datadog Exporter
Please review the Collector's security documentation, which contains recommendations on securing sensitive information such as the API key required by this exporter.
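For example, a minimal sketch that keeps the API key out of the configuration file by referencing an environment variable (assuming the key is exported as `DD_API_KEY`):

```yaml
exporters:
  datadog:
    api:
      # Read the API key from the environment instead of hard-coding it.
      key: ${env:DD_API_KEY}
```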
The Datadog Exporter now skips APM stats computation by default. It is recommended to use only the Datadog Connector to compute APM stats. To temporarily revert to the previous behavior, disable the `exporter.datadogexporter.DisableAPMStats` feature gate. Example:

```shell
otelcol --config=config.yaml --feature-gates=-exporter.datadogexporter.DisableAPMStats
```
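For reference, a minimal pipeline sketch that computes APM stats with the Datadog Connector (assuming an OTLP receiver, a batch processor, and the `datadog` exporter are already configured):

```yaml
connectors:
  datadog/connector:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # Send traces to both the connector (for stats) and the Datadog backend.
      exporters: [datadog/connector, datadog]
    metrics:
      # The connector emits the computed APM stats as metrics.
      receivers: [datadog/connector]
      processors: [batch]
      exporters: [datadog]
```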
The full set of Datadog exporter configuration options and their usage can be found in `collector.yaml`. More example configs can be found in the official documentation.
FAQs
Why am I getting 413 - Request Entity Too Large errors, and how do I fix them?
This error indicates that the payload sent by the Datadog exporter exceeds the intake size limit (for earlier reports, see https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16834 and https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/17566).
This is usually caused by the pipeline batching too much telemetry data before sending it to the Datadog exporter. To fix it, try lowering `send_batch_size` and `send_batch_max_size` in your batch processor config. You might want a separate batch processor dedicated to the Datadog exporter if other exporters expect a larger batch size, e.g.
```yaml
processors:
  batch: # To be used by other exporters
    timeout: 1s
    # Default value for send_batch_size is 8192
  batch/datadog:
    send_batch_max_size: 100
    send_batch_size: 10
    timeout: 10s
...
service:
  pipelines:
    metrics:
      receivers: ...
      processors: [batch/datadog]
      exporters: [datadog]
```
The exact values for `send_batch_size` and `send_batch_max_size` depend on your specific workload. Also note that the Datadog intake has different payload size limits for the three signal types (traces, metrics, and logs).
Fall back to the Zorkian metric client with feature gate
Since v0.69.0, the Datadog exporter uses the native metric client `datadog-api-client-go` for metric export by default instead of the Zorkian client. While `datadog-api-client-go` fixes several issues present in the Zorkian client, it has a performance regression compared to the Zorkian client, especially under high metric volume. If you observe memory or throughput issues in the Datadog exporter with `datadog-api-client-go`, you can configure the exporter to fall back to the Zorkian client by disabling the feature gate `exporter.datadogexporter.metricexportnativeclient`, e.g.

```shell
otelcol --config=config.yaml --feature-gates=-exporter.datadogexporter.metricexportnativeclient
```
Note that we are currently migrating the Datadog metrics exporter to use the metrics serializer instead. The feature flag `exporter.datadogexporter.metricexportnativeclient` will be deprecated and eventually removed, following the feature lifecycle.
Remap OTel’s service.name attribute to service for logs
NOTE: this workaround is only needed when the feature gate `exporter.datadogexporter.UseLogsAgentExporter` is disabled. This feature gate is enabled by default starting in v0.108.0.
For Datadog Exporter versions v0.83.0 - v0.107.0, the `service` field of OTel logs is populated from the OTel semantic convention attribute `service.name`. However, `service.name` is not one of the default service attributes in Datadog's log preprocessing.
To get the `service` field correctly populated in your logs, specify `service.name` as the source of a log's service by setting up a log service remapper processor.
How to add a custom log source
In order to add a custom source to your OTLP logs, set the resource attribute `datadog.log.source`. This feature requires the `exporter.datadogexporter.UseLogsAgentExporter` feature gate to be enabled (now enabled by default).
Example:
```yaml
processors:
  transform/logs:
    log_statements:
      - context: resource
        statements:
          - set(attributes["datadog.log.source"], "otel")
```
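To take effect, the transform processor must be included in the logs pipeline that feeds the Datadog exporter. A minimal sketch, assuming an OTLP receiver and a batch processor are already configured:

```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      # Run the transform before batching and exporting to Datadog.
      processors: [transform/logs, batch]
      exporters: [datadog]
```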
My Collector K8s pod is getting rebooted on startup when I don't manually set a hostname under exporters::datadog::hostname
This is due to a bug where the underlying hostname detection blocks the `health_check` extension from responding to liveness/readiness probes on startup. To fix it, either set `hostname_detection_timeout` to less than the pod/daemonset `livenessProbe`'s `failureThreshold * periodSeconds`, so that hostname detection on startup finishes before the control plane restarts the pod, or leave `hostname_detection_timeout` at the default value of `25s` and double-check the `livenessProbe` and `readinessProbe` settings to ensure that the control plane will in fact wait long enough for startup to complete before restarting the pod.
Hostname detection is currently required to initialize the Datadog Exporter, unless a hostname is specified manually under `hostname`.
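For illustration, a sketch of the first option; the placement of `hostname_detection_timeout` directly under the exporter config and the probe budget used in the comment (`failureThreshold: 5`, `periodSeconds: 10`, i.e. roughly 50s before a restart) are assumptions for this example:

```yaml
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
    # Keep hostname detection shorter than the probe budget
    # (failureThreshold * periodSeconds = 5 * 10s = 50s in this example).
    hostname_detection_timeout: 15s
```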