feat: add configurable metric export interval to lambda telemetry API receiver #2114

Open

herin049 wants to merge 2 commits into open-telemetry:main from herin049:feat/metrics-export-interval

Conversation

@herin049
Contributor

Follow-up to the changes made in #2066. This PR adds a configurable metric export interval so that metrics are no longer generated for every Lambda invocation, reducing the number of metric data points produced. Tests have been added covering the new export behavior and configuration changes, and documentation has been added describing how to configure metric temporality and export intervals.
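For illustration only, here is a minimal Go sketch of the general shape of such a setting (the field and tag names are hypothetical and do not reflect the PR's actual configuration keys): a zero interval keeps per-invocation flushing, while a positive value enables periodic export.

```go
// Hypothetical sketch only; names do not reflect the PR's actual config keys.
package main

import (
	"errors"
	"fmt"
	"time"
)

// MetricsConfig is an illustrative config fragment: a zero ExportInterval
// keeps per-invocation flushing, a positive value enables periodic export.
type MetricsConfig struct {
	ExportInterval time.Duration `mapstructure:"export_interval"`
}

// Validate rejects negative intervals.
func (c *MetricsConfig) Validate() error {
	if c.ExportInterval < 0 {
		return errors.New("export_interval must not be negative")
	}
	return nil
}

func main() {
	cfg := MetricsConfig{ExportInterval: 30 * time.Second}
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println("metric export interval:", cfg.ExportInterval)
}
```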

@herin049 herin049 requested a review from a team as a code owner January 23, 2026 02:44
@herin049 herin049 requested a review from wpessers January 23, 2026 02:44
@wpessers
Contributor

Good stuff! Getting back to this later today but overall seems good! 🚀

@wpessers wpessers added the enhancement (New feature or request) and go (Pull requests that update Go code) labels on Jan 30, 2026
Contributor

@wpessers wpessers left a comment

I've been thinking about this change for a while now, and I know this PR is actually in response to my earlier comment on the previous PR for adding Telemetry API metrics support: #2066 (comment)

But I think I've come around on the idea... maybe it's fine to keep the behaviour as is, because the metrics are still the same/correct whether you flush them once or multiple times, and the volume of metrics is probably acceptable, so we don't really need to be doing this aggregation before flushing?

If sending multiple metrics "exports" from the collector to your observability backend of choice is a concern, we have the batch processor for that, which can be configured inside the collector's telemetry pipelines.

TL;DR: I'm thinking it's actually fine to always flush the metrics; the change (although nicely and cleanly implemented) introduces some extra complexity that feels a bit unnecessary. What do you think @herin049? It's possible that I'm missing some nuances / reasons why it might still be a good idea to add this anyway.

@herin049
Contributor Author

herin049 commented Feb 19, 2026

> I've been thinking about this change for a while now, and I know this PR is actually in response to my earlier comment on the previous PR for adding Telemetry API metrics support: #2066 (comment)
>
> But I think I've come around on the idea... maybe it's fine to keep the behaviour as is, because the metrics are still the same/correct whether you flush them once or multiple times, and the volume of metrics is probably acceptable, so we don't really need to be doing this aggregation before flushing?
>
> If sending multiple metrics "exports" from the collector to your observability backend of choice is a concern, we have the batch processor for that, which can be configured inside the collector's telemetry pipelines.
>
> TL;DR: I'm thinking it's actually fine to always flush the metrics; the change (although nicely and cleanly implemented) introduces some extra complexity that feels a bit unnecessary. What do you think @herin049? It's possible that I'm missing some nuances / reasons why it might still be a good idea to add this anyway.

@wpessers Sorry for the late reply. I do believe we still want this feature, since the batch processor does not actually do any "aggregation" of metrics: while it batches calls to downstream exporters, it still produces the same number of metric data points. This can be very problematic for many observability vendors, because their billing models are based on the number of metric data points ingested. By accumulating these metrics in memory and exporting them periodically, we reduce the number of data points at the cost of slightly less granular data (e.g. an export interval of 30s gives you a maximum resolution of 30s).
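As a rough sketch of the accumulation idea (not the receiver's actual implementation; the metric name and the 30s interval are just placeholders): counter deltas are summed in memory on every invocation and exported once per interval, so each metric contributes at most one data point per interval window.

```go
// Illustrative sketch only, not the receiver's actual implementation.
package main

import (
	"fmt"
	"sync"
	"time"
)

type accumulator struct {
	mu     sync.Mutex
	counts map[string]int64 // metric name -> accumulated delta since last flush
}

// add records a delta without exporting anything.
func (a *accumulator) add(name string, delta int64) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.counts[name] += delta
}

// flush exports the accumulated values and resets the buffer; with a 30s
// interval each metric yields at most one data point per 30s window.
func (a *accumulator) flush() {
	a.mu.Lock()
	defer a.mu.Unlock()
	for name, v := range a.counts {
		fmt.Printf("export %s=%d\n", name, v) // stand-in for the real exporter call
	}
	a.counts = map[string]int64{}
}

func main() {
	acc := &accumulator{counts: map[string]int64{}}

	// Periodic export instead of exporting on every invocation.
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	go func() {
		for range ticker.C {
			acc.flush()
		}
	}()

	acc.add("faas.invocations", 1) // an invocation only updates in-memory state
	time.Sleep(31 * time.Second)   // keep the sketch alive long enough for one flush
}
```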

On a somewhat related note, the issue described above can make user-defined metrics inside Lambda very expensive (based on my experience at my employer), because every Lambda invocation flushes all metrics. Unfortunately, there don't seem to be many processors in the collector or collector-contrib repos that make it easy to "downsample" the frequency of generated data points, since the assumption is that aggregation is primarily done in the SDK. With Lambda, we don't have many options, because we don't know when a Lambda instance will be torn down. I'm working on changes that will let us opt in to not exporting on every invocation and instead flush periodically, while also handling the SIGTERM signal to flush any remaining metrics before the Lambda instance is terminated.
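A minimal sketch of that opt-in behaviour (again, illustrative only, not code from this PR; flushMetrics is a stand-in for the real export path): flush on a ticker, and flush one final time when the runtime delivers SIGTERM, so nothing accumulated in memory is lost when the instance is torn down.

```go
// Illustrative sketch only; flushMetrics stands in for the real export path.
package main

import (
	"os"
	"os/signal"
	"syscall"
	"time"
)

func flushMetrics() {
	// Stand-in for exporting whatever metrics are still accumulated in memory.
}

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			flushMetrics() // periodic flush instead of flushing on every invocation
		case <-sigs:
			flushMetrics() // final flush before the Lambda instance is terminated
			return
		}
	}
}
```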
