Kibana version: 8.5
Elasticsearch version: 8.5
APM Server version: 8.5
APM Agent language and version: apm-aws-lambda (Node.js), 1.5.8
Extension Layer ARN: arn:aws:lambda:ap-northeast-2:267093732750:layer:elastic-apm-extension-ver-1-5-8-x86_64:1
Install method: Self-hosted via Helm chart
Log shipping: via APM Extension (not using CloudWatch)
Hi,
I’m currently using the Elastic APM AWS Lambda extension together with the elastic-apm-node agent, as described in the official documentation.
APM data and logs are successfully collected and visible in Kibana via our self-hosted Elastic Stack.
What I'm trying to achieve:
I would like to inject additional context fields (e.g., user_id, called_path) into every log message emitted by my Lambda function — specifically in documents where data_stream.type: logs, so they appear as indexed fields in Kibana, just like faas.id, trace.id, etc.
What I already know:
- Using apm.addLabels() or apm.setCustomContext() works well for APM traces/spans.
- However, these custom fields do not appear in log documents (data_stream.type: logs).
- I know that wrapping console.log(JSON.stringify({...})) and parsing it later via an Ingest Pipeline works, but this requires me to manually format every log call, which I'd like to avoid.
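For reference, this is the manual pattern described above that I'd like to avoid repeating at every call site (the field names `user_id` and `called_path` are just examples from my own code, and the Ingest Pipeline on the Elastic side parses the `message` string as JSON):

```javascript
// Build a structured log line by hand; every call site has to do this.
function buildLogLine(message, context) {
  // context = { user_id, called_path, ... } -- example fields, not a fixed schema
  return JSON.stringify({ message, ...context });
}

function logWithContext(message, context) {
  // The Extension ships stdout lines as-is, so an Ingest Pipeline
  // (e.g. a `json` processor) can later promote the fields.
  console.log(buildLogLine(message, context));
}

logWithContext('order created', { user_id: 'u-123', called_path: '/orders' });
```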
My question:
Is there any recommended way or best practice to automatically enrich every log message with custom fields (like user_id) without manually wrapping console.log()?
I'm looking for something like:
- A middleware, hook, or logger adapter that can be applied globally in the Lambda handler,
- An integration with the APM Extension or elastic-apm-node agent that injects fields into log messages, or
- A Lambda-level mechanism to centrally attach contextual metadata to all logs shipped via the Extension.
Thanks in advance for your insights!