
Status Submitted
Created by Nobuteru Kawano
Created on Dec 9, 2025

Need Exportable Metrics for Proactive Monitoring of JavaScript Heap Usage

What problem are you trying to solve?


We are a production user of Atlas Stream Processor, and we frequently encounter sudden Stream Processor memory errors caused by hitting the internalQueryJavaScriptHeapSizeLimitMB limit. Because we have no visibility into internal JavaScript heap usage, we are forced into a reactive operational model: we must wait for an error to occur, which results in significant, unplanned downtime and requires us to open urgent support cases for a manual parameter increase. This process is costly and destabilizes our production environment.

What would you like to see happen?


We need the Atlas Stream Processor to expose its internal JavaScript heap usage as metrics. Specifically, we would like exportable metrics for the current JavaScript heap usage (in MB) and the maximum configured heap limit (internalQueryJavaScriptHeapSizeLimitMB). Ideally, these metrics would be accessible through Atlas Monitoring or an external monitoring integration such as Datadog.

Why is this important to you or your team?


This feature is critical for enabling proactive operational management. Direct visibility into heap usage is essential for setting up predictive alerts. This would allow our team to detect when heap usage exceeds a critical safety threshold (e.g., 80% of the limit) and request a parameter increase before a production error or crash occurs. Ultimately, this will drastically reduce unplanned downtime, prevent the repeated opening of reactive support tickets, and ensure the stability and scalability of our real-time processing pipeline.
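To make the ask concrete, here is a minimal sketch of the predictive alert we would build if these metrics were exported. The metric names (jsHeapUsedMB, jsHeapLimitMB) are illustrative assumptions, not an existing Atlas API:

```javascript
// Hypothetical alert check: fire when current JavaScript heap usage
// crosses a safety threshold (e.g. 80%) of the configured limit
// (internalQueryJavaScriptHeapSizeLimitMB). Metric names are assumed.
function shouldAlert(jsHeapUsedMB, jsHeapLimitMB, thresholdRatio = 0.8) {
  return jsHeapUsedMB >= jsHeapLimitMB * thresholdRatio;
}

// Example: with a 100 MB limit, alert at 80 MB and above.
console.log(shouldAlert(75, 100));  // false
console.log(shouldAlert(85, 100));  // true
```

With the two metrics exported, a rule like this could run in Atlas alerting or Datadog, giving us time to request a parameter increase before the error occurs.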

What steps, if any, are you taking today to manage this problem?

Today, we manage this problem purely reactively: after the error has already impacted our production workflow, we open a support ticket to request an increase to the internalQueryJavaScriptHeapSizeLimitMB parameter.


  • Admin
    Joe Niemiec
    Dec 10, 2025

This error does not come from Atlas Stream Processing; it is the result of a $function being executed on the Atlas cluster. If this error is seen in Atlas Stream Processing, it is likely that the $source stage uses a pushdown pipeline that contains $function. Will reassign this feedback to Atlas Clusters.
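For readers hitting the same issue, the pattern described above might look roughly like the sketch below: a stream processor definition whose $source stage pushes a pipeline down to the Atlas cluster, where the $function stage (and thus the JavaScript heap limit) executes. The connection name, namespace, and exact $source option layout are illustrative assumptions, not a verified Atlas Stream Processing configuration:

```javascript
// Hypothetical processor definition. The pushdown pipeline runs on the
// Atlas cluster, so the $function inside it is bounded by the cluster's
// internalQueryJavaScriptHeapSizeLimitMB, not by Stream Processing itself.
const processor = [
  {
    $source: {
      connectionName: "myAtlasCluster", // assumed connection name
      db: "orders",
      coll: "events",
      config: {
        // Assumed option name for the pushdown pipeline.
        pipeline: [
          {
            $addFields: {
              score: {
                $function: {
                  // Server-side JavaScript: this is what consumes JS heap.
                  body: "function (doc) { return JSON.stringify(doc).length; }",
                  args: ["$$ROOT"],
                  lang: "js"
                }
              }
            }
          }
        ]
      }
    }
  }
];
```

Rewriting such a $function in plain aggregation operators, where possible, avoids the JavaScript heap limit entirely.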