Atlas
175 results found
Alert for WiredTiger Cache
Hi,
Can you please create an alert for WiredTiger metrics, such as used cache?
We had several cluster instances go over 5% used cache (dirty data) and would like to be notified when that happens.
Regards,
Sergei
This is needed in order to determine whether…
7 votes -
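Until such an alert exists, one rough workaround is to poll the ratio yourself. Below is a minimal sketch, assuming a reachable deployment (placeholder connection string) and the standard WiredTiger cache counters reported by serverStatus; the 5% threshold mirrors the request above.

```python
# Sketch: poll the WiredTiger dirty-cache percentage from serverStatus.
# Connection string is a placeholder; counter names are the standard
# WiredTiger cache fields and may vary slightly by server version.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
status = client.admin.command("serverStatus")

cache = status["wiredTiger"]["cache"]
dirty_bytes = cache["tracked dirty bytes in the cache"]
max_bytes = cache["maximum bytes configured"]

dirty_pct = 100.0 * dirty_bytes / max_bytes
if dirty_pct > 5.0:
    print(f"WiredTiger dirty cache at {dirty_pct:.1f}% - would trigger the alert")
```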
Export Aggregation Results as Metrics to Prometheus
Add support for exporting MongoDB aggregation results as Prometheus metrics. This would allow users to track custom queries and dynamic data, enabling more granular and meaningful monitoring and alerting in Prometheus and Grafana.
6 votes -
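One way this is sometimes approximated today is with a small custom exporter that runs the aggregation on a schedule and publishes the result as a gauge for Prometheus to scrape. The sketch below assumes pymongo and prometheus_client; the metric name, pipeline, database/collection names, and port are illustrative only.

```python
# Sketch: a custom exporter that runs an aggregation and publishes the result
# as a Prometheus gauge. All names and the pipeline are placeholders.
import time
from pymongo import MongoClient
from prometheus_client import Gauge, start_http_server

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
pending_orders = Gauge("app_pending_orders", "Orders currently in 'pending' state")

def refresh() -> None:
    pipeline = [{"$match": {"status": "pending"}}, {"$count": "n"}]
    result = list(client.shop.orders.aggregate(pipeline))
    pending_orders.set(result[0]["n"] if result else 0)

if __name__ == "__main__":
    start_http_server(9200)   # scrape target for Prometheus
    while True:
        refresh()
        time.sleep(30)
```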
Integrate the Atlas Replication lag alert into the Prometheus API
Requesting that metrics be included in the Prometheus API so that the Replication lag alarm can be implemented.
6 votes -
Uptime report for Atlas cluster
It would be great to have an uptime report, on a weekly or monthly basis, that can be extracted and presented to anyone who would like a high-level overview.
6 votes -
Trigger execution details
It would be very useful to have some details provided in Atlas to trace the execution of the trigger, like the _id and operation type at a minimum. When a trigger executes, the only information we see in Atlas is [
"Sent to AWS EventBridge"
].
6 votes -
Improve "Host has index suggestions"" alert by including the hostname
The "Host has index suggestions" alert notification does not contain any reference to the host/deployment that triggered it making it hard to figure out where it came from. A hostname reference would be very helpful.
6 votes -
Atlas activity feed
We want an option to export the Atlas activity feed to a file so we can work with it, parse it, etc.
Even better, make a collection with it.
6 votes -
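For reference, the project activity feed can already be pulled programmatically. A minimal sketch, assuming the Atlas Admin API events resource with placeholder project ID and API keys (pagination omitted):

```python
# Sketch: pull the project activity feed via the Atlas Admin API and save it
# to a file. Endpoint version and pagination handling are simplified;
# GROUP_ID and the API keys are placeholders.
import json
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<project-id>"
auth = HTTPDigestAuth("<public-key>", "<private-key>")

resp = requests.get(f"{BASE}/groups/{GROUP_ID}/events",
                    params={"itemsPerPage": 500}, auth=auth, timeout=30)
resp.raise_for_status()

with open("atlas_activity_feed.json", "w") as f:
    json.dump(resp.json().get("results", []), f, indent=2)
```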
Allow Atlas to send additional metrics to New Relic via integration
This is a request to add some additional metrics to those that are currently sent from Atlas to New Relic when the integration is enabled.
The specific metrics of interest here are:
- Disk Space (%) Used (or Free)
- DB Storage
- Query Targeting
Having pointed out the above metrics specifically, I would also say: the closer that the metrics sent to New Relic via the Atlas integration can resemble those available in the Atlas cluster Metrics page, the better.
6 votes -
Add replica set name to cluster info main page
It would be nice to get quick access to the name of the replica set from the cluster overview page.
6 votes -
Atlas metrics granularity after 48 hours
For metrics older than 48 hours, the data is presented in 1-hour intervals. This level of granularity is often too coarse for a thorough examination of past events and trends. Such a broad view can obscure smaller yet significant details critical for understanding and resolving performance issues that occurred in the past.
Suggested Improvement:
Offer smaller granularity for historical metrics beyond the 48-hour timeframe. Providing data in smaller intervals would greatly enhance our ability to conduct in-depth analyses and diagnose past performance issues accurately. This would be particularly beneficial for conducting detailed investigations of historical data and identifying…
5 votes -
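For context, this is roughly how per-process measurements are requested today: a granularity is passed explicitly, but windows older than about 48 hours are served at hourly resolution, which is the limitation this request addresses. A sketch assuming the Atlas Admin API process measurements resource; IDs, time range, and the measurement name are illustrative.

```python
# Sketch: request process measurements from the Atlas Admin API at a chosen
# granularity. GROUP_ID, PROCESS_ID, keys, and the time window are placeholders;
# response field names follow the measurements resource and may need adjusting.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<project-id>"
PROCESS_ID = "<host:port>"
auth = HTTPDigestAuth("<public-key>", "<private-key>")

params = {
    "granularity": "PT1M",            # requested resolution
    "start": "2024-01-01T00:00:00Z",  # older than 48h -> data comes back hourly
    "end": "2024-01-02T00:00:00Z",
    "m": "CONNECTIONS",
}
resp = requests.get(f"{BASE}/groups/{GROUP_ID}/processes/{PROCESS_ID}/measurements",
                    params=params, auth=auth, timeout=30)
resp.raise_for_status()
for measurement in resp.json()["measurements"]:
    print(measurement["name"], len(measurement["dataPoints"]), "data points")
```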
Last Database User Access
My team has found ourselves in the position of needing to iterate through the Database Access History logs in order to discover whether or not a database user has been used during the past t time periods.
We need to have this functionality in order to guarantee a seamless credential rotation.
With the current API this operation will require iterating over each cluster's access logs for the past t time periods. If I have "n" clusters and "t" time periods this is an O(n * t) runtime. I'm not really great at O() notation, but if t is greater…
5 votes -
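A hedged sketch of the walk over clusters described above, assuming the Atlas Admin API clusters and database access history resources; endpoint paths and response field names may need adjusting, and IDs/keys are placeholders.

```python
# Sketch: scan the Database Access History of every cluster in a project and
# collect the database users that authenticated in the retained window.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<project-id>"
auth = HTTPDigestAuth("<public-key>", "<private-key>")

def get(path, **params):
    resp = requests.get(f"{BASE}{path}", params=params, auth=auth, timeout=30)
    resp.raise_for_status()
    return resp.json()

seen_users = set()
for cluster in get(f"/groups/{GROUP_ID}/clusters")["results"]:
    logs = get(f"/groups/{GROUP_ID}/dbAccessHistory/clusters/{cluster['name']}")
    for entry in logs.get("accessLogs", []):
        if entry.get("username"):
            seen_users.add(entry["username"])

print("Users with recent access:", sorted(seen_users))
```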
CPU Steal % on Azure
The CPU Steal % metric and alert seems to be currently implemented only for AWS. Please implement it also on Azure, as there is no easy way currently to get alerted if the underlying burstable VM (e.g. for M20) runs out of credits ...
5 votes -
Monitoring for WiredTiger data handles
Add process measurements for monitoring WiredTiger data handles.
From Atlas documentation https://docs.atlas.mongodb.com/reference/atlas-limits/#collection-and-index-limits :
While there is no hard limit on the number of collections in a single cluster, the performance of a cluster might degrade if it serves a large number of collections and indexes. Larger collections have a greater impact on performance.
but now there is no way to get this information except
db.serverStatus()
5 votes -
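A minimal sketch of the db.serverStatus() workaround mentioned above, via pymongo; the exact data-handle counter names vary by server version, so the whole section is printed as-is.

```python
# Sketch: read the WiredTiger "data-handle" counters from serverStatus.
# Connection string is a placeholder; .get() guards against missing sections.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
status = client.admin.command("serverStatus")

dhandle_stats = status.get("wiredTiger", {}).get("data-handle", {})
for name, value in sorted(dhandle_stats.items()):
    print(f"{name}: {value}")
```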
track collection size
I would like to be able to track the size of our collections (e.g. # of records, total disk size) over time. It would be ideal if there was an automated way to do this with Atlas, perhaps with Charts?
5 votes -
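One possible interim approach is a scheduled script that snapshots sizes into a tracking collection, which Charts could then plot over time. A sketch assuming pymongo and the $collStats aggregation stage; database and collection names are placeholders.

```python
# Sketch: snapshot per-collection document count and storage size using
# $collStats and append the result to a tracking collection.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
db = client["appdb"]  # placeholder database name

snapshot = []
for name in db.list_collection_names():
    stats = next(db[name].aggregate([
        {"$collStats": {"storageStats": {}, "count": {}}}
    ]))
    snapshot.append({
        "collection": name,
        "documents": stats.get("count"),
        "storageBytes": stats["storageStats"]["storageSize"],
        "recordedAt": datetime.now(timezone.utc),
    })

client["metrics"]["collection_sizes"].insert_many(snapshot)
```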
Add horizontal scroll bar to metrics tab
I can't see metrics for all of my nodes while in the metrics tabs unless I make the window very large. It would be much easier to analyze the health of my cluster if I could review metrics for all the nodes by scrolling horizontally, rather than having to switch between a grouping of nodes.
5 votes -
Use 24-hour time format in Metrics date pickers
Timestamps in Metrics plots are displayed in 24-hour time format, but the date/time filter inputs require 12-hour time (using am/pm).
12-hour time is not widely used around the world, and it is cumbersome to use in the date pickers, especially given that other parts of the UI use 24-hour time.
Can we update these date pickers to use 24-hour time instead?
5 votes -
Allow NewRelic integration to use EU APIs instead of default US APIs for clusters in EU region
The NewRelic integration in MongoDB is using the default NewRelic API that is in non-EU region. There is a separate API endpoint for EU region, that is described at https://docs.newrelic.com/docs/using-new-relic/welcome-new-relic/get-started/introduction-eu-region-data-center#.
This causes a 'newRelicLicenseKey failed to authenticate' error for MongoDB Atlas clusters set up in the EU region.
5 votes -
Metric reporting private endpoint state
On the MongoDB Atlas platform we are able to see the status of both the Atlas Private Endpoint and the Azure Private Endpoint. It would be helpful to have these statuses available as a metric in the Prometheus integration.
4 votes -
Expose Locking Metrics
One of our customers has asked if we can expose locking metrics through the Atlas UI to assist with investigating suspected lock contention issues.
We have this information in FTDC and our customer feels it would be helpful for them to have this information too.
4 votes -
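Until these are surfaced in the UI, a rough sketch of reading the equivalent counters from serverStatus via pymongo (connection string is a placeholder and field availability varies by server version):

```python
# Sketch: inspect lock pressure from the database side by reading the
# globalLock and locks sections of serverStatus.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
status = client.admin.command("serverStatus")

queue = status["globalLock"]["currentQueue"]
print(f"queued readers={queue['readers']} writers={queue['writers']}")

# Per-resource acquire-wait counters, where present.
for resource, counters in status.get("locks", {}).items():
    waits = counters.get("acquireWaitCount", {})
    if waits:
        print(resource, waits)
```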
Display EBS actual disk information (gp3)
Currently, clusters created on AWS have gp3 volumes according to support.
However:
* The configuration displays max IOPS 6000 for M50 clusters, even though the documentation states that IOPS scale 3:1 with the volume size in GB (1 GB = 3 IOPS) regardless of disk type for M30+ clusters (true with gp2 anyway; for gp3 that's an enforcement by Atlas).
Documentation link: https://docs.atlas.mongodb.com/customize-storage/#configuring-the-iops-rate
=> The configuration should at least display the actual IOPS (as it's also being used for alerting); ideally it would allow us to have more IOPS than the minimum 3:1 enforcement.
- with gp2 we had…
4 votes