Atlas
Last Database User Access
My team has found itself needing to iterate through the Database Access History logs to discover whether a database user has been used during the past t time periods.
We need this functionality to guarantee seamless credential rotation.
With the current API this operation requires iterating over each cluster's access logs for the past t time periods. If I have "n" clusters and "t" time periods this is an O(n * t) runtime. I'm not really great at O() notation, but if t is greater…
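For reference, here is a rough sketch of what that iteration looks like today against the Admin API; the dbAccessHistory path, parameter names, and response field reflect my reading of the current docs rather than anything verified, and the keys/IDs are placeholders.

```python
# Sketch: check whether a database user appears in any cluster's access logs.
# Endpoint path, params, and response field are assumptions from the docs;
# keys, project ID, and cluster names are placeholders.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<project-id>"
AUTH = HTTPDigestAuth("<public-key>", "<private-key>")

def user_seen(username, cluster_names, start_ms, end_ms):
    """One request per cluster per time window -> O(n * t) overall."""
    for cluster in cluster_names:
        url = f"{BASE}/groups/{GROUP_ID}/dbAccessHistory/clusters/{cluster}"
        resp = requests.get(url, auth=AUTH,
                            params={"start": start_ms, "end": end_ms})
        resp.raise_for_status()
        if any(entry.get("username") == username
               for entry in resp.json().get("accessLogs", [])):
            return True
    return False
```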
5 votes -
Save ATLAS monitoring selection settings
Atlas monitoring has different settings: Primaries, Secondaries, All.
It would be good if the All setting (and maybe others) could be saved so I do not need to select it again.
Thanks.
Rene Schwarz
1 vote -
Expose Locking Metrics
One of our customers has asked if we can expose locking metrics through the Atlas UI to assist with investigating suspected lock contention issues.
We have this information in FTDC and our customer feels it would be helpful for them to have this information too.
4 votes -
Add support for HTTP based scraping without IP restriction
Prometheus supports a lot in the HTTP scraping space, including OAuth and Bearer-token-based scrape targets.
Currently we use Grafana Cloud, which has a list of IP addresses it can scrape from. This "can" change, and if it does and we have hardcoded this access into Atlas based on IP, the scrape will break.
Could MongoDB add support for token/OAuth-based scraping, where we provide these tokens in an HTTPS call?
I think this would solve the issues with things like Private Link or peered network connections. It would also allow "any" Prometheus server that can make outbound…
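Purely as an illustration of the ask, a token-authenticated scrape would look something like this; the metrics URL and the token source are hypothetical, since Atlas does not offer this today.

```python
# Hypothetical sketch of a bearer-token-authenticated scrape, as requested above.
# The metrics URL and token are placeholders, not a real Atlas endpoint.
import requests

METRICS_URL = "https://<hypothetical-atlas-metrics-endpoint>/metrics"
TOKEN = "<bearer-token-issued-by-atlas>"

resp = requests.get(METRICS_URL,
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    timeout=10)
resp.raise_for_status()
print(resp.text[:500])  # Prometheus exposition-format metrics
```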
3 votes -
Allow switching between shards in the profiler and/or have a combined view
Since the profiler tabs are shard specific, it would radically improve the usability to:
a) Combine the profiler events to truly have a cluster level view so you don’t need to worry about shard specific views
b) Add a drop-down to the top of the page near the title that has the various shards listed in it, selecting a different shard brings you back to the same tab, but now with the changed view.
c) Consider tweaking the URL so that rather than using a SHA in the URL, you use the shard name, making it easy to manipulate the…
2 votes -
Support export Prometheus metrics via remote_write
High-throughput Prometheus-compatible databases like Mimir, Thanos, and the likes of GCP's managed Prometheus really like metrics sent to them via remote_write (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remotewrite). Grafana Cloud, for instance, doesn't allow you to configure any scrape rules; the only option is to push metrics to them.
Currently Atlas only allows you to scrape metrics, which means that to get these metrics into a highly available version of Prometheus you need to relay them off an intermediary process (a Prometheus exporter or a standalone Prometheus server that supports scraping).
I think a good addition to this capability would…
1 vote -
Allow the image export using Atlas Admin API
For reporting purposes, we need to export images of the Status/Hardware/DBStats charts.
With personal authentication, I can get these images via URL:
https://cloud.mongodb.com/export/v2/charts/{status/hardware/dbstast}/{clusterID}/{hostID}/{format}/{title}?since={since}&until={until}&hostDbStatsName={DBName}
However, I cannot get it programmatically with API key authentication.
Let's allow the image export using Atlas Admin API.
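A sketch of the call we would like to be able to make with programmatic API keys (the Admin API uses HTTP digest auth); the URL is the pattern from this post with placeholders left in, and today this request is rejected.

```python
# Sketch: fetch a chart image using API-key (digest) authentication.
# Uses the export URL pattern from this post with placeholders; currently
# this does not work with API keys, which is the point of the request.
import requests
from requests.auth import HTTPDigestAuth

url = ("https://cloud.mongodb.com/export/v2/charts/hardware/"
       "<clusterID>/<hostID>/<format>/<title>")
params = {"since": "<since>", "until": "<until>"}

resp = requests.get(url, params=params,
                    auth=HTTPDigestAuth("<public-key>", "<private-key>"))
resp.raise_for_status()
with open("hardware_chart.png", "wb") as f:
    f.write(resp.content)
```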
Thanks!
3 votes -
Cache last aggregation query for each collection
For each collection, on the Aggregation tab, cache the last query that was run and keep it pre-populated in the text boxes.
It’s so easy to accidentally click away from this page and when you come back, having to start from scratch always takes a bit. Ideally I want to be able to leave this view and come back as though nothing had changed.
1 vote -
Re-run a query from the profiler tab
When using the profiler tab, if there is a $regex expression specified, it comes back in the logs as $regularExpression. There are similar issues for ObjectId() vs. the generated $oid.
For some reason, you can't take the query the profiler outputs and just plug it back into mongo. You have to manipulate it.
I'd like the profiler tab to display a valid query that I could either open via an "Open query in aggregation viewer" action (opening the aggregation pane for that collection with the query pre-populated) and/or easily copy and paste as a valid query elsewhere.
The first thing I've been doing with query optimization…
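As a workaround today, the driver's json_util can turn the profiler's extended JSON back into native types; a sketch with a made-up profiler snippet:

```python
# Sketch: parse the profiler's extended JSON back into driver-native types,
# so $regularExpression becomes a Regex and $oid becomes an ObjectId.
# The profiler_filter below is an invented example for illustration.
from bson.json_util import loads

profiler_filter = """
{
  "_id": {"$oid": "64b7f0c2a1b2c3d4e5f60718"},
  "name": {"$regularExpression": {"pattern": "^foo", "options": "i"}}
}
"""

query = loads(profiler_filter)
print(query)  # usable directly with collection.find(query)
```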
1 vote -
Display EBS actual disk information (gp3)
Currently, clusters created on AWS have gp3 volumes according to support.
However:
* the configuration displays a max of 6000 IOPS for M50 clusters, even though the documentation states that IOPS scale 3:1 with the volume size in GB (1 GB = 3 IOPS) regardless of disk type for M30+ clusters (true with gp2 anyway; for gp3 that's an enforcement by Atlas)
Documentation link: https://docs.atlas.mongodb.com/customize-storage/#configuring-the-iops-rate
=> the configuration should at least display the actual IOPS (as it's also used for alerting); ideally it would allow us to have more IOPS than the minimum 3:1 enforcement.
- with gp2 we had…
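A quick illustration of the gap between the 3:1 rule and the displayed cap (the disk sizes below are example numbers only):

```python
# Example arithmetic only: expected IOPS under the documented 3 IOPS per GB
# rule, compared with the fixed 6000 IOPS figure shown in the configuration.
def expected_iops(disk_gb: int) -> int:
    return disk_gb * 3

for disk_gb in (1000, 2000, 4000):  # example sizes, not real cluster specs
    print(f"{disk_gb} GB -> {expected_iops(disk_gb)} IOPS (displayed cap: 6000)")
```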
4 votes -
Send alert on mongodb database users changes and permission changes
This would be an important tool for getting alerted if any databases get exposed or hacked.
3 votes -
Uptime report for Atlas cluster
It would be great to have an uptime report, on a weekly or monthly basis, that can be extracted and presented to anyone who would like a high-level overview.
5 votes -
Persist setProfilingLevel Setting in Atlas
Currently the profiling level resets to 0 every time the mongod process restarts. You can persist this with a self-hosted mongo by editing operationProfiling.mode in the conf file, but this is not possible in Atlas.
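A sketch of the workaround this forces on us: re-apply the level from a script after every restart (assuming the profile command is permitted on your cluster tier; the URI, database name, and slowms value are placeholders/examples):

```python
# Sketch: re-apply the profiling level after a node restart, since it resets to 0.
# Assumes the "profile" database command is allowed on your Atlas tier.
from pymongo import MongoClient

client = MongoClient("<atlas-connection-string>")  # placeholder URI
db = client["mydb"]                                # example database name

db.command("profile", 1, slowms=100)  # level 1 = log slow operations
print(db.command("profile", -1))      # -1 reads back the current settings
```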
4 votes -
Send Alert when IP access list changes
Customer is requesting that an alert be sent out if the project IP access list gets changed.
22 votes -
Trigger execution details
It would be very useful to have some details provided in Atlas to trace the execution of the trigger, like the _id and operation type at a minimum. When a trigger executes, the only information we see in Atlas is ["Sent to AWS EventBridge"].
6 votes -
CPU Steal % on Azure
The CPU Steal % metric and alert seem to be currently implemented only for AWS. Please implement them also on Azure, as there is currently no easy way to get alerted if the underlying burstable VM (e.g. for M20) runs out of credits.
5 votes -
Please remove the text-transform uppercase style on the graph axes.
I yelled out "WTF!" in the office when I saw that a steep peak in the graph indicated a query that took 8MS. Turns out it meant to say 8ms.
The M means mega (one million). We are, I assume, all very familiar with it, so I'm probably not the only one who automatically reads it as one million.
Note that I did not confuse the s and the S; that is because s and S are much less visually distinct. Still, it's technically incorrect too.
1 vote -
Allow threshold on "System Memory: Available" alert condition to be a percentage
In Atlas, it would be ideal if you could specify a percentage of total memory as the threshold for the "System Memory: Available Is..." alert condition, rather than a literal number value. If it were possible to set this as a percentage, then the alert could be applied to all hosts in a project, rather than having to set hostname conditions and create a separate alert for each cluster.
10 votes -
Ability to get collection stats via the API
e.g. list collections, collection size, collection indexes, etc.
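Until the API exposes this, here is roughly what can be gathered today by connecting with the driver instead (connection string and database name are placeholders):

```python
# Sketch: gather per-collection stats with the driver, since the Admin API
# doesn't expose them. Connection string and database name are placeholders.
from pymongo import MongoClient

client = MongoClient("<atlas-connection-string>")
db = client["mydb"]

for name in db.list_collection_names():
    stats = db.command("collStats", name)    # size, count, storageSize, ...
    indexes = db[name].index_information()   # index names and key patterns
    print(name, stats.get("size"), stats.get("count"), list(indexes))
```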
3 votes -
Increase monitoring metrics resolution retention time on views of 1min & 5mins
The speed at which usable monitoring metrics data (1m, 5m) is purged and reduced to 1hr resolution is way too quick.
After an hour, the ability to analyze these metrics, which are crucial to investigating a performance issue, is lost, and that hinders root cause analysis.
I would suggest that 1m or at least 5m resolution metrics be retained for at least a week before being purged.
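As a stop-gap we export the fine-grained data before it is downsampled; a sketch against the measurements endpoint (the path, parameter spellings, and response shape are my reading of the v1.0 docs, and credentials/IDs are placeholders):

```python
# Sketch: export 1-minute resolution measurements before they are downsampled.
# Endpoint path and parameters follow the v1.0 measurements API as I read it;
# credentials, project ID, and process ID are placeholders.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("<public-key>", "<private-key>")

def dump_measurements(group_id, process_id):
    url = f"{BASE}/groups/{group_id}/processes/{process_id}/measurements"
    resp = requests.get(url, auth=AUTH, params={
        "granularity": "PT1M",  # 1-minute resolution
        "period": "PT1H",       # the most recent hour
    })
    resp.raise_for_status()
    return resp.json()["measurements"]
```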
22 votes