
Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you


160 results found

  1. Allow creating a dashboard

    Please provide a feature to create dashboards for metrics such as CPU and memory consumption, covering one or more clusters, with one graph per metric (one graph for CPU, one for memory, and so on), similar to what monitoring tools provide, for example Zabbix screens and dashboards, Stackdriver dashboards, or Dynatrace dashboards.

    This would let us see our entire architecture at a glance instead of checking each cluster individually. It would be particularly useful during load tests or business peak periods, when we need to check the performance of more than one cluster.
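
    Until such a feature exists, one possible workaround is to pull per-process metrics for every cluster in a project through the Atlas Admin API and chart them with whatever tool you already use. The sketch below is only an illustration: the project ID, API keys, and the two measurement names (SYSTEM_NORMALIZED_CPU_USER, SYSTEM_MEMORY_USED) are placeholders/assumptions to be checked against the Atlas measurements reference.

        # Minimal sketch: list all processes in a project and fetch recent CPU/memory
        # measurements for each, so they can be plotted side by side in one dashboard.
        # GROUP_ID, the API keys, and the measurement names are assumptions.
        import requests
        from requests.auth import HTTPDigestAuth

        BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
        GROUP_ID = "<project-id>"
        auth = HTTPDigestAuth("<public-key>", "<private-key>")

        processes = requests.get(f"{BASE}/groups/{GROUP_ID}/processes", auth=auth).json()["results"]
        for proc in processes:
            resp = requests.get(
                f"{BASE}/groups/{GROUP_ID}/processes/{proc['id']}/measurements",
                params={
                    "granularity": "PT5M",
                    "period": "PT1H",
                    "m": ["SYSTEM_NORMALIZED_CPU_USER", "SYSTEM_MEMORY_USED"],  # assumed names
                },
                auth=auth,
            )
            for m in resp.json().get("measurements", []):
                latest = m["dataPoints"][-1] if m["dataPoints"] else None
                print(proc["id"], m["name"], latest)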

    4 votes

  2. Separate Atlas metric graphs of outgoing traffic from internal/replication traffic in an Atlas cluster

    This is a request to separate the Atlas metric graphs for a cluster's outgoing traffic from those for its internal/replication traffic.

    4 votes

  3. Export profiler slow queries via Datadog integration

    The "Profiler" tab in Atlas is very nice: it gives a good overall view of slow queries, with the ability to drill down into individual calls.

    However, I don't check this page every day, so it is sometimes hard to notice when slow queries or commands do show up there if they are not visibly affecting our cluster.

    We use Datadog as the central point for all of our monitoring. We have the Atlas <--> Datadog integration set up, which exports a number of useful stats. It would be very nice for…
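
    In the meantime, a possible stopgap (a sketch only, assuming the v1.0 Admin API exposes Performance Advisor slow-query logs per process, and that the response contains "slowQueries" entries with "namespace" and "line" fields) is to poll those logs and forward them to Datadog with your existing log pipeline:

        # Hedged sketch: pull recent slow-query log lines for each process and hand them
        # to whatever Datadog shipper you already run. Endpoint path and field names are
        # assumptions to verify against the Atlas Admin API documentation.
        import requests
        from requests.auth import HTTPDigestAuth

        BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
        GROUP_ID = "<project-id>"
        auth = HTTPDigestAuth("<public-key>", "<private-key>")

        procs = requests.get(f"{BASE}/groups/{GROUP_ID}/processes", auth=auth).json()["results"]
        for proc in procs:
            url = f"{BASE}/groups/{GROUP_ID}/processes/{proc['id']}/performanceAdvisor/slowQueryLogs"
            for entry in requests.get(url, auth=auth).json().get("slowQueries", []):
                # Forward the entry to Datadog here (log shipper, Events API, etc.).
                print(proc["id"], entry.get("namespace"), entry.get("line", "")[:120])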

    3 votes

  4. Display Total Cluster Data Size in Atlas UI

    Atlas doesn't display the total data size of the databases hosted on a sharded cluster. This feature was available in Ops Manager, and it would be very helpful for DBAs and product development teams to see the data size of the cluster at a glance.

    A screenshot is attached as an example.

    It would also be nice to display the size of each database, along with a historical data-size metric for each database in a sharded cluster.
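
    Until this shows up in the UI, a simple driver-side workaround is to sum dbStats across databases. A minimal sketch (the connection string is a placeholder; run it against the cluster's mongos/SRV URI so the numbers cover all shards):

        # Sum dataSize from dbStats over every database to get a cluster-wide total.
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")

        total_bytes = 0
        for name in client.list_database_names():
            size = client[name].command("dbStats").get("dataSize", 0)
            total_bytes += size
            print(f"{name}: {size / 1024**3:.2f} GB")

        print(f"total data size: {total_bytes / 1024**3:.2f} GB")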

    3 votes

  5. Metric on TTLDeleted documents

    Hi,
    We (Indeed, Inc.) are one of your larger customers, with more than $2M of Atlas spend per year. We need a metric for the number of TTL-deleted documents per second for an Atlas cluster. We have a similar metric from the standard MongoDB plugin in Datadog, and it is extremely useful to all our product teams because most of our clusters use TTL.
    Thanks.
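
    As a stopgap until Atlas exposes this, the cumulative counter metrics.ttl.deletedDocuments in serverStatus can be sampled to derive a per-second rate. A minimal sketch (connection string and sampling interval are placeholders):

        # Sample the cumulative TTL-deletion counter twice and compute a rate.
        import time
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")

        def ttl_deleted() -> int:
            return client.admin.command("serverStatus")["metrics"]["ttl"]["deletedDocuments"]

        INTERVAL = 60  # seconds; arbitrary choice
        first = ttl_deleted()
        time.sleep(INTERVAL)
        second = ttl_deleted()
        print(f"TTL deletions/sec: {(second - first) / INTERVAL:.2f}")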

    3 votes

  6. Last Database User Access

    My team has found itself needing to iterate through the Database Access History logs to discover whether a database user has been used within the past t time periods.

    We need this functionality to guarantee seamless credential rotation.

    With the current API, this operation requires iterating over each cluster's access logs for the past t time periods. With "n" clusters and "t" time periods this is an O(n * t) runtime. I'm not really great at O() notation, but if t is greater…
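
    For reference, a sketch of the iteration described above, using the project-level access-tracking endpoint per cluster. The endpoint path and the response field names ("accessLogs", "username") are assumptions to verify against the Atlas Admin API documentation:

        # O(n * t) iteration: for each of the n clusters, scan its access-log entries
        # looking for a given database user.
        import requests
        from requests.auth import HTTPDigestAuth

        BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
        GROUP_ID = "<project-id>"
        TARGET_USER = "<database-user>"
        auth = HTTPDigestAuth("<public-key>", "<private-key>")

        clusters = requests.get(f"{BASE}/groups/{GROUP_ID}/clusters", auth=auth).json()["results"]
        for cluster in clusters:
            url = f"{BASE}/groups/{GROUP_ID}/dbAccessHistory/clusters/{cluster['name']}"
            logs = requests.get(url, auth=auth).json().get("accessLogs", [])
            hits = [e for e in logs if e.get("username") == TARGET_USER]
            if hits:
                print(f"{cluster['name']}: {len(hits)} access entries for {TARGET_USER}")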

    3 votes

  7. Display EBS actual disk information (gp3)

    Currently, clusters created on AWS have gp3 volumes, according to support.

    However:
    * The configuration displays a maximum of 6000 IOPS for M50 clusters, even though the documentation states that IOPS scale 3:1 with the volume size in GB (1 GB = 3 IOPS) regardless of disk type for M30+ clusters (true for gp2 anyway; for gp3 this is an enforcement by Atlas).
    Documentation link: https://docs.atlas.mongodb.com/customize-storage/#configuring-the-iops-rate

    => The configuration should at least display the actual IOPS (as it is also used for alerting); ideally it would allow us to provision more IOPS than the minimum 3:1 enforcement.

    • with gp2 we had…
    3 votes

  9. Ability to get collection stats via the API

    e.g. list collections, collection size, collection indexes, etc.
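
    Until an API endpoint exists, the same information is reachable through a driver. A minimal pymongo sketch (URI and database name are placeholders):

        # List collections with basic stats and their indexes via listCollections,
        # collStats, and index_information().
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")
        db = client["<database>"]

        for name in db.list_collection_names():
            stats = db.command("collStats", name)
            print(f"{name}: {stats['count']} docs, "
                  f"{stats['size'] / 1024**2:.1f} MB data, "
                  f"{stats['totalIndexSize'] / 1024**2:.1f} MB indexes")
            for index_name, spec in db[name].index_information().items():
                print(f"  index {index_name}: {spec.get('key')}")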

    3 votes

  10. Uptime report for Atlas cluster

    It would be great to have an uptime report, on a weekly or monthly basis, that can be extracted and presented to anyone who would like a high-level overview.

    3 votes

  11. Display the Atlas-generated replica-set name in the cluster Dashboard

    This is an easy one, and it'll help a lot.

    Inconvenience:

    In the past, every time I created a cluster, the name of the replica-set was in line with the name of the cluster:
    e.g.:
    Cluster Name: asteroids-tracking
    Replicaset Name: asteroids-tracking-shard-0

    Now when I create a cluster, the name of the replica-set is a random set of characters prefixed by "atlas-"
    e.g.:
    Cluster Name: asteroids-tracking
    Replicaset Name: atlas-[randomChars]-shard-0

    This makes it unnecessarily difficult to find, within the Atlas organization, the cluster an alert refers to when I receive text message alerts, especially when you are in charge of many clusters.
    Additionally,…

    3 votes

  12. Database Access History: sort and export

    It would be nice to be able to export the database access history and/or sort it by column.

    3 votes

  13. 3rd Party Hooks

    Within Atlas today, we have a number of channel options for sending alerts (e.g. OpsGenie, Slack, VictorOps).

    I would like to see a hook integration provided by Atlas so that any tool can receive alerts from the cluster, e.g. receiving my cluster alerts in Microsoft Teams.

    3rd-party hooks can increase the stickiness of Atlas and give teams more freedom with their existing toolsets.

    3 votes

  14. Manually replace / restart virtual machine

    Allow replacing or restarting the underlying virtual machine of a node.

    Sometimes this is all that is needed to get a cluster out of an unhealthy state. Currently, only support seems to be able to do this.

    3 votes

  15. Disk throughput in monitoring

    Currently we have disk IOPS in monitoring (both read and write).
    One of the metrics that plays a role in deciding whether to use a provisioned disk, at least with AWS hosting, is disk bandwidth.
    For instance, with a large enough disk, say 2000 GB, I get at most 250 MB/s of bandwidth with an unprovisioned (gp2) disk, but could go up to 500 MB/s with a provisioned (io1) disk of that size.

    3 votes

  16. More detailed update status

    It would be amazingly helpful to see more detailed information on recovering nodes. Knowing that a node is, for example, "81% of the way through initial sync" is much more informative (and lets users know it isn't stuck) than just seeing the node in the "STARTUP2" state.

    3 votes

  17. Publish statistics in Atlas to analyze what is filling oplog

    It would be very useful to be able to see metrics/statistics about the contents of the oplog. There are open-source tools like the oplog analyzer (https://github.com/mhelmstetter/oplog-analyzer) that can be used, but it's a hassle to have to install one and run it in the same datacenter where the database is running (for performance).

    The statistics I'm most interested in are which collections have the most oplog documents, what kinds of operations those were, and the total size each collection currently occupies in the oplog. This would help us improve our code to use less oplog.

    We've seen cases where bad…
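
    For reference, the kind of breakdown described above can be approximated client-side today by aggregating the oplog directly. A sketch, assuming read access to the local database and MongoDB 4.4+ for $bsonSize (note that scanning the whole oplog has its own cost, which is exactly why a built-in metric would be preferable):

        # Group oplog entries by namespace and operation type, with counts and byte totals.
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")
        oplog = client["local"]["oplog.rs"]

        pipeline = [
            {"$group": {
                "_id": {"ns": "$ns", "op": "$op"},
                "entries": {"$sum": 1},
                "bytes": {"$sum": {"$bsonSize": "$$ROOT"}},
            }},
            {"$sort": {"bytes": -1}},
            {"$limit": 20},
        ]
        for row in oplog.aggregate(pipeline):
            print(row["_id"], row["entries"], f"{row['bytes'] / 1024**2:.1f} MB")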

    3 votes

  18. The page fault metric is not available in the Datadog integration

    The page fault metric is currently not available in the Datadog integration; please expose it there as well.

    3 votes

  19. Monitoring Metrics on dhandle

    We'd like to monitor the WiredTiger dhandle count over time, directly from the Atlas monitoring view. This would let us see the impact immediately when updating cluster settings.

    We'd also like to be able to configure alert triggers on it; the goal is to be alerted when an excessive number of files (collections & indexes) is loaded into MongoDB memory, so that we avoid hitting an out-of-memory error.
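
    As an interim measure, the dhandle count can be polled from serverStatus. A sketch; the exact key names ("data-handle", "connection data handles currently active") are assumptions that should be checked against the serverStatus output of your MongoDB version:

        # Poll the WiredTiger data-handle counter from serverStatus.
        import time
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")

        while True:
            wt = client.admin.command("serverStatus")["wiredTiger"]
            dhandles = wt["data-handle"]["connection data handles currently active"]
            print(f"active data handles: {dhandles}")
            time.sleep(60)  # arbitrary polling interval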

    2 votes

  20. Use additional metadata to differentiate processes

    Right now Ops Manager monitoring identifies MongoDB processes by hostname:port. Unfortunately, if 2 processes have the same short hostname and port in the same Ops Manager project, they'll be treated as the same process even if they are actually different processes with different FQDNs.

    Please either allow the use of additional characteristics (FQDN, replica set name, config server name, etc.) for differentiating MongoDB processes, or provide some way to tag 2 or more processes so monitoring doesn't accidentally miscategorize them as the same process.

    2 votes
