Atlas

128 results found

  1. Monitoring Metrics on dhandle

    We'd like to monitor WiredTiger dhandles over time, directly from the Atlas cloud monitoring view. This would let us see the impact of updating cluster settings right away.

    We'd also like to be able to configure alert triggers on it; the goal is to be alerted when an excessive number of files (collections & indexes) is loaded into MongoDB's memory, so we can avoid hitting an out-of-memory error.
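
    As a stopgap, the active dhandle count can already be sampled from serverStatus; a minimal pymongo sketch, assuming the standard WiredTiger "data-handle" stat names and a placeholder connection URI:

      # Sketch: sample the active dhandle count from serverStatus.
      # The URI is a placeholder; the stat name below is the standard
      # WiredTiger "data-handle" counter, but verify it on your version.
      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://cluster0.example.net")
      status = client.admin.command("serverStatus")
      dhandle = status["wiredTiger"]["data-handle"]
      print("active dhandles:", dhandle["connection data handles currently active"])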

    4 votes

  2. Database Access History: sort and export

    It would be nice to be able to export the database access history and/or sort it by a column.
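
    In the meantime, the history can be pulled through the Atlas Admin API's Access Tracking resource and sorted/exported locally; a rough sketch (the endpoint path, digest auth, and the "accessLogs" field are assumptions to verify against the current API reference):

      # Sketch: fetch database access history via the Atlas Admin API,
      # sort it locally, and export to CSV. Endpoint path and response
      # fields are assumptions; check the Access Tracking API docs.
      import csv
      import requests
      from requests.auth import HTTPDigestAuth

      BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
      url = f"{BASE}/groups/<GROUP-ID>/dbAccessHistory/clusters/<CLUSTER-NAME>"
      resp = requests.get(url, auth=HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>"))
      logs = resp.json().get("accessLogs", [])
      logs.sort(key=lambda e: str(e.get("timestamp", "")))  # sort by any column
      with open("access_history.csv", "w", newline="") as f:
          fields = sorted({k for e in logs for k in e})
          writer = csv.DictWriter(f, fieldnames=fields)
          writer.writeheader()
          writer.writerows(logs)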

    4 votes

  3. Display Total Cluster Data Size in Atlas UI

    Atlas doesn't display the total data size of the databases hosted on a sharded cluster. This feature was available in Ops Manager and would be very helpful for DBAs and product development teams who want to glance at the data size of the cluster.

    A screenshot is attached as an example.

    It would also be nice to display the size of each database, along with a historical data-size metric for each database in a sharded cluster.

    3 votes

  4. Add support for HTTP based scraping without IP restriction

    Prometheus supports a lot in the HTTP scraping space, including OAuth and bearer-token-based scrape targets.

    Currently we use Grafana Cloud, which has a list of IP addresses it can scrape from. This list can change, and if it does while we have hardcoded those addresses into Atlas, the scrape will break.

    Could MongoDB add support for token/OAuth-based scraping, where we provide these tokens in an HTTPS call?

    I think this would solve the issues with things like private link or peered network connections. It would also allow "any" Prometheus server that can make outbound…

    3 votes

  5. Send alerts on MongoDB database user changes and permission changes

    This would be an important tool for getting alerted if any databases are exposed or hacked.

    3 votes

  6. 3 votes

  7. Ability to get collection stats via the API

    e.g. listing collections, collection sizes, collection indexes, etc.
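
    Until an Admin API endpoint exists, the same numbers can be pulled straight from the cluster with a driver; a minimal pymongo sketch (database name and URI are placeholders):

      # Sketch: gather per-collection stats directly from the cluster, as a
      # stand-in for an Atlas Admin API endpoint. URI/db name are placeholders.
      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://cluster0.example.net")
      db = client["mydb"]
      for name in db.list_collection_names():
          stats = db.command("collStats", name)
          print(name, stats["size"], stats["nindexes"], stats["indexSizes"])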

    3 votes

  8. Display the Atlas-generated replica-set name in the cluster Dashboard

    This is an easy one, and it'll help a lot.

    Inconvenience:

    In the past, every time I created a cluster, the name of the replica-set was in line with the name of the cluster:
    e.g.:
    Cluster Name: asteroids-tracking
    Replicaset Name: asteroids-tracking-shard-0

    Now when I create a cluster, the name of the replica-set is a random set of characters prefixed by "atlas-"
    e.g.:
    Cluster Name: asteroids-tracking
    Replicaset Name: atlas-[randomChars]-shard-0

    This makes it unnecessarily difficult to find the cluster referred to (in the Atlas organization) when I receive text message alerts, especially when you are in charge of many clusters.
    Additionally,…

    3 votes

  9. Manually replace / restart virtual machine

    Allow replacing/restarting the underlying virtual machine of a node.

    Sometimes this is all that is needed to get a cluster out of an unhealthy state. Currently, only support seems to be able to do this.

    3 votes

  10. Disk throughput in monitoring

    Currently we have disk IOPS in monitoring (both read and write). One of the metrics that plays a role in deciding whether to use a provisioned disk, at least with AWS hosting, is disk bandwidth. For instance, with a large enough disk, like 2000 GB, I get at most 250 MB/s of bandwidth with an unprovisioned (gp2) disk (its maximum), but could reach 500 MB/s with a provisioned (io1) disk of that size.

    3 votes

  11. More detailed update status

    It would be amazingly helpful to see more detailed information on recovering nodes. Just knowing that a node is, for example, "81% of the way through initial sync" is much more informative (and lets users know that it isn't stuck) than the node simply showing the STARTUP2 recovery state.
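
    Some of this is already queryable on the syncing node itself via the initialSync option of replSetGetStatus on recent server versions; a hedged sketch, since the exact fields vary by version:

      # Sketch: query initial-sync progress directly from a syncing node.
      # The initialSyncStatus fields vary by server version, so only
      # report what is actually present.
      from pymongo import MongoClient

      node = MongoClient("mongodb://node2.example.net:27017", directConnection=True)
      status = node.admin.command("replSetGetStatus", initialSync=1)
      sync = status.get("initialSyncStatus", {})
      copied = sync.get("approxTotalBytesCopied")
      total = sync.get("approxTotalDataSize")
      if copied is not None and total:
          print(f"initial sync {100 * copied / total:.0f}% complete")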

    3 votes

  12. Add a Transaction Commits / Sec Metric Graph to Atlas

    Add a graph to the Atlas metrics UI that displays "transactions.totalCommitted" on a per-second basis. Customers that use transactions are often more interested in the number of transaction commits per second than in opcounters.

    I have had to use mongostat in a number of customer evaluations because this metric is not available.
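
    In the meantime, the rate can be derived client-side from two serverStatus samples, the same way a graph would compute it; a minimal sketch (the URI is a placeholder):

      # Sketch: derive transaction commits/sec from two serverStatus samples.
      import time
      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://cluster0.example.net")

      def total_committed():
          return client.admin.command("serverStatus")["transactions"]["totalCommitted"]

      first = total_committed()
      time.sleep(10)
      second = total_committed()
      print("commits/sec:", (second - first) / 10)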

    3 votes

  13. Publish statistics in Atlas to analyze what is filling oplog

    It would be very useful to be able to see metrics/statistics about the contents of the oplog. There are open-source tools like the oplog analyzer (https://github.com/mhelmstetter/oplog-analyzer) that can be used, but it's a hassle to have to install and run them in the same datacenter where the database is running (for performance).

    The statistics I'm most interested in are which collections have the most oplog documents, what kinds of operations those have been, and what total size each collection currently occupies in the oplog. This would help us improve our code to use less oplog.

    We've seen cases where bad…
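
    The requested breakdown can be approximated today with an aggregation over local.oplog.rs; a rough sketch (assumes $bsonSize, i.e. MongoDB 4.4+, and scans the whole oplog, so run it against a secondary):

      # Sketch: break down the oplog by namespace and operation type.
      # $bsonSize requires MongoDB 4.4+; this scans the whole oplog, so it
      # can be slow and is best run against a secondary.
      from pymongo import MongoClient

      node = MongoClient("mongodb://node.example.net:27017", directConnection=True)
      pipeline = [
          {"$group": {
              "_id": {"ns": "$ns", "op": "$op"},
              "count": {"$sum": 1},
              "bytes": {"$sum": {"$bsonSize": "$$ROOT"}},
          }},
          {"$sort": {"bytes": -1}},
      ]
      for row in node.local["oplog.rs"].aggregate(pipeline):
          print(row["_id"], row["count"], row["bytes"])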

    3 votes

  14. The page fault metric is not available in the Datadog integration

    3 votes

  15. Better message for "Removed Indexes" in Activity Feed during Atlas rolling index build

    During a rolling index build, it is expected to see two "Deployment configuration published" notifications: one for "Added Indexes" and another for "Removed Indexes". This is because an entry for the desired index is added to the automation config, and the automation agent then builds the index accordingly. Once the agent is done building it, we remove the entry for that index from the automation config. This does not drop the index.

    However, the term "Removed Indexes" can cause confusion by suggesting that the index was dropped. Hence, this feature request is filed for a clearer message so…

    3 votes

  16. Disk Usage Breakdown

    When looking at the cluster page, I can see Disk Usage = 1.6 GB, for example, but I don't know the breakdown of that. For instance, if I look at the same DB and add up the sizes of the collections themselves, it's barely 1.5 MB. So 1.6 GB of disk = 1.5 MB of data. I suspect this is the oplog etc., but it is a pretty confusing potential red herring when diagnosing performance issues.
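
    The gap can be inspected today with dbStats, which separates logical data size from on-disk storage; a minimal pymongo sketch (the URI is a placeholder):

      # Sketch: compare logical data size to on-disk storage per database,
      # to see what makes up the cluster-level Disk Usage figure.
      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://cluster0.example.net")
      for name in client.list_database_names():
          s = client[name].command("dbStats")
          print(f"{name}: dataSize={s['dataSize']} "
                f"storageSize={s['storageSize']} indexSize={s['indexSize']}")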

    3 votes

  17. Expose more in-depth metrics

    We have had cases where support was able to investigate issues with our clusters that we ourselves couldn't, because support has access to metrics that are not available via Atlas.

    All available metrics should be presented via the UI as well as the Datadog add-on (or even via tools we can use to access these metrics separately).

    Notable metrics include: system TcpExt DelayedACKs, system Tcp RetransSegs, ss metrics repl apply batches num, rs member _id 1 heartbeat time, ss ss global: queued writers

    2 votes

  18. Add connection pooling metrics for sharded clusters

    We recently ran into an issue where we hit the internal mongoS -> mongoD connection pool limit when reading from secondaries, requiring Atlas support to increase the value of ShardingTaskExecutorPoolMaxSize.

    As a result, it would be great to be able to monitor internal mongoS -> mongoD connection pool usage, so we can watch it and set up alarms if it gets near the limit.
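
    Pool usage is partially visible today through connPoolStats on the mongos, though not as an Atlas metric or alert source; a hedged sketch, since the output layout varies by server version:

      # Sketch: inspect mongos -> mongod pool usage via connPoolStats.
      # Field layout varies across server versions, so just surface the
      # per-host counters as returned.
      from pymongo import MongoClient

      mongos = MongoClient("mongodb://mongos.example.net:27017")
      stats = mongos.admin.command("connPoolStats")
      for host, pool in stats.get("hosts", {}).items():
          print(host, pool)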

    2 votes

  19. Expose hourly cost data as a metric for monitoring cluster cost

    The hourly cost of a cluster is already available in the Atlas UI. Expose this same data as a metric for monitoring cluster cost. We understand it may not include data transfer and some other costs, but watching the spikes and valleys over time for a given cluster is helpful when autoscaling is turned on. We would then also be able to set an alarm on the metric.

    2 votes

  20. Premium Monitoring Granularity for lower tier clusters

    CURRENT STATE

    Premium monitoring granularity (10-second metrics) is only available on M40 clusters or higher.

    IMPACT

    Lower-tier environments (such as testing and staging) cannot have 10-second metric granularity. Some customers export metrics to third parties such as Datadog that only handle a homogeneous metric granularity.

    When Datadog receives different granularities, e.g. 10-second granularity for PROD environments (M40+) and lower granularity for STAGE environments (below M40), it leads to poor data integration and dashboards failing to load data properly.

    Customers do not have a reliable view into their data, since some environments send 10-second…

    2 votes
