Atlas

Share your idea. In order to help prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you

169 results found

  1. Integrated/unified metrics view for the replica-set primary

    It is really hard to understand the behaviour of the replica set's primary over longer timeframes when the primary has switched several times. It would be very helpful to be able to see the metrics in an integrated chart, to understand the consequences of actions such as application changes, index creation, version upgrades, etc.

    1 vote

  2. Prometheus database and collection metrics

    We checked the Prometheus metrics provided by MongoDB Atlas and didn't find metrics for the following:

    • Database size
    • Collection storage size
    • Records per collection
    • Indexes per collection
    • Index size

    We would like these metrics to be available so we can add them to our dashboards.
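    These figures are not in the Atlas Prometheus integration today, but the database itself exposes them through the dbStats and collStats commands; below is a minimal pymongo sketch (connection string, database, and collection names are placeholders) of where an exporter could read the requested numbers.

    ```python
    # Sketch: read database- and collection-level figures via dbStats/collStats.
    # The connection string, database, and collection names are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
    db = client["mydb"]

    db_stats = db.command("dbStats")                       # database-level statistics
    print("Database data size (bytes):", db_stats["dataSize"])

    coll_stats = db.command("collStats", "mycollection")   # collection-level statistics
    print("Collection storage size (bytes):", coll_stats["storageSize"])
    print("Records in collection:", coll_stats["count"])
    print("Indexes on collection:", coll_stats["nindexes"])
    print("Total index size (bytes):", coll_stats["totalIndexSize"])
    ```

    An exporter publishing these values is essentially what the idea asks the Atlas Prometheus integration to provide out of the box.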

    20 votes

  3. Export profiler slow queries via Datadog integration

    The "profiler" tab in Atlas is very nice -- it gives a really nice over-all view of slow queries, with the ability to drill down into the individual calls.

    However, I don't check this page every day, and so it is sometimes hard to find out when we actually do have slow queries or commands pop up on it if it is not affecting our cluster.

    We use datadog as the central point for all of our monitoring. We have the Atlas <--> Datadog integration set up, which exports a number of useful stats. It would be very nice for…

    3 votes

  4. Metric Charts Aggregated by Primary Node at the Time

    Provide metrics aggregated by whichever node was primary at the time. Right now, in order to view history for the primary, we have to compare the different charts around each failover, which makes it cumbersome to view trends for the active primary.

    1 vote

  5. Use additional metadata to differentiate processes

    Right now Ops Manager monitoring identifies MongoDB processes by hostname:port. Unfortunately, if two processes have the same short hostname and port in the same Ops Manager project, they'll be treated as the same process even if they are actually different processes with different FQDNs.

    Please either allow the use of additional characteristics (FQDN, replica set name, config server name, etc) for differentiating MongoDB processes or provide some way to tag 2 or more processes so monitoring doesn't accidentally miscategorize them as the same process.

    2 votes

  6. Display Total Cluster Data Size in Atlas UI

    Atlas doesn't display the total data size of the databases hosted on a sharded cluster. This feature was available in Ops Manager and would be very helpful for DBAs and product development teams to glance at the data size of the cluster.

    A screenshot is attached as an example.

    It would also be nice to display the size of each database, as well as a historical data-size metric for each database in a sharded cluster.

    3 votes

  7. Minimum Oplog Retention Period - Set Minimum Oplog Window

    The documentation states: "New in version 4.4: Starting in MongoDB 4.4, you can specify the minimum number of hours to preserve an oplog entry." https://www.mongodb.com/docs/manual/core/replica-set-oplog/#minimum-oplog-retention-period

    The mongodbatlas_cluster resource in the Terraform mongodbatlas provider only provides the oplog_size_mb argument. https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#oplog_size_mb

    It would be nice to have the minimum oplog retention period exposed as well, so that Terraform can manage that value.
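    For context, on a self-managed MongoDB 4.4+ replica set the documented way to adjust this at runtime is the replSetResizeOplog admin command; here is a minimal pymongo sketch (connection string and values are placeholders) showing the knob the idea would like the Terraform provider to expose for Atlas clusters.

    ```python
    # Sketch: set the minimum oplog retention window on a self-managed
    # MongoDB 4.4+ replica set. All values and the connection string are
    # placeholders; on Atlas this would need to be exposed via the provider.
    from pymongo import MongoClient

    client = MongoClient("mongodb://admin:secret@replica-host-0.example.net:27017")

    result = client.admin.command({
        "replSetResizeOplog": 1,
        "size": 51200,            # oplog size in MB (what oplog_size_mb maps to)
        "minRetentionHours": 24,  # the value the idea wants Terraform to manage
    })
    print(result)
    ```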

    1 vote

  8. Basic operational metrics support with Instana and Grafana

    Hi MongoDB Team,

    We are using the Grafana and Instana SRE tools for application monitoring.

    But MongoDB does not expose basic statistics, such as the total number of records and record insertion/update/read rates (over a specific period of time), to Grafana and Instana.

    Kindly note that Grafana and Instana are widely used, industry-standard SRE tools, and it would be really helpful if the MongoDB team could also expose this kind of detailed statistics to Grafana and Instana.

    I would request that you prioritize this request, since the lack of these reports is creating an empty space…
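    The raw counters behind these statistics are already available from the server via serverStatus (cumulative operation counts in opcounters) and dbStats (document counts); below is a minimal pymongo sketch (connection details and database name are placeholders) showing how an exporter feeding Grafana or Instana could derive the requested rates by sampling those counters.

    ```python
    # Sketch: derive per-second operation rates from the cumulative opcounters
    # and report total record counts. Connection details and names are placeholders.
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")

    def op_counters():
        # Cumulative insert/query/update/delete counts since the process started.
        return client.admin.command("serverStatus")["opcounters"]

    INTERVAL = 60                                  # sampling interval in seconds
    before = op_counters()
    time.sleep(INTERVAL)
    after = op_counters()

    for op in ("insert", "update", "query", "delete"):
        print(f"{op} rate: {(after[op] - before[op]) / INTERVAL:.2f} ops/sec")

    # Total number of records (documents) in one database.
    print("Total records in mydb:", client["mydb"].command("dbStats")["objects"])
    ```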

    1 vote

  9. prometh

    Hi MongoDB Team,

    We are using the Grafana and Instana SRE tools for application monitoring.

    But MongoDB does not expose basic statistics, such as the total number of records and record insertion/update/read rates (over a specific period of time), to Grafana and Instana.

    Kindly note that Grafana and Instana are widely used, industry-standard SRE tools, and it would be really helpful if the MongoDB team could also expose this kind of detailed statistics to Grafana and Instana.

    I would request that you prioritize this request, since the lack of these reports is creating an empty space…

    1 vote

  10. Save ATLAS monitoring selection settings

    Atlas monitoring has different settings: Primaries, Secondaries, All.

    It would be good if the "All" setting (and maybe others) could be saved so that I do not need to select it again.

    Thanks,
    Rene Schwarz

    1 vote

  11. Metric on TTLDeleted documents

    Hi,
    We (Indeed, Inc.) are one of your larger customers, with more than $2M spend on Atlas in a year. We need a metric for the number of TTL-deleted documents per second for an Atlas cluster. We have a similar metric from the standard MongoDB plugin in Datadog, and it is extremely useful to all our product teams because most of our clusters use TTL.
    Thanks.
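    The underlying counter already exists in serverStatus under metrics.ttl; here is a minimal pymongo sketch (connection string is a placeholder) that derives a deletions-per-second figure by sampling it, which is the number the idea asks Atlas to chart and alert on.

    ```python
    # Sketch: compute TTL deletions per second from the cumulative serverStatus
    # counter. The connection string is a placeholder.
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")

    def ttl_deleted():
        # Cumulative count of documents removed by the TTL monitor.
        return client.admin.command("serverStatus")["metrics"]["ttl"]["deletedDocuments"]

    INTERVAL = 60                 # sampling interval in seconds
    before = ttl_deleted()
    time.sleep(INTERVAL)
    after = ttl_deleted()
    print("TTL deletions/sec:", (after - before) / INTERVAL)
    ```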

    3 votes

  12. Support export Prometheus metrics via remote_write

    High-throughput Prometheus-compatible databases like Mimir, Thanos, and the likes of GCP's managed Prometheus prefer to have metrics sent to them via remote_write (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remotewrite). Grafana Cloud, for instance, doesn't allow you to configure any scrape rules; the only option is to push metrics to them.

    Currently Atlas only allows you to scrape metrics, which means that to get these metrics into a highly available version of Prometheus, you need to relay them through an intermediary process (a Prometheus exporter or a standalone Prometheus server that supports scraping).

    I think a good addition to this capability would…

    1 vote

  13. Add metrics to monitor CPU credits for burstable performance Atlas clusters

    Add metrics to Atlas for tracking burstable CPU credit spend for M10 and M20 cluster tier instances. Additionally, add support for creating alerts based on these metrics.

    10 votes

  14. Add support for HTTP based scraping without IP restriction

    Prometheus supports a lot in the HTTP scraping space, including OAuth and bearer-token-based scrape targets.

    Currently we use Grafana Cloud, which has a list of IP addresses it can scrape from. This list can change, and if it does while we have hardcoded IP-based access into Atlas, it will break the scrape.

    Could MongoDB add support for token/OAuth-based scraping, where we provide these tokens in an HTTPS call?

    I think this would solve the issues with things like Private Link or peered network connections. It would also allow "any" Prometheus server that can make outbound…

    2 votes

  15. Cache last aggregation query for each collection

    For each collection, on the Aggregation tab, cache the last query that was run and keep it pre-populated in the text boxes.

    It’s so easy to accidentally click away from this page and when you come back, having to start from scratch always takes a bit. Ideally I want to be able to leave this view and come back as though nothing had changed.

    1 vote

  16. Re-run a query from the profiler tab

    When using the profiler tab, if there is a $regex expression specified, it comes back in the logs as $regularExpression. Similar issues occur for ObjectId() vs. the generated $oid.

    For some reason, you can't take the query the profiler outputs and just plug it back into the mongo shell. You have to manipulate it first.

    I would like the profiler tab to display a valid query, so I could either click "Open query in aggregation viewer" to open the aggregation pane for that collection with the query pre-populated, and/or easily copy and paste a valid query elsewhere.

    The first thing I've been doing with query optimization…
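    The $regularExpression and $oid forms shown by the profiler are MongoDB Extended JSON, so the manual manipulation described above can be scripted today; here is a minimal sketch using pymongo's bundled bson.json_util (the filter, database, and collection names are hypothetical) that turns profiler output back into a runnable query.

    ```python
    # Sketch: parse Extended JSON copied from the profiler back into a query
    # document and re-run it. The filter, database, and collection are hypothetical.
    from bson import json_util
    from pymongo import MongoClient

    profiler_filter = """{
      "status": {"$regularExpression": {"pattern": "^act", "options": "i"}},
      "_id": {"$oid": "64b7f0c2a1b2c3d4e5f60718"}
    }"""

    # json_util understands both $regularExpression and the legacy $regex form,
    # as well as $oid, so no hand-editing is needed.
    query = json_util.loads(profiler_filter)

    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
    for doc in client["mydb"]["mycollection"].find(query):
        print(doc)
    ```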

    1 vote

  17. Allow switching between shards in the profiler and/or have a combined view

    Since the profiler tabs are shard-specific, it would radically improve usability to:
    a) Combine the profiler events to truly have a cluster-level view, so you don't need to worry about shard-specific views.
    b) Add a drop-down near the page title listing the various shards; selecting a different shard brings you back to the same tab, but with the changed view.
    c) Consider tweaking the URL so that rather than using a SHA in the URL, you use the shard name, making it easy to manipulate the…

    2 votes

  18. Expose Locking Metrics

    One of our customers has asked if we can expose locking metrics through the Atlas UI to assist with investigating suspected lock contention issues.

    We have this information in FTDC, and our customer feels it would be helpful for them to have it too.
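    For reference, the lock data captured in FTDC comes from serverStatus, which can also be read directly; here is a minimal pymongo sketch (connection string is a placeholder) of the sections an Atlas chart could surface.

    ```python
    # Sketch: read the lock-related serverStatus sections that FTDC already samples.
    # The connection string is a placeholder.
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
    status = client.admin.command("serverStatus")

    print(status["globalLock"])   # queued/active readers and writers
    print(status["locks"])        # per-resource acquire counts and wait times
    ```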

    4 votes

  19. prometheus integration to use PrivateLink

    It is possible to integrate Prometheus with an Atlas project.
    However, to enable this integration, one needs to add Prometheus's IP address to the IP Access List.
    This procedure has two flaws:
    1. In some use cases Prometheus runs as pods, meaning that its IP is ephemeral.
    2. For projects that work solely with PrivateLink enabled and no open IPs in the IP Access List, one cannot use the Prometheus integration (we have already talked with support about that).

    The improvement here is to make the Prometheus integration also work in "PrivateLink-only" mode.

    73 votes

  20. Last Database User Access

    My team has found itself in the position of needing to iterate through the Database Access History logs in order to discover whether or not a database user has still been used over the past t time periods.

    We need this functionality in order to guarantee a seamless credential rotation.

    With the current API this operation requires iterating over each cluster's access logs for the past t time periods. If I have "n" clusters and "t" time periods, this is an O(n * t) runtime. I'm not really great at O() notation, but if t is greater…
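    For context, the per-cluster iteration described above might look like the following sketch against the Atlas Admin API v1.0 access-tracking endpoint (project ID, API keys, cluster names, username, query parameters, and response field names are assumptions based on the public docs and should be verified); it issues one request per cluster per window, which is the O(n * t) cost the idea wants to avoid.

    ```python
    # Sketch: check whether a database user still appears in each cluster's access
    # history. Project ID, API keys, cluster names, and the username are placeholders;
    # the endpoint, parameters, and field names follow the Atlas Admin API v1.0
    # access-tracking docs but should be verified. One request per cluster per
    # window illustrates the O(n * t) cost described above.
    import time
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    PROJECT_ID = "your-project-id"
    AUTH = HTTPDigestAuth("public-key", "private-key")   # programmatic API key pair
    CLUSTERS = ["cluster-a", "cluster-b"]                # n clusters
    USERNAME = "app_user"                                # user being rotated

    DAY_MS = 24 * 60 * 60 * 1000
    now_ms = int(time.time() * 1000)
    windows = [(now_ms - (i + 1) * DAY_MS, now_ms - i * DAY_MS) for i in range(30)]  # t windows

    seen = False
    for cluster in CLUSTERS:                             # n iterations
        for start, end in windows:                       # t iterations each
            resp = requests.get(
                f"{BASE}/groups/{PROJECT_ID}/dbAccessHistory/clusters/{cluster}",
                params={"start": start, "end": end},
                auth=AUTH,
            )
            resp.raise_for_status()
            if any(entry.get("username") == USERNAME
                   for entry in resp.json().get("accessLogs", [])):
                seen = True

    print(f"{USERNAME} {'is still in use' if seen else 'appears unused'}")
    ```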

    4 votes
