Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you

181 results found

  1. Basic operational metrics support with Instana and Grafana

    Hi MongoDB Team,

    We are using the Grafana and Instana SRE tools for application monitoring.

    However, MongoDB does not expose basic statistics, such as the total number of records and record insertion/update/read rates over a specific period of time, to Grafana and Instana.

    Grafana and Instana are widely used, industry-standard SRE tools, and it would be very helpful if the MongoDB team could also expose this kind of detailed statistics to them.

    I would ask you to prioritize this request, since the lack of these reports is creating an empty space…
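
    In the meantime, figures like these can be approximated by polling the server yourself and shipping the deltas to Grafana/Instana with whatever agent you already use. Below is a minimal, unofficial PyMongo sketch; the connection string, database, and collection names are placeholders:

      import time
      from pymongo import MongoClient

      # Placeholder connection string and namespace.
      client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")

      def snapshot():
          # opcounters are cumulative counts since the mongod process started
          ops = client.admin.command("serverStatus")["opcounters"]
          return time.time(), {k: ops[k] for k in ("insert", "update", "query", "delete")}

      t0, before = snapshot()
      time.sleep(60)  # sample window
      t1, after = snapshot()

      rates = {op: (after[op] - before[op]) / (t1 - t0) for op in before}
      total = client["mydb"]["mycollection"].estimated_document_count()
      print("ops/sec:", rates, "total documents:", total)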

    1 vote

  2. prometh

    Hi MongoDB Team,

    We are using the Grafana and Instana SRE tools for application monitoring.

    However, MongoDB does not expose basic statistics, such as the total number of records and record insertion/update/read rates over a specific period of time, to Grafana and Instana.

    Grafana and Instana are widely used, industry-standard SRE tools, and it would be very helpful if the MongoDB team could also expose this kind of detailed statistics to them.

    I would ask you to prioritize this request, since the lack of these reports is creating an empty space…

    1 vote

  3. Save ATLAS monitoring selection settings

    Atlas monitoring has different settings: Primaries, Secondaries, All.

    It would be good if the All setting (and maybe others) could be saved so I do not need to select it again.

    Thanks,
    Rene Schwarz

    1 vote

  4. Metric on TTL-deleted documents

    Hi,
    We (Indeed, Inc.) are one of your larger customers, with more than $2M spent on Atlas in a year. We need a metric for the number of TTL-deleted documents per second for an Atlas cluster. We have a similar metric from the standard MongoDB plugin in Datadog, and it is extremely useful to all our Product Teams because most of our clusters use TTL.
    Thanks.
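
    Until Atlas exposes this, a rough deletions-per-second figure can be derived by sampling serverStatus yourself. A minimal PyMongo sketch (the connection string is a placeholder):

      import time
      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder

      def ttl_deleted():
          # metrics.ttl.deletedDocuments is cumulative since the mongod process started
          return client.admin.command("serverStatus")["metrics"]["ttl"]["deletedDocuments"]

      first = ttl_deleted()
      time.sleep(60)  # sample window
      second = ttl_deleted()
      print("TTL deletions/sec:", (second - first) / 60)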

    3 votes

  5. Support export Prometheus metrics via remote_write

    High-throughput Prometheus-compatible databases like Mimir, Thanos, and the likes of GCP's managed Prometheus really prefer metrics sent to them via remote_write (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remotewrite). Grafana Cloud, for instance, doesn't allow you to configure any scrape rules; the only option is to push metrics to them.

    Currently, Atlas only allows you to scrape metrics, which means that to get these metrics into a highly available version of Prometheus you need to relay them through an intermediary process (a Prometheus exporter, or a standalone Prometheus server that supports scraping).

    I think a good addition to this capability would…

    1 vote

  6. Add metrics to monitor CPU credits for burstable performance Atlas clusters

    Add metrics to Atlas for tracking burstable CPU credit spend on M10 and M20 cluster tier instances. Additionally, add support for creating alerts based on these metrics.

    12 votes

  7. Add support for HTTP based scraping without IP restriction

    Prometheus supports a lot in the HTTP scraping space, including OAuth and bearer-token-based scrape targets.

    Currently we use Grafana Cloud, which has a list of IP addresses it can scrape from. This list "can" change, and if it does while we have hardcoded IP-based access into Atlas, the scrape will break.

    Could MongoDB add support for token/OAuth-based scraping, where we provide these tokens in an HTTPS call?

    I think this would solve the issues with things like Private Link or peered network connections. It would also allow "any" Prometheus server that can make outbound…

    3 votes

  8. Cache last aggregation query for each collection

    For each collection – on the aggregation tab, cache the last query which was run – keep it pre-populated in the textboxes.

    It’s so easy to accidentally click away from this page and when you come back, having to start from scratch always takes a bit. Ideally I want to be able to leave this view and come back as though nothing had changed.

    1 vote

  9. Re-run a query from the profiler tab

    When using the profiler tab, if a $regex expression is specified, it comes back in the logs as $regularExpression. There are similar issues with ObjectId() vs. the generated $oid.

    For some reason, you can't take the query the profiler outputs and just plug it back into mongo. You have to manipulate it.

    I'd like the profiler tab to display a valid query, so that I could either click "Open query in aggregation viewer" to open the aggregation pane for that collection with this query pre-populated, and/or easily copy and paste a valid query elsewhere.

    The first thing I've been doing with query optimization…
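
    For context, the profiler output is MongoDB Extended JSON, so one workaround today is to parse it with a driver's Extended JSON helpers before re-running it. A small PyMongo/bson sketch; the filter document and namespace are invented examples:

      from bson.json_util import loads
      from pymongo import MongoClient

      # Invented example of what the profiler emits (Extended JSON).
      profiler_filter = '''{
        "_id": {"$oid": "64b6f0c2a1b2c3d4e5f60718"},
        "name": {"$regularExpression": {"pattern": "^acme", "options": "i"}}
      }'''

      query = loads(profiler_filter)  # $oid -> ObjectId, $regularExpression -> Regex
      client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
      print(list(client["mydb"]["mycollection"].find(query)))  # placeholder namespace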

    1 vote

  10. Allow switching between shards in the profiler and/or have a combined view

    Since the profiler tabs are shard specific, it would radically improve the usability to:
    a) Combine the profiler events to truly have a cluster level view so you don’t need to worry about shard specific views
    b) Add a drop-down to the top of the page near the title that has the various shards listed in it, selecting a different shard brings you back to the same tab, but now with the changed view.
    c) Consider tweaking the URL so that rather than using a SHA in the URL, you use the shard name, making it easy to manipulate the…

    2 votes

  11. Expose Locking Metrics

    One of our customers has asked if we can expose locking metrics through the Atlas UI to assist with investigating suspected lock contention issues.

    We have this information in FTDC and our customer feels it would be helpful for them to have this information too.

    4 votes

  12. prometheus integration to use PrivateLink

    It is possible to integrate Prometheus with an Atlas project.
    However, to enable this integration, one needs to add Prometheus's IP address to the IP Access List.
    This procedure has two flaws:
    1. In some use cases Prometheus runs as pods, meaning that its IP is ephemeral.
    2. For projects that work solely with PrivateLink enabled and no open IP in the IP Access List, one cannot use the Prometheus integration (already discussed with support).

    The improvement here is to make the Prometheus integration work in "PrivateLink-only" mode as well.

    104 votes

  13. Last Database User Access

    My team has found itself needing to iterate through the Database Access History logs in order to discover whether or not a database user has still been used over the past t time periods.

    We need this functionality in order to guarantee a seamless credential rotation.

    With the current API, this operation requires iterating over each cluster's access logs for the past t time periods. If I have "n" clusters and "t" time periods, this is an O(n * t) runtime. I'm not really great at O() notation, but if t is greater…
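
    For reference, the workaround described above looks roughly like the sketch below, using the v1.0 Atlas Administration API's dbAccessHistory resource. The API keys, project ID, cluster names, and lookback window are placeholders, and the field names are worth double-checking against the current docs:

      import time
      import requests
      from requests.auth import HTTPDigestAuth

      BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
      auth = HTTPDigestAuth("PUBLIC_KEY", "PRIVATE_KEY")     # programmatic API key (placeholder)
      group_id = "5f1234567890abcdef123456"                  # placeholder project ID
      clusters = ["Cluster0", "Cluster1"]                    # placeholder cluster names
      start_ms = int((time.time() - 30 * 24 * 3600) * 1000)  # last 30 days

      last_seen = {}
      for cluster in clusters:  # one call per cluster per window -> O(n * t)
          url = f"{BASE}/groups/{group_id}/dbAccessHistory/clusters/{cluster}"
          resp = requests.get(url, auth=auth, params={"start": start_ms})
          resp.raise_for_status()
          for entry in resp.json().get("accessLogs", []):
              last_seen.setdefault(entry.get("username"), entry.get("timestamp"))

      print(last_seen)  # users absent here were not seen in the lookback window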

    7 votes

  14. Please remove the text-transform uppercase style on the graph axes.

    I yelled out "WTF!" in the office when I saw that a steep peak in the graph indicated a query that took 8MS. Turns out it meant to say 8ms.

    The M means Mega (one million). We, I assume, are all very familiar with it, and therefore I assume that I'm not the only one who automatically reads it as one million.

    Note that I did not confuse the s and S. That is because the s and the S are much less visually distinct. Still it's technically incorrect too.

    1 vote

  15. Send alerts on MongoDB database user changes and permission changes

    This would be an important tool for getting alerted if any databases get exposed or hacked.

    3 votes

  16. Display EBS actual disk information (gp3)

    Currently, clusters created on AWS have gp3 volumes according to support.

    However:
    * The configuration displays a max of 6000 IOPS for M50 clusters, even though the documentation states that IOPS scale 3:1 with the volume size in GB (1 GB = 3 IOPS) regardless of disk type for M30+ clusters (true with gp2 anyway; for gp3 that's an enforcement by Atlas).
    Documentation link: https://docs.atlas.mongodb.com/customize-storage/#configuring-the-iops-rate

    => The configuration should at least display the actual IOPS (as it's also used for alerting); ideally, it would allow us to have more IOPS than the minimum 3:1 enforcement.

    • with gp2 we had…
    4 votes

  17. Support db.setLogLevel() in Atlas

    I'm testing wire compression in the drivers, the results of which are only visible at verbose log levels (see: https://github.com/mongodb/specifications/blob/master/source/compression/OP_COMPRESSED.rst).

    I'm not sure how to validate that wire compression is making its way to Atlas without this capability. The setting would revert when the server is restarted, so it shouldn't have too big an impact on our log file sizes.
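
    For context, db.setLogLevel() is a shell wrapper around the setParameter command's logComponentVerbosity option. A hedged PyMongo sketch of the equivalent call, which today only works against a self-managed deployment because setParameter is restricted on Atlas (which is the point of this idea):

      from pymongo import MongoClient

      # Equivalent of db.setLogLevel(2, "network"); only runnable against a
      # self-managed mongod, since setParameter is restricted on Atlas.
      client = MongoClient("mongodb://localhost:27017")  # placeholder
      client.admin.command(
          "setParameter", 1,
          logComponentVerbosity={"network": {"verbosity": 2}},
      )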

    1 vote

  18. 1 vote

  19. 3 votes

  20. Persist setProfilingLevel Setting in Atlas

    Currently, the profiling level resets to 0 every time the mongod process restarts. You can persist it with a self-hosted MongoDB deployment by editing operationProfiling.mode in the config file, but this is not possible in Atlas.
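
    For context, the level in question is set at runtime with the profile command (what mongosh's db.setProfilingLevel() wraps), and it is exactly this runtime state that is lost on restart. A small PyMongo sketch with placeholder values:

      from pymongo import MongoClient

      client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
      db = client["mydb"]                    # placeholder database

      db.command("profile", 1, slowms=100)   # level 1 = log operations slower than slowms
      print(db.command("profile", -1))       # -1 reads back the current settings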

    5 votes
