
Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you


1310 results found

  1. Granular Permissions

    Right now MongoDB Atlas lets you assign two types of roles to users, Organization and Project, and for each it provides a set of predefined roles.

    The problem is that you have no granular control over which permissions are assigned to each user (e.g. for a user to create a trigger through MongoDB Stitch, they need the Project Owner role).

    This is a major setback, as I end up giving my coworkers more access than they need.

    A good solution would be to have something like the database access controls in this area, so we…
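
    Until something like this exists, the closest stopgap we've found is auditing who actually holds Project Owner. Below is a minimal Python sketch, assuming the Atlas Administration API v1.0 "get all users in one project" endpoint; the project ID and API keys are placeholders, and this is not an official workaround.

    ```python
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID = "<project-id>"                                # placeholder
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    # List the project's users and flag anyone holding the Project Owner role.
    resp = requests.get(f"{BASE}/groups/{GROUP_ID}/users", auth=AUTH)
    resp.raise_for_status()

    for user in resp.json().get("results", []):
        roles = {r.get("roleName") for r in user.get("roles", []) if r.get("groupId") == GROUP_ID}
        if "GROUP_OWNER" in roles:
            print(f'{user["emailAddress"]} has Project Owner')
    ```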

    450 votes

    started  ·  57 comments  ·  IAM

  2. Ability to stream logs to CloudWatch Logs or Datadog

    There's currently no way to stream logs from MongoDB on Atlas. I should be able to stream logs to a destination such as Datadog or CloudWatch.
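
    For context, the closest we can get today is polling the per-process log download endpoint and forwarding the lines ourselves. A rough Python sketch, assuming the Atlas Administration API v1.0 log endpoint plus an already-created CloudWatch log group and stream; IDs, hostnames, and keys are placeholders, and batching/error handling is omitted.

    ```python
    import gzip
    import time

    import boto3
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID = "<project-id>"                                # placeholder
    HOSTNAME = "<cluster-host>"                              # e.g. xxx-shard-00-00.xxxxx.mongodb.net
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    # Download the mongod log for one host as a gzip archive.
    resp = requests.get(
        f"{BASE}/groups/{GROUP_ID}/clusters/{HOSTNAME}/logs/mongodb.gz",
        auth=AUTH,
        headers={"Accept": "application/gzip"},
    )
    resp.raise_for_status()
    lines = gzip.decompress(resp.content).decode("utf-8", errors="replace").splitlines()

    # Forward the lines to an existing CloudWatch Logs stream (put_log_events accepts
    # at most 10,000 events per call; a real forwarder would batch and de-duplicate).
    cw = boto3.client("logs")
    now_ms = int(time.time() * 1000)
    cw.put_log_events(
        logGroupName="/atlas/mongodb",        # placeholder log group
        logStreamName=HOSTNAME,               # placeholder stream, assumed to exist
        logEvents=[{"timestamp": now_ms, "message": line} for line in lines if line][:10000],
    )
    ```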

    249 votes

    48 comments  ·  Other

  3. I'd like to be able to connect to my cluster directly from the UI and have a shell-like experience i.e. run queries and operational commands

    It would be great if I could connect to and run commands against my cluster via the Atlas UI so I don't have to rely on downloading and running the shell locally.

    I'd also be more confident I was running commands against the right cluster as I would have less reliance on managing connection strings and differentiating between them. This would be helpful if I'm not on my regular machine and need to do some ad-hoc querying or debugging.

    164 votes

    2 comments  ·  Other

  4. Option to restore one or more databases from a snapshot to a cluster. Right now, it involves a manual dump and restore

    Currently we can only restore a full cluster from a backup snapshot using the GUI. If we want to restore one or more specific databases, it requires a manual dump and restore from the backup. An option to restore specific databases to a cluster through the GUI would be very useful.
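
    For reference, this is roughly the manual workaround we use today: restore the snapshot to a temporary cluster, dump only the database we need, and restore it into the target. A Python sketch wrapping the Database Tools; the URIs and database name are placeholders, and it assumes a recent mongodump/mongorestore.

    ```python
    import subprocess

    SOURCE_URI = "<uri-of-cluster-restored-from-the-snapshot>"  # placeholder
    TARGET_URI = "<uri-of-target-cluster>"                      # placeholder
    DB_NAME = "mydb"                                            # placeholder

    # Dump only the database we care about into a gzipped archive.
    subprocess.run(
        ["mongodump", f"--uri={SOURCE_URI}", f"--db={DB_NAME}",
         "--archive=partial.gz", "--gzip"],
        check=True,
    )

    # Restore just that database's namespaces into the target cluster.
    subprocess.run(
        ["mongorestore", f"--uri={TARGET_URI}", f"--nsInclude={DB_NAME}.*",
         "--archive=partial.gz", "--gzip", "--drop"],
        check=True,
    )
    ```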

    162 votes

    37 comments  ·  Backup

    Hi all – We are currently developing collection and database-level restores. We expect this feature to be released in CY2025, but will update here once it is launched.

  5. Autoscaling with custom conditions to scale up (make the fixed 1-hour window adjustable) or down (make the fixed 24-hour window adjustable)

    Currently autoscaling uses predefined conditions:

    Scale up:
    - The average CPU utilization has exceeded 75% for the past hour.
    - The memory utilization has exceeded 75% for the past hour.

    Scale down:
    - The average CPU utilization and memory utilization over the past 24 hours are below 50%.
    - The cluster hasn't been scaled down (manually or automatically) in the past 24 hours.

    The following changes would save more resources and money:

    - The 24-hour scale-down window should be made flexible/adjustable to suit customer requirements.
    - The 1-hour scale-up window should be adjustable so clusters can scale up faster. This may need…

    159 votes

    7 comments  ·  Autoscaling

  6. Autoscaling improvements

    Hi, we have tried the beta of the autoscaling feature and have some thoughts on how to make it better. In its current setup it's not really suitable for our production workloads:

    1. Define separate scaling steps. At the moment the scaling step is always 1 (e.g. M10 -> M20), which is not suitable for burst loads where going one step up might not be enough. The same goes for rapid scaling down.

    Example:
    Scale range = M10 - M50
    Scale step up 4 = (M10 -> M50)
    Scale step down 2 =…

    149 votes

  7. Create databases and collections via API through Terraform

    Create databases and collections via the API through Terraform after the cluster has been created in Atlas. This would make it possible to write and run everything as a single script before any data is loaded.
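
    To be clear about the gap: today the provider can create the cluster, but a separate script still has to create the initial databases and collections, since a database only exists once it contains a collection. A PyMongo sketch of that post-apply step; the URI, names, and index are placeholders.

    ```python
    from pymongo import ASCENDING, MongoClient

    ATLAS_URI = "<srv-connection-string-from-terraform-output>"  # placeholder

    client = MongoClient(ATLAS_URI)
    db = client["app_db"]                                        # hypothetical database name

    # Explicit create_collection so options (validation, capped, etc.) could be set here.
    if "events" not in db.list_collection_names():
        db.create_collection("events")
    db["events"].create_index([("created_at", ASCENDING)])       # illustrative index

    client.close()
    ```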

    111 votes

  8. Force restart of unhealthy node

    Right now, there's a "test failover" option, which shuts down the primary and forces an election. However, the option is only available if the cluster is in a healthy state.

    If, for whatever reason, the cluster is unhealthy, it's impossible to manually restart the primary. It should be possible to force an election in an unhealthy state. Often, this is all that is required to get back into a healthy state (e.g. if the primary is in a CPU burning loop that was caused by an unexpected write pattern that has stopped.)
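
    For comparison, this is the existing behaviour the request builds on: the "test failover" action can be triggered programmatically, but, as above, only while the cluster is healthy. A sketch assuming the Atlas Administration API v1.0 restartPrimaries endpoint, with placeholder IDs and keys.

    ```python
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID = "<project-id>"                                # placeholder
    CLUSTER = "<cluster-name>"                               # placeholder
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    # Trigger a "test failover" (step down the primary and force an election).
    resp = requests.post(f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/restartPrimaries", auth=AUTH)
    print(resp.status_code, resp.text)   # rejected when the cluster is not in a healthy state
    ```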

    108 votes

    4 comments  ·  Other

  9. Administer user rights for the Billing area

    Hi.
    We would like the ability to administer user rights for the Billing area as well, in order to restrict which users can see invoices and grant this right only to the particular people who are directly responsible for them. Please consider such a feature for an upcoming release.

    84 votes

    13 comments  ·  Other

  10. Prometheus integration to use PrivateLink

    It is possible to integrate Prometheus into an Atlas project.
    However, to enable this integration, one needs to add Prometheus's IP address to the IP Access List.
    This procedure has two flaws:
    1. In some use cases Prometheus runs as pods, meaning its IP is ephemeral.
    2. Projects that work solely with PrivateLink enabled, with no open IP in the IP Access List, cannot use the Prometheus integration (already discussed with support).

    The improvement here is to make the Prometheus integration also work in "PrivateLink-only" mode.

    83 votes

  11. Allow an "Any Database" option for actions in custom roles

    Much like built-in roles can target all databases, it would be ideal if collection actions in custom roles could also target any database. Today, when adding collection actions to a custom role, leaving the "collection" field blank applies the actions to all collections in the specified database; it would be great if the "database" field could also be left blank (or an "any database" option added) so that the role's actions are allowed on any database.

    This feature gap creates unnecessary maintenance overhead for clusters with large numbers of databases. This is particularly…
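
    For comparison, self-managed MongoDB already expresses "any database, any collection" with empty strings in a privilege's resource document; the ask is essentially for Atlas custom roles to accept the same thing. A PyMongo sketch against a self-managed deployment (the role name and action are illustrative):

    ```python
    from pymongo import MongoClient

    client = MongoClient("<self-managed-uri>")   # placeholder; not an Atlas cluster

    # An empty db/collection in the resource means "all non-system collections in all databases".
    client.admin.command(
        "createRole",
        "findAnyCollection",                     # hypothetical role name
        privileges=[{
            "resource": {"db": "", "collection": ""},
            "actions": ["find"],
        }],
        roles=[],
    )
    ```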

    82 votes

    12 comments  ·  IAM

  12. Add alerts on disk IOPS and CPU IOWAIT %

    To help catch heavy disk workloads before they become problematic, it would be great to have alerts on:
    - disk IOPS percentage utilization for disks without burstable IOPS
    - burst credit utilization for disks with burstable IOPS
    - CPU IOWAIT %
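
    To make it concrete, here is roughly what such an alert could look like through the existing alert configurations API. This is a sketch only: it assumes the v1.0 OUTSIDE_METRIC_THRESHOLD payload shape, the metric name below is hypothetical (standing in for the IOWAIT/IOPS-utilization metrics this idea asks Atlas to expose), and the IDs, keys, and notification address are placeholders.

    ```python
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID = "<project-id>"                                # placeholder
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    alert_config = {
        "eventTypeName": "OUTSIDE_METRIC_THRESHOLD",
        "enabled": True,
        "metricThreshold": {
            "metricName": "SYSTEM_CPU_IOWAIT",               # hypothetical metric name
            "operator": "GREATER_THAN",
            "threshold": 20,
            "units": "RAW",
            "mode": "AVERAGE",
        },
        "notifications": [
            {"typeName": "EMAIL", "emailAddress": "oncall@example.com",
             "intervalMin": 60, "delayMin": 0},
        ],
    }

    resp = requests.post(f"{BASE}/groups/{GROUP_ID}/alertConfigs", json=alert_config, auth=AUTH)
    print(resp.status_code, resp.json())
    ```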

    73 votes

    0 comments  ·  Alerts

  13. Live Migrate Individual Namespaces

    Mongomirror allows you to migrate particular databases or collections using the --includeDB and --includeNamespace parameters:

    https://docs.atlas.mongodb.com/reference/mongomirror/#cmdoption-includedb
    https://docs.atlas.mongodb.com/reference/mongomirror/#cmdoption-includenamespace

    It would be helpful if Live Migrate allowed for use of these options as well to migrate subsets of data, instead of only migrating an entire cluster.
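
    For example, the flags above already allow a scoped run from the command line; the same options in Live Migrate would close the remaining gap. A sketch (wrapped in Python purely for illustration), with hosts, credentials, and namespaces as placeholders:

    ```python
    import subprocess

    # Mirror just the "sales" database; --includeNamespace scopes the run to individual
    # collections in the same way (e.g. "inventory.orders").
    subprocess.run(
        [
            "mongomirror",
            "--host", "sourceRS/source-host-0:27017,source-host-1:27017",
            "--destination", "atlasRS/target-shard-00-00.xxxxx.mongodb.net:27017",
            "--destinationUsername", "<atlas-user>",
            "--destinationPassword", "<atlas-password>",
            "--includeDB", "sales",
        ],
        check=True,
    )
    ```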

    71 votes

    3 comments  ·  Other

  14. Send Atlas logs to S3

    I would like to automatically send my cluster logs to my S3 bucket. I could then use Atlas Data Lake to query them and Charts to create visualizations on them, or inspect them with other tools.
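
    Today this means scripting it: pull the gzipped log through the Administration API and upload it to S3, where Data Lake or Charts can reach it. A rough Python sketch, assuming the v1.0 log download endpoint; the bucket, IDs, hostnames, and keys are placeholders.

    ```python
    import io

    import boto3
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID, HOSTNAME = "<project-id>", "<cluster-host>"    # placeholders
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    # Fetch one host's mongod log as a gzip archive.
    resp = requests.get(
        f"{BASE}/groups/{GROUP_ID}/clusters/{HOSTNAME}/logs/mongodb.gz",
        auth=AUTH,
        headers={"Accept": "application/gzip"},
    )
    resp.raise_for_status()

    # Drop it into S3 under a per-host prefix (bucket name is a placeholder).
    boto3.client("s3").upload_fileobj(
        io.BytesIO(resp.content), "<my-log-bucket>", f"atlas-logs/{HOSTNAME}/mongodb.gz"
    )
    ```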

    68 votes

  15. Atlas auto-scaling configurable alerts

    It would be useful, and would contribute to a complete alerting experience, to have auto-scaling event alerts configurable as project alerts. Users should be able to send these alerts to their target system of choice and avoid flooding specific users with notifications.

    Currently auto-scaling alerts are only sent as emails to project members (see https://docs.atlas.mongodb.com/cluster-autoscaling/#acknowledge-auto-scaling-events).

    Examples of configurable alerts:
    - Scale-up unsuccessful due to max cluster-tier too low -> change email recipient of these alerts
    - Scale-up event has occurred -> send alert via SMS and Slack.

    67 votes

    14 comments  ·  Alerts

  16. Time-based auto-scaling

    Our production server workload drops after 5 pm until 6 am CST the next morning, yet we run MongoDB at full capacity even during off-peak times. It isn't practical for someone to manually downscale the cluster during off-peak hours, and I don't think the CPU-utilization-based auto-scaling is working for us. I would like to be able to choose between CPU-based and time-based auto-scaling, set my own time window, and choose the scale-up/scale-down tiers.
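
    What we do in the meantime is crude: a scheduler job that changes the cluster tier at fixed times via the clusters API. A Python sketch, assuming the v1.0 modify-cluster endpoint; the project/cluster names, keys, provider, and tiers are placeholders.

    ```python
    import sys

    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
    GROUP_ID = "<project-id>"                                # placeholder
    CLUSTER = "<cluster-name>"                               # placeholder
    AUTH = HTTPDigestAuth("<public-key>", "<private-key>")   # programmatic API key (placeholder)

    def set_tier(instance_size: str) -> None:
        """Request a tier change, e.g. M30 for peak hours and M10 overnight."""
        body = {"providerSettings": {"providerName": "AWS",  # must match the cluster's provider
                                     "instanceSizeName": instance_size}}
        resp = requests.patch(f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}", json=body, auth=AUTH)
        resp.raise_for_status()

    # Run from cron, e.g. "0 6 * * * scale.py up" and "0 17 * * * scale.py down" (CST).
    if __name__ == "__main__":
        set_tier("M30" if sys.argv[1] == "up" else "M10")
    ```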

    64 votes

    7 comments  ·  Autoscaling

  17. Include cluster labels in the billing usage details

    Similar to how AWS includes tags/labels in their billing dashboard, I'd like the ability for my cluster labels to be included as a column in my billing usage details (or even the exported CSV).

    This would allow me to categorize my billing usage down to the level of individual clusters, in addition to the current per-project breakdown.

    63 votes

    5 comments  ·  Billing

  18. Suggested actions for alerts

    Alerts are often very cryptic and don't provide enough useful information. For example, an alert that queries are returning too many documents, without telling me the query or at least the collection, isn't actionable. I'm looking at a replication oplog window alert right now and I have no idea what to do about it.

    61 votes

    8 comments  ·  Alerts

  19. Add tunnelling to allow querying Cloud Backup snapshots

    Legacy Backups have a feature that allows connecting (tunnelling) to a snapshot in order to query it. This makes it possible to query a database snapshot, which is great for quickly inspecting past data while troubleshooting. It would be handy to have this for Cloud Backups too. With Cloud Backups I currently have to download a snapshot, load it temporarily onto a cluster, and remove it when done.

    57 votes

    16 comments  ·  Backup

  20. Display replica set members' Availability Zone details

    It would be helpful to display the AZ details (ideally including the AZ ID) for each replica set member in the Atlas UI, mainly to make it easy to verify that nodes are distributed across AZs.

    50 votes
