Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you

25 results found

  1. Auto downscale - Scaling down during peak moments

    The auto-downscale check looks at the average usage over the last 4 hours. Sometimes a cluster has very little usage for most of that window, but in the last 15-30 minutes it starts to consume more CPU (above 60%, or even higher).

    Because autoscaling only looks at the last 4 hours, a cluster is sometimes downscaled when it shouldn't be. A little while later it has to scale back up because usage climbs above 90% (or at least above 75%).

    It would be much better if the…

    2 votes

    0 comments  ·  Autoscaling

  2. Customize CPU threshold for Autoscaling

    Currently, to autoscale a cluster to the next tier, the "Average CPU Utilization must have exceeded 75% of available resources for the past hour."

    Problem:

    Some workloads remain near the 75% CPU threshold, given the nature of the workload. This can cause constant scaling events, which creates unnecessary noise.

    We don't want to turn autoscale off because this is a beneficial feature to keep enabled on the cluster.

    Solution:

    We want to be able to configure the CPU % threshold for cluster tier scaling. For example, rather than 75%, we'd like the option to set the threshold to…
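
    For illustration only, a hypothetical sketch of how such an option might look on a cluster's compute auto-scaling settings. The cpuThresholdPercent field does not exist in Atlas today and is invented here purely to show the request; the surrounding fields are modelled loosely on Atlas's existing compute auto-scaling options (min/max tier, scale-down toggle):

        # Hypothetical payload shape (Python); "cpuThresholdPercent" is the
        # proposed new setting and is not a real Atlas field today.
        requested_autoscaling = {
            "compute": {
                "enabled": True,
                "scaleDownEnabled": True,
                "minInstanceSize": "M30",
                "maxInstanceSize": "M60",
                "cpuThresholdPercent": 90,  # proposed: scale up at 90% instead of the fixed 75%
            }
        }
        print(requested_autoscaling)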

    14 votes

    2 comments  ·  Autoscaling

  3. Enable Replica Set Scaling Mode

    Currently, it's not possible to configure through Terraform the replica set scaling mode for shards, or the other settings that are available in the UI (see attached screenshot).

    1 vote

    0 comments  ·  Autoscaling

  4. Autoscaling with custom conditions to scale up (change 1 hour to an adjustable time) or down (change the fixed 24 hours to an adjustable time)

    Currently autoscaling uses predefined conditions:
    Scale up:
    The average CPU utilization has exceeded 75% for the past hour.
    The memory utilization has exceeded 75% for the past hour.
    Scale down:
    The average CPU utilization and memory utilization over the past 24 hours is below 50%.
    The cluster hasn't been scaled down (manually or automatically) in the past 24 hours.

    The following changes would save resources and money:

    - The 24-hour window for scaling down should be made flexible/adjustable to suit customer requirements.
    - The one-hour window for scaling up should be adjustable so clusters can scale up faster. This may need…
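
    To make the requested flexibility concrete, here is a minimal, self-contained sketch (Python, illustrative only; this is not how Atlas evaluates autoscaling internally) of scale-up/scale-down checks with adjustable windows and thresholds:

        # Illustrative decision logic with adjustable windows instead of the
        # fixed 1-hour scale-up / 24-hour scale-down windows described above.
        def _avg(window):
            return sum(pct for _, pct in window) / len(window)

        def should_scale_up(cpu_samples, window_minutes=30, threshold_pct=75):
            """cpu_samples: list of (minute, cpu_pct) pairs, one per minute, newest last."""
            window = cpu_samples[-window_minutes:]
            return bool(window) and _avg(window) > threshold_pct

        def should_scale_down(cpu_samples, window_minutes=6 * 60, threshold_pct=50):
            window = cpu_samples[-window_minutes:]
            return bool(window) and _avg(window) < threshold_pct

        # Example: an hour at ~40% CPU followed by a 30-minute spike at 85%.
        samples = [(i, 40) for i in range(60)] + [(60 + i, 85) for i in range(30)]
        print(should_scale_up(samples))    # True with a 30-minute window
        print(should_scale_down(samples))  # False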

    165 votes

    7 comments  ·  Autoscaling

  5. Limit the size of tmp files created as part of the $sort stage

    The docs say: "Starting in MongoDB 6.0, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default."

    These temporary files can take up all the disk space if storage auto-scaling isn't turned on, and crash the cluster.
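
    Until such a limit exists, one workaround is to disallow disk use on the aggregation itself, so a memory-hungry $sort fails fast instead of writing unbounded temporary files. A minimal sketch, assuming PyMongo, a local mongod, and a hypothetical "events" collection:

        # Sketch: make the query fail instead of spilling $sort to disk.
        from pymongo import MongoClient
        from pymongo.errors import OperationFailure

        coll = MongoClient("mongodb://localhost:27017")["test"]["events"]
        pipeline = [{"$sort": {"createdAt": -1}}]
        try:
            # allowDiskUse=False keeps the $sort in memory (subject to the
            # 100 MB limit) and raises an error rather than writing temp files.
            results = list(coll.aggregate(pipeline, allowDiskUse=False))
        except OperationFailure as exc:
            print("Sort exceeded the in-memory limit:", exc)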

    1 vote

    0 comments  ·  Autoscaling

  6. Storage autoscaling should also occur when Atlas Search indexes are being rebuilt

    When the storage requirement exceeds the available storage while Atlas Search indexes are being rebuilt, a warning is issued and manual intervention is needed.

    I would like storage autoscaling to kick in when it is needed for rebuilding Atlas Search indexes. This would save us time and remove the need to react to warnings.

    1 vote

    0 comments  ·  Autoscaling

  7. Modify MongoDB Autoscaling Logic for CPU Usage Interval

    Currently, the autoscaling feature in MongoDB is configured to initiate scaling based on the last 1-hour CPU usage. The trigger for autoscaling is set at 75% CPU usage within this hourly interval. However, there is a requirement to adjust this logic to a more granular level.

    Request:

    The request is to modify the autoscaling logic in MongoDB to consider the last 30 minutes of CPU usage instead of the current 1-hour interval. Specifically, scaling should be triggered if the CPU usage exceeds 75% within the last 30 minutes.

    1 vote

    0 comments  ·  Autoscaling

  8. dynamic auto-downscale (configurable)

    MongoDB has a fixed threshold for auto-downscale (50%), but sometimes it is necessary to downscale at a different level; in our case, the clusters sit at 55-60% overnight. It would be useful for us if the downscale percentage threshold were configurable.
    If I configure 60%, I know that every night the cluster will auto-downscale to the previous tier.

    4 votes

    0 comments  ·  Autoscaling

  9. Allow autoscale for connections

    Allow autoscale for connections

    13 votes

    1 comment  ·  Autoscaling

  10. GCP eu-southwest-1 Madrid Atlas region

    We would love to deploy our cluster in the recently launched GCP eu-southwest-1 (Madrid) region. It is already live, and it would be a plus for Spanish customers by reducing latency, but it does not yet appear on the Atlas cluster creation page.

    2 votes

    0 comments  ·  Autoscaling

  11. Exclude nodes from Autoscaling

    We run heavy analytics, reports, and queries against our analytics node, and it is expected that the analytics node is sometimes under intense CPU or memory pressure.

    Current behaviour: when I enable autoscaling, all the nodes in my cluster scale up when there is heavy load.
    Expected behaviour: I would like my analytics node to slow down, or the query to time out, rather than the entire cluster scaling up.

    Therefore, I should have an option to exclude the analytics node's metrics from the autoscaling decision.

    7 votes

    1 comment  ·  Autoscaling

  12. Time based auto scaling

    Our production workload drops after 5 pm and stays low until 6 am CST the next morning, yet we run MongoDB at full capacity even during off-peak hours. It's not practical for someone to manually downscale the cluster during off-peak times, and I don't think CPU-utilization-based auto-scaling is working for us. I would like to choose between CPU-based and time-based auto-scaling, set my own time window, and choose when to scale up and scale down.

    68 votes

    7 comments  ·  Autoscaling

  13. Scaling cluster connections independent of cluster size/tier

    Currently, whenever you want to increase Atlas cluster connections, you have to scale up the cluster tier. When an application doesn't require additional disk storage but only additional connections, the additional cost is difficult to justify.

    17 votes

    2 comments  ·  Autoscaling

  14. See the history of autoscaling

    Most of my clusters keep autoscaling (scaling up and down) 3 to 4 times a week. I would like to see a history of when a cluster scaled up or down, along with the scaling reason (e.g. high CPU on the analytics node) and the time it took to complete the scaling transition.

    4 votes

    1 comment  ·  Autoscaling

  15. Allow auto-scale for IOPS.

    Allow auto-scale for IOPS.

    47 votes

    5 comments  ·  Autoscaling

  16. Add "scheduled scaling/suspend" option for certain clusters

    There should be an option to schedule scaling up/down or suspending/resuming at a specified time and day.

    For example, DEV environments could be auto-suspended at 6pm and resume at 8am on weekdays, and be suspended all weekend.

    Production clusters could be scaled down e.g. from M80->M60 every evening and on weekends rather than being suspended completely.

    This would help save money but would need to be coordinated with the existing auto-scaling based on load.

    This would need to be at the cluster level since sometimes different clusters in a single project could have different workload patterns.
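
    A minimal sketch of the kind of per-cluster schedule this implies (Python, illustrative only; the cluster names, tiers, and times are made up, and a real implementation would still have to coordinate with load-based auto-scaling as noted above):

        # Illustrative schedule logic: decide each cluster's desired state from
        # the day of week and local hour. This is not an existing Atlas feature.
        from datetime import datetime

        SCHEDULE = {
            "dev-cluster":  {"weekday_day": "RUNNING", "otherwise": "SUSPENDED"},
            "prod-cluster": {"weekday_day": "M80",     "otherwise": "M60"},
        }

        def desired_state(cluster, now=None):
            now = now or datetime.now()
            is_weekday = now.weekday() < 5      # Monday-Friday
            is_daytime = 8 <= now.hour < 18     # 8am-6pm
            key = "weekday_day" if (is_weekday and is_daytime) else "otherwise"
            return SCHEDULE[cluster][key]

        for name in SCHEDULE:
            print(name, "->", desired_state(name))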

    34 votes

    5 comments  ·  Autoscaling

  17. Profiles for Autoscale heuristics

    Case #00731969

    Is it possible to tweak the scale up settings so that my clusters scale up faster than they do today?

    I think it is best to analyze my clusters, figure out how long they stay above 70%, and average that out; if utilization is equal to or greater than that average, I would like my clusters to scale up.

    Once scaled up, I want the cluster to remain scaled up for at least the next ~2 hours, and only scale down if usage drops low enough to justify it.

    Right now, the auto…

    7 votes

    2 comments  ·  Autoscaling

  18. Autoscaling - CPU/memory utilization duration monitor (minutes)

    Hi, currently there is no option to initiate autoscaling when a heavy workload or burst of requests arrives and CPU/memory utilization is at 75% even within:
    - 5 minutes
    - 15 minutes
    - 30 minutes
    - 45 minutes

    The system has to wait for 1 hour to autoscale to the next tier. This is one of the concerns for heavy-workload applications.

    Idea:

    Add a custom duration option, in minutes, when monitoring CPU/memory utilization.

    Example: when CPU/memory utilization is above 75% for the past 15 minutes, move the cluster to the next tier…

    32 votes

    0 comments  ·  Autoscaling

  19. Autoscaling improvements

    Hi, we have tried the beta of the autoscaling feature and have some thoughts on how to make it better. In its current form it's not really suitable for our production workloads:

    1. Define separate scaling steps. At the moment the scaling step is always 1 (e.g. M10 -> M20), which is not really suitable for burst loads where going one step up might not be enough. The same goes for rapid scaling down.

    Example:
    Scale range = M10 - M50
    Scale step up 4 = (M10 -> M50)
    Scale step down 2 =…
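
    A small sketch of the multi-step behaviour being asked for (Python, illustrative only; the tier ladder and step sizes are just the example values above and say nothing about how Atlas actually scales):

        # Illustrative only: jump several tiers at once within a configured range,
        # e.g. scale range M10-M50 with a step of 4 going up and 2 coming down.
        TIERS = ["M10", "M20", "M30", "M40", "M50"]

        def scale(current, step, min_tier="M10", max_tier="M50"):
            lo, hi = TIERS.index(min_tier), TIERS.index(max_tier)
            target = TIERS.index(current) + step          # step may be negative
            return TIERS[max(lo, min(hi, target))]        # clamp to the allowed range

        print(scale("M10", +4))   # M10 -> M50 in one scaling event
        print(scale("M50", -2))   # M50 -> M30 when scaling back down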

    151 votes

  20. Auto-scaling on IOPS, like storage auto-scaling

    IOPS could be automatically scaled up when they reach the maximum and stay at that level for more than 1 hour.

    4 votes

    0 comments  ·  Autoscaling
