Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you

1360 results found

  1. Existing Owner business

    Please SOLVE my billing

    2 votes

    0 comments · Billing

  2. Change Streams Monitoring and Alerting

    Change streams can cause performance issues if not used properly. In some cases, administrators of multi-tenant databases have no control (and shouldn't) over how various clients create change streams.

    I think it is important that we accommodate these use cases and provide useful metrics in the OM/Atlas metrics pages, along with alerts on those metrics. Some potential metrics:
    1. Number of change streams open
    2. Average change stream lifetime
    3. Query targeting ratios for change streams
    4. Average time between consecutive polls of the change stream (and other statistics)
    -- the thought here is that change streams that are polled infrequently will result in…
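
    The first metric can be roughly approximated today from $currentOp; the mongosh sketch below is an assumption-laden illustration (it assumes idle cursors expose their originating command and that the $-prefixed pipeline field can be matched by path), not an official metric:

    // Rough count of change streams open on this node: list current operations,
    // include idle cursors, and keep those whose originating command starts with
    // a $changeStream stage. Run against the admin database.
    db.getSiblingDB("admin").aggregate([
      { $currentOp: { allUsers: true, idleCursors: true } },
      { $match: { $or: [
          { "cursor.originatingCommand.pipeline.0.$changeStream": { $exists: true } },
          { "originatingCommand.pipeline.0.$changeStream": { $exists: true } }
      ] } },
      { $count: "openChangeStreams" }
    ])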

    10 votes

  3. 1 vote

    0 comments · Other

  4. create roles with rights on wildcard database like collections

    Example:

    use admin
    db.createRole(
      {
        role: "UserCanCreateDbTest",
        privileges: [
          { resource: { db: "test", collection: "" }, actions: [ "update", "insert", "remove" ] },
          { resource: { db: "test", collection: "" }, actions: [ "find" ] }
        ],
        roles: [
          { role: "read", db: "admin" }
        ]
      },
      { w: "majority", wtimeout: 5000 }
    )

    https://stackoverflow.com/questions/30462767/mongodb-grant-all-with-wildcard-role-like-mysql
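
    As a usage follow-up, a minimal sketch of attaching the role above once it exists ("appUser" is a hypothetical user name used only for illustration):

    use admin
    // Grant the custom role defined above to an existing user.
    db.grantRolesToUser("appUser", [ { role: "UserCanCreateDbTest", db: "admin" } ])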

    2 votes

    0 comments · Other

  5. Uniform approach to Dedicated and Tenant clusters in terraform mongodbatlas_advanced_cluster

    Our software development lifecycle has 5 environments. We also wish to spin up dedicated, short-lived environments for individual developers. As a result, we'd like to have two M40s, an M10 and two M5s for our SDLC, plus the ability to spin up M0s for the devs.

    The way you've implemented the terraform mongodbatlas_advanced_cluster resource makes it exceptionally difficult to use the same code across all environments. This isn't in keeping with best practice for terraform implementations, which is to use the same code and only change the variables. Might I suggest you add provider_type and allow…

    1 vote

  6. Handle Duplicate data in Timeseries collection

    It would be helpful to introduce, in an upcoming release, an optional add-on capability for time series collections that overwrites a document on insert if one with the same "_id" already exists. With a huge number of sensors, data should be written blindly into the time series collection without first querying for duplicates (conditional inserts will definitely impact performance).
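
    To make the cost concrete, a minimal mongosh sketch of the check-then-insert pattern this request wants to avoid (the collection name "readings" and the document shape are assumptions for illustration):

    // Check-then-insert: every write pays an extra lookup just to detect a
    // duplicate _id, which is the overhead a native overwrite-on-_id option
    // for time series collections would remove.
    const reading = { _id: "sensor-42|1700000000", timestamp: new Date(), temperature: 21.5 };
    if (db.readings.findOne({ _id: reading._id }) === null) {
      db.readings.insertOne(reading);
    }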

    1 vote

    0 comments · Data API

  7. Set "Cluster Termination" ON as default for an Atlas organisation

    Have a button in the Atlas org settings to toggle Cluster Termination Protection ON/OFF.

    1 vote

    0 comments · Other

  8. dynamic auto-downscale (configurable)

    MongoDB has a fixed threshold for auto-downscaling (50%), but sometimes it is necessary to downscale at a different level; in our case, for example, the clusters sit at 55-60% at night. It would be productive for us if the downscale threshold were configurable.
    If I configure 60%, I know that every night the cluster will auto-downscale to the previous instance size.

    5 votes

    0 comments · Autoscaling

  9. CDC

    We want the CDC (change data capture) information stored as JSON or Parquet in bucket storage, for retrieval at a later point to understand the data transformations that have happened in our application's transactional collections.
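
    For context, the raw events can already be captured with a change stream in mongosh; the sketch below (the collection name "orders" is an assumption) shows the capture side only, while landing the output in bucket storage as JSON or Parquet still requires external tooling, which is the gap this idea asks to close:

    // Tail a change stream and print each event as one Extended JSON document;
    // an external process could ship these events to object storage.
    const watchCursor = db.orders.watch([], { fullDocument: "updateLookup" });
    while (!watchCursor.isClosed()) {
      let event = watchCursor.tryNext();
      while (event !== null) {
        printjson(event);          // one change event (insert/update/delete/...)
        event = watchCursor.tryNext();
      }
    }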

    4 votes

  10. Private endpoint termination protection

    Similar to the Cluster Termination Protection, it would be nice to have Private Endpoint termination protection to prevent accidental deletes of private endpoints, which could very likely result in application connectivity loss and downtime.

    6 votes

  11. Webhook

    Currently, alert notification configuration allows webhook targets with a secret. This limits which webhooks can be configured: there are many webhooks that do not support a secret and instead expect authentication details to be passed as headers.

    This request is to support an additional set of headers to be passed/added in the webhook configuration for alert notifications.

    For example, during alert notification configuration I should be allowed to configure additional headers such as Authorization: "Basic cdge....==" and Source: "Atlas" alongside the webhook URL, which the target then uses to authorise the POST call made by…

    7 votes

    0 comments · Alerts

  12. Improved Metrics | Memory usage by collection

    The Profiler is a great tool, but it does have limitations. The metrics there don't really tell us what's driving I/O. For example, we might have these two queries:

    1. a query that scans 10000 documents that active users trigger, but the queried documents are the same for all users
    2. a query that scans 100 documents that active users trigger, but the queried documents are different for all users

    If you have 1000s or more active users, the working set (memory usage & i/o) is being driven by the second query. And, it probably performs better than the first query in…

    1 vote

  13. Kindly see how you can accommodate dedicated pricing for countries, especially in Africa, where dollar exchange rates are very high

    Minimize pricing for the Dedicated service; the hourly rate multiplied by 3 server nodes is high, especially for startups in Africa.

    1 vote

    0 comments · Billing

  14. grpc

    You should create a gRPC API like AstraDB has already done. gRPC is the ideal way for a serverless app to connect to a database: you get the best of both worlds, native driver performance without needing to worry about database connections.

    1 vote

    0 comments · Data API

  15. Support terraform plan with ORG_READ_ONLY role

    An API key with ORG_READ_ONLY should be sufficient to run a terraform plan. After all, its description is "Provides read-only access to the settings, users, projects, and billing in the organization."
    However, this is not the case: checking the settings for "Cloud Provider Access" [1] and "Encryption at Rest" [2] fails due to missing permissions. Read-write project permissions like GROUP_OWNER on each project are required.

    [1] https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Cloud-Provider-Access/operation/listCloudProviderAccessRoles
    [2] https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management/operation/getEncryptionAtRest

    5 votes

  16. Google Private Service Connect

    Greetings from Fivetran!

    This is somewhat related to https://feedback.mongodb.com/forums/924145-atlas/suggestions/45272014-allow-customers-to-specify-the-number-of-service-a . Having 50 service attachments, which requires 50 IP addresses for each PSC, is not scalable for us. We have a large customer base, and having each customer create a PSC would require a lot of IP addresses and would quickly exhaust our subnets.

    From the support case, it seems the decision to use 50 PSC attachments comes from the fact that the GCP load balancer does not allow more than one pool of servers per service attachment, and that the ports are passed through as-is, as opposed to AWS…

    5 votes

    1 comment · Other

  17. Select a copy snapshot from a different region to restore, for testing DR from copy snapshots

    We have the "Additional backup policy" feature enabled, which copies our snapshots to another region. As part of DR testing we want to specifically select the copy snapshot in the other region to restore from. In the current cluster I don't see an option to select a snapshot from the other region.

    7 votes

    1 comment · Backup

  18. Ability to quickly filter backup activity from the project activity feed

    The activity feed is often clogged with backup activities. It would be extremely useful to be able to quickly filter out the common 4 or 5 project activities that are related to a backup snapshot being taken.

    3 votes

  19. Improve the sample code for Connecting with MongoDB Driver

    When you click the Connect button in Atlas and get to the 'Connecting with MongoDB Driver' screen, the Java sample doesn't use MongoClientSettings.builder() to set writeConcern and retryWrites. These are set in the connection string instead, even though the builder provides both settings.
    I suggest using only MongoClientSettings.builder() for better type safety.

    1 vote

  20. Restrict whitelist ip

    I would like an option to restrict what users can whitelist in the network settings for a project. I want an option like "only users with Owner access can create non-temporary IP whitelist entries" so we can avoid developers adding IP addresses that get outdated and pose a potential security risk.

    1 vote

    0 comments · Other
