
Atlas

Share your idea. In order to help prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you


73 results found

  1. If you select "Collections" on a database, and you don't have at least "Project Data Access Read Only" access, you get an unhelpful message

    If you don't have the access rights to read data, instead of a suitable message, you get "An error occurred while querying your MongoDB deployment.
    Please try again in a few minutes."
    Totally unhelpful, as trying again won't fix your permissions!

    1 vote  ·  0 comments  ·  Data Explorer

  2. Add v1.5 API support to Terraform to support asymmetric hardware

    We use a base tier MongoDB cluster (M20) and an analytics tier (M30), where they are of different sizes due to different business requirements.

    Currently this is not supported by Terraform unless both tiers use the same hardware (i.e. both are set to M20 or both to M30).

    See the error message below.

    Error: error reading MongoDB Cluster (development): GET https://cloud.mongodb.com/api/atlas/v1.0/groups/1234567890/clusters/development: 400 (request "ASYMMETRICHARDWAREINVALID") Asymmetric hardware is not supported by the v1.0 API. Please use the v1.5 API instead. Documentation for the v1.5 API is available at https://docs.atlas.mongodb.com/reference/api/clusters-advanced/.

    Please add v1.5 API support to Terraform to support asymmetric hardware.
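
    For reference, this is roughly the configuration we would like to be able to express. A minimal sketch, assuming a resource backed by the clusters-advanced (v1.5) API along the lines of the provider's mongodbatlas_advanced_cluster; the project variable, names, regions and sizes are illustrative:

    resource "mongodbatlas_advanced_cluster" "development" {
      project_id   = var.project_id   # illustrative variable
      name         = "development"
      cluster_type = "REPLICASET"

      replication_specs {
        region_configs {
          provider_name = "AWS"
          region_name   = "US_EAST_1"
          priority      = 7

          # Base (operational) tier
          electable_specs {
            instance_size = "M20"
            node_count    = 3
          }

          # Analytics tier on different hardware: the asymmetric part
          analytics_specs {
            instance_size = "M30"
            node_count    = 1
          }
        }
      }
    }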

    1 vote

  3. Allow OpLog Backups in Atlas M0-M5 Shared-Tier clusters running on MongoDB v.5.x

    I have a backup process that is taking hourly dumps of the oplog on my production cluster running MongoDB v.5.0.6.

    It works fine using older versions of mongodump. However, when I update my tools to any version higher than 100.3.1 (100.4.x and above), my oplog backups fail with the following error:

    CMD : 2022-02-16T15:39:03.186-0500  Failed: error creating intents to dump: error counting local.oplog.rs: (Location40602) $collStats is only valid as the first stage in a pipeline.
    

    According to Atlas Support, this issue is limited to M0-M5 clusters and will need to be addressed by the Atlas development team.

    Can I ask…

    1 vote  ·  0 comments  ·  Backup

  4. Allow providing names/descriptions for peering connections

    If a user has three peering connections in a project, terminating one is error prone, as you may end the connection to the wrong VPC and cause a disaster. Adding a description field would make the purpose of each peering clearly visible.

    1 vote  ·  0 comments  ·  Other

  5. Enable and disable balancing on a collection

    The shell commands sh.enableBalancing() and sh.disableBalancing() are not permitted on Atlas-hosted MongoDB. Is it possible to grant this permission?
    Currently they fail with the following error:

    uncaught exception: Error: command failed: {
    "ok" : 0,
    "errmsg" : "not authorized on config to execute command { update: \"collections\", ordered: true, writeConcern: { w: \"majority\", wtimeout: 60000.0 }, lsid: { id: UUID(\"ed83ed26-e07b-4f34-a90a-a145bbe58a48\") }, $clusterTime: { clusterTime: Timestamp(1646788928, 17), signature: { hash: BinData(0, 347249E35ED43BDADAE12FB0F794C091CB86206E), keyId: 7052082196881866761 } }, $db: \"config\" }",
    "code" : 13,
    "codeName" : "Unauthorized",
    "operationTime" : Timestamp(1646788928, 18),
    "$clusterTime" : {
    "clusterTime" : Timestamp(1646788928, 18),
    "signature" : {
    "hash" : BinData(0,"NHJJ417UO9ra4S+w95TAkcuGIG4="),
    "keyId"

    1 vote  ·  0 comments  ·  Other

  6. Connecting from Ubuntu to MongoDB Atlas is terrible

    kamal@zehan:~/Desktop/mongodb$ mongo "mongodb+srv://cluster0.d9ct7.mongodb.net/myFirstDatabase" --username mongo
    MongoDB shell version v5.0.5
    Enter password:
    connecting to: mongodb://cluster0-shard-00-00.d9ct7.mongodb.net:27017,cluster0-shard-00-02.d9ct7.mongodb.net:27017,cluster0-shard-00-01.d9ct7.mongodb.net:27017/myFirstDatabase?compressors=disabled&gssapiServiceName=mongodb&ssl=true

    *** You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP allowlist allows connections from your network.
    Error: bad auth : Authentication failed. :
    connect@src/mongo/shell/mongo.js:372:17
    @(connect):2:6
    exception: connect failed
    exiting with code 1

    1 vote  ·  1 comment  ·  Other

  7. Metrics charts x-axis showing more than 24 hours of data should be labeled according to the scale

    Currently, when the mouse is not hovering over a Metrics plot, the plot shows particular hours as labels on the x-axis. For example, if an 8-hour range of time is displayed, a time label appears every two hours.

    However, if more than one day is displayed, the x-axis labels are less useful. A particular time is shown once per day, but the day itself is not included. For example, when I display a week of data, 07:00 is highlighted on each particular day,…

    2 votes

  8. Add a datasource for members of an organization

    Problem: The mongodbatlas_teams resource only works with email addresses of user accounts which are existing members of the organization at apply time¹, and Team memberships can't (yet?) be managed via Identity Federation (third-party SAML IdP).

    Solution: If there were a datasource using the "Get All Organization Users" API² to return the list of organization members, that data could be used to filter the usernames input attribute of the mongodbatlas_teams resource so that only valid users are added (see the sketch below).

    Workaround: We're using a Python script as an external datasource to get the data needed to perform said filtering.

    ¹: An error is thrown when…
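
    For illustration, a rough HCL sketch of how such a datasource could be consumed. The mongodbatlas_organization_users data source and its results/username attributes are hypothetical (that is the ask), while the mongodbatlas_teams resource, toset() and setintersection() already exist; the variables are illustrative:

    # Hypothetical data source wrapping the "Get All Organization Users" API
    data "mongodbatlas_organization_users" "all" {
      org_id = var.org_id   # illustrative variable
    }

    resource "mongodbatlas_teams" "devs" {
      org_id = var.org_id
      name   = "developers"

      # Keep only usernames that are already members of the organization,
      # so apply does not fail on accounts that have not joined yet.
      usernames = setintersection(
        toset(var.desired_team_members),   # illustrative variable
        toset([for u in data.mongodbatlas_organization_users.all.results : u.username])
      )
    }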

    1 vote

  9. Data migration from Cloud Manager to Atlas

    With reference to case https://support.mongodb.com/case/00918578, we experienced a situation where the migration was stuck eternally. If the validation checks fail, it is only fair and logical to fail the migration with a relevant error, rather than have the status falsely claim that your shard migration is in progress.

    Also, emit status updates on the migration, such as pre-validation checks passed, syncing data, and % complete on the data migration or initial sync, rather than just a green bar, plus anything else that is happening during migration (e.g. building indexes). Basically, make the status as readable and logical as…

    1 vote  ·  0 comments  ·  Other

  10. Data Usage Reporting & Improved Profiler

    The Current Problem:
    The existing issue stems from MongoDB's inability to provide concrete evidence supporting the data charges for your data usage. This predicament becomes especially troublesome when your system typically operates within a data usage threshold of less than 100GB daily. Suddenly, over a span of 7 days, you are billed for data usage exceeding 1000-2000GB daily, only to subsequently revert to using less than 100GB daily. The absence of substantiated evidence leaves you in a quandary, unsure whether the issue lies with your system or is a reporting error on MongoDB's part. MongoDB Support relies on the slowest…

    1 vote  ·  0 comments  ·  Billing

  11. Terraform Downward Autoscaling "Use Smallest possible" placeholder

    When setting provider_auto_scaling_compute_min_instance_size, is it possible to either:

    • Loosen the validation by ignoring the current disk configuration size, or
    • Create a placeholder to indicate "use the smallest possible value"?

    Atlas appears to keep an invalid value, provided it was valid at the time it was entered.
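
    As a minimal sketch of the idea (attribute names as used by the mongodbatlas_cluster resource; the "smallest possible" placeholder value is hypothetical, since that is the request, and the project variable is illustrative):

    resource "mongodbatlas_cluster" "app" {
      project_id                  = var.project_id   # illustrative variable
      name                        = "app"
      provider_name               = "AWS"
      provider_region_name        = "EU_WEST_1"
      provider_instance_size_name = "M30"

      auto_scaling_compute_enabled            = true
      auto_scaling_compute_scale_down_enabled = true

      # Today this must be a tier that is still valid for the cluster's current
      # disk configuration, otherwise validation rejects the definition.
      provider_auto_scaling_compute_min_instance_size = "M10"

      # Proposed alternative: a placeholder meaning "scale down as far as Atlas allows",
      # e.g. provider_auto_scaling_compute_min_instance_size = "SMALLEST"  (hypothetical value)
    }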


    I think this will help simplify deployment of provider_auto_scaling_compute_min_instance_size: we set the ideal value, the machine interprets the best match, and we remove the possibility of a once-valid Terraform definition becoming invalid without any updates to the definition. Like the following example,
    1. I define an…

    1 vote

  12. Support safe handling of shared project IP access list entries in Terraform (prevent deletes when the same IP is used by multiple services).

    Description:
    We are facing an issue managing project-level IP access lists in MongoDB Atlas when multiple services/profiles share the same IP address.

    Scenario:

    • One Atlas Project.
    • Service is deployed with different cluster profiles (e.g., integration, testing).
    • Each profile’s Terraform stack provisions:
      • An Atlas cluster (mongodbatlas_advanced_cluster)
      • A backup schedule (mongodbatlas_cloud_backup_schedule)
      • A project IP access list entry (mongodbatlas_project_ip_access_list) using the same IP address.

    Behavior Observed:

    • Integration profile deployment: Creates cluster, backup, and adds Gateway IP to the project access list. ✅
    • Testing profile deployment (same service, different profile): Creates a second cluster and backup.
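
    For context, a minimal sketch of the overlapping resource: both profile stacks declare the same entry against the same project, so destroying one stack removes an entry the other still relies on (the project variable and IP address are illustrative):

    # Declared independently in the integration stack and in the testing stack
    resource "mongodbatlas_project_ip_access_list" "gateway" {
      project_id = var.project_id     # same project for both profiles
      ip_address = "203.0.113.10"     # shared gateway IP
      comment    = "Service gateway"
    }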

    1 vote