Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you

156 results found

  1. Terraform feature request: Cluster Termination Protection

    Termination Protection has been added. It would be great to add support for it in the Terraform provider, so clusters can be provisioned and configured entirely via Terraform.

    Without this support, we need to perform some configuration manually in the UI.
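
    A rough sketch of what this could look like once supported; the termination_protection_enabled attribute name is an assumption, not confirmed provider syntax:

      resource "mongodbatlas_cluster" "example" {
        project_id                  = var.project_id
        name                        = "example-cluster"
        provider_name               = "AWS"
        provider_region_name        = "US_EAST_1"
        provider_instance_size_name = "M10"

        # Assumed attribute: would block cluster termination (via terraform
        # destroy or the API) until the flag is cleared.
        termination_protection_enabled = true
      }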

    15 votes

  2. mongodbatlas_serverless_privatelink_endpoint

    I'm trying to import a serverless private link endpoint, but I have had no success with the current resource "mongodbatlas_privatelink_endpoint". As I investigated, I saw that "mongodbatlas_privatelink_endpoint" uses the "private endpoint" API, not the "serverless private endpoint" API referenced at https://www.mongodb.com/docs/atlas/reference/api/serverless-private-endpoints/, which is why it is unable to get the resource I want to import.

    It would be good to have a separate resource for the serverless private endpoint because it is the only way to securely connect to AWS without using a network peering connection. As of this writing, network peering is not yet…
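
    A minimal sketch of what a dedicated serverless resource could look like; the mongodbatlas_privatelink_endpoint_serverless resource name is hypothetical, modelled on the existing mongodbatlas_privatelink_endpoint but backed by the serverless private endpoints API:

      # Hypothetical resource name and arguments.
      resource "mongodbatlas_privatelink_endpoint_serverless" "example" {
        project_id    = var.project_id
        instance_name = "serverless-instance-1"
        provider_name = "AWS"
      }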

    3 votes

  3. Immutable backups

    Currently, Atlas MongoDB backups are stated to be immutable; however, that is not true because there is no object lock on the S3 bucket.

    We would like to request the option to enable Object Lock on the S3 bucket where our snapshots are stored, which would ensure that snapshots can only be deleted by the retention policy and cannot be modified or deleted by anyone else. This is to line up with WORM compliance when dealing with financial data.

    https://www.telemessage.com/what-is-worm-compliance-and-when-is-it-needed/

    https://aws.amazon.com/blogs/storage/protecting-data-with-amazon-s3-object-lock/
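
    For illustration, this is roughly what S3 Object Lock looks like in Terraform for a customer-managed bucket (the bucket name is a placeholder); the request is for Atlas to apply the same protection to the buckets holding its snapshots:

      resource "aws_s3_bucket" "snapshots" {
        bucket              = "example-atlas-snapshot-bucket"
        object_lock_enabled = true
      }

      resource "aws_s3_bucket_object_lock_configuration" "snapshots" {
        bucket = aws_s3_bucket.snapshots.id

        rule {
          default_retention {
            # COMPLIANCE mode: objects cannot be modified or deleted by any
            # user until the retention period expires.
            mode = "COMPLIANCE"
            days = 30
          }
        }
      }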

    8 votes

    0 comments  ·  Backup

    Hello,

    I am pleased to announce that we have released our backup feature called Backup Compliance Policy, which protects your backups from being deleted by any user, ensuring WORM compliance and full immutability (backups cannot be edited, modified, or deleted) automatically in Atlas.


    Backup Compliance Policy allows organizations to configure a project-level policy to prevent the deletion of backups before a predefined period, guarantee all clusters have backup enabled, ensure that all clusters have a minimum backup retention and schedule policy in place, and more.


    With these controls, you can more easily satisfy data protection requirements (e.g., AppJ, DORA, immutable / WORM backups, etc.) without the need for manual processes.


    Please note that the Backup Compliance Policy cannot be disabled without MongoDB Support once enabled, so please make sure to read our documentation thoroughly before enabling it.

  4. Having tag/label in the Atlas UI

    Hello,

    The Atlas portal used to view our organizations, projects, deployments, etc. does not offer tag/label functionality at the visual interface level.

    This is problematic when managing multiple organizations, projects, and deployments. We need to be able to put metadata on these objects/components (organization, project, deployment, user, custom group, API key, network address) in order to properly manage our inventory.

    In Azure, this notion of "tag" is very useful and can be used on all types of components (See attached image which presents tags that we have on an Azure component used for encryption at rest).

    We would…

    5 votes

  5. Shared Clusters upgrade to MongoDB 6.0+

    Currently MongoDB 6.0.1 is only available in dedicated clusters (M10+) and in serverless clusters.

    We need to use MongoDB 6.0.1 or higher in the shared cluster (M0/M2/M5).

    I understand that this should be on the roadmap anyway, but getting it sooner rather than later would be great.

    2 votes

    completed  ·  1 comment  ·  Other

  6. Do not delete the most recent backup when the DB is deleted

    After a cluster is terminated in MongoDB Atlas, its backups disappear with it. It would be good to preserve the most recent backup of the database. Otherwise, there is no point in having backups if the DB cannot be recovered after an accidental deletion.

    2 votes

    completed  ·  0 comments  ·  Backup

  7. Terraform Serverless VPC Endpoint configuration

    Create the equivalent of mongodbatlas_privatelink_endpoint but for serverless.
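
    A sketch of how a serverless endpoint could then be attached to a customer-side AWS interface endpoint; the mongodbatlas_privatelink_endpoint_service_serverless resource name and arguments are assumptions modelled on the dedicated-cluster resources:

      # Hypothetical resource; variable values are placeholders.
      resource "mongodbatlas_privatelink_endpoint_service_serverless" "example" {
        project_id                 = var.project_id
        instance_name              = "serverless-instance-1"
        endpoint_id                = var.atlas_serverless_endpoint_id # ID returned by the Atlas-side endpoint resource
        cloud_provider_endpoint_id = var.aws_vpc_endpoint_id          # the customer-side AWS interface endpoint
        provider_name              = "AWS"
      }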

    8 votes

  8. Enable setting encryption at rest details for project

    Please allow us to set the encryption at rest KMS details for the project when we create the project.
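
    For reference, encryption at rest can currently only be configured as a second step after the project exists; a rough sketch of today's two-resource approach (the KMS details are placeholders), which this request would collapse into project creation:

      resource "mongodbatlas_project" "example" {
        name   = "encrypted-project"
        org_id = var.org_id
      }

      # Applied as a separate resource after the project has been created.
      resource "mongodbatlas_encryption_at_rest" "example" {
        project_id = mongodbatlas_project.example.id

        aws_kms_config {
          enabled                = true
          customer_master_key_id = var.kms_key_id        # placeholder
          region                 = "US_EAST_1"
          role_id                = var.atlas_aws_role_id # placeholder cloud provider access role
        }
      }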

    1 vote

  9. Configure --jsonFormat=canonical flag in export policy.

    JSON does not support all data types that are available in BSON. This means that when using JSON there will be a so-called "loss of fidelity" of the information.
    However, using the --jsonFormat=canonical flag in a mongoexport command will preserve all available BSON data types, so the "loss of fidelity" issue can be completely avoided.

    Now we plan to export our cloud backups to an AWS S3 bucket. To do this, we would like to set up an export policy to automatically export the snapshots. We could already do this via the API. However, the data is output in…

    2 votes

    completed  ·  1 comment  ·  Backup

  10. Datadog integration for US5

    Hi, I am using Datadog US5, and since I learned that Atlas only supports US1, it would be great if integration with US5 were also added.
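
    For context, the Datadog integration is configured roughly like this today (values are placeholders); the request is for the region argument to also accept the US5 site:

      resource "mongodbatlas_third_party_integration" "datadog" {
        project_id = var.project_id
        type       = "DATADOG"
        api_key    = var.datadog_api_key
        region     = "US5" # requested value; currently only the US1 Datadog site is supported
      }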

    1 vote

  11. Allow backup download through PrivateLink

    We need the ability to download our backups via PrivateLink connection. Our clusters aren't reachable via VPC peering as we solely use PrivateLink. The existing download capability doesn't support a PrivateLink URL to download our backups through.

    7 votes

    0 comments  ·  Backup

    For Atlas clusters hosted on AWS and Azure with private endpoints configured, Atlas now enables you to download snapshots through private endpoints in the same region as the snapshot, via both the UI and the Admin API.


    Documentation can be found here.

  12. MS Teams alert support in terraform provider

    It is possible to configure MS Teams alerts in the Atlas UI; however, Terraform support is still missing. It would be great to have that option.

    https://www.mongodb.com/docs/atlas/tutorial/integrate-msft-teams/#std-label-integrate-with-microsoft-teams
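
    A sketch of what this might look like in the provider; the MICROSOFT_TEAMS type_name and microsoft_teams_webhook_url argument are assumptions modelled on the existing notification block:

      resource "mongodbatlas_alert_configuration" "teams" {
        project_id = var.project_id
        event_type = "NO_PRIMARY"
        enabled    = true

        notification {
          # Assumed values, mirroring other webhook-style notification types.
          type_name                   = "MICROSOFT_TEAMS"
          microsoft_teams_webhook_url = var.teams_webhook_url
          interval_min                = 5
          delay_min                   = 0
        }
      }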

    4 votes

  13. Disable data explorer and other features in project settings

    The following features cannot be disabled via the Terraform provider (project settings); a sketch of the desired configuration follows the list:

    • Real Time Performance Panel
    • Data Explorer
    • Performance Advisor and Profiler
    • Schema Advisor
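
    A sketch of the desired project settings, assuming hypothetical is_*_enabled flags on mongodbatlas_project (one per feature listed above):

      resource "mongodbatlas_project" "example" {
        name   = "restricted-project"
        org_id = var.org_id

        # Assumed flag names.
        is_realtime_performance_panel_enabled = false
        is_data_explorer_enabled              = false
        is_performance_advisor_enabled        = false
        is_schema_advisor_enabled             = false
      }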

    2 votes

  14. Add autoExport snapshot to AWS S3 Bucket on mongodbatlas_cloud_backup_schedule

    By company policy, we have to export our snapshots automatically to an AWS S3 Bucket.

    I started following https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/ and implemented it in Terraform due to the high number of projects and clusters that we need to back up.

    However, it looks like the Terraform provider doesn't support "autoExportEnabled" from https://www.mongodb.com/docs/atlas/reference/api/cloud-backup/schedule/modify-one-schedule/ in the https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cloud_backup_schedule resource.

    Best regards,
    Wagner Sartori Junior
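
    A sketch of what the missing setting could look like; the auto_export_enabled argument and export block are assumptions modelled on the "autoExportEnabled" field in the Admin API:

      resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "example" {
        project_id     = var.project_id
        bucket_name    = "example-snapshot-exports" # placeholder
        cloud_provider = "AWS"
        iam_role_id    = var.atlas_aws_role_id
      }

      resource "mongodbatlas_cloud_backup_schedule" "example" {
        project_id   = var.project_id
        cluster_name = "Cluster0"

        # Assumed argument mapping to autoExportEnabled in the API.
        auto_export_enabled = true

        export {
          export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.example.export_bucket_id
          frequency_type   = "monthly"
        }

        policy_item_daily {
          frequency_interval = 1
          retention_unit     = "days"
          retention_value    = 7
        }
      }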

    9 votes

  15. Granular permissions via roles / hashicorp vault

    We are using the HashiCorp Vault Atlas plugin in order to generate credentials for Atlas.
    We are able to generate roles on the Atlas end and then use those roles to provision Vault users.
    However, I don't see a way to restrict those roles to just certain resources/clusters,
    so the user can access all the deployments in a project.
    It should be possible to restrict roles to certain resources only.

    1 vote

    Hi,  


    This was added some time ago. You need to specify the scopes you want to include; here's the code: https://github.com/hashicorp/vault-plugin-database-mongodbatlas/blob/master/mongodbatlas.go#L206. It should be an array like roles, but with the resource name and whether it's a cluster or a data lake (see scopes here: https://www.mongodb.com/docs/atlas/reference/api/database-users-create-a-user/). I hope that helps!


    Best, 

    Melissa
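
    A sketch of how the scopes field described above could be passed through Terraform's Vault provider when defining the database role (the role, mount, and cluster names are placeholders):

      resource "vault_database_secret_backend_role" "atlas_cluster0_read" {
        backend = "database"            # mount path of the database secrets engine
        name    = "atlas-cluster0-read" # placeholder role name
        db_name = "mongodbatlas"        # placeholder connection name

        creation_statements = [jsonencode({
          database_name = "admin"
          roles         = [{ databaseName = "admin", roleName = "read" }]
          # "scopes" restricts the generated user to specific resources,
          # as described in the reply above.
          scopes = [{ name = "Cluster0", type = "CLUSTER" }]
        })]
      }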


  16. Comprehensive Backup Ransomware Protection

    MongoDB Atlas needs a modern, comprehensive, secure ransomware protection strategy for its customers. Simply providing the ability to back up a database and encrypt it with a "bring your own key" option is not enough. Below I highlight what I believe are the key components of a comprehensive strategy (or at least a good start).

    Immutable and Verifiable Backups

    Once backups are created, Atlas should provide a facility to ensure the backup remains immutable. Further, Atlas should provide verification that a backup continues to be untouched / unmodified for its entire lifecycle.

    Deletion Protection

    Atlas should provide enhanced deletion protection for backups. Any…

    3 votes

    1 comment  ·  Backup

    Hello,

    I am pleased to announce that we have released our backup feature called Backup Compliance Policy, which protects your backups from being deleted by any user, ensuring WORM compliance and full immutability (backups cannot be edited, modified, or deleted) automatically in Atlas.


    Backup Compliance Policy allows organizations to configure a project-level policy to prevent the deletion of backups before a predefined period, guarantee all clusters have backup enabled, ensure that all clusters have a minimum backup retention and schedule policy in place, and more.


    With these controls, you can more easily satisfy data protection requirements (e.g., AppJ, DORA, immutable / WORM backups, etc.) without the need for manual processes.


    Please note that the Backup Compliance Policy cannot be disabled without MongoDB Support once enabled, so please make sure to read our documentation thoroughly before enabling it.


    In addition to Backup Compliance Policy, organizations can also utilize our multi-region

  17. Disable Specific APIs

    For certain APIs, like the ability to delete a backup, give an Owner the ability to disable that API call entirely, to prevent bad actors from destroying a system, or even a good actor from unintentionally destroying one. If a customer has a policy that no backups shall ever be deleted, provide the ability to disable this API across the board.

    2 votes

    0 comments  ·  Backup

    Hello,


    I am pleased to announce that we have released our backup feature called Backup Compliance Policy, which protects your backups from being deleted by any user, ensuring WORM compliance and full immutability (backups cannot be edited, modified, or deleted) automatically in Atlas. This applies to any method of deleting backups, whether through the UI or the API.


    Backup Compliance Policy allows organizations to configure a project-level policy to prevent the deletion of backups before a predefined period, guarantee all clusters have backup enabled, ensure that all clusters have a minimum backup retention and schedule policy in place, and more.


    With these controls, you can more easily satisfy data protection requirements (e.g., AppJ, DORA, immutable / WORM backups, etc.) without the need for manual processes.


    Please note that the Backup Compliance Policy cannot be disabled without MongoDB Support once enabled, so please make sure…

  18. Return private endpoints for peered networks from the MongoDB Prometheus discovery endpoint

    We are using VPC peering to connect to MongoDB Atlas. With the recent announcement about the Prometheus integration, we added a scrape config pointing to the MongoDB discovery API. However, scraping times out. Upon checking further, we found that the discovery API returns public endpoints, not private ones, so the connection fails. Is there a way the discovery API can return private endpoints?

    10 votes

    0 comments  ·  Alerts

  19. ReadOnly DATA API

    The current Data API feature looks promising. However, there is no way to provide access controls around it. If you have access to the API key, you can potentially do both reads and writes to the cluster. We did a PoC recently but couldn't promote it to prod because of this problem. A read-only Data API access option would be super helpful.

    3 votes

    0 comments  ·  Data API

  20. Allow to assign API Key to Project via Terraform by referencing public key

    Currently you can only assign an API Key to an Atlas Project via Terraform by referencing the ID of the API Key. Unfortunately, the ID is not exposed through the UI, only through the Atlas API. This is not ideal for customers/users who create API Keys through the UI.

    Adding support for referencing the API Key in Terraform via the public key instead of the ID would fix this; alternatively, the API Key ID could be exposed in the Atlas UI.
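
    A sketch of the current constraint versus the requested behavior; the api_keys block shape and the public_key argument shown here are assumptions:

      resource "mongodbatlas_project" "example" {
        name   = "example-project"
        org_id = var.org_id

        # Today (assumed shape): the assignment needs the key's internal ID,
        # which is only visible through the Atlas Admin API, not the UI.
        api_keys {
          api_key_id = var.api_key_id
          role_names = ["GROUP_READ_ONLY"]
        }

        # Requested alternative (hypothetical): reference the key by its public key.
        # api_keys {
        #   public_key = var.api_key_public_key
        #   role_names = ["GROUP_READ_ONLY"]
        # }
      }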

    3 votes
