Atlas
-
Return private endpoints for peered network from mongo-db prometheus discovery endpoint
We are using VPC peering to connect to MongoDB Atlas. With the recent announcement about the Prometheus integration, we added a scrape config pointing at the MongoDB discovery API. However, scraping times out. On checking further, we found that the discovery API returns public endpoints, not private ones, so the connection fails. Is there a way for the discovery API to return private endpoints?
10 votes
I'm happy to announce that the Prometheus integration now supports VPC peering. This can be configured in the Prometheus configuration modal in the user interface when using the HTTP SD discovery method.
More information on how to configure this can also be found here: https://www.mongodb.com/docs/atlas/reference/api/third-party-integration-settings-discovery/#request-query-parameters
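As a sketch of what the HTTP SD wiring looks like: Prometheus's `http_sd_configs` points at the Atlas discovery endpoint, and the exact URL, credentials, and the query parameter that selects private endpoints come from the Prometheus configuration modal in the Atlas UI and the documentation linked above. All values below are placeholders.

```yaml
# prometheus.yml (fragment) — placeholder values; copy the real job
# definition from the Atlas Prometheus integration modal.
scrape_configs:
  - job_name: "mongodb-atlas"
    scheme: https
    basic_auth:
      username: "prom_user_<generated>"
      password: "<generated-password>"
    http_sd_configs:
      # The discovery URL (and any parameter selecting private endpoints
      # for peered networks) is shown in the integration modal.
      - url: "https://cloud.mongodb.com/prometheus/v1.0/groups/<GROUP-ID>/discovery"
```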
-
Vault Lock to protect Atlas Cloud Backups
We are currently looking for a solution to secure our Atlas backups.
Something similar to AWS Glacier Vault Lock [1] or a simple grace period before backups are deleted once and for all would be nice.
It would be amazing to protect the Atlas backups from being deleted.
Currently, if one of our Atlas admins were compromised, the damage to the company would be enormous. So we need to implement measures against the permanent deletion of our most mission-critical data.
Also mentioned in: [2]
[1] https://aws.amazon.com/de/blogs/security/amazon-glacier-introduces-vault-lock/
[2] https://developer.mongodb.com/community/forums/t/is-there-a-vault-lock-for-atlas-backups/11041
10 votes
Hello,
I am pleased to announce that we have released our backup feature called Backup Compliance Policy, which protects your backups from being deleted by any user, ensuring WORM compliance and full immutability (backups cannot be edited, modified, or deleted) automatically in Atlas.
Backup Compliance Policy allows organizations to configure a project-level policy to prevent the deletion of backups before a predefined period, guarantee all clusters have backup enabled, ensure that all clusters have a minimum backup retention and schedule policy in place, and more.
With these controls, you can more easily satisfy data protection requirements (e.g., AppJ, DORA, immutable / WORM backups, etc.) without the need for manual processes.
Please note that the Backup Compliance Policy cannot be disabled without MongoDB Support once enabled, so please make sure to read our documentation thoroughly before enabling it.
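For Terraform users, later versions of the mongodbatlas provider expose this policy as a resource. The sketch below is illustrative only; the resource exists, but the attribute set varies by provider version and should be verified against the registry documentation before use.

```hcl
# Illustrative sketch — verify attribute names against the mongodbatlas
# provider documentation for your version before applying.
resource "mongodbatlas_backup_compliance_policy" "this" {
  project_id              = var.project_id
  authorized_email        = "backup-admin@example.com"
  copy_protection_enabled = true
  pit_enabled             = true
  restore_window_days     = 7

  # Require on-demand snapshots to be retained for at least 3 days.
  on_demand_policy_item {
    frequency_interval = 0
    retention_unit     = "days"
    retention_value    = 3
  }
}
```

Because the policy cannot be disabled without MongoDB Support once enabled, treat an apply of this resource as effectively irreversible.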
-
Add autoExport snapshot to AWS S3 Bucket on mongodbatlas_cloud_backup_schedule
By company policy, we have to export our snapshots automatically to an AWS S3 Bucket.
I started following https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/ and implemented it in Terraform due to the high number of projects and clusters that we need to back up.
However, it looks like the Terraform provider doesn't support "autoExportEnabled" from https://www.mongodb.com/docs/atlas/reference/api/cloud-backup/schedule/modify-one-schedule/ on the https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cloud_backup_schedule resource.
Best regards,
Wagner Sartori Junior
9 votes
This is now out in version 1.4.2.
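With provider version 1.4.2 or later, the auto-export wiring might be sketched as follows. Resource and attribute names are taken from the provider documentation as I understand it and should be verified for your version; the bucket name and variables are placeholders.

```hcl
# Sketch: register the S3 bucket as an export target, then enable
# automatic export on the cluster's backup schedule.
resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "this" {
  project_id     = var.project_id
  iam_role_id    = var.atlas_iam_role_id # from cloud provider access
  bucket_name    = "my-snapshot-exports"
  cloud_provider = "AWS"
}

resource "mongodbatlas_cloud_backup_schedule" "this" {
  project_id          = var.project_id
  cluster_name        = var.cluster_name
  auto_export_enabled = true

  export {
    export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.this.export_bucket_id
    frequency_type   = "monthly"
  }

  policy_item_daily {
    frequency_interval = 1
    retention_unit     = "days"
    retention_value    = 7
  }
}
```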
-
Security Key (FIDO2) MFA option
Please enable security key (e.g. https://www.yubico.com/gb/product/yubikey-5c-nfc/) option for MFA. Ideally using FIDO2 protocol
9 votes
MongoDB added WebAuthn support as an MFA method. Please use the "Security Key/Biometric" MFA option to use it with your FIDO2 keys.
https://www.mongodb.com/docs/atlas/security-multi-factor-authentication/
-
Add resource to allow attachment of roles to mongodbatlas_cloud_provider_access
The need to do two applies to completely configure the mongodbatlas_cloud_provider_access resource should never have seen the light of day. I would like to see an additional resource that could attach a role to a mongodbatlas_cloud_provider_access after it has been created. Then you could use the attributes in the mongodbatlas_cloud_provider_access resource to create the role, then attach the role to it using the access_role_attachment resource.
9 votes
Cloud Provider Access in v0.9.0 now supports both a single-apply method and the original two-apply method.
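The v0.9.0 split introduced separate setup and authorization resources, which lets a single apply create the Atlas side, the IAM role that trusts it, and the authorization. A sketch, with exported attribute names that may vary slightly by provider version:

```hcl
# Sketch of the setup + authorization flow added in v0.9.0.
resource "mongodbatlas_cloud_provider_access_setup" "setup" {
  project_id    = var.project_id
  provider_name = "AWS"
}

# IAM role trusting the Atlas principal and external ID from the setup step.
resource "aws_iam_role" "atlas" {
  name = "atlas-cloud-provider-access"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = mongodbatlas_cloud_provider_access_setup.setup.aws_config[0].atlas_aws_account_arn }
      Condition = {
        StringEquals = {
          "sts:ExternalId" = mongodbatlas_cloud_provider_access_setup.setup.aws_config[0].atlas_assumed_role_external_id
        }
      }
    }]
  })
}

resource "mongodbatlas_cloud_provider_access_authorization" "auth" {
  project_id = var.project_id
  role_id    = mongodbatlas_cloud_provider_access_setup.setup.role_id

  aws {
    iam_assumed_role_arn = aws_iam_role.atlas.arn
  }
}
```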
-
Alert when a snapshot restore succeeds/fails
Simply send an email alert when a restore finishes (or errors).
This is important for us because we run a restore (from prod to qa) every weekend and it takes over 14 hours. If it fails, we need to know so we can quickly kick it off again before Monday. Otherwise the QA team will be dead in the water.
9 votes
In Atlas, you can now create alerts related to backup.
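Backup alerts can also be managed as code via the provider's alert configuration resource. A sketch follows; note that the event type name and the email address are assumptions to be checked against the Atlas alert configuration documentation.

```hcl
# Sketch: email the team when a restore fails. The event_type value is an
# assumption — confirm it against the Atlas alerts documentation.
resource "mongodbatlas_alert_configuration" "restore_failed" {
  project_id = var.project_id
  event_type = "CPS_RESTORE_FAILED"
  enabled    = true

  notification {
    type_name     = "EMAIL"
    email_address = "dba-team@example.com"
    delay_min     = 0
  }
}
```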
-
Send alert when there are performance advisor recommendations
I would like to have an alert setting when MongoDB Atlas's performance advisor has any recommendations.
Currently there is no way to determine if the performance advisor has a suggested improvement on the main project console page.
Our prod project has a lot of clusters (with more planned). It is cumbersome to go in and check each one manually.
9 votes
We released Performance Advisor alerts the week of Aug 7. They are available as default alerts.
-
immutable backups
Currently, Atlas MongoDB backups are stated to be immutable; however, that is not true, because there is no object lock on the S3 bucket.
We would like to request an option to put an object lock on the S3 bucket where our snapshots are located, which would ensure that the snapshots can only be deleted by the retention policy and cannot be modified or deleted by anyone else. This is to line up with WORM compliance when dealing with financial data.
https://www.telemessage.com/what-is-worm-compliance-and-when-is-it-needed/
https://aws.amazon.com/blogs/storage/protecting-data-with-amazon-s3-object-lock/
8 votes
Hello,
I am pleased to announce that we have released our backup feature called Backup Compliance Policy, which protects your backups from being deleted by any user, ensuring WORM compliance and full immutability (backups cannot be edited, modified, or deleted) automatically in Atlas.
Backup Compliance Policy allows organizations to configure a project-level policy to prevent the deletion of backups before a predefined period, guarantee all clusters have backup enabled, ensure that all clusters have a minimum backup retention and schedule policy in place, and more.
With these controls, you can more easily satisfy data protection requirements (e.g., AppJ, DORA, immutable / WORM backups, etc.) without the need for manual processes.
Please note that the Backup Compliance Policy cannot be disabled without MongoDB Support once enabled, so please make sure to read our documentation thoroughly before enabling it.
-
Terraform Serverless VPC Endpoint configuration
Create the equivalent of mongodbatlas_privatelink_endpoint, but for serverless.
8 votes
-
Add Prometheus as a Supported Third-Party Integration Settings type
Great work releasing the new Prometheus Integration functionality!
Ideally, we'd like to use Terraform to codify our interface with the Prometheus Integration, similar to how we leverage the existing Third-Party Integration Settings types.
8 votes
This is now out in version 1.4.2 of the provider.
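With provider 1.4.2+, the integration can be codified alongside the other third-party integration types. A sketch, with attribute names to be confirmed against the provider docs for your version:

```hcl
# Sketch: manage the Prometheus integration via Terraform. The user_name
# and password come from the Atlas Prometheus integration setup.
resource "mongodbatlas_third_party_integration" "prometheus" {
  project_id        = var.project_id
  type              = "PROMETHEUS"
  user_name         = var.prom_user
  password          = var.prom_password
  service_discovery = "http" # HTTP SD, as opposed to file-based discovery
  enabled           = true
}
```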
-
Please add datadog US3 site also for the integration with MongoDB atlas
Please add datadog US3 site also for the integration with MongoDB atlas
8 votes
The US3 Datadog site is available.
-
Create Organization using API or terraform in Atlas
I support multiple business units (BUs) within our company. Each BU uses multiple applications and teams. To offer Atlas as a service through automation, there is no option in Atlas to create or delete an Organization dynamically; it only allows managing Projects and below.
It would be good to add a feature to create/delete Organizations dynamically.
It would also be good to be able to create separate Accounts dynamically (like AWS/GCP).
This would give each BU its own account and organization, to manage billing and consumption through automation.
Thanks
Rama Arumugam
8 votes
You can now create Organizations via the Atlas Administration API. Please refer to the API specification for more details.
CLI and Terraform support will be available in upcoming months.
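As a sketch of the API call (placeholders throughout, and the endpoint shape should be confirmed against the Administration API specification): organization creation uses a programmatic API key with digest authentication.

```shell
# Sketch: create an organization via the Atlas Administration API.
# $PUBLIC_KEY / $PRIVATE_KEY are a programmatic API key pair with
# sufficient (cross-organization) permissions.
curl --user "$PUBLIC_KEY:$PRIVATE_KEY" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/orgs" \
  --data '{ "name": "bu-one-org" }'
```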
-
Atlas API Enhancements
Since we want to automate the user (de)provisioning for organizations and projects, we would like to see the following API enhancements:
Please enhance the Mongo Atlas API for the following functionalities:
- invite (existing mongo) user to organization (currently not possible)
- remove user from organization
- get invitation status from user
- cancel invitation for a user
Thank you
8 votes
The work for invite management has been completed and added as endpoints to organizations and projects: https://docs.atlas.mongodb.com/reference/api/projects/ and https://docs.atlas.mongodb.com/reference/api/organizations/
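For example, inviting a user to an organization might look like the following sketch (placeholders throughout; see the organizations API docs linked above for the full invitation endpoints, including listing and deleting invitations):

```shell
# Sketch: invite a user to an organization via the invitation endpoints.
curl --user "$PUBLIC_KEY:$PRIVATE_KEY" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/orgs/$ORG_ID/invites" \
  --data '{ "username": "user@example.com", "roles": ["ORG_MEMBER"] }'
```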
-
Tag/label project
Give the ability to tag/label a project
8 votes
-
Expose minRetentionHours oplog option
MongoDB 4.4 introduces the new minRetentionHours option for the oplog.
Currently this option is not exposed in the Atlas UI and the command replSetResizeOplog is not allowed.
Would be nice to have this option available in Atlas (and accessible via Terraform too).
8 votes
-
Integrate Azure Private Endpoint
Enable connectivity in Azure using private endpoints
8 votes
-
atlas portal ip whitelist
We were given this idea from a security audit.
From a defense-in-depth perspective, we would like to be able to restrict logins to the Atlas portal to whitelisted IPs only; this would be analogous to how API whitelisting works at the organization level.
This is to prevent logins from anywhere other than our permitted sites.
8 votes
-
Allow backup download through PrivateLink
We need the ability to download our backups via PrivateLink connection. Our clusters aren't reachable via VPC peering as we solely use PrivateLink. The existing download capability doesn't support a PrivateLink URL to download our backups through.
7 votes
For Atlas clusters hosted on AWS and Azure with private endpoints configured, Atlas now enables you to download snapshots via the private endpoints within the same region as the snapshot, through both the UI and the Admin API.
Documentation can be found here.
-
killop
I have read the other two suggestions about providing more killOp() capability, and Andrew's comment on the difficulties in the medium term.
I just watched a situation where a primary became unresponsive and the queues piled up. The solution was to force an election, but the DBA wanted to kill operations by user (the application user) and couldn't.
It would be nice to have something more than the real-time panel (which in this situation had become unresponsive as well, so it was of no use), such as a DBA console where an authorized DBA could kill operations started by other users.
7 votes