Atlas
Vault Lock to protect Atlas Cloud Backups
We are currently looking for a solution to secure our Atlas backups.
Something similar to AWS Glacier Vault Lock [1] or a simple grace period before backups are deleted once and for all would be nice.
It would be amazing to protect the Atlas backups from being deleted.
Currently, if one of our Atlas admins were compromised, the damage to the company would be enormous, so we need to implement measures against the permanent deletion of our most mission-critical data. Also mentioned in: [2]
[1] https://aws.amazon.com/de/blogs/security/amazon-glacier-introduces-vault-lock/
[2] https://developer.mongodb.com/community/forums/t/is-there-a-vault-lock-for-atlas-backups/11041
3 votes -
Enable AWS EBS Fast Snapshot Restore (FSR) on Atlas
To speed up recovery after restoring from a snapshot or having a node replaced, it would be nice to have the option to use AWS Fast Snapshot Restore:
https://aws.amazon.com/blogs/aws/new-amazon-ebs-fast-snapshot-restore-fsr/
This would allow the disk to deliver full performance as soon as it becomes available.
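For context, Fast Snapshot Restore is enabled per snapshot and per availability zone on the AWS side; since Atlas owns the underlying EBS snapshots, this is an operation Atlas itself would have to perform. A minimal sketch of the AWS call with boto3, using a placeholder snapshot ID and AZs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder snapshot ID and AZs: in Atlas these would be the
# cluster's underlying EBS snapshots, which Atlas manages internally.
response = ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

# Volumes restored from these snapshots in the listed AZs now deliver
# full performance immediately instead of lazy-loading blocks from S3.
print(response["Successful"])
```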
3 votes -
Add editable description to a snapshot
After an incident I may single out a snapshot and decide it is important. At this point I am able to change the retention policy to keep it for a longer time.
It would be useful to also be able to add a note or a description to that snapshot so that I may tell future me why that snapshot was important and what incident it is tied to.
3 votes -
Permission level to query snapshots
Right now I have to give an admin permission to allow someone on my team to query snapshots from a backup (i.e. downloading the tunnel file).
I don't want to grant admin permission just to allow our data analysts, or a developer investigating data corruption, to view the database at an older date.
3 votes -
Smart backup scheduler (deferring snapshot backups X minutes based on load metrics)
It seems like a potential enhancement for the snapshot scheduler to consider CPU/connection/etc. load metrics before deciding whether to run, as sketched below.
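Nothing like this exists in Atlas today, so purely as an illustration of the idea, a deferral loop might look like the following; the get_cpu_utilization and take_snapshot hooks are hypothetical:

```python
import time

# Hypothetical sketch of a load-aware snapshot scheduler.
CPU_THRESHOLD = 0.80      # defer while CPU utilization is above 80%
MAX_DEFER_MINUTES = 30    # never defer the snapshot by more than this

def take_snapshot_when_quiet(get_cpu_utilization, take_snapshot):
    """Defer the snapshot in 5-minute steps while the node is busy,
    then take it regardless once the deferral budget is exhausted."""
    deferred = 0
    while get_cpu_utilization() > CPU_THRESHOLD and deferred < MAX_DEFER_MINUTES:
        time.sleep(5 * 60)
        deferred += 5
    take_snapshot()
```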
3 votes -
Use the User's Profile Time Zone for PIT Restore
When viewing Cloud Provider Snapshot information, all dates are listed in the time zone set on the user's profile. However, the date/time for a PIT restore has to be entered in UTC. That is asking a lot of someone who is probably already nervous: they must correctly calculate the difference between their own time zone and UTC.
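Until the UI accepts profile-local times, the user has to do the conversion by hand. A small sketch of that calculation in Python, assuming for the example a profile time zone of America/New_York:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The restore point the user actually cares about, in their own time zone
# (America/New_York is just an example).
local_restore_point = datetime(2021, 6, 1, 9, 30, tzinfo=ZoneInfo("America/New_York"))

# Atlas expects the PIT restore timestamp in UTC, so convert first.
utc_restore_point = local_restore_point.astimezone(ZoneInfo("UTC"))
print(utc_restore_point.isoformat())  # 2021-06-01T13:30:00+00:00
```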
3 votes -
Provide size (in bytes) in `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call for each `snapshotId`
What is the problem that needs to be solved? Provide size (in bytes) in the
`GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots`
Public API call for each `snapshotId`.
Why is it a problem? (the pain) You're doing your own automation with manual restores, and you can't programmatically know the actual file size of the `.tar.gz` file you'll be downloading for each `snapshotId` listed in the `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call until you actually download the `.tar.gz` file for each `snapshotId`.
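To make the gap concrete, here is a hedged sketch of listing snapshots via the v1.0 Atlas Admin API (digest auth; all IDs and keys are placeholders). The ask is for a size-in-bytes field to appear alongside each snapshot in this response:

```python
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<GROUP-ID>"          # placeholder project ID
CLUSTER = "<CLUSTER-NAME>"       # placeholder cluster name

resp = requests.get(
    f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/backup/snapshots",
    auth=HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>"),
)
for snapshot in resp.json().get("results", []):
    # Today there is no field giving the size of the .tar.gz you would
    # download for this snapshotId; that is exactly what is requested.
    print(snapshot["id"], snapshot.get("createdAt"))
```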
3 votes -
Configure --jsonFormat=canonical flag in export policy.
JSON does not support all of the data types that are available in BSON. This means that when using JSON there will be a so-called "loss of fidelity" of the information.
However, using the --jsonFormat=canonical flag in a mongoexport command will preserve all available BSON data types, so the "loss of fidelity" issue can be avoided entirely. Now we plan to export our cloud backups to an AWS S3 bucket. To do this, we would like to set up an export policy to automatically export the snapshots. We could already do this via the API. However, the data is output in…
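The fidelity difference is easy to demonstrate with PyMongo's bson package, which exposes the same relaxed vs. canonical Extended JSON modes as mongoexport's --jsonFormat flag:

```python
from bson import json_util
from bson.int64 import Int64
from bson.json_util import CANONICAL_JSON_OPTIONS, RELAXED_JSON_OPTIONS

doc = {"count": Int64(5)}

# Relaxed Extended JSON drops the exact BSON type (loss of fidelity)...
print(json_util.dumps(doc, json_options=RELAXED_JSON_OPTIONS))
# {"count": 5}

# ...while canonical Extended JSON preserves it.
print(json_util.dumps(doc, json_options=CANONICAL_JSON_OPTIONS))
# {"count": {"$numberLong": "5"}}
```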
2 votes -
GCP Export Cloud Backup Snapshot support
Currently "Export Cloud Backup Snapshot" is a really good feature for off-line backups, but it only supports S3. This is quite limiting for organisations that don't use AWS. It would be really good if it also supported GCP Cloud Storage.
2 votes -
Recovery after cluster delete
Cloud Backups should be recoverable even after a cluster delete; otherwise they can't really be considered backups. One way to do this would be to allow automated backup downloads to customer-specified cloud provider storage.
2 votes -
snapshot
Show snapshot progress (from provider snapshots), and possibly an ETA, in the Atlas UI.
2 votes -
Replay the point-in-time-restore journal for debug/diagnosis
When performing a forensic analysis for the purposes of a restore, in order to determine the appropriate point in time when our data was last "sane", it would be awesome if we could start with a point-in-time restoration, and play the journal forward, e.g. performing a bisect on the data.
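A sketch of that bisect workflow, with hypothetical timestamps and is_sane hooks standing in for "replay the journal to this point and validate the data":

```python
# Hypothetical sketch: binary-search candidate restore timestamps to find
# the last point in time at which the data was still "sane".
def find_last_sane_time(timestamps, is_sane):
    """timestamps: sorted (ascending) candidate PIT restore points.
    is_sane(ts): replays the journal to ts and validates the data.
    Returns the latest timestamp that still passes validation."""
    lo, hi = 0, len(timestamps) - 1
    last_good = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_sane(timestamps[mid]):
            last_good = timestamps[mid]
            lo = mid + 1   # data still sane here; corruption is later
        else:
            hi = mid - 1   # already corrupted; look earlier
    return last_good
```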
2 votes -
Allow instant backup on every type of cluster
Allow creating a backup whenever we want, on every type of cluster (as we could with mLab plans).
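For comparison, dedicated clusters with cloud backup already have an on-demand snapshot endpoint in the v1.0 Admin API; the request is effectively to extend this to every cluster type. A hedged sketch with placeholder credentials:

```python
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

# Take an on-demand snapshot now (works on dedicated clusters today).
resp = requests.post(
    f"{BASE}/groups/<GROUP-ID>/clusters/<CLUSTER-NAME>/backup/snapshots",
    auth=HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>"),
    json={"description": "pre-release snapshot", "retentionInDays": 3},
)
print(resp.status_code, resp.json())
```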
2 votes -
Helpful error message when restoring backup
When I recently attempted to restore a DB backup, I got a cryptic error message stating "Error: Cluster Not Found", which was extremely confusing considering that I had pressed the restore button in the backup options. After some great tech support, I added my user as project owner and everything worked fine.
Please make the error message clearer in both the Restore and Download backup scenarios.
2 votes -
Cloud backup to new cluster
Two different ideas for this:
- When building a new cluster, have the option to build it from a cloud backup snapshot, for DR when a cluster is down.
- Same idea, but restoring to a new cluster from the restore target dropdown, i.e. "Restore to New Cluster", which opens the create-a-cluster workflow with the cloud backup snapshot section populated.
2 votes -
Provide a granular mechanism to disconnect and disable all active sessions during a planned maintenance.
Provide a per cluster mechanism to shut off client access for restore operations.
2 votes -
Change Azure snapshot backups from LRS to GRS
Sorry, but this is a must. GRS replicates backups across two regions; LRS keeps them in only one. A fire or disaster could take out everything if LRS is used.
2 votes -
Allow "Project Data Access Read Only" to retrieve restore links.
Allow "Project Data Access Read Only" to retrieve restore links. Currently, to retrieve a restore link from an Atlas cluster you must be a Project Owner.
In the current version, to get the link to the snapshot we need to use the Restore Jobs part of the API. The problem is that the same POST endpoint is used both for a restore job, which can make changes in the system and needs Owner-level access, and to generate a link to the stream with the snapshot data. This second type of job should not need such a high access level, as should…
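A hedged sketch of the dual-purpose endpoint described above (v1.0 API, placeholder credentials): the same POST that can trigger an actual restore is also the only way to obtain a download link, which is why Project Owner access is required today:

```python
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

# With deliveryType "download" this restore job only produces a link to
# the snapshot data, yet it shares the endpoint (and the required Project
# Owner role) with restores that actually modify clusters.
resp = requests.post(
    f"{BASE}/groups/<GROUP-ID>/clusters/<CLUSTER-NAME>/backup/restoreJobs",
    auth=HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>"),
    json={"deliveryType": "download", "snapshotId": "<SNAPSHOT-ID>"},
)
print(resp.json())  # response includes a deliveryUrl once the job is ready
```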
2 votes -
Allow backup download through PrivateLink
We need the ability to download our backups via PrivateLink connection. Our clusters aren't reachable via VPC peering as we solely use PrivateLink. The existing download capability doesn't support a PrivateLink URL to download our backups through.
1 vote -
Comprehensive Backup Ransomware Protection
MongoDB Atlas needs a modern, comprehensive, secure ransomware protection strategy for its customers. Simply providing the ability to back up a database, and encrypt that backup with "bring your own key", is not enough. Below I highlight what I believe are key components of a comprehensive strategy (or at least a good start).
Immutable and Verifiable Backups
Once backups are created, Atlas should provide a facility to ensure the backup remains immutable. Further, Atlas should provide verification that a backup continues to be untouched / unmodified for its entire lifecycle.
Deletion Protection
Atlas should provide enhanced deletion protection for backups. Any…
1 vote