Atlas
Alert for backup policy changes
It would be great to have an alert that notifies you when someone edits the backup policies.
This would be good to validate and for everyone to be aware of, because a bad change can cause major problems when it becomes necessary to restore a backup.
2 votes -
Efficient backup restore between MongoDB Atlas projects
Hello,
Currently, when we restore a snapshot from one cluster to another cluster in the same MongoDB Atlas project (for example, from the "production" project to the "production" project), this is very efficient (several minutes).
On the other hand, if we execute the same backup restore from one cluster to another cluster in another MongoDB Atlas project (for example, from the "production" project to the "staging" project), this is much less efficient and takes several hours instead of the several minutes above.
The restore within the same MongoDB Atlas project is very efficient because it uses the cloud provider's disk snapshot mechanism.
Could you improve efficiency…
1 vote -
Add editable description to a snapshot
After an incident I may single out a snapshot and decide it is important. At this point I am able to change the retention policy to keep it for a longer time.
It would be useful to also be able to add a note or a description to that snapshot so that I may tell future me why that snapshot was important and what incident it is tied to.
2 votes -
Provide a default alert for projects using Cloud Backup
Given how important backups are to an enterprise's business continuity plans, the set of common-sense default alerts set by Atlas should include an alert for Cloud Backup on projects with Cloud Backup enabled. Either "CPS snapshot has been stuck in queue for 1 hour" or "Snapshot schedule fell behind by 1 hour" would suffice.
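A minimal sketch of what setting up such an alert looks like today through the Atlas alert configurations API, assuming v1.0 of the Administration API, digest-auth programmatic API keys, and the event type name `CPS_SNAPSHOT_BEHIND` (verify the event type against your Atlas project's supported list; all IDs and keys below are placeholders):

```python
# Hedged sketch: create a Cloud Backup "snapshot fell behind" alert for a project.
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "your-public-key"     # placeholder
PRIVATE_KEY = "your-private-key"   # placeholder
GROUP_ID = "your-project-id"       # placeholder

alert_config = {
    "eventTypeName": "CPS_SNAPSHOT_BEHIND",   # assumed event type; confirm before use
    "enabled": True,
    "notifications": [
        {"typeName": "EMAIL", "emailAddress": "ops@example.com",
         "delayMin": 60, "intervalMin": 60}
    ],
}

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/alertConfigs",
    json=alert_config,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
)
resp.raise_for_status()
print("Created alert config:", resp.json().get("id"))
```

The idea here is for Atlas to ship something equivalent by default, so no project has to script it.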
1 vote -
Permission level to query snapshots
Right now I have to give an admin permission to allow someone on my team to query snapshots from a backup (i.e. downloading the tunnel file).
I don't want to grant an admin permission just to allow our data analysts, or a developer trying to understand a data corruption, to view the database as of an older date.
3 votes -
SHA1 Digest hash for Atlas Cloud Backup Restore
Atlas Legacy Backup used to provide a digest hash (SHA-1 checksum) on the restore job, but it is currently not available in Atlas Cloud Backup. This feature would be helpful for validating the integrity of the downloaded file.
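For context, a small sketch of the client-side check this feature would enable: comparing a SHA-1 digest published by Atlas (the expected value below is a placeholder) against the downloaded archive.

```python
# Hedged sketch: verify a downloaded snapshot archive against a published SHA-1 checksum.
import hashlib

def sha1_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large snapshot archives don't need to fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "da39a3ee5e6b4b0d3255bfef95601890afd80709"  # placeholder checksum from the restore job
actual = sha1_of_file("snapshot.tar.gz")
print("OK" if actual == expected else f"Checksum mismatch: {actual}")
```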
2 votes -
Sync backup data with AWS and GCS
I use Atlas because it's easy to set up, especially with backups. Without this feature, I have to build sync tools with the API to make sure my backups are secure. If someone got into our database and deleted the cluster, the backups would be completely lost.
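A rough sketch of the kind of sync tool this currently forces people to build, assuming the v1.0 Cloud Backup restore-jobs API with a "download" delivery type and that the restore job exposes a `deliveryUrl` once ready (keys, IDs, cluster name, and bucket below are placeholders):

```python
# Hedged sketch: copy a cloud backup snapshot into an S3 bucket you control.
import time
import boto3
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("public-key", "private-key")          # placeholders
GROUP_ID, CLUSTER, SNAPSHOT_ID = "project-id", "Cluster0", "snapshot-id"

job = requests.post(
    f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/backup/restoreJobs",
    json={"snapshotId": SNAPSHOT_ID, "deliveryType": "download"},
    auth=AUTH,
).json()

# Poll the restore job until Atlas publishes a delivery URL for the archive.
url = None
while not url:
    time.sleep(30)
    status = requests.get(
        f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/backup/restoreJobs/{job['id']}",
        auth=AUTH,
    ).json()
    url = (status.get("deliveryUrl") or [None])[0]

# Stream the snapshot archive straight into your own bucket.
with requests.get(url, stream=True) as archive:
    archive.raise_for_status()
    boto3.client("s3").upload_fileobj(archive.raw, "my-backup-bucket", f"{SNAPSHOT_ID}.tar.gz")
```

A built-in sync to AWS or GCS would remove the need for scripts like this.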
2 votes -
Cloud backup to new cluster
Two different ideas for this:
- When building a new cluster, have the option to build it from a cloud backup snapshot, for DR when a cluster is down.
- Same idea for restoring to a new cluster, but from the restore target dropdown: a "Restore to new cluster" option that opens the create-a-cluster workflow with the cloud backup snapshot section pre-populated.
2 votes -
Allow sharing the Cloud Provider Backup Snapshot with the customer's Cloud Provider account
What is the problem that needs to be solved? Add an option (as there was in mLab) to share the Cloud Provider Backup Snapshot (EBS Snapshot [1], in the case of AWS) for an Atlas Cluster with the customer's Cloud Provider account. This action is easier, quicker and cheaper for customers than manually downloading the Atlas snapshot.
Why is it a problem? (the pain) A) Operational pain: as of now (2020-02-18), if you need to execute a disaster recovery scenario that restores your Atlas Cluster data outside of Atlas Clusters, you'll need to create a Cloud Provider Backup Snapshot Restore job…
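For context, the AWS-side mechanism such a feature would presumably build on is EBS snapshot sharing via `createVolumePermission`. This is a generic boto3 sketch of that mechanism, not an existing Atlas capability; the snapshot and account IDs are placeholders.

```python
# Hedged sketch: share an EBS snapshot with another AWS account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",   # the EBS snapshot backing the backup (placeholder)
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["123456789012"],              # the customer's AWS account ID (placeholder)
)
# The receiving account can then copy the snapshot or create a volume from it directly,
# avoiding a manual download of the Atlas snapshot archive.
```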
9 votes -
Smart backup scheduler (deferring snapshot backups X minutes based on load metrics)
It seems like a potential enhancement for the snapshot scheduler to consider CPU, connection count, and other load metrics before deciding whether to run a snapshot or defer it.
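One way this could look from the outside today is a cron-driven script that checks recent load via the process measurements API and only triggers an on-demand snapshot when the node is quiet. A hedged sketch, assuming the v1.0 measurements and on-demand snapshot endpoints and the `PROCESS_CPU_USER` metric name (keys, host, threshold, and IDs are placeholders):

```python
# Hedged sketch: defer an on-demand snapshot when recent CPU load is high.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("public-key", "private-key")                  # placeholders
GROUP_ID, CLUSTER = "project-id", "Cluster0"
PROCESS = "cluster0-shard-00-00.abcde.mongodb.net:27017"            # placeholder host:port
CPU_THRESHOLD = 70.0                                                # percent; tune to taste

datapoints = requests.get(
    f"{BASE}/groups/{GROUP_ID}/processes/{PROCESS}/measurements",
    params={"granularity": "PT1M", "period": "PT10M", "m": "PROCESS_CPU_USER"},
    auth=AUTH,
).json()["measurements"][0]["dataPoints"]

recent_cpu = [p["value"] for p in datapoints if p["value"] is not None]
if recent_cpu and max(recent_cpu) > CPU_THRESHOLD:
    print("Load too high; deferring snapshot until the next run.")
else:
    requests.post(
        f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/backup/snapshots",
        json={"description": "low-load scheduled snapshot", "retentionInDays": 3},
        auth=AUTH,
    ).raise_for_status()
```

The request is for Atlas to apply this kind of deferral logic to the built-in snapshot schedule itself.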
3 votes -
Change Azure snapshot backups from LRS to GRS
Sorry, but this is a must. GRS replicates backups across two regional datacenters, while LRS keeps them in only one. A fire or other disaster would take out everything if LRS is used.
2 votes -
Helpful error message when restoring backup
When I recently attempted to restore a database backup, I got a cryptic error message stating "Error: Cluster Not Found", which was extremely confusing considering that I had pressed the restore button in the backup options. After some great tech support, I added my user as Project Owner and everything worked fine.
Please make the error message clearer in both the Restore and Download backup scenarios.
1 vote -
Use the User's Profile Time Zone for PIT Restore
When viewing Cloud Provider Snapshot information, all dates are listed in the time zone on the user's profile. However, the date/time entered when performing a PIT restore has to be in UTC. This is asking a lot of someone who is probably already very nervous: correctly calculating the difference between their time zone and UTC.
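A tiny illustration of the conversion the UI currently pushes onto the user, using a hypothetical profile time zone and timestamp:

```python
# Hedged sketch: convert a wall-clock time in the profile time zone to UTC for the PIT restore form.
from datetime import datetime
from zoneinfo import ZoneInfo

profile_tz = ZoneInfo("America/New_York")          # example profile time zone
local_time = datetime(2024, 3, 10, 14, 30, tzinfo=profile_tz)

utc_time = local_time.astimezone(ZoneInfo("UTC"))
print(utc_time.strftime("%Y-%m-%d %H:%M UTC"))     # value to type into the PIT restore form
```

The request is simply for the PIT restore form to accept the profile time zone and do this conversion itself.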
3 votes -
Restore snapshots to different clusters.
I have a staging environment cluster whose data I'd like updated daily with anything new from my production cluster.
Currently I do a dump and restore, but this is slow because it gets piped externally.
Ideally I'd be able to restore a production snapshot to my staging cluster through a script running on a cron. Whether that is via the API or the CLI doesn't matter.
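A sketch of what that cron-able flow can look like with the existing Cloud Backup API: find the newest snapshot on the production cluster, then submit an automated restore into the staging cluster. Assumes the v1.0 snapshots and restore-jobs endpoints; project IDs, cluster names, and keys are placeholders.

```python
# Hedged sketch: restore the latest production snapshot into a staging cluster.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("public-key", "private-key")              # placeholders
PROD_GROUP, PROD_CLUSTER = "prod-project-id", "Production0"
STAGING_GROUP, STAGING_CLUSTER = "staging-project-id", "Staging0"

# Newest snapshot on the production cluster.
snapshots = requests.get(
    f"{BASE}/groups/{PROD_GROUP}/clusters/{PROD_CLUSTER}/backup/snapshots",
    auth=AUTH,
).json()["results"]
latest = max(snapshots, key=lambda s: s["createdAt"])

# Restore it into the staging cluster (an "automated" restore overwrites the target).
requests.post(
    f"{BASE}/groups/{PROD_GROUP}/clusters/{PROD_CLUSTER}/backup/restoreJobs",
    json={
        "snapshotId": latest["id"],
        "deliveryType": "automated",
        "targetGroupId": STAGING_GROUP,
        "targetClusterName": STAGING_CLUSTER,
    },
    auth=AUTH,
).raise_for_status()
```

Run under cron, this avoids piping a dump/restore through an external host.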
1 vote -
Allow "Project Data Access Read Only" to retrieve restore links.
Allow "Project Data Access Read Only" to retrieve restore links. Currently, to retrieve a restore link from an Atlas cluster you must be a Project Owner.
In the current version, to get the link to the snapshot we need to use the Restore Jobs part of the API. The problem is that the same POST endpoint is used both to create a restore job, which can make changes in the system and needs Owner-level access, and to generate a link to the stream with the snapshot data. This second type of job should not need such a high access level as should…
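To make the pain concrete, a hedged sketch of that call as it works today (keys, IDs, and names are placeholders): generating a download link goes through the same restore-jobs POST endpoint as a real restore, so it requires Project Owner credentials.

```python
# Hedged sketch: request a download-only restore job (currently needs Project Owner).
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("public-key", "private-key")   # must belong to a Project Owner today

job = requests.post(
    f"{BASE}/groups/project-id/clusters/Cluster0/backup/restoreJobs",
    json={"snapshotId": "snapshot-id", "deliveryType": "download"},   # read-only intent...
    auth=AUTH,
).json()
# ...yet the identical endpoint with deliveryType "automated" would overwrite a cluster,
# which is why the idea asks for a lower role to cover the download-only case.
print(job["id"])
```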
2 votes -
Provide a granular mechanism to disconnect and disable all active sessions during a planned maintenance.
Provide a per-cluster mechanism to shut off client access for restore operations.
1 vote -
Dynamically Calculate Timestamp Limits for Continuous Backup Restore (via OPLOG Timestamp)
Currently, you can enter an OPLOG TIMESTAMP value that falls outside the maximum/minimum values of the Continuous Backup time window.
For example, on the "Date & Time" tab of the "Point in Time Restore" menu, you are given a warning that specifies: "You can only restore to a specific point in time after MM/DD/YYYY HH:MM"
This should be added to the "OPLOG TIMESTAMP" tab where a warning could be given such as "You can only restore to a specific point in time after XXXXXXXXXX" (Epoch time).
As such, you should enforce form validation so that a user…
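A small sketch of the validation being asked for: reject an oplog timestamp (epoch seconds) that falls outside the restorable window. The window bounds below are placeholders that Atlas would supply from the continuous backup metadata.

```python
# Hedged sketch: validate an oplog timestamp against the restorable window.
from datetime import datetime, timezone

WINDOW_START = 1700000000   # earliest restorable point (epoch seconds), placeholder
WINDOW_END = 1700604800     # latest restorable point (epoch seconds), placeholder

def validate_oplog_timestamp(ts: int) -> None:
    if not (WINDOW_START <= ts <= WINDOW_END):
        start = datetime.fromtimestamp(WINDOW_START, tz=timezone.utc)
        raise ValueError(
            f"You can only restore to a specific point in time after {WINDOW_START} "
            f"({start:%m/%d/%Y %H:%M} UTC)"
        )

validate_oplog_timestamp(1700300000)   # OK; out-of-window values raise an error
```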
1 vote -
Provide `replSetName` vs. `snapshotId` mapping in `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call for each `snapshotId`
What is the problem that needs to be solved? Provide a `replSetName` vs. `snapshotId` mapping in the `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call for each `snapshotId`.
Why is it a problem? (the pain) You're running an automated disaster recovery (restore from Atlas to on-prem via Manual Restore) scenario and you need to know which `snapshotId` (and its corresponding `.tar.gz` file) is related to which Atlas Cluster Shard/Config Server Replica Set. E.g. `5e442aa4cf09a2352527536b` = `Cluster0-shard-0`, `5e442aa4cf09a23525275370` = `Cluster0-shard-1`, `5e442aa4cf09a23525275375` = `Cluster0-config-0`.
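To illustrate the requested addition, a hedged sketch that fetches the snapshot list and prints the mapping the automation needs; the `replSetName` field shown here does not exist today and is exactly what this idea asks Atlas to return alongside each `snapshotId` (keys and IDs are placeholders).

```python
# Hedged sketch: print the snapshotId -> replica set mapping (replSetName is the requested field).
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
AUTH = HTTPDigestAuth("public-key", "private-key")   # placeholders

snapshots = requests.get(
    f"{BASE}/groups/project-id/clusters/Cluster0/backup/snapshots",
    auth=AUTH,
).json()["results"]

for snap in snapshots:
    # Desired output, e.g. 5e442aa4cf09a2352527536b -> Cluster0-shard-0
    print(snap["id"], "->", snap.get("replSetName", "<not exposed today>"))
```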
3 votes -
Ability to stop the snapshot restore in the Atlas UI
This is a feature request for the ability to stop a snapshot restore in the Atlas UI.
1 vote -
Provide size (in bytes) in `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call for each `snapshotId`
What is the problem that needs to be solved? Provide the size (in bytes) in the `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call for each `snapshotId`.
Why is it a problem? (the pain) You're doing your own automation with manual restore, and you can't programmatically know the actual file size of the `.tar.gz` file you'll be downloading for each `snapshotId` listed in the `GET /groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/backup/snapshots` Public API call until you actually download the `.tar.gz` file for each `snapshotId`.
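A hedged sketch of the gap in practice: today the size is only discoverable after a download link exists, for example by issuing a HEAD request against the delivery URL and reading `Content-Length` (which the server may or may not provide; the URL below is a placeholder). With the requested change, each entry returned by the snapshots endpoint would carry this size up front.

```python
# Hedged sketch: workaround for learning the archive size before downloading it in full.
import requests

delivery_url = "https://restore.example.mongodb.com/snapshot-id.tar.gz"   # placeholder URL
head = requests.head(delivery_url, allow_redirects=True)
size_bytes = int(head.headers.get("Content-Length", 0))
print(f"Archive size: {size_bytes} bytes")
```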
2 votes