Steve
My feedback
-
1 vote
Steve shared this idea ·
-
2 votes
Steve shared this idea ·
-
2 votes
Steve supported this idea ·
-
1 vote
Steve shared this idea ·
-
2 votes
Steve shared this idea ·
-
1 vote
Steve shared this idea ·
-
9 votes
Steve supported this idea ·
-
1 vote · 0 comments · Data Federation and Data Lake » Infrastructure Options
Steve shared this idea ·
-
1 vote · 0 comments · Ops Tools » MongoDB CLI for Cloud Manager and Ops Manager
Steve shared this idea ·
-
2 votes
Steve commented
It is also important to have an API endpoint that implements Live Migration to move clusters between Atlas projects, for example when a cluster's limits have been approached or exceeded, or would be exceeded if one were to convert the cluster to a global cluster.
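To make the ask concrete, here is a minimal sketch of what calling such an endpoint could look like, assuming a hypothetical liveMigrations resource on the Atlas Admin API with digest authentication; the resource path, payload fields, and placeholder values are illustrative assumptions, not a documented API:

    import requests
    from requests.auth import HTTPDigestAuth

    # Hypothetical resource -- the path and payload fields below are
    # assumptions for illustration, not a documented Atlas endpoint.
    BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

    payload = {
        "sourceGroupId": "<source-project-id>",  # project holding the cluster today
        "targetGroupId": "<target-project-id>",  # project to migrate the cluster into
        "clusterName": "myCluster",
    }

    resp = requests.post(
        f"{BASE}/liveMigrations",  # hypothetical resource name
        json=payload,
        auth=HTTPDigestAuth("<public-key>", "<private-key>"),
    )
    resp.raise_for_status()
    print(resp.json())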
-
1 vote
Steve shared this idea ·
-
3 votes
Steve commented
Perhaps this table could be built using test results from TPC-C or YCSB?
In any case, the IOPS metric in Atlas is without context. One would need block size, workload (sequential vs. random), and IO depth to make any sense of it. Is this MongoDB doing 30K sequential writes (iodepth 2) while flushing the cache, or random reads issued by a client?
Likewise, the CPU and RAM values lack clear context AFAIK. It would take most folks time to dig up which cloud VM MongoDB is using.
Performance numbers gathered on well-specified VMs/storage using industry-standard benchmarks would help a lot.
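As an illustration of the missing context, here is a sketch of measuring IOPS with fio, where the block size, workload pattern, and IO depth are all stated explicitly; the file path, sizes, and the use of the libaio engine (Linux-only) are assumptions:

    import subprocess

    # Measure random-read IOPS at a stated block size and IO depth so the
    # resulting number has context. Paths and sizes are placeholders.
    cmd = [
        "fio",
        "--name=randread-4k",
        "--filename=/data/fio.test",  # placeholder test file
        "--rw=randread",              # workload: random reads (vs. sequential)
        "--bs=4k",                    # block size
        "--iodepth=16",               # IO depth
        "--ioengine=libaio",          # async engine so iodepth takes effect (Linux)
        "--direct=1",                 # bypass the page cache
        "--size=1G",
        "--runtime=60",
        "--time_based",
        "--output-format=json",
    ]
    subprocess.run(cmd, check=True)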
-
154 votes
Steve commented
This would be a truly time-saving feature. Otherwise, each customer has to roll their own.
-
35 votes
Steve commented
Generally, data errors or loss, like inadvertently dropping a minor collection, should not require restoring the complete backup over the existing database. Doing so could cause an even more adverse problem than rebuilding the missing document or collection. Instead, we should be able to read the snapshot and cherry-pick the desired data from either a replica set or a sharded cluster backup.
We experienced a similar problem when attempting to restore a backup from a cluster in another region to a cluster in the GCP "us-east4" region. All we received was 'restore failed', which is entirely unhelpful. Worse yet, that message disappeared after a day or so.
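A rough sketch of the cherry-pick workflow being requested, assuming the snapshot has first been restored to a temporary cluster; the URIs, database, and collection names are placeholders:

    from pymongo import MongoClient

    # Assumed workflow: restore the snapshot to a throwaway cluster, then copy
    # back only the dropped collection. URIs and names are placeholders.
    snapshot = MongoClient("mongodb+srv://user:pass@restored-snapshot.example.net")
    live = MongoClient("mongodb+srv://user:pass@live-cluster.example.net")

    src = snapshot["appdb"]["minor_collection"]
    dst = live["appdb"]["minor_collection"]

    # Copy documents in batches, preserving _id values from the snapshot.
    batch = []
    for doc in src.find():
        batch.append(doc)
        if len(batch) == 1000:
            dst.insert_many(batch)
            batch = []
    if batch:
        dst.insert_many(batch)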