Steven
My feedback
-
3 votes
Steven commented
Perhaps this table could be built using test results from TPC-C or YCSB?
In any case, the IOPS metric in Atlas lacks context. One would need the block size, the workload (sequential vs. random), and the IO depth to make any sense of it. Is a reported 30K write IOPS MongoDB flushing its cache with sequential writes at iodepth 2, or random reads issued by clients?
Likewise, the CPU and RAM values lack clear context AFAIK. Most folks would have to spend time digging to find out which cloud VM MDB is using.
Performance numbers measured on well-specified VMs and storage using industry-standard benchmarks would help a lot.
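For instance, a fully specified workload reads like a fio job file. This is only a minimal sketch; the device path and parameter values are illustrative assumptions, not anything Atlas publishes:

    # Hypothetical fio job: random 4 KiB reads at queue depth 32,
    # the kind of fully specified workload an IOPS figure needs.
    [global]
    ioengine=libaio        ; asynchronous IO on Linux
    direct=1               ; bypass the page cache
    time_based
    runtime=60             ; seconds

    [randread-4k-qd32]
    rw=randread            ; random reads (sequential would be rw=read)
    bs=4k                  ; block size
    iodepth=32             ; IO depth
    filename=/dev/nvme0n1  ; illustrative device path

With every parameter pinned down like this, "30K IOPS" becomes a meaningful, reproducible number.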
-
59 votes
Steven commented
Generally, data errors or loss, such as inadvertently dropping a minor collection, should not require restoring a complete backup over the existing database. Doing so could cause worse problems than the missing documents or collection did. Instead, we should be able to read the snapshot and cherry-pick the desired data, from either a replica-set or a sharded-cluster backup.
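Something like this sketch, using mongodump/mongorestore against a snapshot restored to a temporary cluster (the host names and the shop.orders namespace are made up for illustration):

    # Dump only the dropped collection from the restored snapshot
    mongodump --uri "mongodb+srv://user:pass@snapshot-restore.example.net" \
              --db shop --collection orders --out /tmp/cherrypick

    # Put just that namespace back into the live cluster
    mongorestore --uri "mongodb+srv://user:pass@live-cluster.example.net" \
                 --nsInclude "shop.orders" /tmp/cherrypick

The point is to be able to do that read-and-pick step directly against the snapshot, rather than restoring anything over the live database.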
It is also important to have an API endpoint that implements Live Migration to move clusters between Atlas projects, for example when a project's limits have been reached or exceeded, or would be exceeded if a cluster were converted to a Global Cluster.
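Since the Atlas Admin API uses HTTP digest authentication, such an endpoint might look like the following. This is purely a hypothetical shape; the path and body fields are assumptions, not an existing API:

    # Hypothetical endpoint; not part of the current Atlas Admin API
    curl --user "{PUBLIC_KEY}:{PRIVATE_KEY}" --digest \
         --header "Content-Type: application/json" \
         --request POST \
         "https://cloud.mongodb.com/api/atlas/v1.0/groups/{sourceProjectId}/clusters/{clusterName}/liveMigrateTo" \
         --data '{ "targetProjectId": "{targetProjectId}" }'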