Add a "Limit" to Delete and Bulk Delete operations
Deleting tens of millions of documents can severely impact cluster performance, even with Bulk Delete. A "Limit" must be added to Delete and Bulk Delete so we can cap the number of operations and make sure we do not kill the clusters' performance.
- For the delete, this would make sure we only delete n number of documents.
- For the Bulk Delete, this would also make sure we only delete n number of documents, or it could instead limit the number of batches/groups of documents to be deleted.
Right now, the only solution is a hack: query the documents with a Limit and a projection to fetch their IDs, then delete only those. This forces us to run large queries/projections followed by large delete operations, which is inefficient. It is, after all, just a temporary workaround we have to use until MongoDB supports a proper solution: a Limit on deletes.
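The two-step workaround above can be sketched as follows. This is a minimal stand-in using an in-memory list of dicts rather than a live MongoDB collection, and the function names (`find_ids_with_limit`, `delete_by_ids`) are illustrative, not a real driver API; with pymongo the equivalent calls would be `find(filter, {"_id": 1}).limit(n)` followed by `delete_many({"_id": {"$in": ids}})`.

```python
# Sketch of the "query IDs with a limit, then delete those IDs" hack.
# The collection is simulated as an in-memory list of dicts.

def find_ids_with_limit(collection, predicate, limit):
    """Projection step: return at most `limit` _ids of matching documents."""
    ids = []
    for doc in collection:
        if predicate(doc):
            ids.append(doc["_id"])
            if len(ids) == limit:
                break
    return ids

def delete_by_ids(collection, ids):
    """Delete step: remove documents whose _id is in `ids`."""
    id_set = set(ids)
    kept = [doc for doc in collection if doc["_id"] not in id_set]
    deleted = len(collection) - len(kept)
    return kept, deleted

# Example: delete at most 2 of the documents marked "stale".
docs = [{"_id": i, "status": "stale" if i % 2 == 0 else "fresh"}
        for i in range(6)]
ids = find_ids_with_limit(docs, lambda d: d["status"] == "stale", limit=2)
docs, deleted = delete_by_ids(docs, ids)
print(deleted)    # 2
print(len(docs))  # 4
```

Note the cost the post complains about: the IDs of every document to delete must travel from the server to the client and back again, which a native Limit on the delete itself would avoid.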
Albert commented
Thinking of multiple threads: I would say throttle, with the idea of using a % of IOPS, or queueing, or spreading deletes over a longer period so as not to overload the system.
Eric commented
Please throttle remove().
Please allow the caller to throttle or limit "remove()". The remove syntax takes a filter query much like find(), but there is no limit. So if a caller wants to delete all documents older than date X, but only delete 1MM of them at a time, there is no good way to do that. It would be really nice to allow finer control of remove().
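Until a native limit exists, the behaviour Eric describes (delete in capped batches) combined with Albert's suggestion (pause between batches to spread the load) can only be approximated client-side. A minimal sketch, again against an in-memory stand-in collection with illustrative names rather than a real driver API:

```python
import time

def delete_in_batches(collection, predicate, batch_size, pause_seconds=0.0):
    """Repeatedly delete at most `batch_size` matching documents until
    none match. An optional pause between batches acts as a crude
    throttle, spreading the deletes over a longer period."""
    total_deleted = 0
    while True:
        ids = [d["_id"] for d in collection if predicate(d)][:batch_size]
        if not ids:
            break
        id_set = set(ids)
        collection[:] = [d for d in collection if d["_id"] not in id_set]
        total_deleted += len(ids)
        if pause_seconds:
            time.sleep(pause_seconds)  # throttle between batches
    return total_deleted

docs = [{"_id": i, "age_days": i * 10} for i in range(10)]
# Delete everything older than 30 days, at most 3 documents per batch.
n = delete_in_batches(docs, lambda d: d["age_days"] > 30, batch_size=3)
print(n)          # 6
print(len(docs))  # 4
```

With a real cluster each iteration would be a find-with-limit plus a delete_many, so the same round-trip overhead applies to every batch; this is exactly why the request asks for the limit to live server-side.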