Eric
My feedback
-
1 vote
Eric shared this idea
-
2 votes
-
23 votes
Eric supported this idea
-
7 votes
Eric supported this idea
-
22 votes
Work on this feature has begun. Let us know if you have any questions.
Eric supported this idea
-
199 votes
Eric supported this idea
-
22 votes
Eric commented
In several collections, we have tens of millions of documents, but only 1-5 million are public/searchable in any given collection. It seems tragic to index all 50 million documents when only 5 million are public.
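Until something like filtered indexing exists, one rough workaround is to materialize the public subset into a side collection and attach the search index to that. A sketch only: the collection names "docs" and "docsPublic" and the isPublic flag are assumptions, and $merge requires MongoDB 4.2+.

// Sketch: copy only the public documents into a side collection, then
// build the search index on "docsPublic" instead of "docs".
// "docs", "docsPublic", and the isPublic flag are assumed names.
db.docs.aggregate([
  { $match: { isPublic: true } },
  { $merge: {
      into: "docsPublic",
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert"
  } }
]);

Re-running the pipeline on a schedule keeps the side collection current, at the cost of duplicated storage.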
Eric supported this idea
-
6 votes
Eric commented
Please throttle remove().
Please allow the caller to throttle or limit remove(). The syntax for remove() includes a filter query, much like find(), but there is no limit. So if a caller wants to delete all documents older than date X, but only delete 1 million of them at a time, there is no good way to do that. It would be really nice to allow finer control over remove().
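In the meantime, a minimal mongo shell sketch of the batched-delete workaround this implies. The collection name "events", the createdAt field, the cutoff date, and the batch size are all assumptions:

// Sketch: emulate a throttled remove() by deleting in capped batches.
// "events" and "createdAt" are assumed names; cutoff is the "date X" above.
const cutoff = ISODate("2024-01-01");
const batchSize = 1000;

while (true) {
  // Select one batch of _ids that match the filter.
  const ids = db.events
    .find({ createdAt: { $lt: cutoff } }, { _id: 1 })
    .limit(batchSize)
    .toArray()
    .map(doc => doc._id);

  if (ids.length === 0) break; // nothing left to delete

  db.events.deleteMany({ _id: { $in: ids } });
  sleep(500); // throttle: pause half a second between batches
}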
Eric supported this idea
-
12 votes
Eric commented
Just commenting to bump this up. This is a trivial add that would mean so much to me. :)
Eric shared this idea
-
8 votes
Eric shared this idea
-
98 votes
Eric commented
Our workload is highly predictable. We serve K-12 students: for 8 hours, M-F, we have very heavy loads; evenings, weekends, holidays, and summers we have nothing.
I'd like to +1 the time-based scaling... but only as a substitute for better granularity on perf metrics.
It would be better to trigger scale-up/scale-down on IOPS or ops. Ops is the better metric because it does not change when the scale changes (whereas read IOPS can drop precipitously after a scale-up).
For instance, scale up when ops hit 500, 1000, or 2000. To scale down, you could specify these thresholds as pairs (see the sketch after this list):
500, 100 => scale up when ops hit 500, down when they fall back to 100.
1000, 500 => scale up when ops hit 1000, down when back to 500.
2000, 1000 => scale up when ops hit 2000, down when back to 1000.
Or just take single points (100, 500, 1000, 2000) and infer each scale-down threshold from the previous scale-up point.
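A tiny JavaScript sketch of how those pairs would behave; the tier table and the evaluate() helper are purely illustrative, not an existing Atlas API:

// Sketch of the threshold-pair idea: each tier carries an up trigger and a
// down trigger, so scaling has hysteresis and does not flap.
// The tiers, opsNow values, and evaluate() helper are all hypothetical.
const tiers = [
  { up: 500,  down: 100 },
  { up: 1000, down: 500 },
  { up: 2000, down: 1000 },
];

let level = 0; // current tier index; 0 = base scale

function evaluate(opsNow) {
  // Scale up while the next tier's up trigger is exceeded.
  if (level < tiers.length && opsNow >= tiers[level].up) {
    level += 1;
  // Scale down once ops fall back below the current tier's down trigger.
  } else if (level > 0 && opsNow <= tiers[level - 1].down) {
    level -= 1;
  }
  return level;
}

// Example: ops climb to 600 (scale up once), then fall to 90 (scale down).
console.log(evaluate(600)); // 1
console.log(evaluate(90));  // 0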
-
10 votes
Eric shared this idea
-
42 votes
Eric commented
Please... just add the collection name and query. You can get fancy after that.
Eric supported this idea
Can't you already do this by turning dynamic mapping off?
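For reference, an Atlas Search index definition with dynamic mapping turned off indexes only the fields listed explicitly. A minimal example, where the field names "title" and "body" are assumptions:

// Atlas Search index definition with dynamic mapping turned off:
// only the fields listed under "fields" are indexed.
// The field names "title" and "body" are assumed.
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": { "type": "string" },
      "body": { "type": "string" }
    }
  }
}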