Eric
My feedback
10 results found

- 1 vote
Eric shared this idea

- 2 votes

- 362 votes
Now in Public Preview: Introducing a Local Experience for Atlas, Atlas Search, and Atlas Vector Search with the Atlas CLI
Please feel free to reply with feedback or leave comments. Happy to connect directly and include your team in our Early Access Program to provide additional help.
Eric supported this idea

- 60 votes
Eric commented: In several collections, we have tens of millions of documents, but only 1-5 million are public/searchable in any given collection. It seems tragic to index all 50 million documents when only 5 million are public.
Eric supported this idea

- 10 votes
Eric commented: Please throttle remove().
Please allow the caller to throttle or limit remove(). The remove() syntax takes a filter query much like find(), but there is NO limit option. So if a caller wants to delete all documents older than date X, but only 1 million at a time, there is NO good way to do that short of batching by hand, as sketched below. It would be really nice to allow finer control of remove().
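Since remove() accepts no limit, the workaround today is to page through matching _ids and delete them in capped batches. A minimal mongosh sketch, assuming a hypothetical "events" collection with a "createdAt" field and a 1,000-document batch size:

    // Delete documents older than the cutoff, at most 1,000 per pass,
    // pausing between passes instead of issuing one unbounded remove().
    const cutoff = new Date("2020-01-01");
    let deleted;
    do {
      // Fetch one batch of matching _ids, capped by limit().
      const ids = db.events
        .find({ createdAt: { $lt: cutoff } }, { _id: 1 })
        .limit(1000)
        .toArray()
        .map(doc => doc._id);
      // Remove only that batch; loop until nothing matches.
      deleted = db.events.deleteMany({ _id: { $in: ids } }).deletedCount;
      sleep(500); // mongosh built-in: throttle between batches
    } while (deleted > 0);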
Eric supported this idea

- 15 votes
Eric commented: Just commenting to bump this up. This is a trivial add that would mean so much to me. :)
Eric shared this idea

- 9 votes
Eric shared this idea

- 151 votes
Eric commented: Our workload is highly predictable. We serve K-12 students. For 8 hours, M-F, we have very heavy loads. Evenings, weekends, holidays, and summers we have nothing.
I'd like to +1 the time-based scaling... but only as a substitute for better granularity on performance metrics.
It would be better to trigger scale-up/scale-down on IOPS or ops. Ops is the better metric because it does not change when the scale changes (whereas read IOPS can drop precipitously after a scale-up).
For instance, scale up when ops hit 500, 1000, or 2000. To scale down, you could specify these thresholds as pairs:
500, 100 => scale up when ops hit 500, down when they fall back to 100.
1000, 500 => scale up when ops hit 1000, down when back to 500.
2000, 1000 => scale up when ops hit 2000, down when back to 1000.
Or just take single points (100, 500, 1000, 2000) and infer the scale-down threshold from the previous scale-up point. A sketch of the pair semantics follows.
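To make the proposal concrete, here is a hypothetical sketch of the pair semantics (this is not an Atlas API; all names and numbers are placeholders): each (up, down) pair adds hysteresis, so a cluster scales up at the high watermark and only scales back down at the matching low one.

    // Hypothetical threshold pairs; tier 0 is the base cluster size.
    const tiers = [
      { up: 500,  down: 100  },
      { up: 1000, down: 500  },
      { up: 2000, down: 1000 },
    ];

    // Given current ops/sec and the current tier, return the next tier.
    function nextTier(ops, tier) {
      if (tier < tiers.length && ops >= tiers[tier].up) return tier + 1; // scale up
      if (tier > 0 && ops <= tiers[tier - 1].down) return tier - 1;      // scale down
      return tier;                                                       // stay put
    }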

- 11 votes
Eric shared this idea

- 61 votes
Eric commented: Please... just add the collection name and query. You can get fancy after that.
Eric supported this idea
Can't you already do this by turning dynamic mapping off?
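For context, turning dynamic mapping off in an Atlas Search index means only the fields listed in the definition are indexed. A minimal sketch, assuming a hypothetical "articles" collection and field names (createSearchIndex is available in recent mongosh versions connected to Atlas):

    // Create a search index that only indexes the explicitly listed fields.
    db.articles.createSearchIndex("staticExample", {
      mappings: {
        dynamic: false, // do not index every field automatically
        fields: {
          title: { type: "string" },
          isPublic: { type: "boolean" },
        },
      },
    });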