Eric

My feedback

9 results found

  1. 1 vote
    Eric shared this idea
  2. 2 votes
    1 comment  ·  Atlas Search
    Eric commented:

    Can't you already do this by turning dynamic mapping off?
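    For context on the workaround the comment points at: in an Atlas Search index definition, setting `mappings.dynamic` to `false` means only explicitly mapped fields are indexed. A minimal sketch of such an index definition (the `title` field is illustrative, not from the original idea):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": { "type": "string" }
    }
  }
}
```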

  3. 369 votes
    Eric supported this idea
  4. 10 votes
    2 comments  ·  Database
    Eric commented:

    Please throttle remove().

    Please allow the caller to throttle or limit remove(). The syntax of remove() includes a filter query, much like find(), but there is NO limit. So if a caller wants to delete all documents older than date X, but only delete 1MM of them at a time, there is no good way to do that. It would be really nice to allow finer control of remove().
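    The usual workaround for the missing limit is to select at most N `_id`s, delete just those, and repeat. A minimal sketch of that pattern, assuming pymongo-style calls; the `FakeCollection` below is a tiny in-memory stand-in for demonstration, not a real driver class:

```python
def delete_in_batches(coll, query, batch_size=1000):
    """Delete every document matching `query`, at most `batch_size`
    per round trip, instead of one unbounded remove()."""
    total = 0
    while True:
        # Select only _ids, capped at batch_size.
        ids = [d["_id"] for d in coll.find(query, {"_id": 1}).limit(batch_size)]
        if not ids:
            break
        total += coll.delete_many({"_id": {"$in": ids}}).deleted_count
    return total


class _Cursor:
    """Minimal cursor supporting limit() and iteration (demo only)."""
    def __init__(self, docs):
        self._docs = docs
    def limit(self, n):
        return _Cursor(self._docs[:n])
    def __iter__(self):
        return iter(self._docs)


class FakeCollection:
    """In-memory stand-in supporting just the calls used above.
    It only understands the {"ts": {"$lt": x}} filter shape."""
    def __init__(self, docs):
        self.docs = list(docs)
    def find(self, query, projection=None):
        cutoff = query["ts"]["$lt"]
        return _Cursor([d for d in self.docs if d["ts"] < cutoff])
    def delete_many(self, query):
        ids = set(query["_id"]["$in"])
        kept = [d for d in self.docs if d["_id"] not in ids]
        deleted = len(self.docs) - len(kept)
        self.docs = kept
        return type("Result", (), {"deleted_count": deleted})()
```

    With a real pymongo Collection the same helper would be called as `delete_in_batches(db.events, {"ts": {"$lt": cutoff}}, 1000)` (collection and field names hypothetical). This is exactly the "delete 1MM at a time" control the comment asks for, just pushed onto the caller.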

    Eric supported this idea
  5. 15 votes
    Eric commented:

    Just commenting to bump this up. This is a trivial addition that would mean so much to me. :)

    Eric shared this idea
  6. 10 votes
    Eric shared this idea
  7. 156 votes
    18 comments  ·  Atlas » Autoscaling
    Eric commented:

    Our workload is highly predictable. We serve K-12 students. For 8 hours, M-F we have very heavy loads. Evenings, weekends, holidays and summers we have nothing.

    I'd like to +1 the time-based scaling... but only as a substitute for better granularity on perf metrics.

    It would be better to trigger scale-up/scale-down on IOPS or ops. Ops is the better metric because it does not change when the scale changes (whereas read IOPS can drop precipitously after a scale-up).

    For instance, scale up when OPS hit 500, 1000, 2000. To scale down, you could specify these metrics as pairs.

    500, 100   => scale up when ops hit 500, down when they fall back to 100.
    1000, 500  => scale up at 1000, down when back to 500.
    2000, 1000 => scale up at 2000, down when back to 1000.

    Or just take single points, e.g. 100, 500, 1000, 2000, and infer the scale-down threshold from the previous scale-up point.
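    The paired thresholds above are a form of hysteresis: scale up when ops crosses a boundary's up-threshold, scale back down only when ops falls to that boundary's paired down-threshold. A minimal sketch using the comment's hypothetical numbers (nothing here is an actual Atlas setting):

```python
# (scale_up_at, scale_down_at) for each tier boundary; pair i governs
# the move between tier i and tier i + 1. Numbers are the comment's
# hypothetical examples.
THRESHOLDS = [(500, 100), (1000, 500), (2000, 1000)]

def next_tier(current_tier, ops):
    """Apply hysteresis: climb while ops exceeds the next up-threshold,
    drop only once ops falls back to the paired down-threshold."""
    tier = current_tier
    while tier < len(THRESHOLDS) and ops >= THRESHOLDS[tier][0]:
        tier += 1
    while tier > 0 and ops <= THRESHOLDS[tier - 1][1]:
        tier -= 1
    return tier
```

    The gap between each pair's up and down points is what prevents flapping: at tier 1, ops of 300 triggers neither a scale-up (below 1000) nor a scale-down (above 100). The single-points variant the comment ends with would just set pair i's down-threshold to pair i-1's up-threshold.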

  8. 11 votes
    Eric shared this idea
  9. 62 votes
    8 comments  ·  Atlas » Alerts
    Eric commented:

    Please... just add the collection name and query. You can get fancy after that.

    Eric supported this idea
