Eric

My feedback

10 results found

  1. 1 vote

    Eric shared this idea  · 
  2. 2 votes

    1 comment  ·  Atlas Search
    Eric commented  · 

    Can't you already do this by turning dynamic mapping off?
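
    For reference, a minimal sketch of what dynamic mapping off looks like, assuming pymongo 4.5+ and hypothetical database, collection, and field names; only the explicitly listed fields get indexed:

        from pymongo import MongoClient
        from pymongo.operations import SearchIndexModel

        client = MongoClient("mongodb+srv://...")   # placeholder connection string
        coll = client["mydb"]["articles"]           # hypothetical collection

        # With "dynamic": False, Atlas Search indexes only the fields listed below.
        coll.create_search_index(
            SearchIndexModel(
                name="static_mapping_example",
                definition={
                    "mappings": {
                        "dynamic": False,
                        "fields": {
                            "title": {"type": "string"},
                            "body": {"type": "string"},
                        },
                    }
                },
            )
        )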

  3. 362 votes

    Eric supported this idea  · 
  4. 60 votes

    started  ·  12 comments  ·  Atlas Search
    Eric commented  · 

    In several collections, we have tens of millions of documents, but only 1-5 million are public/searchable in any given collection. It seems tragic to index all 50 million documents when only 5 million are public.

    Eric supported this idea  · 
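
    The comment above asks for this on the Atlas Search side; regular MongoDB indexes already support it via partialFilterExpression. A minimal sketch of that existing option, with an assumed "public" flag and hypothetical names:

        from pymongo import MongoClient, ASCENDING

        client = MongoClient("mongodb+srv://...")   # placeholder connection string
        coll = client["mydb"]["documents"]          # hypothetical collection

        # Only documents matching the filter are included in the index,
        # e.g. the ~5M public documents instead of all ~50M.
        coll.create_index(
            [("title", ASCENDING)],
            name="title_public_only",
            partialFilterExpression={"public": True},
        )
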
  5. 10 votes

    2 comments  ·  Database
    Eric commented  · 

    Please throttle remove().

    Please allow the caller to throttle or limit remove(). The syntax for remove() includes a filter query much like find(), but there is no limit. So if a caller wants to delete all documents older than date X, but only delete 1MM of them at a time, there is no good way to do that. It would be really nice to allow finer control of remove().

    Eric supported this idea  · 
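
    Until remove()/deleteMany() supports a limit or built-in throttling, a common workaround is to delete in capped batches of _ids. A rough sketch, assuming a hypothetical "events" collection with a createdAt field:

        import time
        from datetime import datetime, timezone
        from pymongo import MongoClient

        client = MongoClient("mongodb+srv://...")   # placeholder connection string
        coll = client["mydb"]["events"]             # hypothetical collection

        cutoff = datetime(2020, 1, 1, tzinfo=timezone.utc)   # delete everything older than this
        BATCH = 10_000   # keep batches modest; a huge $in list can approach the 16MB BSON limit

        while True:
            # Grab only the _ids of the next batch of matching documents.
            ids = [d["_id"] for d in coll.find({"createdAt": {"$lt": cutoff}}, {"_id": 1}).limit(BATCH)]
            if not ids:
                break
            coll.delete_many({"_id": {"$in": ids}})
            time.sleep(1)   # crude throttle so the deletes don't monopolize the cluster
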
  6. 15 votes

    Eric commented  · 

    Just commenting to bump this up. This is a trivial add that would mean so much to me. :)

    Eric shared this idea  · 
  7. 9 votes

    Eric shared this idea  · 
  8. 151 votes

    18 comments  ·  Atlas » Autoscaling
    Eric commented  · 

    Our workload is highly predictable. We serve K-12 students. For 8 hours, M-F, we have very heavy loads. Evenings, weekends, holidays, and summers we have nothing.

    I'd like to +1 the time-based scaling... but only as a substitute for better granularity on perf metrics.

    It would be better to trigger scale up/scale down on IOPS or Ops. Ops is the better metric because it does not change when the scale changes (whereas read IOPS can drop precipitously after a scale up).

    For instance, scale up when Ops hits 500, 1000, 2000. To scale down, you could specify these thresholds as pairs:

    500, 100 => scale up when Ops hits 500, down when it falls back to 100.
    1000, 500 => scale up when Ops hits 1000, down when it falls back to 500.
    2000, 1000 => scale up when Ops hits 2000, down when it falls back to 1000.

    Or just take single points... 100, 500, 1000, 2000... and infer the scale-down threshold from the previous up point.
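
    To make the pairing idea concrete, here is a small illustrative sketch of the hysteresis described above; the tier count and thresholds are just the example numbers from the comment, not a real Atlas API:

        # (scale_up_at, scale_down_at) per tier boundary, per the example above.
        PAIRS = [
            (500, 100),
            (1000, 500),
            (2000, 1000),
        ]

        def next_tier(current_tier: int, ops_per_sec: float) -> int:
            """Return the tier to move to, given the current tier and observed ops/sec."""
            # Scale up while the current boundary's up-threshold is exceeded.
            while current_tier < len(PAIRS) and ops_per_sec >= PAIRS[current_tier][0]:
                current_tier += 1
            # Scale down while load has fallen below the previous boundary's down-threshold.
            while current_tier > 0 and ops_per_sec <= PAIRS[current_tier - 1][1]:
                current_tier -= 1
            return current_tier

        # Example: 1,200 ops/sec takes tier 0 up to tier 2; dropping to 400 ops/sec
        # later brings tier 2 back down to tier 1.
        assert next_tier(0, 1200) == 2
        assert next_tier(2, 400) == 1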

  9. 11 votes

    Eric shared this idea  · 
  10. 61 votes

    8 comments  ·  Atlas » Alerts
    Eric commented  · 

    Please... just add the collection name and query. You can get fancy after that.

    Eric supported this idea  · 
