
Database

To report bugs, please use our SERVER JIRA project.


43 results found

  1. support $lookup for update aggregation

    We frequently denormalise either full documents or subsets to different documents in order to speed up reading, create indexes, or paginate/sort on fields.

    Consider a user collection and a task collection: if a task can be assigned to a user, it makes sense to embed the user document on the task it is assigned to. But an update to a user now requires updating the user both in the user collection and in every task in the tasks collection that embeds that user.

    This can be achieved, but it does introduce some complexity; however, the introduction of updates using aggregation pipelines…
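    As a sketch of the workaround this idea targets: today the denormalised copy has to be kept in sync by hand with two separate writes. The snippet below uses pymongo; the "users"/"tasks" collection names, the embedded "assignee" field, and the changed "email" field are assumptions for illustration, not details from the original post.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["app"]

        user_id = 42                          # hypothetical user identifier
        new_email = "new.address@example.com"

        # Step 1: update the canonical user document.
        db.users.update_one({"_id": user_id}, {"$set": {"email": new_email}})

        # Step 2: manually propagate the change to every denormalised copy
        # embedded in tasks. A $lookup-capable update pipeline would let the
        # server resolve the current user document itself, removing this
        # second write and the risk of the copies drifting apart.
        db.tasks.update_many(
            {"assignee._id": user_id},
            {"$set": {"assignee.email": new_email}},
        )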

    2 votes

  2. geoContain

    Dear all,
    According to the attached image, I have some documents (in blue, with ids from 1 to 5) that each have a geographic extent, and a search area (in yellow).
    I need to find all documents where the search area is completely inside the document's geometry.
    In other words, I need to find all documents whose geometry completely covers the given search area.
    In my sample, the geo query should return the document with id 1.
    This kind of query has the opposite logic of $geoWithin.

    Could you provide a $geoContain functionality in the near future?
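    A sketch of the contrast in pymongo: the first query uses the existing $geoWithin (documents lying inside the search area); the second shows how the requested, currently non-existent $geoContain operator might be spelled. The "areas" collection, the "geometry" field, and the polygon coordinates are assumptions for illustration.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["geo"]

        # GeoJSON polygon for the yellow search area (coordinates are made up).
        search_area = {
            "type": "Polygon",
            "coordinates": [[
                [9.0, 45.0], [9.2, 45.0], [9.2, 45.2], [9.0, 45.2], [9.0, 45.0],
            ]],
        }

        # Supported today: documents whose geometry lies entirely INSIDE search_area.
        inside = db.areas.find({"geometry": {"$geoWithin": {"$geometry": search_area}}})

        # Requested (hypothetical syntax): documents whose geometry entirely
        # COVERS search_area -- the inverse relation, which would return id 1
        # in the example above.
        covering = db.areas.find({"geometry": {"$geoContain": {"$geometry": search_area}}})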

    10 votes

  3. Allow configuration of 100mb memory limit per aggregation pipeline stage

    In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100 MB in-memory limit of each stage of an aggregation pipeline. The responses centered around two points:

    1. If too much memory is used per aggregation pipeline stage, it will reduce performance for the overall MongoDB database, impacting other queries negatively.
    2. You can set allowDiskUse: true and fall back to performing these pipeline stages on disk when they exceed 100 MB (see the sketch below).

    I believe this subject needs to be revisited for the following reasons:

    1. “Too much memory” is very subjective, and the 100 MB…
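    For reference, the escape hatch from point 2 above, sketched with pymongo; the "events" collection and the pipeline contents are assumptions for illustration.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["analytics"]

        pipeline = [
            {"$group": {"_id": "$userId", "total": {"$sum": "$amount"}}},
            {"$sort": {"total": -1}},  # blocking group/sort stages commonly hit the 100 MB cap
        ]

        # allowDiskUse=True lets stages that exceed the per-stage in-memory limit
        # spill to temporary files instead of failing, at the cost of disk I/O.
        # The limit itself is not configurable, which is what this idea asks for.
        for doc in db.events.aggregate(pipeline, allowDiskUse=True):
            print(doc)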
    26 votes
