Database

To report bugs, please use our SERVER JIRA project.

  1. geoIntersects feature for 2D index

    At Airbus, we would like to use the $geoIntersects feature with a 2d index.
    We don't need a 2dsphere index, but we are forced to use one because $geoIntersects is only available with it.
    As a consequence, we have to manage spherical margins, which introduces a needless and hard-to-maintain workaround in our product.
    Could you please plan to implement $geoIntersects, and not only $geoWithin, for the 2d index?
    Thank you

    3 votes

    1 comment
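
    For illustration, a minimal sketch in Python/PyMongo (collection and field names are hypothetical) of the situation described above: $geoWithin works against a 2d index, while $geoIntersects currently requires a 2dsphere index and GeoJSON geometries.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed local deployment
    zones = client.gis.zones                           # hypothetical collection

    # Planar queries: a 2d index supports $geoWithin (and $near), but not $geoIntersects.
    zones.create_index([("loc", "2d")])
    flat_hits = zones.find({"loc": {"$geoWithin": {"$box": [[0, 0], [10, 10]]}}})

    # $geoIntersects is only available with a 2dsphere index and GeoJSON geometries,
    # which imposes spherical semantics even on purely planar data.
    zones.create_index([("geometry", "2dsphere")])
    polygon = {
        "type": "Polygon",
        "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 10], [0, 0]]],
    }
    sphere_hits = zones.find({"geometry": {"$geoIntersects": {"$geometry": polygon}}})
    ```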
  2. Automatic Indexes

    MongoDB can already suggest useful indexes. Why not take the next step and allow MongoDB to autonomously create and manage indexes? Ideally it would automatically maintain the indexes over time as the structure and usage of the database change.

    3 votes

    1 comment  ·  Indexes
  3. Elevate hidden replica to a voting member when quorum is lost for a Replica Set

    The MongoDB engine should allow a hidden member of a replica set to be automatically elevated to a voting member. When quorum is lost and cannot be met, the hidden replica is automatically reconfigured as a voting member to restore quorum.
    When the failed voting member comes back online and catches up, the hidden replica is demoted back to a normal hidden member.

    3 votes

    1 comment  ·  Replication
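
    For reference, a rough sketch in Python/PyMongo of the manual step this idea would automate (host name and member layout are assumptions); note that once quorum is already lost, only a force reconfig is possible today.

    ```python
    from pymongo import MongoClient

    # Connect directly to a surviving member; with quorum lost there is no primary,
    # so the reconfig has to be forced.
    client = MongoClient("mongodb://surviving-member:27017", directConnection=True)

    cfg = client.admin.command("replSetGetConfig")["config"]
    for member in cfg["members"]:
        if member.get("hidden") and member.get("votes", 1) == 0:
            member["votes"] = 1          # let the hidden member count toward quorum
    cfg["version"] += 1

    client.admin.command("replSetReconfig", cfg, force=True)
    ```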
  4. Get metadata about source client connection that submitted a given change

    Currently, with change streams, it is impossible to know who or what connection initiated a change.

    It would be useful to be able to receive some metadata about the source client connection that initiated a change.

    My particular use case is the following:

    I have an app that connects to Atlas (the source client connection).
    I can subscribe to change streams and then execute some logic when it applies.

    That app can scale to multiple instances.
    Each instance subscribes to the change streams.
    But I only want each instance to execute the logic that applies to only…

    2 votes

    0 comments  ·  Change Streams
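
    A short sketch in Python/PyMongo of the gap (URI and collection are placeholders): the change event exposes the operation, namespace, document key and full document, but nothing identifying the connection or application that issued the write.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://cluster.example.mongodb.net")  # placeholder Atlas URI
    tasks = client.app.tasks                                           # hypothetical collection

    with tasks.watch(full_document="updateLookup") as stream:
        for change in stream:
            # Available today: operationType, ns, documentKey, fullDocument, clusterTime, ...
            # Not available: any metadata about the source client connection,
            # which is what this idea requests.
            print(change["operationType"], change["documentKey"])
    ```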
  5. CSFLE - Enable aggregation stages for non-encrypted collections

    When any collection is encrypted with CSFLE, aggregation is not allowed on non-encrypted collections.

    The official recommendation is to maintain two clients: one for CSFLE and one for when aggregation is needed. How is this an acceptable solution?

    2 votes

    0 comments  ·  Encryption
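
    Roughly, the recommended two-client workaround looks like this in Python/PyMongo (key material, URIs and namespaces are placeholders for the sketch):

    ```python
    import os
    from pymongo import MongoClient
    from pymongo.encryption_options import AutoEncryptionOpts

    kms_providers = {"local": {"key": os.urandom(96)}}   # placeholder KMS material

    # Client 1: automatic CSFLE for the encrypted collections.
    csfle_client = MongoClient(
        "mongodb://localhost:27017",
        auto_encryption_opts=AutoEncryptionOpts(
            kms_providers=kms_providers,
            key_vault_namespace="encryption.__keyVault",
        ),
    )

    # Client 2: a plain client, kept around solely to aggregate on non-encrypted collections.
    plain_client = MongoClient("mongodb://localhost:27017")
    report = list(
        plain_client.app.orders.aggregate([{"$group": {"_id": "$status", "n": {"$sum": 1}}}])
    )
    ```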
  6. Preserve field order in $merge

    Filed on behalf of https://jira.mongodb.org/browse/SERVER-63853:

    A variety of formats, for example in bioinformatics, require strict adherence to the order of fields.

    Files in such formats are often very large and contain nested structures, so it is convenient to work with them as collections. But to keep the data compliant with those specifications, the field order must be preserved. Unfortunately, aggregations that save their results to another database lose the original field order.

    2 votes

    0 comments  ·  Other
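
    A sketch of the kind of pipeline affected, in Python/PyMongo (database and collection names are hypothetical): documents written by $merge into another database do not retain the source field order, which is what this idea asks to preserve.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    sequences = client.bio.sequences          # hypothetical source collection

    sequences.aggregate([
        {"$match": {"organism": "homo_sapiens"}},
        # The merged documents are not guaranteed to keep the original field order today.
        {"$merge": {
            "into": {"db": "exports", "coll": "sequences_out"},
            "whenMatched": "replace",
            "whenNotMatched": "insert",
        }},
    ])
    ```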
  7. Raise the limit of 16 MB JSON between aggregation stages

    When doing analytics, the 16 MB limit on documents passed between aggregation stages restricts the ability to process large amounts of data. allowDiskUse does not help with all of the operators and stages that we use. See ticket 00774514 for details.

    2 votes

    0 comments  ·  Query and Aggregation Pipeline
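
    For context, a Python/PyMongo sketch (pipeline and names are hypothetical): allowDiskUse lets blocking stages spill to disk, but each individual document passed between stages is still capped at 16 MB, which is the limit this idea asks to raise.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    events = client.analytics.events          # hypothetical collection

    cursor = events.aggregate(
        [
            # A $push like this can easily grow a single group document past 16 MB.
            {"$group": {"_id": "$userId", "events": {"$push": "$$ROOT"}}},
            {"$sort": {"_id": 1}},
        ],
        allowDiskUse=True,   # helps blocking stages spill to disk, not the per-document cap
    )
    ```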
  8. Unique index in sharded cluster

    For enforcing uniqueness in a sharded cluster, the officially recommended approach described here https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:

    1. Ephemeral issues might cause inconsistencies between the two collections (for example, the unique-index collection update succeeded but the main collection update did not) and make some unique keys unusable.
    2. Many changes are needed to enforce this universally (we're using the Mongoose ORM, which has many hooks that would have to change).

    What we ended up doing is to use distributed ephemeral locks (a TTLed MongoDB collection) to lock on the unique keys before adding…

    2 votes

    0 comments  ·  Sharding
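
    The documented pattern boils down to something like the following Python/PyMongo sketch (names are placeholders): claim the unique key in a proxy collection first, then write the sharded collection, which is exactly where the two-collection consistency problems above come from.

    ```python
    from pymongo import MongoClient
    from pymongo.errors import DuplicateKeyError

    client = MongoClient("mongodb://localhost:27017")
    db = client.app

    def create_user(user):
        # Step 1: claim the unique key in the (unsharded) proxy collection;
        # _id is unique by default, so a duplicate email is rejected here.
        try:
            db.user_emails.insert_one({"_id": user["email"]})
        except DuplicateKeyError:
            raise ValueError("email already taken")
        # Step 2: write the real document to the sharded collection.
        # If this step fails, the proxy entry is orphaned -- the inconsistency noted above.
        db.users.insert_one(user)
    ```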
  9. Rename an existing index

    Allow for the possibility of renaming an existing index, without having to drop and recreate it.

    Say a unique index exists in production: it might not be possible to drop it safely, yet its name might not be ideal.

    2 votes

    1 comment  ·  Indexes
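
    Today the only option is the drop-and-recreate dance this idea wants to avoid; a Python/PyMongo sketch with a hypothetical index:

    ```python
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    users = client.app.users                  # hypothetical collection

    # Current workaround: drop the badly named index and rebuild it under the new name.
    # This leaves a window with no unique enforcement and requires a full index build.
    users.drop_index("email_1")
    users.create_index([("email", ASCENDING)], name="uniq_email", unique=True)
    ```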
  10. Tiered TTL for time series collection based on granularity

    Currently, time series collections have a single TTL across all granularities. It would be great to be able to specify a TTL per granularity. For example:

    For seconds: 1 week
    For hours: 1 month
    Others: never

    In some cases, coarse information should be held longer than finer information; currently it all falls under the single TTL specified.

    2 votes

    0 comments  ·  Time Series
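
    For reference, the single knob that exists today, in a Python/PyMongo sketch (collection and fields are hypothetical): one expireAfterSeconds for the whole time series collection, regardless of granularity.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client.metrics

    db.create_collection(
        "cpu",
        timeseries={"timeField": "ts", "metaField": "host", "granularity": "seconds"},
        expireAfterSeconds=7 * 24 * 3600,   # one TTL (1 week) applied to all data
    )
    # The request: separate retention per granularity (e.g. seconds -> 1 week,
    # hours -> 1 month, coarser -> never), which is not expressible today.
    ```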
  11. Support $lookup for update aggregation

    We frequently denormalise either full documents or subsets into other documents in order to speed up reads, create indexes, or paginate/sort on fields.

    Consider a user collection and a task collection: if a task can be assigned to a user, it makes sense to simply embed the user document in the task it is assigned to. But an update to a user now requires updating the user both in the user collection and in every task in the tasks collection that embeds that user.

    This can be achieved but does introduce some complexity; however, the introduction of updates using aggregation pipelines…

    2 votes

    2 comments  ·  Query and Aggregation Pipeline
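
    A Python/PyMongo sketch of the limitation (collections and fields are hypothetical): pipeline updates only accept $set/$unset/$project/$replaceRoot-style stages, not $lookup, so the fresh user document has to be fetched and shipped by the application.

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client.app

    user = {"_id": 42, "name": "Ada", "email": "ada@example.com"}   # hypothetical user

    # Works today: a pipeline update, but the embedded copy must be supplied by the client.
    db.tasks.update_many(
        {"assignee._id": user["_id"]},
        [{"$set": {"assignee": user}}],
    )
    # Requested: a $lookup inside the update pipeline so the server could pull the
    # current user document from the users collection itself.
    ```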
  12. Support for MongoDB Server on Ubuntu 21.04.

    Per the MongoDB Server Supported Platforms matrix, support for Ubuntu 21.04 is not yet available.
    We would like to see the currently supported MongoDB Server versions made available on the Ubuntu 21.04 distribution, which was released on 22 April 2021.

    2 votes

    0 comments
  13. Provide a KRB5_KTNAME setParameter or other config setting

    The Kerberos keytab file is specified in the KRB5_KTNAME environment variable.

    Could a setParameter or other config file setting "krb5KtName" be provided to allow this to be set?

    2 votes

    0 comments
  14. Update to two binding accounts in the config file for LDAP.

    Please refer to case 00803199.

    We need support for two binding accounts in the config file for LDAP authentication, so that we can avoid downtime while resetting the binding account password.

    2 votes

    0 comments
  15. Ability to replicate the data within a cluster to another cluster

    There are two different scenarios. One is a DR cluster: two independent clusters that are one-way synced. As of right now, a custom CDC solution (custom code + Kafka) is needed to achieve this.
    The other arises from latency and the requirement for data to be stored in two different data centers: two independent MongoDB clusters that are two-way synced. As of right now, a custom CDC solution (custom code + Kafka) is needed to achieve this as well.

    2 votes

    0 comments  ·  Replication
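
    Stripped of Kafka, the custom CDC workaround mentioned above is roughly the following Python/PyMongo sketch (URIs and namespaces are placeholders); a built-in cluster-to-cluster sync would replace this.

    ```python
    from pymongo import MongoClient

    source = MongoClient("mongodb://source-cluster:27017")   # placeholder URIs
    target = MongoClient("mongodb://dr-cluster:27017")

    # One-way sync: replay inserts/updates/deletes from the source change stream.
    with source.app.orders.watch(full_document="updateLookup") as stream:
        for change in stream:
            op = change["operationType"]
            key = change["documentKey"]
            if op in ("insert", "update", "replace"):
                target.app.orders.replace_one(key, change["fullDocument"], upsert=True)
            elif op == "delete":
                target.app.orders.delete_one(key)
    ```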
  16. Record running queries at the time of a failover

    I recently observed some failovers in a replica set that may have been related to long-running queries. However, since these queries happened only sporadically, it was hard to track them down. Since the queries didn't finish, they weren't present in the logs at the time of the crash.

    It would be useful to be able to somehow capture the long-running queries that were in flight at the time something went wrong. I recognize that this is potentially impossible, since once something has gone wrong it can be difficult to do anything.

    We ended up being able to diagnose…

    2 votes

    0 comments
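
    A rough approximation of what is possible today, as a Python/PyMongo sketch (the 10-second threshold is an assumption): poll $currentOp and log long-running operations externally, which only helps if the poller happens to catch them before the failover.

    ```python
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    while True:
        ops = client.admin.aggregate([
            {"$currentOp": {"allUsers": True}},
            {"$match": {"secs_running": {"$gte": 10}}},
        ])
        for op in ops:
            print(op.get("secs_running"), op.get("ns"), op.get("command"))
        time.sleep(5)
    ```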
  17. NoTableScan at the collection level

    Support notablescan at the collection level instead of at the mongod level.

    2 votes

    0 comments
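
    Today notablescan can only be toggled for the whole mongod; a Python/PyMongo sketch of the current server-wide switch:

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    # Current behaviour: the parameter applies to every collection on this node.
    client.admin.command({"setParameter": 1, "notablescan": True})
    # Requested: an equivalent switch scoped to a single collection, which does not exist today.
    ```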
  18. Delete logs older than a number of days

    In the options for MongoDB log settings, there are only:
    Max Percent of Disk
    Total Number of Files

    A new option,
    Number of Days to Keep,
    would be useful.

    2 votes

    0 comments
  19. Add expression indexes

    An expression index is one where the value being indexed is the result of an expression, such as lower-casing a string.

    http://en.wikipedia.org/wiki/Expression_index
    http://www.postgresql.org/docs/8.1/static/indexes-expressional.html

    2 votes

    0 comments  ·  Indexes
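
    A common interim workaround, sketched in Python/PyMongo (field names are hypothetical): materialise the expression result as a real field and index that, which is exactly the maintenance burden an expression index would remove.

    ```python
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    users = client.app.users                  # hypothetical collection

    # Materialise the computed value (here: lower-cased email) as its own field...
    users.update_many({}, [{"$set": {"email_lower": {"$toLower": "$email"}}}])

    # ...and index that field. It must be kept up to date on every subsequent write.
    users.create_index([("email_lower", ASCENDING)])
    ```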
  20. Document known breaking change in JavaScript Engine

    If a new version of MongoDB includes a new version of MozJS (e.g. https://jira.mongodb.org/browse/SERVER-29286) and that version breaks any existing scripts, it would be ideal if this were documented.

    For example, with MongoDB 4.2, "value".contains() no longer worked in scripts and had to be converted to "value".includes().

    2 votes

    0 comments