
Database

To report bugs, please use our SERVER JIRA project.

230 results found

  1. 8 votes

    1 comment

  2. Auditing requirement for the changes done through Ops Manager portal

    Changes made through the Ops Manager portal are visible only in Alerts (Activity Feeds). Although these Alerts (Activity Feeds) can be retrieved through the API, our auditors may not accept API calls, so we need Ops Manager to log these changes in the respective deployment's mongod audit.log.

    Thank you

    8 votes

  3. Nanosecond timestamp support

    I've put this under the time series category since that's where it's most applicable, but it's really a data model / BSON issue.

    The topic of higher-resolution timestamps has surfaced from time to time for at least a decade (https://jira.mongodb.org/browse/SERVER-1460), and usually prompts a response like "just use integers". With the addition of time series collections, however, where the concept of time is integral to database functionality, I think it's time to reconsider adding a type with at least nanosecond-precision timestamp support. Date's millisecond resolution is woefully inadequate for a number of relevant use cases,…
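
    For reference, the "just use integers" workaround mentioned above usually looks something like the following mongosh sketch; the collection and field names are illustrative assumptions.

    // Store nanoseconds since the Unix epoch as a 64-bit integer alongside the
    // millisecond-precision BSON Date (illustrative collection/field names).
    db.sensorReadings.insertOne({
      ts: ISODate("2024-01-01T00:00:00.123Z"),      // BSON Date: millisecond resolution
      tsNanos: NumberLong("1704067200123456789"),   // same instant with nanosecond precision
      value: 42.0
    })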

    7 votes

  4. Prometheus metrics availability through PrivateLink

    Until now it has only been possible to get metrics from a public or privately peered VPC; it is not actually possible through PrivateLink. Since PrivateLink is used most of the time for security reasons, this limitation forces customers to find compromises or workarounds.

    7 votes

  5. Provide a faster way to perform a count() on millions of documents

    A couple of customers are shifting from MongoDB to other databases such as Redis or even PostgreSQL because count is too slow on MongoDB when there are a lot of documents to count, even with the use of indexes.
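
    For context, these are the counting options available today, shown as a mongosh sketch; the "orders" collection name is an assumption.

    // Exact count: walks the matching index entries, which is what becomes slow
    // for very large result sets.
    db.orders.countDocuments({ status: "DELIVERED" })

    // Fast but approximate: reads collection metadata and accepts no filter.
    db.orders.estimatedDocumentCount()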

    7 votes

  6. Private Endpoint Tags / Custom Names

    Private Endpoint names are not user friendly; they are just Amazon resource IDs. If you have multiple Private Endpoints for a project, this makes it difficult to select the right one. It would be nice if these endpoints could be tagged or given custom names.

    7 votes

  7. Implement $bucket and $group on indexed values with sub-linear runtime

    We noticed that some $bucket and $group aggregations, such as $min, $max, and $count, are unexpectedly slow even when fully covered by an index, (partially) because the DB scans through the entire index rather than employing optimization approaches such as binary search.

    An example pipeline that should return instantaneously but scans through the entire index (confirmed on v4.4 and v5):
    [
      {
        $match: {
          status: "DELIVERED",
        },
      },
      {
        $group: {
          _id: {
            status: "$status",
          },
          min: {
            $min: "$modify_time",
          },
        },
      },
    ]
    with an index { status: 1, modify_time: 1 }
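
    One way to confirm the full index scan is to look at the execution statistics. This is a mongosh sketch; the "orders" collection name is an assumption.

    // totalKeysExamined in the output shows how many index keys were scanned
    // for the pipeline above.
    db.orders.explain("executionStats").aggregate([
      { $match: { status: "DELIVERED" } },
      { $group: { _id: { status: "$status" }, min: { $min: "$modify_time" } } }
    ])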

    Another example is $bucket (same index):
    [
    {…

    6 votes

    0 comments  ·  Performance

  8. Support compound/multiple grouping keys in $bucket

    We often need to compute statistical/summarizing aggregations grouped by more than one field where all fields are of a $bucket-able type.

    An example would be to count all orders grouped by their status and some custom time ranges of their creation date.
    This can be achieved by using $group in combination with a $switch expression (sometimes simplified with $trunc), as sketched below; however, that is cumbersome and prevents efficient grouping since, e.g., no binary search can be employed to identify the bucket boundaries.

    The query syntax of $bucket would not need to change much. It would simply need to allow for nested…
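
    The $group + $switch workaround referred to above looks roughly like the following sketch; the "orders" collection, its fields, and the bucket boundaries are illustrative assumptions.

    // Group by status and hand-rolled creation-date ranges; every extra range
    // means another $switch branch, and the index cannot be used to find
    // bucket boundaries efficiently.
    db.orders.aggregate([
      {
        $group: {
          _id: {
            status: "$status",
            createdBucket: {
              $switch: {
                branches: [
                  { case: { $lt: ["$created", ISODate("2022-01-01")] }, then: "before-2022" },
                  { case: { $lt: ["$created", ISODate("2023-01-01")] }, then: "2022" }
                ],
                default: "2023-or-later"
              }
            }
          },
          count: { $sum: 1 }
        }
      }
    ])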

    6 votes

  9. sharding error shardsvr

    Make it clear which node is causing the "shardsvr" error.

    Spawned from support case 01042995

    Our error occurred when the user tried to connect using Compass. The failure was in listing the collection names on one database.

    The error presented back to the user was merely "Cannot accept sharding commands if not started with --shardsvr".

    We eventually found that the primary had changed on one of the shards, and that primary did not have the appropriate clusterRole in the mongod.conf file. My concern is that this took too long to track down and would be impossible in a 100-shard environment.

    • Nothing…
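
    For reference, one way to check the role of the node you are connected to (a mongosh sketch, to be run against each shard member):

    // Shows the parsed configuration of this mongod; on a shard member,
    // sharding.clusterRole should be "shardsvr".
    db.adminCommand({ getCmdLineOpts: 1 }).parsed.sharding
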
    6 votes

  10. x509 authentication with certificate components other than (O, OU, DC)

    In some entities (e.g. ours), the O, OU, DC triplet is not detailed enough or not appropriate, which makes it impossible to authenticate through x509.

    For example, in our entity, the O and OU are the same for all certificates (because all servers are in the same Organisational Unit), and the DC field is not used. We do use other fields, though.
    Because of that, we can't use the x509 authentication feature, although it is strongly requested by the security team.

    Would it be possible to enhance the x509 authentication mechanism to allow more flexibility in the fields used for authentication?
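
    For context, client x.509 users are currently mapped by the certificate's full subject DN, as in this mongosh sketch; the DN and role shown are illustrative assumptions.

    // The username on the $external database must match the client
    // certificate's subject exactly (illustrative DN).
    db.getSiblingDB("$external").runCommand({
      createUser: "CN=app01,OU=servers,O=ExampleCorp,DC=example,DC=com",
      roles: [{ role: "readWrite", db: "appdb" }]
    })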

    6 votes

  11. Allow custom autoscaling policies

    Currently, the following autoscaling policy exists in MongoDB Atlas:

    Please note, if the next highest cluster tier is within your Maximum Cluster Size range, Atlas scales the cluster up to the next tier if one of the following is true for any node in the cluster:

    Average CPU Utilization has exceeded 75% for the past hour, or
    Memory Utilization has exceeded 75% for the past hour.

    The feature request is to allow a DBA to set up a custom autoscaling policy.

    6 votes

    1 comment

  12. Elevate hidden replica to a voting member when quorum is lost for a Replica Set

    The MongoDB engine should allow hidden members in a replica set to be automatically elevated to voting members. When quorum is lost and can't be met, the hidden replica is automatically reconfigured as a voting member to obtain quorum.
    When the failed voting member comes back online and catches up, the hidden replica is demoted back to a normal hidden member.
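
    Today this has to be done by hand, roughly as in the following mongosh sketch; the member index is an illustrative assumption.

    // Promote a hidden member (here members[2]) to a visible, voting member.
    cfg = rs.conf()
    cfg.members[2].hidden = false
    cfg.members[2].priority = 1
    cfg.members[2].votes = 1
    rs.reconfig(cfg, { force: true })   // force is required when no primary can be elected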

    6 votes

    1 comment  ·  Replication

  13. Handle Daylight Saving Time when $densify is used on a date field

    When using "day" as the "unit" for a $densify pipeline stage on a date field, the date is always advanced by 24 hours. This is, however, not always the expected result in timezones in which the year has one 23-hour and one 25-hour day because of Daylight Saving Time.

    It would be useful to have the possibility to pass an optional timezone parameter in the $densify stage and, when present, have the stage account for these exceptions when appropriate.

    Here follows an example.

    Assume we have a collection containing the following documents:

    db.densifyDateExample.insertMany([
        {_id: "a", d: ISODate("2022-10-28T22:00:00Z")},
        {_id: "b", d:
    5 votes

  14. Get metadata about source client connection that submitted a given change

    Currently with change streams it is impossible to know who or what connection initiated the changes.

    It would be a good feature to be able to receive some data about the source client connection that initiated a change.

    My particular use case is the following:

    I have an app that connects to Atlas. (source client connection)
    I can subscribe to change streams and then execute some logic when it applies.

    That app can scale to multiple instances.
    Each instance subscribes to the change streams.
    But I only want each instance to execute the logic that applies to only…
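
    For context, a change stream subscription today looks like the following mongosh sketch (the "orders" collection is an assumption); the events it returns carry no information about the client connection that made the change.

    // Each instance opens a change stream; nothing in the event identifies
    // the originating connection, which is what this idea asks for.
    const cursor = db.orders.watch()
    while (cursor.hasNext()) {
      const event = cursor.next()
      printjson(event.operationType)   // e.g. "insert", "update"
    }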

    5 votes

  15. Unique index in sharded cluster

    For enforcing uniqueness in a sharded cluster, the officially recommended approach provided here https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:

    1. Ephemeral issues might cause inconsistencies between the two collections (for example, the unique index collection update succeeded but the main collection update did not) and make some unique keys unusable.
    2. There are many changes needed (we're using the Mongoose ORM, which has many hooks to change) to enforce this universally.

    What we ended up doing is to use distributed ephemeral locks (a TTLed MongoDB collection) to lock on the unique keys before adding…
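
    For reference, the documented two-collection approach that the considerations above refer to looks roughly like this sketch; collection and field names are illustrative assumptions.

    // A proxy collection whose _id is the unique value enforces uniqueness;
    // the main insert is only attempted after the proxy insert succeeds.
    // If a failure happens between the two steps, the collections diverge,
    // which is consideration 1 above.
    db.uniqueEmails.insertOne({ _id: "user@example.com" })           // throws on duplicates
    db.users.insertOne({ email: "user@example.com", name: "..." })   // second, separate write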

    5 votes

    0 comments  ·  Sharding

  16. NoTableScan at the collection level

    Support notablescan at the collection level instead of at the mongod level.
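
    Today the setting is only available per mongod, as in this mongosh sketch:

    // Reject unindexed queries (collection scans) for the whole mongod process;
    // there is no per-collection equivalent, which is what this idea asks for.
    db.adminCommand({ setParameter: 1, notablescan: 1 })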

    5 votes

    1 comment

  17. Support for Ubuntu 20.04 in MongoDB Server version 4.2

    Per the Server Support Matrix (https://docs.mongodb.com/manual/installation/), support for Ubuntu 20.04 is in MongoDB Server version 4.4+ but not 4.2.
    We would like to see the currently supported MongoDB Server version 4.2 available on the Ubuntu 20.04 LTS distribution.

    5 votes

    1 comment

  18. Kafka audit event streaming

    Provide a Kafka topic as a write target for database auditing and database message logging.
    https://docs.mongodb.com/manual/core/auditing/
    Auditing is currently limited to a local and editable JSON/BSON file or the system console log.

    The SYSLOG is not recommended by MongoDB. "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."

    5 votes

    1 comment

  19. Collection Comments

    I would like the ability to attach comments to a collection so that other people using the data can get some understanding of the context or important README/FAQ information that I would need to share.

    5 votes

  20. Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process

    What is the problem that needs to be solved? Store (historically) the serverStatus.uptime counter of the MongoDB Server process, so that it is possible to track serverStatus.uptime changes over time.

    Why is it a problem? (the pain) As of now (2020-02-25) there's no way to see historical info about MongoDB Server process restarts, since the serverStatus.uptime counter is reset every time the MongoDB Server process is restarted. There's no other way (other than going into the MongoDB Server process logs) to know if the process was restarted and when. If you'd like to calculate MongoDB Server process availability, you'll…
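
    For reference, the counter in question (mongosh); it only reflects the currently running process.

    // Seconds since the current mongod process started; resets on every restart.
    db.serverStatus().uptime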

    5 votes
