Database

To report bugs, please use our SERVER JIRA project.

  1. Support for Ubuntu 20.04 in MongoDB Server version 4.2

    Per the Server Support Matrix (https://docs.mongodb.com/manual/installation/), support for Ubuntu 20.04 is available in MongoDB Server version 4.4+ but not in 4.2.
    We would like to see the currently supported MongoDB Server version 4.2 made available on the Ubuntu 20.04 LTS distribution.

    5 votes  ·  1 comment
  2. 5 votes  ·  1 comment
  3. Kafka audit event streaming

    Provide Kafka Topic as a write target for database auditing and database message logging.
    https://docs.mongodb.com/manual/core/auditing/
    Auditing is currently limited to a local and editable JSON/BSON file or the system console log.

    Syslog is not recommended by MongoDB: "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."

    5 votes  ·  1 comment
  4. Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process

    What is the problem that needs to be solved? Store (historically) the serverStatus.uptime counter of the MongoDB Server process, so that it is possible to track serverStatus.uptime changes over time.

    Why is it a problem? (the pain) As of now (2020-02-25) there is no way to see historical information about MongoDB Server process restarts, since the serverStatus.uptime counter is reset every time the MongoDB Server process is restarted. There is no other way (short of going through the MongoDB Server process logs) to know whether the process was restarted and when. If you'd like to calculate MongoDB Server process availability, you'll…
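
    A minimal workaround sketch in the mongo shell, assuming a scheduled job and a hypothetical admin.uptimeHistory collection: sample serverStatus.uptime periodically so that a later drop in the value reveals a restart.

    // Sketch only, not a built-in feature. "uptimeHistory" is a hypothetical collection.
    const status = db.serverStatus();
    db.getSiblingDB("admin").uptimeHistory.insertOne({
        host: status.host,
        uptime: status.uptime,       // seconds since the process started
        recordedAt: new Date()
    });
    // A drop in "uptime" between consecutive samples indicates a restart.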

    5 votes  ·  0 comments
  5. Extend db.collection.distinct() to work with multiple fields in a compound key

    Currently the distinct() command finds the unique set of values for a SINGLE specified field across a collection or view. For example:
    db.staff.distinct("last_name")

    If there is an index on the last_name field, the DISTINCT_SCAN plan can use that index and the operation is very fast.

    To find the unique values for a set of more than one field, the $group aggregation stage has to be used, like this:
    db.staff.aggregate([
        {$group: {_id: {FName: "$first_name", LName: "$last_name"}}}
    ]);

    This operation does not really need the $group functionality, as it is not calculating a sum/min/max/average/etc value using the…
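
    For illustration only, a hypothetical syntax the requested extension could take (this call signature does not exist today):

    // Hypothetical, unsupported syntax for a multi-field distinct:
    db.staff.distinct(["first_name", "last_name"])
    // Today, the equivalent result requires the $group stage shown above.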

    4 votes  ·  1 comment
  6. Reduce the minimum value for watchdogPeriodSeconds

    The storage watchdog attempts to create, write, and read a test file in critical directories every 10 seconds.

    The watchdogPeriodSeconds parameter controls how often a monitoring thread verifies that at least one of these checks has succeeded since the last time it looked.

    The minimum value for watchdogPeriodSeconds is 60 seconds. This means that in the worst case, the mongod could be unable to write for up to 2 minutes before the watchdog asserts and kills the stalled node. That is a very long time for a primary node to be stalled in a busy cluster.

    It does make sense that watchdogPeriodSeconds must…
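
    For context, a sketch of how the parameter is set today. The watchdog is enabled at startup (e.g. mongod --setParameter watchdogPeriodSeconds=60), and 60 is currently the lowest value the server accepts; adjusting it at runtime is only possible if it was enabled at startup.

    // Assumes the storage watchdog was enabled at startup; 60 is the current floor.
    db.adminCommand({ setParameter: 1, watchdogPeriodSeconds: 60 })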

    4 votes  ·  0 comments
  7. WiredTiger open files usage

    Currently WT uses a file per collection and per index, which in some scenarios leads to an extremely high number of open files/dhandles.

    Is there any plan to support one file/dhandle per database?

    4 votes  ·  0 comments  ·  Storage (Wired Tiger)
  8. Ability to enable collection name enforcement

    Currently, if you perform a find or other operation and you have a typo in your collection name, the operation executes successfully and there is no indication that the collection you are operating on doesn't exist. It would be handy if there were some sort of session variable that tells the engine to return an error if the collection being operated on does not exist. For example, let's say we have a collection named "myCollection". If I issue a find with a typo in the collection name from the shell, like:

    db.myCollectio.find();

    This will successfully…
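
    A minimal client-side sketch of the requested behaviour, using a hypothetical strictCollection() helper rather than any server feature:

    // Hypothetical shell helper: fail fast when the collection is missing.
    function strictCollection(database, name) {
        if (!database.getCollectionNames().includes(name)) {
            throw new Error("collection does not exist: " + name);
        }
        return database.getCollection(name);
    }

    strictCollection(db, "myCollectio").find();  // throws instead of silently returning nothing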

    4 votes  ·  0 comments
  9. Improve the election process to consider node reachability

    Consider both new and existing sockets when assessing cluster health during an election, in order to make more realistic observations and to handle cases such as DNS issues that make a node unreachable only for new connections.

    4 votes  ·  0 comments
  10. Include the _ids of existing documents in BulkWriteResult when performing upserts

    When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:

    db.getCollection("test").find({})

    db.test.drop()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
    bulk.execute();

    The BulkWriteResult contains the upserted _id:

    BulkWriteResult({
        "writeErrors" : [ ],
        "writeConcernErrors" : [ ],
        "nInserted" : 0,
        "nUpserted" : 1,
        "nMatched" : 0,
        "nModified" : 0,
        "nRemoved" : 0,
        "upserted" : [
            {
                "index" : 0,
                "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
            }
        ]
    })

    However, when a document already exists, the _id is not returned:

    db.test.find()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
    bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…

    4 votes  ·  0 comments
  11. Upgrade Advisor

    Similar to Microsoft's SQL Server Upgrade Advisor application, generate a report (from Ops Manager for example) identifying issues to fix before or after an upgrade from one major version to another.

    4 votes  ·  0 comments
  12. MongoDB 4.2 Distributed Transaction with Arbiter

    Hello.
    We are preparing to introduce MongoDB 4.2 and are looking forward to the distributed transactions feature.

    I read in the documentation that an arbiter cannot be a member of a shard when distributed transactions are used.

    https://docs.mongodb.com/manual/core/transactions/index.html#arbiters

    I can understand this for PSA, but it is strange that it does not work even for PSSA.

    From an operational point of view such a deployment usually runs as PSS, but it can temporarily become PSA in the event of hardware problems.

    Why must a shard contain no arbiter in order to use distributed transactions?
    I cannot understand this restriction.

    Can you tell me the technical reason for not being able to support distributed transactions with an arbiter?
    Do…

    4 votes  ·  1 comment
  13. Identification of unused/less used collections

    As a DBA I'd like to identify unused collections so that I can remove them to optimize performance.
    As of now there doesn't seem to be a direct way. One option could be to set up auditing or to use logging (e.g. set the slow-op threshold to 0 so everything is logged).
    Another might be to run the collStats command and frequently traverse all collections.

    I'd be happy to get a direct method that shows me unused collections, or the last usage of a collection, at the database level or even across all databases.
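
    A rough sketch of the collStats-style workaround mentioned above: $collStats latency counters reset on restart, so collections whose operation counts never move between samples are likely unused. This is an approximation, not a supported feature.

    // Print per-collection read/write operation counts since the last restart.
    db.getCollectionNames().forEach(function (name) {
        var stats = db.getCollection(name)
            .aggregate([{ $collStats: { latencyStats: {} } }])
            .toArray()[0];
        print(name,
              "reads:", stats.latencyStats.reads.ops,
              "writes:", stats.latencyStats.writes.ops);
    });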

    3 votes  ·  0 comments
  14. Make IAM database user authentication compatible with AWS SSO

    We are in the process of implementing AWS SSO across our organisation and wanted to tie this in with the mongodb-aws mechanism for database authentication.
    Unfortunately we have been informed that the two services are not compatible.

    This would be a really useful addition to improve the management of our systems.

    3 votes  ·  2 comments
  15. view on nested array

    • It is quite common to have nested array in documents in MongoDB.
    • It is also quite common to have to "flatten" those arrays in queries.

    Here is an example where we have a collection with consultants and their professional experiences.

    consultant [{
       name:"Toto",
       age: 25,
       experiences: [{
          role: "Sofware Engineer",
          from: "2010-01-01",
          to: "2012-01-01",
          company: "CompanyA"
       }, {
          role: "Data Analyst",
          from: "2018-01-01",
          to: "2020-01-01",
          company: "CompanyB"
       }]
    }]
    

    If we would like to list all experiences at a company during a certain period of time, we would have to do an aggregate query with 3 steps that are sometimes…
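
    A sketch of the kind of pipeline the idea refers to, using the field names from the example above and a hypothetical view name:

    // Flatten the nested array and expose it as a queryable view.
    db.createView("consultantExperiences", "consultant", [
        { $unwind: "$experiences" },
        { $replaceRoot: {
            newRoot: { $mergeObjects: [ { name: "$name" }, "$experiences" ] }
        } }
    ]);

    // Experiences at a company during a period can then be queried directly:
    db.consultantExperiences.find({ company: "CompanyA", from: { $gte: "2010-01-01" } });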

    3 votes  ·  0 comments
  16. log connection string used by application to connect

    There are multiple options for connecting to MongoDB: you can connect to a specific node, or to the whole replica set, etc.
    If the DBA does not have access to the source code, it is not possible to validate whether the application is properly configured and connects to the replica set.

    It would be nice to let MongoDB write the connection string used, and/or details of how exactly a client session is connected, to mongod.log.
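
    A partial workaround sketch: $currentOp exposes client metadata (driver, appName) for current connections, though not the full connection string the application used.

    // List client address and driver metadata for all current connections.
    db.getSiblingDB("admin").aggregate([
        { $currentOp: { allUsers: true, idleConnections: true, idleSessions: true } },
        { $project: { client: 1, appName: 1, clientMetadata: 1 } }
    ]);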

    3 votes  ·  0 comments
  17. Low latency Change Stream for Global Cluster in Atlas

    Our event-driven applications need to publish events to Kafka, triggered by the Change Stream feature. This works perfectly in a replica set MongoDB cluster.

    However, after migrating to a Global Cluster in Atlas, the Change Stream cannot maintain low latency because of ordering requirements across shards.
    The latency may go up to 20 seconds for a single change event.

    It would be nice if an application could receive a Change Stream from a single shard only (ignoring ordering among shards) to avoid this latency.

    The idea is to pass a "location" option when starting the Change Stream cursor.
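
    For illustration, the proposal might look like the following; the "location" option and its value are hypothetical and not part of the current watch() API, and the "orders" collection is an example only.

    // Hypothetical option -- not supported today:
    const changeStream = db.getCollection("orders").watch([], { location: "EU_WEST" });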

    3 votes  ·  0 comments  ·  Change Streams
  18. Allow custom autoscaling policies

    Currently Atlas applies the following autoscaling policy:

    Please note, if the next highest cluster tier is within your Maximum Cluster Size range, Atlas scales the cluster up to the next tier if one of the following is true for any node in the cluster:

    Average CPU Utilization has exceeded 75% for the past hour, or
    Memory Utilization has exceeded 75% for the past hour.

    The feature request is to allow a DBA to set up a custom autoscaling policy.

    3 votes  ·  0 comments
  19. Data masking policy

    Implement data masking, similar to Schema Validation in MongoDB, so that a customer can define a server-side data-masking policy that masks the results of a query, along with a new role that grants explicit permission to read unmasked data.
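
    Not the requested server-side policy, but a common workaround sketch: expose a view that masks a field and grant most users access to the view only. The collection and field names here are illustrative.

    // Mask all but the last four characters of a hypothetical "ssn" field.
    db.createView("customers_masked", "customers", [
        { $set: { ssn: { $concat: [ "***-**-", { $substrCP: [ "$ssn", 7, 4 ] } ] } } }
    ]);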

    3 votes  ·  0 comments
  20. Allow multiple text indexes per collection

    MongoDB only allows one text index per collection, in contrast to other index types.

    This is a limitation that makes it difficult to develop projects with search functionality. For example, if you want to add a text search over all fields for advanced users and a public search over a subset of fields, there isn't a simple and performant way to solve it.

    Many developers end up using other products like Elasticsearch, or creating additional collections and using $lookup, or using regular expressions, or building smart indexes, when the creation of multiple text indexes per collection would allow a quick,…
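
    For illustration, the current behaviour (field names are examples only):

    db.articles.createIndex({ title: "text", body: "text" })   // first text index: succeeds
    db.articles.createIndex({ summary: "text" })               // fails: only one text index allowed per collection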

    3 votes  ·  1 comment  ·  Indexes