Database

To report bugs, please use our SERVER JIRA project.

243 results found

  1. Too many keys in 2dsphere index when used on geoJson within arrays

    Mongo support have asked me to raise this bug as an improvement idea because they see it as an edge case.
    The issue is here and includes more details:

    https://support.mongodb.com/case/00659614

    In summary, if you have a GeoJSON sub-document that sits within an array and you index it using the 2dsphere index, the keys are duplicated for every element of the array.

    For example, take GeoJSON representing a big 4,500-vertex polygon placed inside an array element: if the array has only one element I see 20 keys generated, which is good. But if…
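
    A minimal reproduction sketch of the shape described above (the collection and field names are my own placeholders):

    ```
    db.places.createIndex({ "zones.geometry": "2dsphere" });
    db.places.insertOne({
        zones: [
            { geometry: { type: "Polygon",
                          coordinates: [[[0, 0], [0, 1], [1, 1], [0, 0]]] } }
        ]
    });
    // With a single array element the key count is as expected; each
    // additional element multiplies the keys generated per geometry.
    ```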

    2 votes

  2. Password enforcement without LDAP

    • Enforce complex password policy
    • Enforce password expiration
    • Enforce password history

    17 votes

  3. Periodic large bulk insert/update all or nothing with instant switchover

    Provide the ability for orgs to update things like large reference data sets periodically, all or nothing. E.g. an ecommerce site updates its product inventory every night at midnight: even though the inserts or updates may take, say, 10 minutes to run and complete, the applications should not see the changes until the bulk update finishes. The problem is that currently the ecommerce app in this example may be reading product data partly from yesterday's data and partly from today's changed data during the 10-minute run. For consistency the application would ideally want to see data from either before the bulk…
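
    One possible workaround today is a staged load followed by an atomic rename swap; a minimal sketch (the staging collection name and the newInventory variable are my own placeholders):

    ```
    // Load tonight's data into a staging collection first:
    db.products_staging.drop();
    db.products_staging.insertMany(newInventory);

    // Swap it into place in one step; readers keep seeing the old
    // collection until the rename completes:
    db.products_staging.renameCollection("products", /* dropTarget */ true);
    ```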

    1 vote

  4. Support for wildcards in permission parameters

    Support for wildcards, or maybe even regular expressions, in the database/collection name of a permission for a custom role.

    We have a multi-tenant setup, storing every customer's data in its own database on the cluster. The applications we build span those databases, but every application has its own permissions on a collection-name basis.

    Instead of granting "find" permission on the "products" collection of each database, like "mongodbcom" and "somecompanycom", it would be nice if we could configure it as "find" permission on the "products" collection of every database matching /^[a-z]+_[a-z]+$/
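
    For reference, this is how the role must be spelled out today, with the requested pattern form shown as a comment (hypothetical syntax, not supported by db.createRole):

    ```
    use admin
    db.createRole({
      role: "findProductsAllTenants",
      privileges: [
        // Today: one entry per tenant database.
        { resource: { db: "mongodbcom", collection: "products" }, actions: [ "find" ] },
        { resource: { db: "somecompanycom", collection: "products" }, actions: [ "find" ] }
        // Requested (hypothetical): a pattern instead of a literal name, e.g.
        // { resource: { db: /^[a-z]+_[a-z]+$/, collection: "products" }, actions: [ "find" ] }
      ],
      roles: []
    })
    ```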

    1 vote

  5. Making Elections Faster.

    An election is an expensive operation. We can avoid some elections by enforcing a priority-based ordering on which node becomes primary if the current primary fails.
    Specifically, we can keep the node with the second-highest priority in the data center of the most preferred primary. Network failures within a data center are usually less probable than failures across data centers, so it is reasonable to believe that if the second preferred member loses contact with the current primary, it is the primary node that has failed and not the network. As a supplement, we can also have redundant…
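
    Replica set member priorities can already express this ordering; a minimal sketch (assuming a 3-member set where members 0 and 1 share the preferred data center):

    ```
    cfg = rs.conf();
    cfg.members[0].priority = 3;  // most preferred primary
    cfg.members[1].priority = 2;  // second choice, same data center
    cfg.members[2].priority = 1;  // remote data center
    rs.reconfig(cfg);
    ```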

    2 votes

  6. Improve the election process to consider node reachability

    During an election, consider both new and existing sockets when making observations about cluster health, so that, for example, a DNS issue that makes a node unreachable for new connections only does not distort the result.

    4 votes

  7. Include the _ids of existing documents in BulkWriteResult when performing upserts

    When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:

    ```
    db.getCollection("test").find({})

    db.test.drop()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
    bulk.execute();
    ```

    The BulkWriteResult contains the upserted _id:

    ```
    BulkWriteResult({
        "writeErrors" : [ ],
        "writeConcernErrors" : [ ],
        "nInserted" : 0,
        "nUpserted" : 1,
        "nMatched" : 0,
        "nModified" : 0,
        "nRemoved" : 0,
        "upserted" : [
            {
                "index" : 0,
                "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
            }
        ]
    })
    ```

    However, when a document already exists, the _id is not returned:

    ```
    db.test.find()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
    bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…
    ```

    4 votes

  8. Upgrade Advisor

    Similar to Microsoft's SQL Server Upgrade Advisor application, generate a report (from Ops Manager for example) identifying issues to fix before or after an upgrade from one major version to another.

    4 votes

  9. More complex balancer windows for sharded clusters

    Currently we can define a single balancer window which is applied to every day of the week. It would be useful to extend this with, for example (see the sketch after this list):

    • multiple windows per day (e.g. 2-4am and 9-11pm)
    • custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
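
    For reference, the single window available today is configured via the config database (a sketch; the window values are placeholders):

    ```
    use config
    db.settings.updateOne(
        { _id: "balancer" },
        { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },
        { upsert: true }
    )
    ```
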
    9 votes

  10. Notification Alert

    Whenever a document is created within a collection, send an email to the account holder.
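
    Something similar can be built in application code today with a change stream; a minimal sketch (the collection name and the sendEmail() helper are my own placeholders):

    ```
    var stream = db.orders.watch([ { $match: { operationType: "insert" } } ]);
    while (stream.hasNext()) {
        sendEmail(stream.next().fullDocument);  // sendEmail is hypothetical
    }
    ```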

    1 vote

  11. Allow kill connections

    Kill-session commands only stop current activities on the DB; they do not close or drop connections (the connections still remain open in $listSessions).
    It'd be useful to be able to close open connections in situations where too many sessions have been opened incorrectly or never closed.

    16 votes

  12. Burstable IOPS for MongoDB Atlas on Azure

    According to Azure documentation, bursting is enabled by default for all VMs using Premium SSD (https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-types#bursting). It would be great if MongoDB Atlas on Azure could benefit from it.

    9 votes

  13. Validate Window

    In 4.4, validate is able to run in the background.

    https://docs.mongodb.com/master/reference/method/db.collection.validate/#behavior

    It would be good to have a "validate window" that would cycle through each collection, similar to the way a balancing window works in sharding.
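
    A rough sketch of what one pass of such a window could run today, assuming 4.4's background validation:

    ```
    db.getCollectionNames().forEach(function (name) {
        printjson(db.getCollection(name).validate({ background: true }));
    });
    ```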

    1 vote

  14. Allow validate to use replica tags

    The use case for this is to be able to target secondaries in different shards.

    If validate accepted a tag read preference, it could be kicked off from a shell connected to a mongos.
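
    Tag-based read preferences already exist for queries; a sketch of the kind of targeting the request would extend to validate (the tag names are my own placeholders):

    ```
    db.getMongo().setReadPref("secondary", [ { "dc": "east", "use": "validation" } ]);
    ```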

    1 vote

  15. would be helpful, with the online interface, to be able to bulk import collections

    For those whose corporate firewalls do not allow for mongo client access to the free mongo cloud db, it would be extremely helpful to be able to upload a pre-formatted json collection of documents, rather than entering those documents one at a time.

    1 vote

  16. mongo binary utility for slaptest

    Please provide a slap-test binary for the MongoDB server, just like mysqlslap for the MySQL server. This would let a DBA find a better index by slap-testing the server with multiple combinations of indexes, instead of depending on application load tests (JMeter, SOASTA, etc.) that need the involvement of multiple resources, developers, and testers, and a lot of time.

    1 vote

  17. Database users should be able to change their own passwords

    Currently, there is no way for Database Users to manage their own passwords, (even if they are atlasAdmin@admin). Moreover, as a Project Owner, I cannot create a role that allows them to do so, e.g.:

    use admin
    db.createRole(
       { role: "changeOwnPasswordRole",
         privileges: [
            {
              resource: { db: "", collection: ""},
              actions: [ "changeOwnPassword"]
            }
         ],
         roles: []
       }
    )
    

    As such, changing passwords always requires a Project Owner to set the new password and share it with the Database User. This is a problem, because user-password combinations known by more than one person do not serve as proof of identity.
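
    If such a role existed, a user could rotate their own credential with the standard shell helper; a sketch (the user name is a placeholder):

    ```
    use admin
    db.changeUserPassword("appUser", passwordPrompt())
    ```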

    A…

    26 votes

  18. XA Support

    Is there any plan to implement distributed transactions that involve more than one data store (e.g. an RDBMS and MongoDB)? We have one such requirement and tried a simple POC by creating a simple class extending XAResource (MongoXAResource implements XAResource) and overriding the methods below.

    import javax.transaction.xa.*;  // XAResource, Xid, XAException
    import com.mongodb.client.ClientSession;

    public class MongoXAResource implements XAResource {
        private ClientSession clientSession;  // initialized elsewhere

        @Override
        public void commit(Xid xid, boolean onePhase) throws XAException {
            clientSession.commitTransaction();
        }

        @Override
        public void rollback(Xid xid) throws XAException {
            clientSession.abortTransaction();
        }
        // remaining XAResource methods omitted in the POC
    }

    It appears to work, but I think there is a lot more to do. Is there any plan by the MongoDB team to implement this?

    49 votes

  19. Allow configuration of 100mb memory limit per aggregation pipeline stage

    In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100mb in-memory limit of each stage of an aggregation pipeline. The responses centered around two points:

    1. If too much memory is used per aggregation pipeline stage then it will reduce performance for the overall MongoDB database, impacting other queries negatively.
    2. You can set allowDiskUse: true and revert to performing these pipeline stages on disk when they exceed 100mb (see the sketch after this list).
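
    For reference, the disk-spill escape hatch looks like this (a sketch; the collection and pipeline are my own placeholders):

    ```
    db.orders.aggregate(
        [
            { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
            { $sort: { total: -1 } }
        ],
        { allowDiskUse: true }  // spill blocking stages over 100mb to disk
    )
    ```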

    I believe this subject needs to be revisited for the following reasons:

    1. “Too much memory” is very subjective, and the 100mb…

    23 votes

  20. Add a "Limit" to Delete and Bulk Delete operations

    Deleting tens of millions of documents can have a big impact on the performance of the Clusters, even using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us limit the number of operations, making sure we do not kill the Clusters' performance.

    • For the delete, this would make sure we only delete n number of documents.
    • For the Bulk Delete, this would also make sure we only delete n number of documents, or it could instead limit the number of batches/groups of documents to be deleted.

    Right now, the only solution is a hack,…
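
    One common workaround (my assumption; not necessarily the hack the author means) is to batch deletes by _id; a sketch with placeholder names:

    ```
    // Delete matching documents in bounded batches of 1,000:
    var ids;
    do {
        ids = db.events.find({ expired: true }, { _id: 1 })
                       .limit(1000).toArray().map(function (d) { return d._id; });
        if (ids.length > 0) {
            db.events.deleteMany({ _id: { $in: ids } });
        }
    } while (ids.length > 0);
    ```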

    9 votes
