Database

  1. XA Support

    Is there any plan to implement distributed transactions that involve more than one data store (e.g. an RDBMS and MongoDB)? We have one such requirement and tried a simple POC by creating a class that implements XAResource (MongoXAResource implements XAResource) and overriding the methods below.

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        // commit the MongoDB transaction bound to this session
        clientSession.commitTransaction();
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        // roll back the MongoDB transaction bound to this session
        clientSession.abortTransaction();
    }

    It appears to work, but I think there is a lot more to do. Does the MongoDB team have any plan to implement this?

    47 votes · 10 comments
  2. partial text search

    We've already seen full text search; it would be awesome if you managed to implement a partial-match version :)

    21 votes · 1 comment
  3. Database users should be able to change their own passwords

    Currently, there is no way for Database Users to manage their own passwords (even if they are atlasAdmin@admin). Moreover, as a Project Owner, I cannot create a role that allows them to do so, e.g.:

    use admin
    db.createRole(
       { role: "changeOwnPasswordRole",
         privileges: [
            {
              resource: { db: "", collection: ""},
              actions: [ "changeOwnPassword"]
            }
         ],
         roles: []
       }
    )
    

    As such, changing passwords always requires a Project Owner to set the new password and share it with the Database User. This is a problem, because user-password combinations known by more than one person do not serve as proof of identity.

    A…
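
    For context, on a self-managed deployment a user granted the role above can change their own password roughly like this (the user name is illustrative):

    // Run as the user themselves; relies on the changeOwnPassword action
    // granted by a role such as changeOwnPasswordRole above.
    use admin
    db.changeUserPassword("myDatabaseUser", passwordPrompt())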

    15 votes · 0 comments
  4. Password enforcement without LDAP

    Enforce complex password policy
    Enforce password expiration
    Enforce password history

    11 votes · 4 comments
  5. Allow views with programmatic role-based access control rather than just declarative

    Views, defined by an aggregation pipeline, are often used to filter out certain fields or records and to obfuscate parts of certain values, so that users with a specific restricted role can only see a subset of 'less sensitive' data from a collection. Views can be assigned to a role declaratively, but in some use cases it would also be useful for the aggregation pipeline logic to access the context of the current session's roles (e.g. $$ROLES) or user id (e.g. $$USER) to make programmatic decisions about what to show in the view based…
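
    A minimal sketch of the kind of view being requested. Note that $$ROLES is the requested, currently non-existent variable, and the collection and field names are made up for illustration:

    // Hypothetical: $$ROLES stands for the requested session-roles variable.
    // Hide the salary field unless the session has the "hrAdmin" role.
    db.createView("employeesView", "employees", [
        { $project: {
            name: 1,
            department: 1,
            salary: { $cond: [ { $in: [ "hrAdmin", "$$ROLES" ] }, "$salary", "$$REMOVE" ] }
        } }
    ])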

    11 votes · 2 comments
  6. Allow configuration of 100mb memory limit per aggregation pipeline stage

    In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100 MB in-memory limit of each stage of an aggregation pipeline. The responses centered on two points:

    1. If too much memory is used per aggregation pipeline stage, it will reduce performance for the overall MongoDB database, impacting other queries negatively.
    2. You can set allowDiskUse: true so that pipeline stages exceeding 100 MB spill to disk (see the sketch after this list).
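
    For reference, a minimal example of that workaround; the collection and pipeline are illustrative:

    // Let stages that exceed the 100 MB in-memory limit spill to disk.
    db.orders.aggregate(
        [
            { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
            { $sort: { total: -1 } }
        ],
        { allowDiskUse: true }
    )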

    I believe this subject needs to be revisited for the following reasons:

    1. “Too much memory” is very subjective, and the 100mb…
    11 votes · 1 comment
  7. Additional checks for storage consistency

    The following opt-in features would add additional checks for storage-layer corruption of collections.

    1. Upon write, read back what data was committed to disk.
    2. Periodic or scheduled scanning of a collection. Similar to collection.validate (shown below) but non-blocking.
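
    For comparison, the existing check is blocking (collection name illustrative):

    // validate scans a collection for corruption, but it holds a lock on
    // the collection while it runs; the request is a non-blocking variant.
    db.runCommand({ validate: "mycollection", full: true })
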
    10 votes · 0 comments
  8. Allow kill connections

    Kill-session commands only stop current activities on the DB; they do not close or drop connections (the sessions still appear in $listSessions).
    It would be useful to be able to close open connections in situations where too many sessions have been opened incorrectly or never closed. A sketch of what exists today follows.
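
    For context, roughly what is possible today (the session UUID is illustrative):

    // Sessions opened against the deployment are visible here:
    use config
    db.system.sessions.aggregate([ { $listSessions: {} } ])

    // Kills the session's in-progress operations only; the client's
    // connection stays open, and no command exists to close it.
    db.runCommand({ killSessions: [ { id: UUID("11111111-2222-3333-4444-555555555555") } ] })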

    8 votes · 0 comments
  9. Auditing requirement for the changes done through Ops Manager portal

    Changes made through the Ops Manager portal are visible only in Alerts (Activity Feeds). Although these Activity Feeds can be retrieved through the API, our auditors may not accept API calls, and we need Ops Manager to log these changes in the respective deployment's mongo audit.log.

    Thank you

    7 votes · 0 comments
  10. More complex balancer windows for sharded clusters

    Currently we can define a single balancer window, which is applied every day of the week (see the sketch after this list). It would be useful to extend this with, for example:

    • multiple windows per day (e.g. 2-4am and 9-11pm)
    • custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
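
    For reference, the single window is configured today like this:

    // Current mechanism: one daily window, applied every day of the week.
    use config
    db.settings.updateOne(
        { _id: "balancer" },
        { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },
        { upsert: true }
    )
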
    7 votes · 0 comments
  12. Kafka audit event streaming

    Provide a Kafka topic as a write target for database auditing and database message logging.
    https://docs.mongodb.com/manual/core/auditing/
    Auditing is currently limited to a local, editable JSON/BSON file or the system console log.

    SYSLOG is not recommended by MongoDB: "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."
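
    For reference, a typical file-based audit configuration today (path illustrative):

    # mongod.conf (MongoDB Enterprise): audit events written to a local file
    auditLog:
      destination: file
      format: JSON
      path: /var/log/mongodb/audit.json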

    5 votes · 1 comment
  13. Add a "Limit" to Delete and Bulk Delete operations

    Deleting tens of millions of documents can have a big impact on the performance of the clusters, even using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us cap the number of operations, making sure we do not kill the clusters' performance.

    • For the Delete, this would ensure we delete only n documents.
    • For the Bulk Delete, this would likewise ensure we delete only n documents, or it could instead limit the number of batches/groups of documents to be deleted.

    Right now, the only solution is a hack,…
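
    One workaround in this spirit (not necessarily the one meant above; names and batch size are illustrative) deletes in bounded batches:

    // Cap each pass at 1000 documents and repeat, instead of issuing one
    // unbounded deleteMany over the whole collection.
    var batch = db.events.find({ expired: true }, { _id: 1 }).limit(1000).toArray();
    db.events.deleteMany({ _id: { $in: batch.map(d => d._id) } });
    // repeat until find() returns no documents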

    5 votes · 1 comment
  14. Data Replication from one MongoDB Cluster to another MongoDB Cluster

    Create an option for the user to replicate data from one MongoDB cluster to another MongoDB cluster inside the account. We don't need to migrate data; we need to replicate data from a Production cluster to a Dev/Test cluster. Thanks.

    4 votes · 0 comments
  15. Support throttling the balancer for sharded clusters

    As high-throughput clusters grow and documents become larger, the MongoDB balancer may begin impacting workloads.

    I'd like the ability to throttle the speed at which the balancer or chunk migrator runs, so that the impact is minimised.
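
    The closest existing knob appears to be _secondaryThrottle, which slows migrations by requiring each moved document to replicate before the next is sent; the request goes further and asks for actual rate control:

    // Existing setting: make chunk migrations wait for replication of each
    // document, which slows them down but does not cap their rate.
    use config
    db.settings.updateOne(
        { _id: "balancer" },
        { $set: { _secondaryThrottle: true } },
        { upsert: true }
    )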

    4 votes · 0 comments
  16. Support for Ubuntu 20.04 in MongoDB Server version 4.2

    Per the Server Support Matrix (https://docs.mongodb.com/manual/installation/), support for Ubuntu 20 arrived in MongoDB Server version 4.4+ but not 4.2.
    We would like to see the currently supported MongoDB Server version 4.2 available on the Ubuntu 20.04 LTS distribution.

    4 votes · 0 comments
  17. Reduce the minimum value for watchdogPeriodSeconds

    The storage watchdog attempts to create, write, and read a test file in critical directories every 10 seconds.

    The watchdogPeriodSeconds parameter controls how often a monitoring thread verifies that at least one of these checks has succeeded since the last inspection.

    The minimum value for watchdogPeriodSeconds is 60 seconds. This means that in the worst case, the mongod could be unable to write for up to 2 minutes before the watchdog asserts and kills the stalled node. That is a very long time for a primary node to be stalled in a busy cluster.
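
    For reference, the parameter is set at startup; the request is to allow values below today's minimum:

    # 60 is currently the smallest accepted value
    mongod --setParameter watchdogPeriodSeconds=60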

    It does make sense that watchdogPeriodSeconds must…

    4 votes · 0 comments
  18. wiredTiger open files usage

    Currently WiredTiger uses a file per collection and per index, leading in some scenarios to an extremely high number of open files/dhandles.

    Is there any plan to support one file/dhandle per database?

    4 votes · 0 comments
  19. Improve the election process to consider node reachability

    Consider utilizing both new and existing sockets in order to make more realistic observations about cluster health during an election, avoiding, for example, DNS-related issues that would make a node unreachable for new connections.

    4 votes · 0 comments
  20. Include the _ids of existing documents in BulkWriteResult when performing upserts

    When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:

    db.getCollection("test").find({})

    db.test.drop()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
    bulk.execute();
    ```

    The BulkWriteResult contains the upserted _id:

    BulkWriteResult({
        "writeErrors" : [ ],
        "writeConcernErrors" : [ ],
        "nInserted" : 0,
        "nUpserted" : 1,
        "nMatched" : 0,
        "nModified" : 0,
        "nRemoved" : 0,
        "upserted" : [
            {
                "index" : 0,
                "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
            }
        ]
    })

    However, when a document already exists, the _id is not returned:

    db.test.find()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
    bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…

    4 votes · 0 comments