Database

To report bugs, please use our SERVER JIRA project.

272 results found

  1. Include the _ids of existing documents in BulkWriteResult when performing upserts

    When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:

    db.getCollection("test").find({})

    db.test.drop()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
    bulk.execute();

    The BulkWriteResult contains the upserted _id:

    BulkWriteResult({
        "writeErrors" : [ ],
        "writeConcernErrors" : [ ],
        "nInserted" : 0,
        "nUpserted" : 1,
        "nMatched" : 0,
        "nModified" : 0,
        "nRemoved" : 0,
        "upserted" : [
            {
                "index" : 0,
                "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
            }
        ]
    })

    However, when a document already exists, its _id is not returned:

    db.test.find()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
    bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…

    4 votes

  2. Upgrade Advisor

    Similar to Microsoft's SQL Server Upgrade Advisor application, generate a report (from Ops Manager for example) identifying issues to fix before or after an upgrade from one major version to another.

    4 votes

  3. MongoDB 4.2 Distributed Transaction with Arbiter

    Hello.
    We are preparing to adopt MongoDB 4.2 and are looking forward to the distributed transaction feature.

    I read in the documentation that an arbiter cannot be a member of a shard's replica set when distributed transactions are used:

    https://docs.mongodb.com/manual/core/transactions/index.html#arbiters

    For PSA that may make sense, but it is strange that it does not even work for PSSA.

    From an operational point of view the cluster usually runs as PSS, but it can temporarily become PSA in the event of hardware problems.

    Why must there be no arbiter in a shard in order to use distributed transactions?
    I cannot understand this restriction.

    Can you tell me the technical reason why distributed transactions cannot be supported with an arbiter?
    Do…

    4 votes

    1 comment
  4. Introduce a new field BucketLifeSpan Optional along with Granularity

    An enhancement to MongoDB's management of time series collections could be the introduction of a BucketLifeSpan attribute, in addition to the existing Granularity setting. This new, optional attribute would cap how long a bucket can remain open, with the condition that Granularity should be less than or equal to BucketLifeSpan.

    Consider a use case involving a time series collection tracking data from 70,000 socket devices daily, with DeviceId as the metafield. Assume data is organized into daily collections and granularity is set to minutes so that buckets fill optimally unless they reach their size limit.

    For a collection named…
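
    A minimal sketch of how the proposal might look on createCollection; the bucketLifeSpan field and its value are hypothetical, while timeField, metaField, and granularity are the existing time series options:

    // Hypothetical "bucketLifeSpan" alongside the existing time series options.
    db.createCollection("deviceReadings", {
        timeseries: {
            timeField: "ts",
            metaField: "DeviceId",
            granularity: "minutes",       // existing setting
            bucketLifeSpan: "24h"         // proposed: force-close buckets after this duration
        }
    });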

    3 votes

  5. add IO throughput related fields to 'serverStatus' output

    There are no IO-throughput-related fields in the output of serverStatus; in FTDC this information is available in the disk metrics.
    It would be useful to have it in the serverStatus output so that we can monitor it.

    3 votes

  6. Extend schema validation to be able to enforce referential integrity between collections

    Where a relational database uses 2 tables to store a 1:many "parent - child" relationship between entities, MongoDB mostly stores the child documents in an array field as part of the parent document. This automatically ensures referential integrity in that
    - a child document cannot be inserted or updated to refer to a non-existent parent, and
    - a parent document cannot be deleted such that it leaves "orphaned" child documents

    However, there are situations where the number and/or size of the child documents makes embedding them all in their parent unworkable, due to the 16 megabyte document size limit if…
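
    For context, a sketch of what such a cross-collection check might look like if validators could reference another collection; the $existsIn operator is purely hypothetical, and only the $jsonSchema part reflects what schema validation can express today:

    // Today: structural validation only.
    db.createCollection("orders", {
        validator: {
            $jsonSchema: {
                required: ["customerId"],
                properties: { customerId: { bsonType: "objectId" } }
            }
        }
    });

    // Proposed (hypothetical operator): reject orders whose customerId does not
    // match the _id of any document in the customers collection.
    db.runCommand({
        collMod: "orders",
        validator: {
            $existsIn: { field: "customerId", from: "customers", foreignField: "_id" }
        }
    });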

    3 votes

    0 comments  ·  Data Models
  7. Document scoped RBAC - Permission for collection document fields

    Roles and access rights can currently be defined at the level of collections.
    It would be nice if these permissions could also be scoped to individual fields within a collection's documents, with query results filtered accordingly.

    Current:

    privileges: [
      { resource: { db: "users", collection: "user" }, actions: [ "find" ] }
    ]

    Expected:

    { resource: { db: "users", collection: "user", field: "email" }, actions: [ "find"] },

    3 votes

  8. Progress bar

    When upgrading a cluster/instance from one instance type to another on a shared tier (for example from M2 to M5), there should be some sort of progress bar tracking the progress of the upgrade.

    3 votes

  9. bloom filter index

    https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/

    I think having an option to use bloom filter indexes could provide better performance compared to compound indexes and eliminate the need for multiple indexes. It would likely still require tuning, but with very large data sets this could be much less expensive.

    3 votes

    0 comments  ·  Indexes
  10. Support compound TTL index

    Right now you can only create a single-field TTL index. I would like to put a TTL on a compound index, as sketched below.
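
    A short sketch of the difference; only the single-field form is supported today, the compound form is the proposal:

    // Supported today: TTL on a single date field.
    db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });

    // Proposed (hypothetical): a compound TTL index, so the same index could serve
    // queries on { tenantId, createdAt } and also expire documents.
    db.events.createIndex({ tenantId: 1, createdAt: 1 }, { expireAfterSeconds: 3600 });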

    3 votes

    1 comment  ·  Indexes
  11. TTL Support within a document

    The current TTL implementation, where documents can expire after a certain amount of time, is extremely useful, especially because of its robustness in the event of a database crash.

    I would love for this to be extended with the ability to let data within a document expire after a set time. For example, when you add data to a document, you could set that data, and that data only, to expire with its own time-to-live value.

    3 votes

    0 comments  ·  Other
  12. Enhancement on Native Auditing

    When we enable native auditing, the following three pieces of information are missing. They would be very useful from a security perspective. Could capturing them be considered in a current or future release?

    Session ID
    OS user
    Service name

    Kannan

    3 votes

    0 comments  ·  Security
  13. Enable BigInt Support for Blockchain Use

    Smart contracts on Ethereum and on Ethereum-compatible chains represent values as 256-bit integers. There are several widely used libraries for dealing with these data types in JavaScript, such as bn.js and BigNumber.js.

    A potential workaround could be to split the data, store using Decimal128, and recombine using Aggregation Framework. However, this would add performance and programming overhead that encourages customers to select alternatives to MongoDB.
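
    A rough illustration of that kind of workaround, under the assumption that the 256-bit value is split into 64-bit limbs (which fit exactly in Decimal128) and that exact arithmetic on the full value stays in the application; collection and field names are made up:

    // Store a 256-bit value as four 64-bit limbs, most significant first.
    db.balances.insertOne({
        account: "0xabc...",
        limbs: [ NumberDecimal("0"), NumberDecimal("12"),
                 NumberDecimal("18446744073709551615"), NumberDecimal("42") ]
    });

    // "Recombining" for ordering purposes can be approximated by sorting on the
    // limbs in order, e.g. to find the largest balances.
    db.balances.aggregate([
        { $sort: { "limbs.0": -1, "limbs.1": -1, "limbs.2": -1, "limbs.3": -1 } }
    ]);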

    3 votes

  14. Add SSO authentication support to mongoDB database

    Existing issue: One user has accounts in multiple mongoDB databases on Atlas that exist in different Projects and maybe Organizations as well. When he wants to switch from one database to the other from a 3rd party app, he has to provide his credentials every time.

    Adding SSO authentication support to MongoDB databases would give such a user the flexibility to switch from one database to another without being asked for credentials every time they connect from a 3rd party application.

    3 votes

    1 comment  ·  Security
  15. delete logs number of days old

    In the options for MongoDB Log Settings, there are only:

    Max Percent of Disk
    Total Number of Files

    A new option for
    Number of Days to Keep
    would be useful.

    3 votes

  16. view on nested array

    • It is quite common to have nested arrays in documents in MongoDB.
    • It is also quite common to have to "flatten" those arrays in queries.

    Here is an example where we have a collection with consultants and their professional experiences.

    consultant [{
       name:"Toto",
       age: 25,
       experiences: [{
          role: "Sofware Engineer",
          from: "2010-01-01",
          to: "2012-01-01",
          company: "CompanyA"
       }, {
          role: "Data Analyst",
          from: "2018-01-01",
          to: "2020-01-01",
          company: "CompanyB"
       }]
    }]
    

    If we would like to list all experiences at a company during a certain period of time, we would have to do an aggregate query with 3 steps that are sometimes…
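
    For reference, a sketch of the kind of pipeline (and view) this currently requires; collection and field names follow the example above, and the filter values are made up:

    // The usual flattening pipeline behind a view: unwind, then reshape.
    db.createView("experiencesFlat", "consultant", [
        { $unwind: "$experiences" },
        { $project: {
            name: 1,
            role: "$experiences.role",
            company: "$experiences.company",
            from: "$experiences.from",
            to: "$experiences.to"
        } }
    ]);

    // The query itself then stays simple:
    db.experiencesFlat.find({ company: "CompanyA", from: { $gte: "2010-01-01" } });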

    3 votes

  17. log connection string used by application to connect

    There are multiple options for connecting to MongoDB: you can connect to a specific node, or you can connect to the whole replica set, etc.
    If the DBA does not have access to the source code, it is not possible to validate whether the application is properly configured and connects to the replica set.

    It would be nice if MongoDB could write the connection string used, and/or details of how exactly the client session is connected, to mongod.log.

    3 votes

  18. Low latency Change Stream for Global Cluster in Atlas

    Our event-driven applications need to publish events to Kafka, triggered by the Change Stream feature. This works perfectly on a replica set MongoDB cluster.

    However, after migrating to a Global Cluster in Atlas, the Change Stream cannot keep latency low because of ordering across shards.
    The latency may go up to 20 seconds for a single change event.

    It would be nice if an application could receive a Change Stream from a single shard only (ignoring ordering among shards) to avoid this latency.

    The idea is to pass a "location" option when starting the Change Stream cursor, as sketched below.

    3 votes

  19. updateMany limit

    When porting an application backend from an RDBMS to MongoDB, we've spoken to two people who are looking for a way to specify a limit on the number of documents updated by .updateMany() (see the sketch below). I understand the behavior cannot be defined on a sharded cluster, but if we had a way to do this on an unsharded collection, that would help when dealing with these teams.
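
    A sketch of the workaround used today and of what the requested option might look like; the limit option on updateMany is hypothetical, and collection and field names are made up:

    // Workaround today: select the _ids first, then update only those.
    var ids = db.jobs.find({ status: "queued" }).limit(100).map(function (d) { return d._id; });
    db.jobs.updateMany({ _id: { $in: ids } }, { $set: { status: "claimed" } });

    // Requested (hypothetical): a limit understood by updateMany itself.
    db.jobs.updateMany({ status: "queued" }, { $set: { status: "claimed" } }, { limit: 100 });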

    3 votes

  20. Option to prohibit a non-voting member from becoming a sync source of a voting member

    Hi,

    Our proposition in a few words: add a replica set option to allow chained replication but with the following exception: a non-voting member cannot become a sync source of a voting member under any circumstance.

    This proposition would allow chained replication for clusters having both w=majority writes and analytics nodes.

    Right now, those clusters cannot safely enable chained replication, because under some circumstances, the non-voting analytics member may become the first secondary in a serial chain of replication. In that case, this node[*] slows down the replication process for all downstream secondaries. Higher replication lag translates to extremely slow…
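
    A sketch of where such an option could live in the replica set configuration; allowAsSyncSourceForVoters is an invented name for the proposed per-member setting, the index of the analytics member is assumed, and the rest is existing configuration:

    var cfg = rs.conf();
    cfg.settings.chainingAllowed = true;                 // existing setting
    cfg.members[3].votes = 0;                            // existing: the analytics node
    cfg.members[3].priority = 0;
    cfg.members[3].allowAsSyncSourceForVoters = false;   // hypothetical new per-member option
    rs.reconfig(cfg);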

    3 votes
