Database

To report bugs, please use our SERVER JIRA project.

310 results found

  1. Collection Comments

    I would like the ability to attach comments to a collection so that other people using the data can get some understanding of the context, or see important README/FAQ information that I would need to share.

    5 votes

  2. Include the _ids of existing documents in BulkWriteResult when performing upserts

    When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:

    db.getCollection("test").find({})

    db.test.drop()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
    bulk.execute();

    The BulkWriteResult contains the upserted _id:

    BulkWriteResult({
        "writeErrors" : [ ],
        "writeConcernErrors" : [ ],
        "nInserted" : 0,
        "nUpserted" : 1,
        "nMatched" : 0,
        "nModified" : 0,
        "nRemoved" : 0,
        "upserted" : [
            {
                "index" : 0,
                "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
            }
        ]
    })

    However, when a document already exists, the _id is not returned:

    db.test.find()

    var bulk = db.test.initializeUnorderedBulkOp();
    bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
    bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…

    5 votes

    1 comment

  3. Validation for referential integrity

    Currently, with the JSON Schema validation, we are able to limit the values for a field using enumerations. However, we need to have a way to limit the values to those entered in another collection.
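
    For reference, this is roughly what the current enum-based restriction looks like; the collection name, field name, and allowed values below are only illustrative, not part of the original request:

    db.runCommand({
        collMod: "orders",
        validator: {
            $jsonSchema: {
                properties: {
                    // today the allowed values must be listed inline in the schema
                    status: { enum: ["new", "paid", "shipped"] }
                }
            }
        }
    })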

    3 votes

  4. Additional checks for storage consistency

    The following opt-in features would add additional checks for storage-layer corruption of collections.

    1. Upon write, read back what data was committed to disk.
    2. Periodic or scheduled scanning of a collection, similar to collection.validate (shown below) but non-blocking.
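
    For comparison, a minimal sketch of the existing blocking check; the collection name is illustrative:

    // validate inspects a collection's data and index integrity,
    // but it blocks operations on the collection while it runs
    db.orders.validate({ full: true })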

    10 votes

  5. Automatic Indexes

    MongoDB can already suggest useful indexes. Why not take the next step and allow MongoDB to autonomously create and manage indexes? Ideally it would automatically maintain the indexes over time as the structure and usage of the database change.

    3 votes

    1 comment  ·  Indexes

  6. Allow collection collation to be editable

    Collation of a collection can be set at creation time only. It would be useful to be able to edit the collation afterwards, to avoid creating an entirely new collection and copying the data over.
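
    Today the collation has to be chosen up front when the collection is created; a minimal sketch (collection name and locale are illustrative):

    // the collation option can only be supplied at creation time
    // and cannot currently be changed afterwards
    db.createCollection("articles", { collation: { locale: "fr", strength: 2 } })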

    2 votes

  7. Conditional TTL index

    In the case of auto-purging, a user might sometimes want to ensure data has been archived before it is purged. If a conditional TTL index were allowed, the user could set the value of a field to indicate that a document has been archived, and the database could then purge accordingly.
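
    A partial TTL index may approximate this today, assuming the server accepts combining partialFilterExpression with expireAfterSeconds; the collection name, field names, and expiry below are illustrative:

    // only documents already flagged as archived are considered for TTL deletion
    db.events.createIndex(
        { createdAt: 1 },
        { expireAfterSeconds: 3600, partialFilterExpression: { archived: true } }
    )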

    1 vote

    0 comments

  8. Collections under a document as in Firebase Firestore

    Collections under a document, as in Firebase Firestore.

    1 vote

    0 comments

  9. Allow parallel migrations to be throttled

    This request is for a tuneable parameter to allow the number of parallel chunk migrations to be limited.

    1 vote

  10. Improve the election process to consider node reachability

    Consider utilizing both new and existing sockets in order to make more realistic observations about cluster health during an election, to avoid, for example, DNS-related issues which would make a node unreachable for new connections.

    4 votes

  11. Upgrade Advisor

    Similar to Microsoft's SQL Server Upgrade Advisor application, generate a report (from Ops Manager for example) identifying issues to fix before or after an upgrade from one major version to another.

    4 votes

  12. Add Event Stream Features (Apache Kafka, NATS Streaming)

    I thought it would be great if MongoDB could support an event streaming (event bus) feature.
    Existing popular event streaming services (AWS Kinesis, NATS Streaming, Apache Kafka) can persist data, which is somewhat like a database.

    It's great for debugging and for data logging for later uses such as machine learning, since I can see the full flow of the data and its changes.

    But there are two downsides with most event streaming services:
    1. It's very difficult to query the data.
    2. "Eventual consistency" issues when dealing with programmatic errors (bugs). Most event streams have this nice feature, which keeps sending the same event to…

    2 votes

    1 comment

  13. TTL expiration on embedded subdocs

    TTL expiration on an embedded subdoc will delete the parent/root doc.

    Provide an option to only delete the subdocument on expiration.
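
    A minimal sketch of the current behaviour; collection and field names are illustrative:

    // TTL index on a date field inside an embedded array: once a matching date
    // expires, the entire parent document is deleted, not just the subdocument
    db.users.createIndex({ "sessions.expireAt": 1 }, { expireAfterSeconds: 0 })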

    1 vote

    0 comments

  14. Performance problem with compound index using doubles, with range filtering

    It looks like there may be a problem when using doubles in a compound index and using range-test filters against all of them in a query.
    The problem we see from the explain plans is that far more keys are being examined than there should be, and this leads to poor performance.
    If we switch to using a similar setup but with integers instead, we don’t see the problem.

    See the second part of this ticket for full details

    https://support.mongodb.com/case/00659614

    We have a document that looks like this –

    {
        "id" : ObjectId("5e2ab24eca314f10b486d827"),
        "attributes" : [
            {
                "attributeCode" :

    2 votes

  15. Too many keys in 2dsphere index when used on geoJson within arrays

    Mongo support have asked me to raise this bug as an improvement idea because they see it as an edge case.
    The issue is here and includes more details:

    https://support.mongodb.com/case/00659614

    In summary, if you have a geoJSON subdocument that sits within an array, and you index it using the 2dsphere index, you get keys duplicated for every element in the array.

    For example, if you have geoJSON that represents a big 4,500-vertex polygon and you put it within an array element: if the array has only one element I see 20 keys generated, which is good. But if…
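
    A minimal sketch of the shape being described; collection and field names are illustrative:

    // documents store a geoJSON polygon inside an array element, e.g.
    // { areas: [ { loc: { type: "Polygon", coordinates: [...] } } ] }
    db.places.createIndex({ "areas.loc": "2dsphere" })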

    2 votes

  16. Ability to view and track collection wise transactions (read,write,success,failures)

    We have multiple microservices using the same collection and we'd like to track how many calls come from each, and also understand which transactions are successful or not.
    Ultimately this will allow us to understand the MDB usage by each of the microservices.
    [setting verbose logging level to 1 and then using these logs to create a dashboard in Splunk doesn't give us enough granularity]

    1 vote

    0 comments

  17. Making Elections Faster.

    Elections are an expensive operation. We could avoid some elections by enforcing a priority-based ordering on which node shall become primary if the current primary fails.
    Specifically, we can keep the node with the second-highest priority in the data center of the most preferred primary. Network failures within a data center are usually less probable than failures across data centers, so it is reasonable to believe that if the second preferred member loses contact with the current primary, it is the primary node that has failed and not the network. As a supplement, we can also have redundant…
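
    Member priorities can already express part of this ordering; a sketch of designating a preferred successor, with member indexes and priority values that are purely illustrative:

    // give the member co-located with the preferred primary the second-highest priority
    cfg = rs.conf()
    cfg.members[0].priority = 3   // preferred primary
    cfg.members[1].priority = 2   // preferred successor, same data center
    rs.reconfig(cfg)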

    2 votes

  18. Find all geoJSON points within a large collection of polygons. Reference collection in geoWithin.

    I would like to be able to find all geoJSON points located within a collection of polygons. My personal goal is to find points located within earth's water bodies. The water bodies are located in their own collection while the geoJSON points are located in a separate collection within the same DB. Mongo's geoWithin function only lets me hard code the polygons, but I would like to reference a collection of polygons.

    See https://stackoverflow.com/questions/63162823/mongo-geowithin-error-polygon-coordinates-must-be-an-array
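
    For reference, the current approach requires the polygon to be embedded directly in the query; the collection name and coordinates below are illustrative:

    // today the polygon coordinates have to be hard-coded in the query,
    // rather than referencing a collection of polygons
    db.points.find({
        location: {
            $geoWithin: {
                $geometry: {
                    type: "Polygon",
                    coordinates: [ [ [0, 0], [0, 10], [10, 10], [10, 0], [0, 0] ] ]
                }
            }
        }
    })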

    1 vote

    0 comments

  19. Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process

    What is the problem that needs to be solved? Store (historically) the serverStatus.uptime counter info of the MongoDB Server process, so that it is possible to track serverStatus.uptime changes over time.

    Why is it a problem? (the pain) As of now (2020-02-25) there is no way to see historical info about MongoDB Server process restarts, since the serverStatus.uptime counter is reset every time the MongoDB Server process is restarted. There is no other way (other than going into the MongoDB Server process logs) to know whether the process was restarted and when it was restarted. If you'd like to calculate MongoDB Server process availability, you'll…
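
    For context, the counter in question is the uptime field returned by serverStatus, reported in seconds since the current process started:

    // resets to zero whenever the mongod process restarts
    db.serverStatus().uptime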

    5 votes

  20. MongoDB 4.2 Distributed Transaction with Arbiter

    Hello.
    We are preparing to introduce MongoDB 4.2 and are looking forward to the distributed transaction feature.

    I read in the documentation that an arbiter cannot be a member when using distributed transactions:

    https://docs.mongodb.com/manual/core/transactions/index.html#arbiters

    I could understand this for a PSA set, but it is strange that it does not even work for PSSA.

    From an operating point of view the set usually runs as a PSS, but it can temporarily become a PSA in the event of equipment problems.

    Why should there be no arbiter in the shard in order to use distributed transactions?
    I cannot understand this restriction.

    Can you tell me the technical reason for not being able to support distributed transactions with an arbiter?
    Do…

    4 votes

    1 comment
