Database

To report bugs, please use our SERVER JIRA project.

310 results found

  1. Compound clustered index

    Currently it is only possible to create a clustered index on a single field. Since documents can be arranged in ascending order of multiple fields, I see no reason to disallow a clustered index from being compound.

    Expected syntax:

    create_collection('testVCFcoll', clusteredIndex={'key': {'_id': 1}, 'unique': True, 'name': ['#CHROM', 'POS']})
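
    For reference, this is the form accepted today (MongoDB 5.3+ syntax), where the clustered index key can only be {_id: 1}; a minimal sketch of the current behaviour, not of the proposed compound form:

    db.createCollection('testVCFcoll', {
      clusteredIndex: { key: { _id: 1 }, unique: true }
    })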

    3 votes

    0 comments  ·  Indexes
  2. Document scoped RBAC - Permission for collection document fields

    Roles and access permissions can currently be defined at the collection level.
    It would be nice if these permissions could also be scoped to individual fields within a collection, with query results filtered accordingly.

    Current:

    privileges: [
      { resource: { db: "users", collection: "user" }, actions: [ "find" ] }
    ]

    Expected:

    { resource: { db: "users", collection: "user", field: "email" }, actions: [ "find"] },

    3 votes

  3. Progress bar

    When upgrading a cluster/instance from one instance type to another on a shared tier, for example from M2 to M5, there should be some sort of progress bar tracking the progress of the upgrade.

    3 votes

  4. bloom filter index

    https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/

    I think having an option to use bloom filter indexes could provide better performance compared to compound indexes and eliminate the need for multiple indexes. It would likely still require tuning, but with very large data sets this could be much less expensive.

    3 votes

    0 comments  ·  Indexes
  5. Support compound TTL index

    Right now you can only create a single-field TTL index. I would like to put a TTL on a compound index.
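
    For reference, a minimal sketch of what works today versus what is being asked for (collection and field names are illustrative):

    // Works today: TTL on a single-field index
    db.sessions.createIndex({ lastSeen: 1 }, { expireAfterSeconds: 3600 })

    // Requested: the same option honoured on a compound index
    // (today expireAfterSeconds is ignored on compound indexes)
    db.sessions.createIndex({ tenant: 1, lastSeen: 1 }, { expireAfterSeconds: 3600 })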

    3 votes

    1 comment  ·  Indexes
  6. TTL Support within a document

    The current TTL implementation, where documents can expire after a certain amount of time, is extremely useful, especially because it remains robust if the database crashes.

    I would love for this to be extended with the ability to let data within a document expire after a set time. For example, if you add data to a document, you could set that data, and that data only, to expire with its own time-to-live value.
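
    Until something like this exists, one hedged workaround is to store an expiry timestamp next to the value and prune it periodically from the application (collection and field names are illustrative); the request is for the server to do this automatically per field:

    db.profiles.updateMany(
      { "promo.expiresAt": { $lte: new Date() } },
      { $unset: { promo: "" } }
    )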

    3 votes

    0 comments  ·  Other
  7. Enhancement on Native Auditing

    When we enable native auditing, the following three pieces of information are missing. They are especially useful from a security perspective. Can capturing this information be considered for a current or future release?

    • Session ID
    • OS user
    • Service name

    Kannan

    3 votes

    0 comments  ·  Security
  8. Enable BigInt Support for Blockchain Use

    Values in smart contracts on Ethereum and Ethereum-compatible chains are represented as 256-bit integers. Several widely used JavaScript libraries, such as bn.js and BigNumber.js, exist to deal with these data types.

    A potential workaround could be to split the data, store the parts using Decimal128, and recombine them using the Aggregation Framework. However, this adds performance and programming overhead that encourages customers to select alternatives to MongoDB.
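
    A minimal mongosh sketch of that workaround, splitting a 256-bit value into four 64-bit limbs stored exactly as Decimal128 and recombining them client-side with BigInt rather than in the Aggregation Framework (collection and field names are illustrative):

    // split: least-significant limb first
    const value = 2n ** 200n + 12345n
    const mask = (1n << 64n) - 1n
    const limbs = []
    let v = value
    for (let i = 0; i < 4; i++) {
      limbs.push(NumberDecimal((v & mask).toString()))
      v >>= 64n
    }
    db.balances.insertOne({ account: "0xabc123", limbs })

    // recombine after a find()
    const doc = db.balances.findOne({ account: "0xabc123" })
    const restored = doc.limbs.reduceRight(
      (acc, limb) => (acc << 64n) + BigInt(limb.toString()),
      0n
    )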

    3 votes

  9. Allow changing compression of an existing collection

    I'd like to switch my database from Snappy to Zstd compression.

    Currently, it doesn't seem that it is possible to change the compression of an existing collection.

    It would be nice if there were a way to do this, even if (for example) it required making a new replica set member which "re-compresses" the data to the new algorithm while cloning it.

    As per SERVER-67726, the only way to do this today is to create a new collection and manually copy with mongodump/mongorestore. This doesn't seem to be a viable option, for uptime / data consistency reasons.
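
    For context, a minimal sketch of how the compressor can be chosen today, at collection creation time only (collection name is illustrative); the request is for a supported way to re-compress an existing collection in place:

    db.createCollection("events", {
      storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
    })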

    3 votes

  10. Support newer versions of JSON Schema in validation to be able to use the "if", "then", "else" and "const" keywords

    Currently there is no way to define conditional schema validators, because you can't use the "const" keyword inside a "oneOf", nor the "if", "then", and "else" keywords.
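
    A hedged sketch of what such a validator could look like; these keywords are not accepted by $jsonSchema today, so this does not work yet (collection and field names are illustrative):

    db.createCollection("orders", {
      validator: {
        $jsonSchema: {
          properties: { status: { enum: ["pending", "shipped"] } },
          if:   { properties: { status: { const: "shipped" } } },
          then: { required: ["trackingNumber"] }
        }
      }
    })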

    3 votes

    0 comments  ·  Other
  11. Support expressions in $densify range bounds

    The $densify aggregation pipeline stage seems unable to evaluate range bounds expressions, requiring the range bounds to be constant.

    See the following example (the collection testcoll contains a single document with only the _id field):

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [0, 5], step: 1}}}])
    [
      { a: 0 },
      { _id: ObjectId("6284a16d64553eaf74b1e189"), a: 1 },
      { a: 2 },
      { a: 3 },
      { a: 4 }
    ]

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [{$toInt: "0"}, 5], step: 1}}}])
    MongoServerError: A bounding array must be an ascending array of either two dates or

    3 votes

  12. collection-level users should be able to list their collections

    Currently users with collection-specific read or read/write permissions are not authorized to perform the following commands:

    db.listCollections()
    show collections
    db.getCollectionNames()

    This impacts the shell, and also third-party tools that won't let users access their permitted collections because listing the collections is blocked in the first place.

    Suggestion:

    Users with collection-specific read or read/write permissions should be able to run the above commands and the result would only present the collections for which the user has some read or write privileges (instead of blocking everything).
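
    For context, a minimal sketch of a grant that produces the situation described above (role, user, database, and collection names are illustrative); a user limited to this privilege cannot list collections today, and the suggestion is that the commands above should instead return just "orders":

    db.getSiblingDB("admin").createRole({
      role: "readOrdersOnly",
      privileges: [
        { resource: { db: "shop", collection: "orders" }, actions: [ "find" ] }
      ],
      roles: []
    })
    db.getSiblingDB("admin").createUser({
      user: "reporting",
      pwd: "changeMe",
      roles: [ "readOrdersOnly" ]
    })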

    3 votes

  13. Add SSO authentication support to mongoDB database

    Existing issue: a user has accounts in multiple MongoDB databases on Atlas that live in different Projects, and perhaps different Organizations as well. When they want to switch from one database to another from a third-party app, they have to provide their credentials every time.

    Adding SSO authentication support to MongoDB databases would give such a user the flexibility to switch from one database to another without being asked for credentials every time they connect from a third-party application.

    3 votes

    1 comment  ·  Security
  14. Update to two binding accounts in the config file for LDAP.

    Please refer to Case 00803199.

    We need support for two binding accounts in the config file for LDAP authentication, so that we can avoid downtime while resetting the binding account password.

    3 votes

  15. delete logs number of days old

    In the options for MongoDB Log Settings, there are only:

    • Max Percent of Disk
    • Total Number of Files

    A new option for the number of days to keep logs would be useful.

    3 votes

  16. Deny Network Access to MongoDB Cluster

    To improve network security, please create an option to deny specific network addresses access to a MongoDB cluster.

    3 votes

    1 comment
  17. view on nested array

    • It is quite common to have nested arrays in documents in MongoDB.
    • It is also quite common to have to flatten those arrays in queries.

    Here is an example where we have a collection with consultants and their professional experiences.

    consultant [{
       name:"Toto",
       age: 25,
       experiences: [{
          role: "Sofware Engineer",
          from: "2010-01-01",
          to: "2012-01-01",
          company: "CompanyA"
       }, {
          role: "Data Analyst",
          from: "2018-01-01",
          to: "2020-01-01",
          company: "CompanyB"
       }]
    }]
    

    If we would like to list all experiences at a company during a certain period of time, we would have to do an aggregate query with 3 steps that are sometimes…
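
    A hedged sketch of the kind of three-stage aggregation that this flattening typically requires, using the collection above (company and dates are illustrative); the idea is to have a view-like shortcut for this pattern:

    db.consultant.aggregate([
      { $unwind: "$experiences" },
      { $match: {
          "experiences.company": "CompanyA",
          "experiences.from": { $lte: "2011-06-01" },
          "experiences.to":   { $gte: "2011-06-01" }
      } },
      { $project: { name: 1, experience: "$experiences" } }
    ])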

    3 votes

  18. log connection string used by application to connect

    There are multiple options to connect to Mongo: you can connect to a specific node, or you can connect to the whole replica set, etc.
    If the DBA does not have access to the source code, it's not possible to validate whether the application is properly configured and connects to the replica set.

    It would be nice if MongoDB could log to mongod.log the connection string used and/or details of exactly how the client session is connected.

    3 votes

  19. Low latency Change Stream for Global Cluster in Atlas

    Our event-driven applications need to publish events to Kafka, triggered by the Change Stream feature. This works perfectly on a replica set MongoDB cluster.

    However, after migrating to a Global Cluster in Atlas, the Change Stream cannot keep latency low because of ordering across shards.
    Latency may go up to 20 seconds for a single change event.

    It would be nice if an application could receive a Change Stream from a single shard only (ignoring ordering among shards) to avoid this latency.

    The idea is to pass a "location" option when starting the Change Stream cursor, as sketched below.
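
    An illustrative sketch only; the "location" option is hypothetical and not part of the current watch() API, which today accepts options such as fullDocument and resumeAfter:

    const cursor = db.orders.watch([], {
      fullDocument: "updateLookup",
      location: "EU"   // hypothetical: read changes from one region's shard(s) only
    })
    while (cursor.hasNext()) {
      printjson(cursor.next())
    }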

    3 votes

  20. updateMany limit

    When porting an application backend from an RDBMS to MongoDB, we've spoken to two people who are looking for a way to specify a limit on the number of documents updated by .updateMany(). I understand the behavior cannot be well defined on a sharded cluster, but a way to do this on an unsharded collection would help when dealing with these teams. A client-side workaround is sketched below.
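
    A hedged sketch of a common client-side workaround today (collection, field, and values are illustrative): select up to N matching _ids first, then update only those. It is not atomic with respect to concurrent writes, which is part of why a native limit would help:

    const ids = db.tasks.find({ status: "queued" }, { _id: 1 })
                        .limit(100)
                        .toArray()
                        .map(d => d._id)
    db.tasks.updateMany({ _id: { $in: ids } }, { $set: { status: "claimed" } })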

    3 votes
