Database

To report bugs, please use our SERVER JIRA project.

317 results found

  1. user authentication

    Hi,

    It would be extremely useful to be able to create users who can only connect to the database from specific networks or even specific IP addresses, similar to what is possible with MySQL.

    For example, using the following commands:

    CREATE USER 'user_name'@'10.214.3.0' IDENTIFIED BY 'password';

    GRANT ALL PRIVILEGES ON shorturl.* TO 'user_name'@'10.214.3.0';

    You can create a user who can access the database only from the network with the IP address 10.214.3.0.

    I would like to know whether similar functionality can be achieved in MongoDB as well. This would be very useful for my purposes, as I want…
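
    For reference, a minimal sketch of how per-user network restrictions can already be expressed with the authenticationRestrictions option of createUser (available since MongoDB 3.6; the user name, password, and role below are illustrative and reuse the shorturl database from the example):

    use shorturl
    db.createUser({
      user: "user_name",
      pwd: "password",
      roles: [ { role: "readWrite", db: "shorturl" } ],
      // only accept connections originating from this source network (CIDR ranges or single IPs)
      authenticationRestrictions: [ { clientSource: [ "10.214.3.0/24" ] } ]
    })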

    3 votes

    2 comments  ·  Security
  2. Introduce a new optional field BucketLifeSpan along with Granularity

    An enhancement to MongoDB's management of time series collections could be the introduction of a BucketLifeSpan attribute, in addition to the existing Granularity setting. This new, optional attribute would control how long a bucket can remain open, with the condition that Granularity should be less than or equal to BucketLifeSpan.

    Consider a use case involving a time series collection for tracking data from 70,000 socket devices daily, with DeviceId as the metafield. Assume data is organized into daily collections and granularity is set to minutes so that buckets fill optimally unless they reach their size limit.

    For a collection named…
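
    A sketch of how the proposed option might look on a time series collection; bucketLifeSpan is hypothetical (it is not an existing option), while timeField, metaField, and granularity are existing time series parameters:

    db.createCollection("deviceReadings", {
      timeseries: {
        timeField: "ts",
        metaField: "DeviceId",
        granularity: "minutes",
        bucketLifeSpan: 86400   // hypothetical: maximum number of seconds a bucket may stay open
      }
    })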

    3 votes

  3. support parallel query execution, including find() and aggregate()

    To make use of multi-core environments and improve query performance over a large number of documents, parallel query execution is needed.
    Sharding or micro-sharding is not an alternative in this case.
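
    A purely hypothetical sketch of what an opt-in knob could look like; the maxParallelism option does not exist in aggregate() and is only illustrative of the request:

    db.events.aggregate(
      [ { $match: { status: "open" } }, { $group: { _id: "$type", n: { $sum: 1 } } } ],
      { maxParallelism: 8 }   // hypothetical: allow the server to spread the scan/group work over up to 8 threads
    )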

    3 votes

  4. TTL index throttling for deletion

    To avoid the impact of deletions, we need to control the quantity and rate of deletes relative to the rate of document generation (inserts).
    We need a way to throttle the number of documents removed per delete operation,

    i.e. the number of documents per TTL delete pass and the sleep time between passes (default 60s).
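
    Part of this appears tunable today via the ttlMonitorSleepSecs server parameter (the 60s default mentioned above); a per-pass document limit would be new. A sketch, where ttlDeleteDocumentLimit is hypothetical:

    // existing parameter: seconds the TTL monitor sleeps between delete passes (default 60);
    // depending on version it may need to be set at startup via --setParameter
    db.adminCommand({ setParameter: 1, ttlMonitorSleepSecs: 300 })

    // hypothetical parameter: cap the number of documents removed in a single TTL pass
    db.adminCommand({ setParameter: 1, ttlDeleteDocumentLimit: 10000 })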

    3 votes

    0 comments  ·  Indexes
  5. add IO throughput related fields to 'serverStatus' output

    There are no I/O throughput-related fields in the output of serverStatus; in FTDC this information is available in the disk metrics.
    It should be exposed in the serverStatus output so that we can monitor it.
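
    A sketch of the kind of section that would help; the diskIO document below is hypothetical output, not an existing serverStatus field:

    // hypothetical: a disk-throughput section in serverStatus
    db.serverStatus().diskIO
    {
      reads:  { opsPerSec: 1240, bytesPerSec: 52428800 },
      writes: { opsPerSec: 310,  bytesPerSec: 10485760 },
      avgLatencyMicros: { read: 450, write: 900 }
    }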

    3 votes

  6. $merge

    Report the number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged documents as a pipeline result that can be passed to additional stages.
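
    A hypothetical sketch of the first option, a summary document produced by a pipeline ending in $merge ($merge currently yields no documents, and none of the fields below exist today):

    db.sales.aggregate([
      { $group: { _id: "$region", total: { $sum: "$amount" } } },
      { $merge: { into: "salesSummary", whenMatched: "merge", whenNotMatched: "insert" } }
    ])
    // hypothetical output:
    { matched: 120, merged: 115, inserted: 5, skipped: 0 }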

    3 votes

  7. Compound clustered index

    Currently it is only possible to create a clustered index on a single field. Since documents can be arranged in ascending order of multiple fields, I see no reason to disallow a clustered index from being compound.

    Expected syntax:

    create_collection('testVCFcoll', clusteredIndex={'key': {'_id': 1}, 'unique': True, 'name': ['#CHROM', 'POS']})

    3 votes

    0 comments  ·  Indexes
  8. Document scoped RBAC - Permission for collection document fields

    Roles and access can currently be defined at the collection level for users.
    It would be nice if these access permissions could also be scoped to fields within a collection, with query results returned accordingly.

    Current:

    privileges: [
      { resource: { db: "users", collection: "user" }, actions: [ "find" ] }
    ]

    Expected:

    { resource: { db: "users", collection: "user", field: "email" }, actions: [ "find" ] }

    3 votes

  9. Progress bar

    When upgrading a cluster/instance from one instance type to another on a shared tier, for example from M2 to M5, there should be some sort of progress bar tracking the progress of the upgrade.

    3 votes

  10. bloom filter index

    https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/

    I think having an option to use bloom filter indexes could provide better performance compared to compound indexes and eliminate the need for multiple indexes. It would likely still require tuning, but with very large data sets this could be much less expensive.
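
    A hypothetical sketch of what such an index could look like, loosely mirroring the PostgreSQL bloom extension's parameters; neither a bloom index type nor the options below exist in MongoDB today:

    // hypothetical: one bloom index covering several equality-queried fields
    db.events.createIndex(
      { userId: 1, country: 1, device: 1, campaign: 1 },
      { name: "events_bloom", bloomFilter: { signatureBits: 80, bitsPerField: 2 } }   // hypothetical options
    )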

    3 votes

    0 comments  ·  Indexes
  11. Support compound TTL index

    Right now you can only create a single-field TTL index. I would like to put a TTL on a compound index.
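
    The request, sketched below; today expireAfterSeconds is only honored on a single-field index on a date field, so the compound form shown here is hypothetical:

    // hypothetical: TTL attached to a compound index (not supported today)
    db.sessions.createIndex(
      { tenantId: 1, lastSeenAt: 1 },
      { expireAfterSeconds: 3600 }
    )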

    3 votes

    1 comment  ·  Indexes
  12. TTL Support within a document

    The current TTL implementation, where documents can expire after a certain amount of time, is extremely useful, especially because of its robustness in the event of a database crash.

    I would love for this to be extended with the ability to let data within a document expire after a set time. For example, when you add data to a document, you could set that data, and that data only, to expire with its own time-to-live value.

    3 votes

    0 comments  ·  Other
  13. Enhancement on Native Auditing

    When we enable native auditing, the following three pieces of information are missing. They would be very useful from a security perspective. Can capturing this information be considered for a current or future release?

    Session ID
    OS user
    Service name

    Kannan

    3 votes

    0 comments  ·  Security
  14. Enable BigInt Support for Blockchain Use

    Values in smart contracts on Ethereum and Ethereum-compatible chains are represented as 256-bit integers. Several widely used JavaScript libraries deal with these data types, such as bn.js and BigNumber.js.

    A potential workaround could be to split the data, store using Decimal128, and recombine using Aggregation Framework. However, this would add performance and programming overhead that encourages customers to select alternatives to MongoDB.

    3 votes

  15. Allow changing compression of an existing collection

    I'd like to switch my database from Snappy to Zstd compression.

    Currently, it doesn't seem that it is possible to change the compression of an existing collection.

    It would be nice if there were a way to do this, even if (for example) it required making a new replica set member which "re-compresses" the data to the new algorithm while cloning it.

    As per SERVER-67726, the only way to do this today is to create a new collection and manually copy with mongodump/mongorestore. This doesn't seem to be a viable option, for uptime / data consistency reasons.
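
    For reference, the manual path referred to above boils down to creating the target collection with a different block compressor and copying the data into it; a sketch, assuming WiredTiger:

    // new collection using zstd block compression (per-collection override of the global setting)
    db.createCollection("mycoll_zstd", {
      storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
    })
    // then copy the data across (e.g. mongodump/mongorestore as noted above) and swap the collections,
    // which is exactly the downtime / consistency problem this idea wants to avoid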

    3 votes

  16. Support newer versions of JSON Schema in validation to be able to use the "if", "then", "else" and "const" keywords

    Currently there is no way to define conditional schema validators, because you can't use the "const" keyword inside a "oneOf", nor the "if", "then", and "else" keywords.
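
    A sketch of the kind of validator this would enable; "if", "then", "else" and "const" below are the requested newer JSON Schema keywords and are rejected by $jsonSchema today:

    db.createCollection("orders", {
      validator: { $jsonSchema: {
        bsonType: "object",
        properties: { kind: { enum: [ "digital", "physical" ] } },
        // requested keywords, currently not accepted:
        if:   { properties: { kind: { const: "physical" } } },
        then: { required: [ "shippingAddress" ] },
        else: { required: [ "downloadUrl" ] }
      } }
    })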

    3 votes

    0 comments  ·  Other
  17. Support expressions in $densify range bounds

    The $densify aggregation pipeline stage seems unable to evaluate range bounds expressions, requiring the range bounds to be constant.

    See the following example (the collection testcoll contains a single document with only the _id field):

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [0, 5], step: 1}}}])
    [
      { a: 0 },
      { _id: ObjectId("6284a16d64553eaf74b1e189"), a: 1 },
      { a: 2 },
      { a: 3 },
      { a: 4 }
    ]

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [{$toInt: "0"}, 5], step: 1}}}])
    MongoServerError: A bounding array must be an ascending array of either two dates or

    3 votes

  18. collection-level users should be able to list their collections

    Currently users with collection-specific read or read/write permissions are not authorized to perform the following commands:

    db.listCollections()
    show collections
    db.getCollectionNames()

    This impacts the shell (and also third-party tools, which won't let users access their permitted collections because listing collections is blocked in the first place).

    Suggestion:

    Users with collection-specific read or read/write permissions should be able to run the above commands and the result would only present the collections for which the user has some read or write privileges (instead of blocking everything).
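
    For reference, a partial workaround appears to exist already: the listCollections command accepts authorizedCollections together with nameOnly (MongoDB 4.0+), which lets a user with collection-level privileges list just the collections they are authorized on:

    // returns only the names of collections the current user is authorized to use
    db.runCommand({ listCollections: 1, nameOnly: true, authorizedCollections: true })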

    3 votes

  19. Add SSO authentication support to MongoDB databases

    Existing issue: a user has accounts in multiple MongoDB databases on Atlas that exist in different Projects, and possibly different Organizations as well. When they want to switch from one database to another from a third-party app, they have to provide their credentials every time.

    Adding SSO authentication support to MongoDB databases would give such a user the flexibility to switch from one database to another without being asked for credentials every time they connect from a third-party application.

    3 votes

    1 comment  ·  Security
  20. Update to two binding accounts in the config file for LDAP.

    Please refer to Case 00803199.

    We need support for two binding accounts in the config file for LDAP authentication so that we can avoid downtime while resetting the binding account password.

    3 votes
