Database

To report bugs, please use our SERVER JIRA project.

291 results found

  1. 2 votes

  2. Define the random seed manually for $rand and $sample

    It would be great if an additional parameter to define the seed for $rand and $sample could be used.
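
    For illustration, a hypothetical sketch of what such a parameter might look like (the seed option does not exist today, and the events collection name is made up):

    // Proposed, not currently supported: a "seed" option to make $sample reproducible.
    db.events.aggregate([
      { $sample: { size: 100, seed: 42 } }
    ])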

    2 votes

  3. Tool to score a data model

    It would be great to have a tool that can score the data model of a specific database. I would like this tool to scan a database's data model and score it based on best practices, patterns, and anti-patterns. It could also generate a list of problems and suggestions for improvement.

    This tool could be used in CI pipelines alongside the unit tests; the main goal is to avoid releasing new features that rely on a bad data model.

    2 votes

    0 comments  ·  Data Models
  4. Collection that stores the last login date_time for users

    Could you please store the last login date_time for users that exist in either the admin database or the $external database, in a collection in that cluster's admin database or in the Ops Manager database which manages the clusters?

    My requirement is to find the users who haven't logged in for 60 days so that their roles can be revoked, and ultimately to delete the users who don't have any roles attached after a fixed period of time.

    I do understand you store the login details in audit logs, but that would be a tedious process at…
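
    For illustration, a sketch of the query this would enable (the loginActivity collection and its lastLogin field are hypothetical, not an existing MongoDB feature):

    // Find users who have not logged in for 60 days (hypothetical collection/fields).
    const cutoff = new Date(Date.now() - 60 * 24 * 60 * 60 * 1000);
    db.getSiblingDB("admin").loginActivity.find({ lastLogin: { $lt: cutoff } })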

    2 votes

    0 comments  ·  Other
  5. Flatten arrays in group stage

    Add group operators that flatten document arrays into a single array, with or without repeated elements.
    So, given:
    doc1 = { arr: [1, 2, 3, 4], gr: "group" }, doc2 = { arr: [5, 6, 7, 8], gr: "group" }
    { $group: { _id: "$gr", arrays: { $***: "$arr" } } }
    =>
    { _id: "group", arrays: [1, 2, 3, 4, 5, 6, 7, 8] }
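
    Until such an operator exists, one possible workaround (a sketch assuming a collection named docs holding the documents above) is to $push the per-document arrays and flatten them with $reduce and $concatArrays:

    db.docs.aggregate([
      // Collect the per-document arrays: [[1, 2, 3, 4], [5, 6, 7, 8]]
      { $group: { _id: "$gr", nested: { $push: "$arr" } } },
      // Flatten them into a single array
      { $project: {
          arrays: {
            $reduce: {
              input: "$nested",
              initialValue: [],
              in: { $concatArrays: ["$$value", "$$this"] }
            }
          }
      } }
    ])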

    2 votes

  6. geo

    It would be nice to get the length of a LineString in a GeoJSON object, or the possibility to write an aggregation that calculates it.

    2 votes

  7. Query Planner needs a timeOut set as a Database parameter.

    We see app queries timing out because the query planner takes >1 sec. Whilst this can be avoided by setting maxTimeMS on the client end, that is a setting for the overall query and not just the query planner. It also comes with the risk of closing/timing out the actual query (cursor), which is not what we need.

    We only want the query planner itself to have a specific, customisable timeout, with the query continuing to run using one/any of the plans evaluated thus far, without timing out…
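
    For reference, the existing client-side control mentioned above bounds the whole operation, not just planning (collection and filter are illustrative):

    // maxTimeMS applies to planning + execution, so it can kill the query itself.
    db.orders.find({ status: "A" }).maxTimeMS(1000)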

    2 votes

  8. We need to be able to use $[<identifier>] and "$setOnInsert" in the same command

    I want to be able to maintain an array of counters for a user through a single update statement. If the document containing the array of counters does not exist, I want to add it. If it does exist, I want to increment the counter.

    For example, this command

    db.inboxItemCounts.updateOne(
      // filter
      { "userId": userDoc.userId },
      // update
      {
        "$setOnInsert": {
          "userId": userDoc.userId,
          "fromUserSummary": [{
            "userName": fromUserDoc.userName,
            "count": 1
          }]
        },
        "$inc": {
          "fromUserSummary.$[userElement].count": 1
        }
      },
      // options
      {
        "upsert": true,
        "writeConcern": { "w": "majority" },
        "arrayFilters": [
          { "userElement.userName": { $eq: fromUserDoc.userName } }
        ]
      }
    )

    2 votes

    0 comments  ·  Other
  9. hint support for $graphLookup

    Currently you can supply a hint to the aggregation call to tell MongoDB to use a specific index for the initial $match, but there is no way to specify which index to use for a $graphLookup later in the pipeline.

    I would like an optional hint property on the $graphLookup stage.
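
    For illustration, a hypothetical sketch of the proposed option (the hint field does not exist today; collection, fields and index name are made up):

    db.employees.aggregate([
      { $graphLookup: {
          from: "employees",
          startWith: "$reportsTo",
          connectFromField: "reportsTo",
          connectToField: "name",
          as: "reportingChain",
          hint: "name_1"   // proposed: index to use for the recursive lookups
      } }
    ])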

    2 votes

  10. Add numerically-ordered index feature which forces a field to maintain ordering

    I would like MongoDB to add better support for ordered lists across documents in a collection. This feature would allow the user to designate a field such as "position", and the DB would ensure the values of that field across documents remain in a monotonically-increasing integer sequence 0, 1, 2, 3, ... N.

    I am the maintainer of a Ruby library called "Mongoid Orderable". This library maintains a numerically ordered field across documents. I'd like to ask MongoDB to investigate moving the functionality of this library into the server.

    What use case would this feature/improvement enable?

    This feature would be useful…

    2 votes

    0 comments  ·  Indexes
  11. Add pipeline stage for "downsampling" data

    Downsampling is an extremely common operation when plotting time-series data on graphs where there is too much data to produce a good-looking/meaningful graph. This would pick and choose "important" data points based on an algorithm such as "Largest-Triangle-Three-Buckets" (https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf) instead of returning the entire data set.

    Not only would this make prettier graphs, it would also reduce the overall payload returned from the database, thus reducing network-related latency.

    This would be an awesome addition to timeseries!

    2 votes

  12. Allow changing compression of an existing collection

    I'd like to switch my database from Snappy to Zstd compression.

    Currently, it doesn't seem possible to change the compression of an existing collection.

    It would be nice if there were a way to do this, even if (for example) it required making a new replica set member which "re-compresses" the data to the new algorithm while cloning it.

    As per SERVER-67726, the only way to do this today is to create a new collection and manually copy with mongodump/mongorestore. This doesn't seem to be a viable option, for uptime / data consistency reasons.
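
    For context, the block compressor can currently only be chosen when a collection is created (a sketch; the collection name is illustrative), which is why changing it later requires rewriting the data:

    db.createCollection("events", {
      storageEngine: {
        wiredTiger: { configString: "block_compressor=zstd" }
      }
    })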

    2 votes

  13. Support newer versions of JSON Schema in validation to be able to use the "if", "then", "else" and "const" keywords

    Currently there is no way to have conditional schema validators, because you can't use the "const" keyword inside a "oneOf", nor the "if", "then" and "else" keywords.
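
    For illustration, a sketch of the kind of conditional validation being requested, written with standard JSON Schema draft-07 keywords that $jsonSchema currently rejects (collection and field names are made up):

    db.createCollection("shipments", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          if:   { properties: { country: { const: "US" } } },
          then: { required: ["zipCode"] },
          else: { required: ["postalCode"] }
        }
      }
    })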

    2 votes

    0 comments  ·  Other
  14. Support readOnly in Json Schema Validation

    Support the readOnly property in JSON Schema validation to make certain fields (or the entire document) immutable after creation.
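
    For illustration, a hypothetical sketch of what this could look like ($jsonSchema does not accept readOnly today, and the server does not enforce immutability from it; names are made up):

    db.createCollection("accounts", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          properties: {
            accountId: { bsonType: "string", readOnly: true }   // desired: immutable after insert
          }
        }
      }
    })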

    2 votes

    0 comments  ·  Data Models
  15. Use dedicated interface for replication and balancing

    Like many other big database systems, MongoDB could support specifying dedicated network interfaces for replication and/or for the balancer (sharding).

    Currently you can configure replica set and shard members only by a single hostname/IP address, i.e. all data is transmitted over the same network interface.

    2 votes

    0 comments  ·  Replication
  16. Make it possible to delete a database via API

    It would be nice to be able to delete a database in a replica set via an API call. This would make it easier for CI/CD when deleting clients.
    The strange thing is that I can delete a complete cluster, but more fine-grained deletion, like deleting a replica set or only one database in a replica set, is not possible. The most dangerous operation is the one that is available via the API :)

    2 votes

  17. Make it clear when an index is not present on all shards

    We recently had an issue where an index did not get created on all shards. When we ran getIndexes() on mongos it reported the index was present. So we dropped another redundant index thinking all would be well - it was not. We had serious performance issues on the shard that was missing the new index and our app was unavailable for a few hours whilst we re-created the missing index on a huge collection.

    It would be better if getIndexes() called on mongos reported some sort of warning or indication that the index was not present on all shards. The…
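
    One way to check this today (a sketch; the collection name is illustrative) is $indexStats, which on a sharded cluster returns one document per index per shard, so grouping by index name shows which shards report it:

    db.orders.aggregate([
      { $indexStats: {} },
      { $group: { _id: "$name", shards: { $addToSet: "$shard" } } }
    ])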

    2 votes

    0 comments  ·  Indexes
  18. collection-level users should be able to list their collections

    Currently users with collection-specific read or read/write permissions are not authorized to perform the following commands:

    db.listCollections()
    show collections
    db.getCollectionNames()

    This impacts the shell (and also third-party tools that won't let users access their permitted collections, because listing the collections is blocked in the first place).

    Suggestion:

    Users with collection-specific read or read/write permissions should be able to run the above commands, and the result would present only the collections for which the user has some read or write privileges (instead of blocking everything).
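
    For reference, a partial workaround that may already help here is the authorizedCollections option of listCollections (it requires nameOnly: true), which returns only the collections the user is authorized to see:

    db.runCommand({ listCollections: 1, nameOnly: true, authorizedCollections: true })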

    2 votes

  19. CSFLE - Enable aggregation stages for non-encrypted collections

    When encrypting any collection with CSFLE, aggregation is not allowed on non-encrypted collections.

    The official recommendation is to maintain two clients: one for CSFLE and one for when aggregation is needed. How is this an acceptable solution?

    2 votes

    0 comments  ·  Encryption
  20. Preserve field order in $merge

    Filed on behalf of https://jira.mongodb.org/browse/SERVER-63853:

    A variety of formats, for example in bioinformatics, require strict adherence to the sequence of fields.

    Files in such formats are often very large and contain nested structures, so it is convenient to store them as collections. But for the data to keep conforming to the above specs, the arrangement of the fields must be preserved. Unfortunately, aggregations that save their results to another database lose the original arrangement.

    2 votes

    0 comments  ·  Other