Database

To report bugs, please use our SERVER JIRA project.

255 results found

  1. Support parallel query execution, including find() and aggregation

    To make use of multi-core environments and improve query performance over a large number of documents, parallel query execution is needed.
    Sharding or micro-sharding is not an alternative in this case.

    2 votes

  2. TTL index throttling for deletion

    To avoid the impact of TTL deletions, we currently have to control the quantity and speed of document generation (inserts).
    We need a way to throttle the deletion quantity per delete operation,

    i.e. the number of documents per TTL delete pass and the sleep time between deletions (default 60s).
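
    For reference, the pause between TTL passes can already be tuned today via the ttlMonitorSleepSecs server parameter (default 60 seconds); what is missing is a cap on how many documents a single pass may delete. A minimal sketch of the existing knobs, run from the shell:

    // Inspect the current interval between TTL deletion passes (default: 60s).
    db.adminCommand({ getParameter: 1, ttlMonitorSleepSecs: 1 })

    // Lengthen the pause between passes to reduce deletion impact.
    db.adminCommand({ setParameter: 1, ttlMonitorSleepSecs: 300 })

    // The TTL monitor can also be disabled entirely, but there is no
    // per-pass document limit, which is what this idea asks for.
    db.adminCommand({ setParameter: 1, ttlMonitorEnabled: false })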

    2 votes

    0 comments  ·  Indexes

  3. 2 votes

  4. Tool to score a data model

    It would be great to have a tool that can score the data model of a specific database. I could let this tool scan a database's data model and score it based on best practices, patterns, and anti-patterns. It could also generate a list of problems and suggestions for improvement.

    This tool could be used in CI pipelines alongside the unit tests; the main goal is to avoid releasing new features that rely on a bad data model.

    2 votes

    0 comments  ·  Data Models

  5. Flatten arrays in group stage

    Have group operators that flatten document arrays into a single array, with or without repeated elements.
    So ->
    doc1 = { arr: [1, 2, 3, 4], gr: "group" }, doc2 = { arr: [5, 6, 7, 8], gr: "group" }
    { $group: { _id: "$gr", arrays: { $***: "$arr" } } }
    =>
    { _id: "group", arrays: [1, 2, 3, 4, 5, 6, 7, 8] }

    2 votes

  6. $merge

    Report number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged doc results as a pipeline result to pass to additional stages.
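
    For context, a sketch of a $merge stage as it works today: it must be the last stage of the pipeline and emits no documents, so there is nothing to report or to pass on to further stages (collection and field names below are illustrative):

    db.salesStaging.aggregate([
      { $merge: {
          into: "sales",
          on: "_id",
          whenMatched: "merge",
          whenNotMatched: "insert"
      } }
    ])   // returns an empty cursor; no matched/merged/skipped counts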

    2 votes

  7. geo

    It would be nice to be able to get the length of a LineString in a GeoJSON object, or the possibility to write an aggregation that calculates it.

    2 votes

  8. Budget limit for serverless pay-as-you-go mode

    I was looking at the serverless pay-as-you-go option for my DB so I could have continuous backup and snapshots, but I found it too risky. Currently, the only protection a user has is alerts when RPUs go over a certain budget threshold. I would like to be able to set a budget limit that would prevent me from going over a pre-set daily budget. If you were hit with a DoS or some other brute-force attack, you could rack up a lot of traffic and get an unexpected bill without such a limit.

    2 votes

  9. Query Planner needs a timeOut set as a Database parameter.

    We see application queries timing out because the query planner alone takes more than 1 second. While this can be avoided by setting maxTimeMS at the client end, that is a limit on the overall query rather than just the query planner, and it comes with the risk of closing or timing out the actual query (cursor), which is not what we need.
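
    For illustration, the client-side workaround mentioned above caps the whole operation rather than just planning (collection and filter are illustrative):

    // Limits planning AND execution to 1000 ms, and may kill the query itself.
    db.orders.find({ status: "open" }).maxTimeMS(1000)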

    We only want the query planner itself to have a specific, customisable timeout, with the query continuing to run using one (any) of the plans evaluated thus far, rather than timing out…

    2 votes

  10. We need to be able to use $[<identifier>] and "$setOnInsert" in the same command

    I want to be able to maintain an array of counters for a user through a single update statement. If the document containing the array of counters does not exist, I want to add it. If it does exist, I want to increment the counter.

    For example, this command

    db.inboxItemCounts.updateOne(
      // filter
      { "userId": userDoc.userId },
      // update
      {
        "$setOnInsert": {
          "userId": userDoc.userId,
          "fromUserSummary": [{
            "userName": fromUserDoc.userName,
            "count": 1
          }]
        },
        // "$inc": incBody,
        "$inc": {
          "fromUserSummary.$[userElement].count": 1
        }
      },
      // options
      {
        "upsert": true,
        "writeConcern": { "w": "majority" },
        "arrayFilters": [
          { "userElement.userName": { $eq: fromUserDoc.userName } }
        ]
      }
    )

    2 votes

    0 comments  ·  Other

  11. Add numerically-ordered index feature which forces a field to maintain ordering

    I would like MongoDB to add better support for ordered lists across documents in a collection. This feature would allow the user to designate a field such as "position", and the DB would ensure the values of that field across documents remain in a monotonically-increasing integer sequence 0, 1, 2, 3, ... N.

    I am the maintainer of a Ruby library called "Mongoid Orderable". This library maintains a numerically ordered field across documents. I'd like to ask MongoDB to investigate moving the functionality of this library to the server.

    What use case would this feature/improvement enable?

    This feature would be useful…

    2 votes

    0 comments  ·  Indexes

  12. Add pipeline stage for "downsampling" data

    Downsampling is an extremely common operation when plotting time-series data on graphs where there is too much data to produce a good-looking, meaningful graph. This stage would pick "important" data points based on an algorithm such as Largest-Triangle-Three-Buckets (https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf) instead of returning the entire data set.

    Not only would this make for prettier graphs, it would also reduce the overall payload returned, thus reducing network-related latency.
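
    Until a dedicated stage exists, a crude approximation is possible with $bucketAuto, e.g. keeping one averaged point per bucket (this is not LTTB; collection and field names are assumptions):

    db.metrics.aggregate([
      { $bucketAuto: {
          groupBy: "$timestamp",
          buckets: 500,                    // target number of plotted points
          output: {
            t: { $min: "$timestamp" },     // representative x value per bucket
            v: { $avg: "$value" }          // averaged y value per bucket
          }
      } },
      { $sort: { t: 1 } }
    ])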

    This would be an awesome addition to timeseries!

    2 votes

  13. Use dedicated interface for replication and balancing

    Like many other big database systems, MongoDB could support specifying dedicated network interfaces for replication and/or for the balancer (sharding).

    Currently you can configure replica set and shard members only with a single hostname/IP address, i.e. all data is transmitted over the same network interface.

    2 votes

    0 comments  ·  Replication

  14. Make it clear when an index is not present on all shards

    We recently had an issue where an index did not get created on all shards. When we ran getIndexes() on mongos it reported the index was present. So we dropped another redundant index thinking all would be well - it was not. We had serious performance issues on the shard that was missing the new index and our app was unavailable for a few hours whilst we re-created the missing index on a huge collection.

    It would be better if getIndexes() called on mongos reported some sort of warning or indication that the index was not present on all shards. The…

    2 votes

    0 comments  ·  Indexes

  15. Enforced Password Complexity

    Please allow for the enforcement of password complexity:
    - Setting a password policy that restricts users to a certain level of complexity (e.g. 10 characters including special characters)

    2 votes

  16. collection-level users should be able to list their collections

    Currently users with collection-specific read or read/write permissions are not authorized to perform the following commands:

    db.listCollections()
    show collections
    db.getCollectionNames()

    This impacts the shell (and also third-party tools that won't let users access their permitted collections, because the list of collections is blocked in the first place).

    Suggestion:

    Users with collection-specific read or read/write permissions should be able to run the above commands and the result would only present the collections for which the user has some read or write privileges (instead of blocking everything).
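
    A minimal way to reproduce the limitation from the shell (database, collection, user, and role names are illustrative):

    // Role that grants read access to a single collection only.
    use admin
    db.createRole({
      role: "ordersReader",
      privileges: [
        { resource: { db: "shop", collection: "orders" }, actions: ["find"] }
      ],
      roles: []
    })
    db.createUser({ user: "alice", pwd: "changeMe", roles: ["ordersReader"] })

    // Connected as alice:
    use shop
    db.orders.findOne()        // works: the collection itself is readable
    db.getCollectionNames()    // fails: not authorized to run listCollections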

    2 votes

  17. CSFLE - Enable aggregation stages for non-encrypted collections

    When encrypting any collection with CSFLE, aggregation is not allowed on non-encrypted collections.

    The official recommendation is to maintain two clients: one for CSFLE and one for when aggregation is needed. How is this an acceptable solution?

    2 votes

    0 comments  ·  Encryption

  18. Preserve field order in $merge

    Filed on behalf of https://jira.mongodb.org/browse/SERVER-63853:

    A variety of formats, for example in bioinformatics, require strict adherence to the sequence of fields.

    Files in such formats are often very large and contain nested structures, so it is convenient to store them as collections. But to keep the data compliant with the above specs, it is necessary to preserve the arrangement of the fields. Unfortunately, aggregations that save results to another database lose the original arrangement.

    2 votes

    0 comments  ·  Other

  19. Tiered TTL for time series collection based on granularity

    Currently time series collections have a single TTL across all inherent granularities. It would be great to specify a TTL for each granularity. For example:

    For seconds: 1 week
    For hours: 1 month
    Others: never

    Coarse information should be held longer than finer-grained information in some cases; currently it all falls under the single TTL specified.
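
    For reference, today a time series collection takes one expireAfterSeconds for the whole collection, regardless of granularity (names below are illustrative):

    db.createCollection("sensorReadings", {
      timeseries: {
        timeField: "ts",
        metaField: "sensorId",
        granularity: "seconds"
      },
      expireAfterSeconds: 60 * 60 * 24 * 7   // one week for all data
    })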

    2 votes

  20. Support $lookup in update aggregation pipelines

    We frequently denormalise either full documents or subsets of them into other documents in order to speed up reads, create indexes, or paginate/sort on fields.

    Consider a user collection and a task collection: if a task can be assigned to a user, it makes sense to embed the user document in the task it is assigned to. But an update to a user now requires you to update the user both in the user collection and in every task referencing that user in the tasks collection.
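
    Today that denormalisation has to be kept in sync with two separate statements, since an update pipeline cannot $lookup the fresh user document (collection and field names are assumptions):

    // 1. Update the user itself.
    db.users.updateOne(
      { _id: userId },
      { $set: { name: "New Name" } }
    )

    // 2. Separately propagate the change to every task embedding that user;
    //    an aggregation-pipeline update cannot $lookup users to do this in one step.
    db.tasks.updateMany(
      { "assignee._id": userId },
      { $set: { "assignee.name": "New Name" } }
    )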

    This can be achieved, but it does introduce some complexity; however, the introduction of updates using aggregation pipelines…

    2 votes
