Database

To report bugs, please use our SERVER JIRA project.

296 results found

  1. Named MongoDB Connections

    When a service is acting erroneously and generating hundreds or thousands of connections, it's currently difficult to determine which service is doing so when you have 30+ services connecting to MongoDB.

    My proposal is that we should optionally be able to specify a non-unique name for the connection in the MongoDB URL (possibly after a #), which would allow DB administrators to see how many of each named connection were connected at any given time, along with other metrics (operations/s per name, average/max query execution time per name, etc.).

    Example URL:
    mongodb://user:****@clustername.abcd.mongodb.net:27017/dbName?authSource=admin#myServiceName
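
    A partial workaround exists today: the standard appName connection-string option shows up in mongod logs and in currentOp output, so per-service connection counts can already be aggregated. A minimal sketch, assuming sufficient privileges (credentials and names are placeholders):

    // Name the connection with the standard appName URI option:
    // mongodb://user:****@clustername.abcd.mongodb.net:27017/dbName?authSource=admin&appName=myServiceName

    // Count current connections per appName:
    db.getSiblingDB("admin").aggregate([
      { $currentOp: { allUsers: true, idleConnections: true } },
      { $group: { _id: "$appName", connections: { $sum: 1 } } }
    ])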

    2 votes

  2. 2 votes

    1 comment  ·  Other
  3. Providing connection details with username for real-time connection monitoring.

    Currently, mongod.log provides connection details such as remote IP and source port along with authentication information, but it doesn't indicate whether a connection is active or idle.
    serverStatus() only provides the number of current and active connections.

    Example:

    10.0.0.100:12345 - username active
    10.0.0.101:12346 - username idle
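
    Part of this is already approximated by $currentOp, which exposes an active flag and the authenticated users per operation. A minimal sketch, assuming sufficient privileges:

    db.getSiblingDB("admin").aggregate([
      { $currentOp: { allUsers: true, idleConnections: true, idleSessions: true } },
      { $project: { client: 1, active: 1, effectiveUsers: 1 } }
    ])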

    2 votes

    1 comment  ·  Other
  4. Export Backup Snapshots to GCP Bucket

    As an Atlas user, I want to export backup snapshots to a GCP bucket because I don't have an AWS subscription.

    2 votes

  5. Change Timeseries Bucket Memory Limit

    Currently, there's no way to set a memory threshold for bucket allocation in MongoDB's time series collections. As bucket sizes increase and more collections are opened day after day, a limiting mechanism triggers for open buckets, leading to cache pressure and the premature closing of buckets under high load. It would be beneficial if users could set the memory threshold for the time series bucket memory limit themselves (the limit appears to be around 3 GB), enabling us to prevent early bucket closure in production environments. Alternatively, providing the option to manually close buckets could help manage…
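
    A sketch of what the proposed knob might look like; the parameter name below is hypothetical, not an existing server parameter:

    // "timeseriesBucketMemoryLimitBytes" is a hypothetical name
    db.adminCommand({
      setParameter: 1,
      timeseriesBucketMemoryLimitBytes: 8 * 1024 * 1024 * 1024  // e.g. raise the ~3 GB limit to 8 GB
    })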

    2 votes

  6. Include or flag nodes seen more than once / duplicates when traversing with $graphLookup

    When traversing a graph with $graphLookup, duplicates are suppressed (in cases of cycles). It would be useful to have an option to include a flag indicating a node that caused the traversal to stop due to duplication; a sketch follows.
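
    A sketch of how the option might look on the stage; markDuplicates/isDuplicate are hypothetical names, not existing $graphLookup parameters:

    db.nodes.aggregate([
      { $graphLookup: {
          from: "nodes",
          startWith: "$linkedTo",
          connectFromField: "linkedTo",
          connectToField: "_id",
          as: "reachable",
          markDuplicates: "isDuplicate"  // hypothetical: flag nodes where traversal stopped on a cycle
      } }
    ])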

    2 votes

    1 comment  ·  Other
  7. TTL index throttling for deletion

    To limit the impact of TTL deletions, we currently have to control the quantity and speed of document generation (inserts).
    We need a way to throttle the number of documents removed per delete operation,

    i.e. the number of documents per TTL delete pass and the sleep time between deletions (default 60s); a sketch follows.
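
    The sleep interval already corresponds to the ttlMonitorSleepSecs server parameter (default 60); a per-pass cap on deletions would be new. A sketch, where ttlDeleteBatchLimit is a hypothetical name:

    // Existing: lengthen the pause between TTL passes
    db.adminCommand({ setParameter: 1, ttlMonitorSleepSecs: 120 })

    // Hypothetical: cap the number of documents removed per TTL pass
    db.adminCommand({ setParameter: 1, ttlDeleteBatchLimit: 1000 })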

    2 votes

    0 comments  ·  Indexes
  8. Improve change stream metadata by including userId (who) and action/intent (event name).

    This can be done by including metadata in the options for C(R)UD operations!
    Something like: User.updateOne({ _id }, { $set: { name: "newName" } }, { $meta: { userId: _id, action: "nameUpdated" } })

    This is super useful because change streams can then be used to create an event store or audit logs out of the box. I just have to store all the change stream events from the relevant collections. This also makes event-driven architecture in microservices much easier: developers can use this to publish events directly to a message broker such as Kafka, where action would be equivalent to…
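
    A sketch of how such metadata might surface to a change stream consumer; the meta field on the event and the publish() helper are hypothetical:

    const cursor = db.users.watch()
    while (cursor.hasNext()) {
      const event = cursor.next()
      // event.meta would carry { userId, action } set at write time,
      // ready for an audit log or a broker such as Kafka:
      publish(event.meta.action, event)  // publish() is a placeholder
    }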

    2 votes

  9. 2 votes

  10. Define the random seed manually for $rand and $sample

    It would be great if an additional parameter could be used to define the seed for $rand and $sample.
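
    A sketch of the requested syntax; the seed argument is hypothetical (today $rand takes an empty document and $sample only a size):

    db.coll.aggregate([ { $sample: { size: 100, seed: 42 } } ])      // hypothetical seed
    db.coll.aggregate([ { $set: { r: { $rand: { seed: 42 } } } } ])  // hypothetical seed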

    2 votes

  11. Tool for scoring a data model

    It would be great to have a tool that can score the data model of a specific database. The tool would scan a database's model and score it based on best practices, patterns, and anti-patterns. It could also generate a list of problems and suggestions for improvement.

    This tool could be used in CI pipelines alongside the unit tests; the main goal is to avoid releasing new features built on a bad data model.

    2 votes

    0 comments  ·  Data Models
  12. Flatten arrays in group stage

    Have group operators to flatten document arrays into a single array, with or without repeated elements.
    So:
    doc1 = { arr: [1, 2, 3, 4], gr: "group" }, doc2 = { arr: [5, 6, 7, 8], gr: "group" }
    { $group: { _id: "$gr", arrays: { $***: "$arr" } } }
    =>
    { _id: "group", arrays: [1, 2, 3, 4, 5, 6, 7, 8] }

    2 votes

  13. Cascading delete for DBRefs

    Since transactions were added in 2018, which work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client (a sketch follows the steps below):

    1. start a transaction
    2. fetch document
    3. look for dbref fields
    4. fetch those docs
    5. continue at 2 until all docs have been found, stopping at branches when a doc has already been fetched
    6. go back in reverse and delete all of them
    7. commit transaction

    If this is possible to do…
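
    A minimal mongosh sketch of the steps above; it assumes DBRefs are stored as plain { $ref, $id } subdocuments at the top level of each document, and every name is a placeholder:

    const session = db.getMongo().startSession()
    session.startTransaction()
    try {
      const dbName = "mydb"                          // placeholder database
      const seen = new Set()
      const stack = [{ ref: "orders", id: rootId }]  // placeholder root document
      const toDelete = []
      while (stack.length > 0) {
        const { ref, id } = stack.pop()
        const key = ref + ":" + id
        if (seen.has(key)) continue                  // stop where a doc was already fetched
        seen.add(key)
        const doc = session.getDatabase(dbName).getCollection(ref).findOne({ _id: id })
        if (doc === null) continue
        toDelete.push({ ref, id })
        // look for DBRef fields and queue the referenced documents
        for (const value of Object.values(doc)) {
          if (value && value.$ref !== undefined && value.$id !== undefined) {
            stack.push({ ref: value.$ref, id: value.$id })
          }
        }
      }
      // go back in reverse and delete all of them
      for (const { ref, id } of toDelete.reverse()) {
        session.getDatabase(dbName).getCollection(ref).deleteOne({ _id: id })
      }
      session.commitTransaction()
    } catch (e) {
      session.abortTransaction()
      throw e
    } finally {
      session.endSession()
    }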

    2 votes

  14. geo

    It would be nice to get the length of a LineString in a GeoJSON object, or the ability to write an aggregation to calculate it; a workaround sketch follows.
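
    Until a built-in exists, the trigonometric aggregation operators (MongoDB 4.2+) make a haversine workaround possible: $reduce over consecutive coordinate pairs. A sketch, assuming a collection "routes" with GeoJSON [longitude, latitude] pairs in geometry.coordinates:

    db.routes.aggregate([
      { $set: { lengthMeters: { $reduce: {
          input: { $range: [1, { $size: "$geometry.coordinates" }] },
          initialValue: 0,
          in: { $let: {
            vars: {
              p1: { $arrayElemAt: ["$geometry.coordinates", { $subtract: ["$$this", 1] }] },
              p2: { $arrayElemAt: ["$geometry.coordinates", "$$this"] }
            },
            in: { $let: {
              vars: {
                lat1: { $degreesToRadians: { $arrayElemAt: ["$$p1", 1] } },
                lat2: { $degreesToRadians: { $arrayElemAt: ["$$p2", 1] } },
                dlat: { $degreesToRadians: { $subtract: [{ $arrayElemAt: ["$$p2", 1] }, { $arrayElemAt: ["$$p1", 1] }] } },
                dlon: { $degreesToRadians: { $subtract: [{ $arrayElemAt: ["$$p2", 0] }, { $arrayElemAt: ["$$p1", 0] }] } }
              },
              // haversine term between consecutive points; 6371000 m = mean Earth radius
              in: { $add: ["$$value", { $multiply: [2, 6371000, { $asin: { $sqrt: { $add: [
                { $pow: [{ $sin: { $divide: ["$$dlat", 2] } }, 2] },
                { $multiply: [
                  { $cos: "$$lat1" }, { $cos: "$$lat2" },
                  { $pow: [{ $sin: { $divide: ["$$dlon", 2] } }, 2] }
                ] }
              ] } } } ] } ] }
            } }
          } }
      } } } }
    ])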

    2 votes

  15. Support Two Array Fields in Compound Indexes

    Hi,

    I've come across a lot of use cases where the business logic has demanded unique constraints on 1-2 fields that have been modeled as arrays on documents. The cases with only a single array field are already handled by unique indexes in MongoDB; however, the cases with two array fields have required application-level constraints, since the database only supports a single array field in compound indexes. While multiple arrays in a single index would massively increase the size of the indexes, it would be very helpful if it were still possible. If there are…
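
    For context, the restriction surfaces at write time today. A minimal illustration with placeholder names:

    db.items.createIndex({ tags: 1, codes: 1 }, { unique: true })  // builds on an empty collection
    db.items.insertOne({ tags: ["a"], codes: ["x", "y"] })
    // => fails with "cannot index parallel arrays"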

    2 votes

    0 comments  ·  Indexes
  16. Query Planner needs a timeOut set as a Database parameter.

    We see app queries timing out when the query planner takes >1 sec. While this can be avoided by setting maxTimeMS at the client end, that is a setting for the overall query and not just the query planner, and it carries the risk of closing/timing out the actual query (cursor), which is not what we need.

    We only want the query planner itself to have a specific, customisable timeout, with the query continuing to run using one of the plans tried so far without timing out…
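
    A sketch contrasting today's option with the request; queryPlannerMaxTimeMS is a hypothetical parameter name:

    // Today: maxTimeMS bounds the whole operation, planner and cursor alike
    db.orders.find({ status: "open" }).maxTimeMS(1000)

    // Requested (hypothetical): bound only plan selection, then keep running
    // with the best plan found so far
    db.adminCommand({ setParameter: 1, queryPlannerMaxTimeMS: 200 })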

    2 votes

  17. We need to be able to use $[<identifier>] and "$setOnInsert" in the same command

    I want to be able to maintain an array of counters for a user through a single update statement. If the document containing the array of counters does not exist, I want to add it. If it does exist, I want to increment the counter.

    For example, this command:

    db.inboxItemCounts.updateOne(
      // filter
      { "userId": userDoc.userId },
      // update
      {
        "$setOnInsert": {
          "userId": userDoc.userId,
          "fromUserSummary": [{
            "userName": fromUserDoc.userName,
            "count": 1
          }]
        },
        // "$inc": incBody,
        "$inc": {
          "fromUserSummary.$[userElement].count": 1
        }
      },
      // options
      {
        "upsert": true,
        "writeConcern": { "w": "majority" },
        "arrayFilters": [
          { "userElement.userName": { $eq: fromUserDoc.userName } }
        ]
      }
    )

    2 votes

    0 comments  ·  Other
  18. hint support for $graphLookup

    Currently you can supply a hint to the aggregation call in order to tell MongoDB to use a specific index for the initial $match, but there is no way to specify which index to use for a $graphLookup later in the pipeline.

    I would like an optional hint property on the $graphLookup stage.
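
    A sketch of the requested option; hint is not an existing $graphLookup field:

    db.employees.aggregate([
      { $graphLookup: {
          from: "employees",
          startWith: "$reportsTo",
          connectFromField: "reportsTo",
          connectToField: "name",
          as: "reportingChain",
          hint: { name: 1 }  // hypothetical: index to use for each lookup step
      } }
    ])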

    2 votes

  19. Add numerically-ordered index feature which forces a field to maintain ordering

    I would like MongoDB to add better support for ordered lists across documents in a collection. This feature would allow the user to designate a field such as "position", and the DB would ensure the values of that field across documents remain in a monotonically-increasing integer sequence 0, 1, 2, 3, ... N.

    I am the maintainer of a Ruby library called "Mongoid Orderable", which maintains a numerically ordered field across documents. I'd like to ask MongoDB to investigate moving this library's functionality into the server.

    What use case would this feature/improvement enable?

    This feature would be useful…
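
    A sketch of hypothetical syntax for such a constraint; monotonicSequence is not an existing index option:

    db.tasks.createIndex({ position: 1 }, { monotonicSequence: true })  // hypothetical option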

    2 votes

    0 comments  ·  Indexes
  20. Add pipeline stage for "downsampling" data

    Downsampling is an extremely common operation when plotting time-series data on graphs where there is too much data to get a good-looking/meaningful graph. This stage would pick "important" data points based on an algorithm such as Largest-Triangle-Three-Buckets (https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf) instead of returning the entire data set.

    Not only would this make prettier graphs, it would also reduce the overall payload returned, thus reducing network-related latency.

    This would be an awesome addition to timeseries!
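
    A sketch of a hypothetical stage; $downsample and its fields are assumptions, not existing syntax:

    db.metrics.aggregate([
      { $match: { sensor: "s1" } },
      { $downsample: { x: "$ts", y: "$value", algorithm: "lttb", buckets: 500 } }  // hypothetical stage
    ])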

    2 votes
