
Database

To report bugs, please use our SERVER JIRA project.


230 results found

  1. ARM support

    Can we support ARM packages for Debian 11? They are required for Bitnami to add ARM support to their MongoDB charts.

    10 votes

  2. Raise maximum BSON document size bigger than 16 MB

    Per https://www.mongodb.com/docs/manual/reference/limits/, the maximum BSON document size is 16 MB (MongoDB provides the GridFS API for larger payloads). I would like to request support for bigger sizes such as 32 MB, 64 MB, or even larger.
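
    For payloads beyond 16 MB, the documentation linked above points at the GridFS API; a minimal sketch with the Node.js driver (connection string and file name are illustrative):

    // GridFS splits a file into 255 KB chunks, so the 16 MB
    // per-document limit does not apply to the payload as a whole.
    const { MongoClient, GridFSBucket } = require('mongodb');
    const fs = require('fs');

    async function upload() {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const bucket = new GridFSBucket(client.db('files'));
      await new Promise((resolve, reject) =>
        fs.createReadStream('./large-payload.bin')
          .pipe(bucket.openUploadStream('large-payload.bin'))
          .on('finish', resolve)
          .on('error', reject));
      await client.close();
    }
    upload();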

    34 votes

    10 comments  ·  Other

  3. How to limit the number of document updates?

    Hi
    I want to limit the number of document updates in one command.

    for example

    db.users.updateMany(
      <filter>,
      <update>,
      { limit: 100 }
    );

    https://www.mongodb.com/community/forums/t/how-to-limit-the-number-of-document-updates/102204/3
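
    Until such an option exists, a common workaround is to select the _ids of the first N matches and update only those; a sketch (mongosh; not atomic between the two steps):

    // Select up to 100 matching _ids, then restrict the update to them.
    const ids = db.users.find(<filter>, { _id: 1 })
      .limit(100)
      .toArray()
      .map(doc => doc._id);
    db.users.updateMany({ _id: { $in: ids } }, <update>);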

    14 votes

    0 comments  ·  Other

  4. add IO throughput related fields to 'serverStatus' output

    The result of serverStatus contains no IO-throughput-related fields; in FTDC this information is available in the disk metrics. We need it in the serverStatus output so that we can monitor it.

    3 votes

  5. Nanosecond timestamp support

    I've put this under the time series category since that's where it's most applicable, but it's really a data model / BSON issue.

    The topic of higher-resolution timestamps has been surfaced from time to time for at least a decade (https://jira.mongodb.org/browse/SERVER-1460), and usually prompts a response like "just use integers". With the addition of time series collections, however, where the concept of time is integral to database functionality, I think it's time to reconsider adding a type with at least nanosecond-precision timestamp support. Date's millisecond resolution is woefully inadequate for a number of relevant use cases,…
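
    For reference, the "just use integers" workaround mentioned above; a sketch (collection and field names are illustrative):

    // Store nanoseconds since the epoch as a 64-bit integer alongside
    // a millisecond-precision Date used for range queries.
    db.measurements.insertOne({
      ts: ISODate("2023-01-01T00:00:00Z"),
      ts_ns: NumberLong("1672531200000000000")
    });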

    7 votes

  6. Implement $bucket and $group on indexed values with sub-linear runtime

    We noticed that $bucket and $group aggregations such as $min, $max, and $count are unexpectedly slow even when fully covered by an index, (partially) because the DB scans through the entire index rather than employing optimization approaches such as binary search.

    An example pipeline that should return instantaneously but scans through the entire index (confirmed on v4.4 and v5):

    [
      {
        $match: {
          status: "DELIVERED"
        }
      },
      {
        $group: {
          _id: {
            status: "$status"
          },
          min: {
            $min: "$modify_time"
          }
        }
      }
    ]

    with an index { status: 1, modify_time: 1 }

    Another example is $bucket (same index):
    [
    {…
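
    Until the optimizer covers this case, the minimum for a single known group key can be fetched with an index-ordered read; a sketch (collection name is an assumption):

    // With the index { status: 1, modify_time: 1 }, an equality match on
    // status plus a sort and limit on modify_time reads a single index
    // entry instead of scanning the whole range.
    db.orders.find({ status: "DELIVERED" }, { modify_time: 1, _id: 0 })
      .sort({ modify_time: 1 })
      .limit(1);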

    6 votes

    0 comments  ·  Performance

  7. Support compound/multiple grouping keys in $bucket

    We often need to compute statistical/summarizing aggregations grouped by more than one field where all fields are of a $bucket-able type.

    An example would be to count all orders grouped by their status and by custom time ranges of their creation date.
    This can be achieved by combining $group with a $switch expression (sometimes simplified with $trunc); however, that is cumbersome and inefficient, since no binary search can be employed to identify the bucket boundaries.

    The query syntax of $bucket would not need to change much. It would simply need to allow for nested…
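
    A sketch of the $group + $switch workaround described above (collection, field names, and boundaries are illustrative):

    // Compound grouping by status and a custom creation-date range,
    // emulated with $switch inside the $group key.
    db.orders.aggregate([
      {
        $group: {
          _id: {
            status: "$status",
            created: {
              $switch: {
                branches: [
                  { case: { $lt: ["$created", ISODate("2022-01-01")] }, then: "before 2022" },
                  { case: { $lt: ["$created", ISODate("2023-01-01")] }, then: "2022" }
                ],
                default: "2023 and later"
              }
            }
          },
          count: { $sum: 1 }
        }
      }
    ]);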

    6 votes

  8. Prometheus metrics availability through PrivateLink

    Until now it has only been possible to get metrics from a public or privately peered VPC; it is not possible through PrivateLink. Since PrivateLink is most often chosen for security reasons, this limitation forces customers to find compromises or workarounds.

    7 votes

  9. sharding error shardsvr

    Make it clear which node is causing the "shardsvr" error.

    Spawned from support case 01042995

    Our error occurred when the user tried to connect using Compass. The failure was to list the collection names on one database.

    The error presented back to the user was merely "Cannot accept sharding commands if not started with --shardsvr".

    We found eventually that the primary changed on one of the shards, and that primary did not have the appropriate clusterRole in the mongod.conf file. My concerns are that this took too long to track down and would be impossible in a 100-shard environment.

    • Nothing…
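
    For reference, the setting the new primary was missing is the cluster role in mongod.conf; a minimal excerpt (replica set name is illustrative):

    # Every mongod in a shard's replica set must declare its role,
    # otherwise it rejects sharding commands as described above.
    sharding:
      clusterRole: shardsvr
    replication:
      replSetName: shard0
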
    6 votes

  10. $merge

    Report number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged doc results as a pipeline result to pass to additional stages.
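
    For context, a pipeline ending in $merge currently returns nothing to the client; a minimal sketch (collection and field names are illustrative):

    // The aggregation below writes its results to regionTotals but
    // reports neither matched/merged counts nor the merged documents.
    db.sales.aggregate([
      { $group: { _id: "$region", total: { $sum: "$amount" } } },
      { $merge: { into: "regionTotals", whenMatched: "merge", whenNotMatched: "insert" } }
    ]);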

    2 votes

  11. Unique Indexes and Bulk Upserts for Time Series Collections

    We would like to insert data in bulk into time series collections and identify the new data that has been inserted without the possibility of duplicates being inserted.

    For regular collections this is achievable by adding a unique index and performing a bulk upsert (as any duplicates will be rejected due to the unique index).

    For time series collections however unique indexes are not currently supported.

    In addition, performing an upsert with the $setOnInsert option, which should only take effect on insert, is also not currently supported for time series collections.

    At the moment the only options appear to be:

    (1) to…
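
    For reference, the regular-collection pattern described above, which time series collections currently reject; a sketch (field names are assumptions):

    // A unique index makes duplicate inserts fail, so a bulk upsert
    // with $setOnInsert inserts only the genuinely new measurements.
    db.readings.createIndex({ sensorId: 1, ts: 1 }, { unique: true });
    db.readings.bulkWrite([
      {
        updateOne: {
          filter: { sensorId: "s1", ts: ISODate("2023-01-01T00:00:00Z") },
          update: { $setOnInsert: { value: 42 } },
          upsert: true
        }
      }
    ]);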

    3 votes

  12. 1 vote

    0 comments  ·  Other

  13. Cascading delete for DBRefs

    Since transactions were added in 2018, which work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client (see the sketch after the steps below):

    1. start a transaction
    2. fetch document
    3. look for dbref fields
    4. fetch those docs
    5. continue at 2 until all docs have been found, stopping at branches when a doc has already been fetched
    6. go back in reverse and delete all of them
    7. commit transaction

    If this is possible to do…
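
    A minimal client-side sketch of steps 1 to 7 above, assuming mongosh and that DBRef fields surface as plain { $ref, $id } subdocuments (the helper name and traversal details are illustrative):

    // Walks the DBRef graph from a starting document, then deletes
    // everything in reverse discovery order inside one transaction.
    function cascadingDelete(startColl, startId) {
      const session = db.getMongo().startSession();
      const sdb = session.getDatabase(db.getName());
      session.startTransaction();
      try {
        const seen = [];
        const queue = [{ coll: startColl, id: startId }];
        while (queue.length > 0) {
          const { coll, id } = queue.pop();
          if (seen.some(s => s.coll === coll && String(s.id) === String(id))) continue;
          const doc = sdb.getCollection(coll).findOne({ _id: id });
          if (doc === null) continue;
          seen.push({ coll, id });
          for (const v of Object.values(doc)) {
            if (v !== null && typeof v === "object" && v.$ref !== undefined) {
              queue.push({ coll: v.$ref, id: v.$id });  // follow the DBRef
            }
          }
        }
        for (const { coll, id } of seen.reverse()) {
          sdb.getCollection(coll).deleteOne({ _id: id });
        }
        session.commitTransaction();
      } catch (e) {
        session.abortTransaction();
        throw e;
      } finally {
        session.endSession();
      }
    }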

    1 vote

    0 comments  ·  Transactions

  14. Support change streams without service discovery

    Currently, change streams are not supported in standalone instances, so testing change stream functionality requires a one-node replica set. However, promoting a standalone node to a one-node replica set requires a call to rs.initiate(config), which requires host and port information so that clients can connect; something that is not required for standalone nodes. This means change stream support is conflated with service discovery. It becomes impossible, for example, to create a docker image that boots as a single-node replica set, while it's trivial to make a docker image that boots as a standalone server.

    Various ideas that would make…
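
    For context, the promotion step in question; a minimal sketch (replica set name and host are illustrative):

    // mongod must already be running with --replSet rs0; the config
    // must name a host:port that clients can reach, which is exactly
    // the service-discovery requirement this idea wants to drop.
    rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] });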

    1 vote

    0 comments  ·  Change Streams

  15. Extend schema validation to be able to enforce referential integrity between collections

    Where a relational database uses 2 tables to store a 1:many "parent - child" relationship between entities, MongoDB mostly stores the child documents in an array field of the parent document. This automatically ensures referential integrity in that
    - a child document cannot be inserted or updated to refer to a non-existent parent, and
    - a parent document cannot be deleted such that it leaves "orphaned" child documents

    However, there are situations where the number and/or size of the child documents makes embedding them all in their parent unworkable, due to the 16 megabyte document size limit if…
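
    For illustration, orphaned children can today only be detected after the fact, e.g. with a $lookup; a sketch (collection and field names are assumptions):

    // Finds child documents whose parent no longer exists; schema
    // validation cannot currently prevent this situation.
    db.children.aggregate([
      { $lookup: { from: "parents", localField: "parentId", foreignField: "_id", as: "parent" } },
      { $match: { parent: { $size: 0 } } }
    ]);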

    2 votes

    0 comments  ·  Data Models

  16. Avoid truncating the query on the Atlas profiler or system.profile collection

    Slow-running queries captured in the system.profile collection or on the Atlas profiler page are truncated if the query is too long. As an application DBA, it is difficult to analyse a query without seeing the actual query text. The current limit on the command document is 50 KB; please reconsider this limit so that queries are not truncated.

    1 vote

    0 comments  ·  Performance

  17. Handle Daylight Saving Time when $densify is used on a date field

    When using "day" as the "unit" for a $densify pipeline stage on a date field, the date is always advanced by 24 hours. This is not always the expected result in timezones where, because of Daylight Saving Time, the year has one 23-hour and one 25-hour day.

    It would be useful to have the possibility to pass an optional timezone parameter in the $densify stage and, when present, have the stage account for these exceptions when appropriate.

    Here follows an example.

    Assume we have a collection containing the following documents:

    db.densifyDateExample.insertMany([
        {_id: "a", d: ISODate("2022-10-28T22:00:00Z")},
    {_id: "b", d: …
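
    The example is truncated in the source. The proposed option might look as follows (hypothetical syntax; $densify currently accepts no timezone parameter):

    // Hypothetical: "timezone" is the requested addition and does not
    // exist in today's $densify stage.
    {
      $densify: {
        field: "d",
        range: { step: 1, unit: "day", bounds: "full", timezone: "Europe/Rome" }
      }
    }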
    5 votes

  18. geo

    It would be nice to get the length of a LineString in a GeoJSON object, or the possibility to write an aggregation that calculates it.

    2 votes

  19. Expose individual command execution time

    Many MongoDB drivers currently expose events (e.g. CommandSucceededEvent) which provide an elapsed time. However, that elapsed time is the round-trip time, which is not very useful as it can be measured manually by the programmer. It would be neat if there were a way to get the actual time spent by the server on a per-command basis. This data is computed somewhere, as it is exposed in Atlas metrics as Execution Time.

    There's the explain facility but this is just to get an estimate of a query's cost. I would be interested in knowing how much time the server spent…
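
    The closest per-query figure available today comes from running the query under explain, though that requires a separate execution; a sketch (collection name is an assumption):

    // executionStats actually executes the query and reports the
    // server-side time, unlike the driver's round-trip measurement.
    const res = db.orders.find({ status: "DELIVERED" }).explain("executionStats");
    print(res.executionStats.executionTimeMillis);  // milliseconds on the server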

    1 vote

  20. Use a private peering that resolves to the private IP address of your LDAP server.

    We need to route LDAP traffic through Private Endpoints. The documentation says this is only possible by creating a public endpoint, but we have a security restriction. Our TAM suggested creating a feedback request for the product owner. Thanks!

    4 votes

    0 comments  ·  Security
