
Database

To report bugs, please use our SERVER JIRA project.

272 results found

  1. Export Backup Snapshots to GCP Bucket

    As an Atlas user, I want to export backup snapshots to a GCP bucket because I don't have an AWS subscription.

    1 vote

    0 comments  ·  Administration

  2. TTL index activity statistics

    Dear all,

    Presently, there is no visibility into or tracking of TTL index activity: no data is available to the user showing how much data has been deleted, or how often. I suggest having a separate “dictionary” collection with statistics for all TTL indexes, with a pre-defined retention period (e.g. the last month).

    Thank you for looking into this.

    Regards, Marina.
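
    A rough mongosh sketch of the idea, assuming the existing server-wide counters metrics.ttl.passes and metrics.ttl.deletedDocuments from serverStatus (they are not broken down per index, which is the part that would need new server support), sampled on a schedule into a stats collection with roughly one month of retention:

    // sample the server-wide TTL counters into a stats collection (run periodically)
    const ttl = db.serverStatus().metrics.ttl;
    db.getSiblingDB("admin").ttl_stats.insertOne({
      ts: new Date(),
      passes: ttl.passes,                       // TTL passes since server start
      deletedDocuments: ttl.deletedDocuments    // documents removed by TTL since server start
    });
    // keep the statistics themselves for ~30 days
    db.getSiblingDB("admin").ttl_stats.createIndex({ ts: 1 }, { expireAfterSeconds: 2592000 });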

    1 vote

    0 comments  ·  Indexes

  3. Make $merge support DELETE operation

    Currently, $merge only supports insert/update/upsert/merge behaviour. It would be great to also support delete behaviour. A common use case would be in-place document deduplication/clean-up in a collection.
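
    A sketch of one possible shape for this, where the "delete" value for whenMatched is the hypothetical new behaviour (today's values are replace, keepExisting, merge, fail, or a pipeline); collection and field names are illustrative:

    // hypothetical syntax – whenMatched: "delete" does not exist today
    db.events.aggregate([
      { $match: { status: "duplicate" } },     // docs previously flagged for clean-up
      { $project: { _id: 1 } },
      { $merge: { into: "events", on: "_id", whenMatched: "delete", whenNotMatched: "discard" } }
    ]);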

    1 vote

  4. Sorting support on Array fields.

    Presently, I'm attempting to sort the documents in my collection by a key nested within an array of objects. However, sorting isn't functioning as expected in this scenario.

    A proper explanation of what I mean, with an example, is here:
    https://www.mongodb.com/community/forums/t/how-to-sort-documents-in-collection-on-basis-of-array-fields/270867?u=samrat_n_a
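
    One workaround that I believe works today (field names are illustrative) is to materialise the nested key as a top-level computed value and sort on that, since "$items.price" resolves to an array of values inside an aggregation expression:

    db.orders.aggregate([
      { $set: { maxItemPrice: { $max: "$items.price" } } },   // highest price across the array
      { $sort: { maxItemPrice: -1 } }
    ]);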

    1 vote

  5. user authentication

    Hi,

    It would be extremely useful to be able to create users who can only connect to the database from specific networks or even specific IP addresses, similar to what is possible with MySQL.

    For example, using the following commands:

    CREATE USER 'user_name'@'10.214.3.0' IDENTIFIED BY 'password';

    GRANT ALL PRIVILEGES ON shorturl.* TO 'user_name'@'10.214.3.0';

    You can create a user who can access the database only from the network with the IP address 10.214.3.0.

    I would like to know if it is possible to achieve similar functionality in MongoDB as well. This would be very useful for my purposes, as I want…
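
    For comparison, a mongosh sketch of what I understand is the closest existing feature, the authenticationRestrictions option on createUser (network ranges and names are illustrative):

    db.getSiblingDB("shorturl").createUser({
      user: "user_name",
      pwd: "password",
      roles: [ { role: "readWrite", db: "shorturl" } ],
      authenticationRestrictions: [
        { clientSource: ["10.214.3.0/24"] }    // only accept connections from this network
      ]
    });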

    2 votes

    1 comment  ·  Security

  6. Introduce a new field BucketLifeSpan Optional along with Granularity

    An enhancement to MongoDB's management of time series collections could involve the introduction of a BucketLifeSpan attribute, in addition to the existing Granularity setting. This new, optional attribute would automate the duration a bucket can remain open, with the condition that Granularity should be less than or equal to BucketLifeSpan.

    Consider a use case involving a time series collection for tracking data from 70,000 socket devices daily, with DeviceId as the metafield. Assume data is organized into daily collections and granularity is set to minutes so that buckets fill optimally unless they reach their size limit.

    For a collection named…
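
    A rough sketch of how the proposed option might sit next to the existing granularity setting (bucketLifeSpan, its value format, and the collection name are hypothetical):

    db.createCollection("deviceReadings", {
      timeseries: {
        timeField: "ts",
        metaField: "DeviceId",
        granularity: "minutes",
        bucketLifeSpan: "1d"     // proposed: close any bucket once it has been open this long
      }
    });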

    3 votes

  7. Change Timeseries Bucket Memory Limit

    Currently, there's no way to set a memory threshold for bucket allocation in MongoDB's time series collections. As bucket counts grow and more collections are opened day after day, a limiting mechanism kicks in for open buckets, leading to cache pressure and the premature closing of buckets under high load. It would be beneficial for users to be able to set the memory threshold for time series buckets themselves (I believe the limit is around 3 GB), enabling us to prevent early bucket closure in production environments. Alternatively, providing the option to manually close buckets could help manage…
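
    A sketch of the kind of knob being requested (the parameter name and value below are hypothetical and do not exist today):

    // hypothetical tunable for the time series bucket memory threshold
    db.adminCommand({ setParameter: 1, timeseriesBucketMemoryLimitMB: 4096 });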

    2 votes

  8. JSON

    I'm converting a lot of data for export from MongoDB -> PostgreSQL.

    I would like an aggregation function to convert an object to JSON

    {
      "$project": {
        "_id": true,
        "json": {
          "$convert": {
            "input": "$subdocument",
            "to": "json"   // ideally; +1 for msgpack (https://msgpack.org/)
          }
        }
      }
    }

    Thus, object.json would be a STRING: "{abc: true}"

    FYI: PostgreSQL supports JSON(B) in its field structures: https://www.postgresql.org/docs/current/datatype-json.html
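
    A possible interim workaround, assuming server-side JavaScript is enabled, is $function with JSON.stringify (behaviour on BSON types without a JSON equivalent may differ from what a native operator would give):

    db.source.aggregate([
      {
        $project: {
          json: {
            $function: {
              body: function (doc) { return JSON.stringify(doc); },
              args: ["$subdocument"],
              lang: "js"
            }
          }
        }
      }
    ]);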

    1 vote

  9. Self-Managed Computed Collections

    This feature would enable MongoDB to automatically generate common patterns by computing entire collections. A particularly useful application would be the ability to identify duplicate fields within a collection and create a computed collection that groups all possible duplicates together.

    Sometimes, a field is duplicated between two documents, while another field in the same document is duplicated across seven other documents. Each of these seven documents might have fields that are duplicated in yet other documents, leading to a vast amount of iteration in search of possibilities.

    Having MongoDB abstract all this logic in a performant manner to create computed collections would…
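
    A partial workaround today is an on-demand materialized view that groups duplicates of a chosen field and writes the result out with $merge (collection and field names are illustrative); the request is essentially for the server to maintain something like this automatically, across fields:

    db.items.aggregate([
      { $group: { _id: "$email", ids: { $push: "$_id" }, count: { $sum: 1 } } },
      { $match: { count: { $gt: 1 } } },                       // keep only duplicate groups
      { $merge: { into: "items_duplicates", whenMatched: "replace" } }
    ]);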

    1 vote

    0 comments  ·  Other

  10. Make account connections stable

    My account has been stuck in the "We are deploying your changes (current action: configuring MongoDB)" endless loop for the entire evening, and I cannot finish my project for tomorrow. I cannot connect to my database even with a saved IP.
    Yes, it's a free account, but that is no reason to simply kill any possibility of connecting to a simple database. This is a flop.

    1 vote

    0 comments  ·  Other

  11. Include or flag nodes seen more than once / duplicates when traversing with $graphLookup

    When traversing a graph with $graphLookup, we suppress duplicates (in cases of cycles). It would be useful to have an option to include a flag indicating a node that caused the traversal to stop due to duplication.
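
    A sketch of what the option could look like (the dedupFlagField option is hypothetical; the rest is standard $graphLookup, with illustrative names):

    db.nodes.aggregate([
      {
        $graphLookup: {
          from: "edges",
          startWith: "$_id",
          connectFromField: "to",
          connectToField: "from",
          as: "reachable",
          dedupFlagField: "stoppedOnDuplicate"   // proposed: mark nodes where traversal stopped on a duplicate
        }
      }
    ]);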

    2 votes

    1 comment  ·  Other

  12. Fine-tune the update privilege action

    We want our devs to be able to update docs, but only one at a time. If they need to update a bunch together, they should check in with the DBA. Right now, we can't fine-tune the update permissions like we'd prefer, so we're looking into ways to make that happen. This will help us manage document updates better and add a layer of security.
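
    A sketch of what the finer-grained privilege could look like (the "updateOne" action is hypothetical; createRole and the surrounding structure are existing syntax, with illustrative names):

    db.getSiblingDB("admin").createRole({
      role: "singleDocUpdater",
      privileges: [
        { resource: { db: "app", collection: "" }, actions: ["find", "updateOne"] }   // "updateOne" is the proposed action
      ],
      roles: []
    });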

    1 vote

    0 comments  ·  Security

  13. Fine-tune the update privilege action

    We want our devs to be able to update docs, but only one at a time. If they need to update a bunch together, they should check in with the DBA. Right now, we can't fine-tune the update permissions like we'd prefer, so we're looking into ways to make that happen. This will help us manage document updates better and add a layer of security.

    1 vote

    0 comments  ·  Security

  14. I can't create databases

    I want to choose a service to create the database, but it won't let me: as soon as I select Free (or any other option) it sends me back to the overview, and I can never get to press the button to create the database.

    1 vote

    0 comments  ·  Other

  15. AWS IAM Role integration with MongoDB On premise

    We are planning to integrate AWS IAM roles/users with on-premises MongoDB for password-less authentication, so that authentication happens through an IAM profile. However, we noticed that MongoDB on-premises currently doesn't support AWS IAM role integration, which is a blocker for us. We want authentication to happen through the AWS IAM profile rather than standard credentials.
    This would eliminate, or significantly reduce, the use of database credentials while tightly integrating with AWS services. At the same time, applications could securely connect to MongoDB via their AWS profile.
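
    A sketch of the connection we would like to be able to make on-premises, mirroring the MONGODB-AWS mechanism that Atlas supports today (the host name is illustrative; credentials would come from the instance's IAM role):

    // connect using the IAM role attached to the instance instead of stored credentials
    const conn = new Mongo("mongodb://db1.example.internal:27017/?authSource=%24external&authMechanism=MONGODB-AWS");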

    1 vote

    0 comments  ·  Administration

  16. array length as computed to be indexed

    In some cases, I need to filter only docs with a specific size (or range of sizes) of a nested array.
    It would be good if I could configure an 'automatic index' that keeps the length of the array, based on the field name I provide to the index, so that I can then easily get the matching docs.

    (Right now I need to add and maintain this field myself, or with a trigger.)
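
    The workaround I use today, for reference (field names are illustrative): maintain the length myself with a pipeline update, index the computed field, and filter on it:

    db.posts.updateMany({}, [ { $set: { tagsCount: { $size: { $ifNull: ["$tags", []] } } } } ]);
    db.posts.createIndex({ tagsCount: 1 });
    db.posts.find({ tagsCount: { $gte: 3, $lte: 10 } });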

    1 vote

    0 comments  ·  Indexes

  17. support parallel query execution, including find() and aggregate()

    To make use of multi-core environments and improve query performance with a large number of documents, we need parallel query execution.
    Sharding or micro-sharding is not an alternative in this case.

    2 votes

  18. TTL index throttling for deletion

    To avoid the impact of deletions, we need to control the quantity and speed of deletions relative to document generation (inserts).
    We need a way to throttle down the deletion quantity per delete operation.

    i.e. the number of documents per TTL delete pass, and the sleep time between deletions (default 60s).
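
    For reference, the sleep-between-passes half can already be tuned with the existing ttlMonitorSleepSecs server parameter (default 60 seconds); a per-pass document cap is the missing piece, sketched here as a hypothetical parameter:

    // existing parameter: lengthen the pause between TTL passes
    db.adminCommand({ setParameter: 1, ttlMonitorSleepSecs: 300 });
    // hypothetical parameter – does not exist today, shown only to illustrate the request
    // db.adminCommand({ setParameter: 1, ttlDeletesPerPassLimit: 10000 });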

    2 votes

    0 comments  ·  Indexes

  19. session

    The possibility to configure from which network or IP address a user can connect to the cluster in MongoDB Atlas.

    1 vote

    0 comments  ·  Security

  20. Improve Changestream metadata by including userId(who), action/intent (event name).

    This can be done by including metadata in the options for C(R)UD operations!
    Something like: User.updateOne({ _id }, { $set: { name: "newName" } }, { $meta: { userId: _id, action: "nameUpdated" } })

    This is super useful because change streams could then be used to create an event store or audit logs out of the box; I would just have to store all the change stream events from the relevant collections. This also makes event-driven architecture in microservices much easier: developers could use this to publish events directly to a message broker such as Kafka, where action would be equivalent to…
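
    A sketch of how a consumer might use it, assuming the proposed metadata surfaced on the change event (the meta field and publishToKafka are hypothetical/illustrative):

    const cursor = db.users.watch();
    while (cursor.hasNext()) {
      const event = cursor.next();
      // event.meta would carry the { userId, action } supplied by the writer (proposed)
      publishToKafka(event.meta ? event.meta.action : "unknown", event);   // publishToKafka is illustrative
    }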

    1 vote

    0 comments  ·  Change Streams
