
Database

To report bugs, please use our SERVER JIRA project.


276 results found

  1. user authentication

    Hi

    It would be extremely useful to be able to create users who can only connect to the database from specific networks or even specific IP addresses, similar to what is possible with MySQL.

    For example, using the following commands:

    CREATE USER 'user_name'@'10.214.3.0' IDENTIFIED BY 'password';

    GRANT ALL PRIVILEGES ON shorturl.* TO 'user_name'@'10.214.3.0';

    You can create a user who can access the database only from the 10.214.3.0 network.

    I would like to know if it is possible to achieve similar functionality in MongoDB as well. This would be very useful for my purposes, as I want…
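
    For reference, recent MongoDB versions appear to cover part of this via the authenticationRestrictions option of createUser; a minimal sketch, assuming a readWrite role on the shorturl database:

    use admin
    db.createUser({
      user: "user_name",
      pwd: "password",
      roles: [ { role: "readWrite", db: "shorturl" } ],
      // Only allow this user to authenticate from the given client network
      authenticationRestrictions: [ { clientSource: [ "10.214.3.0/24" ] } ]
    })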

    3 votes

    2 comments  ·  Security
  2. Provide connection details with username for real-time connection monitoring.

    Currently, mongod.log provides connection details like the remote IP and source port along with authentication information, but it doesn't indicate whether a connection is active or not.
    serverStatus() only provides the number of current and active connections.

    Example:

    10.0.0.100:12345 - username active
    10.0.0.101:12346 - username idle
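
    A partial workaround today is the $currentOp aggregation stage, which can list connections per user, including idle ones; a rough sketch (output fields may vary by version):

    // Must run against the admin database; listing other users' ops needs the inprog privilege
    db.getSiblingDB("admin").aggregate([
      { $currentOp: { allUsers: true, idleConnections: true } },
      { $project: { client: 1, effectiveUsers: 1, active: 1, desc: 1 } }
    ])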

    2 votes

    1 comment  ·  Other
  3. Introduce a new optional field BucketLifeSpan along with Granularity

    An enhancement to MongoDB's management of time series collections could be the introduction of a BucketLifeSpan attribute, in addition to the existing Granularity setting. This new, optional attribute would bound how long a bucket can remain open, with the condition that Granularity should be less than or equal to BucketLifeSpan.

    Consider a use case involving a time series collection tracking data from 70,000 socket devices daily, with DeviceId as the metafield. Assume data is organized into daily collections and granularity is set to minutes, so buckets fill optimally unless they reach their size limit.

    For a collection named…
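
    A hypothetical shape for the proposal (bucketLifeSpan is not an existing option; it is shown purely for illustration):

    db.createCollection("deviceMetrics", {
      timeseries: {
        timeField: "ts",
        metaField: "DeviceId",
        granularity: "minutes",   // existing option
        bucketLifeSpan: "1d"      // hypothetical: maximum time a bucket may stay open
      }
    })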

    3 votes

  4. 1 vote

    0 comments  ·  Other
  5. Lock on session start level with provided keys

    Scenario:
    Parallel DB updates with transactions on multiple collections that use the same documents.
    Example: Calculate some common stuff to embed it in extra collections to avoid lookups.

    Problem:
    Locks and timeouts on documents use up valuable time and performance.
    Lock conflicts also produce a huge number of exceptions that need to be handled.

    Idea:
    Key-based sessions

    Example:
    Session A with Keys 1 and 2 is started.
    Session B tries to start with Key 1. Key 1 is locked by Session A, so Session B waits until Session A is finished.

    In this case the transaction doesn't start by updating and failing. Waiting…
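
    A hypothetical API shape for the idea (the lockKeys option does not exist; purely illustrative):

    // Session A acquires keys 1 and 2 up front
    const sessionA = db.getMongo().startSession({ lockKeys: [ "key1", "key2" ] })  // hypothetical option
    // Session B would block here until Session A releases key1
    const sessionB = db.getMongo().startSession({ lockKeys: [ "key1" ] })          // hypothetical option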

    7 votes

  6. Should be possible to configure profiling output destination

    When database profiling is enabled, the output is sent to both the system.profile collection and the system logs. Logging to a capped collection is fine, but spamming the logs on disk is not good.

    We need to react to changing query patterns quickly, so we have profiling enabled in production on a busy system and do real-time analysis on the system.profile collection. This works fine and the performance hit is acceptable, but our system logs on disk grow a lot.

    Please make it possible to configure whether profile logging should go to disk, collection or…
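
    A hypothetical configuration shape for this (the destination option below does not exist today; illustrative only):

    // Profile slow operations to the system.profile collection only,
    // keeping them out of the on-disk log
    db.setProfilingLevel(1, { slowms: 100, destination: "collection" })  // destination is hypothetical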

    8 votes

    0 comments  ·  Performance
  7. Query comment or metadata in change stream event

    Our application uses change stream events to publish changes as Kafka events for our customers. Sometimes we need to decide whether an event should be sent to Kafka or not. At the moment, the only way is to decide based on additional fields on our documents.

    It would be nice to have the option of including query comments, or some other kind of metadata about the origin of the operation, in change stream events. This would be useful for deciding without any need to add additional fields to our data…
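
    Writes can already carry a comment option in recent versions; the missing piece is surfacing it in the event. A sketch of both halves (the comment field on the event is hypothetical):

    // Today: attach a comment to the originating write
    db.orders.updateOne({ _id: 1 }, { $set: { status: "shipped" } }, { comment: "skip-kafka" })

    // Hypothetical: the change stream event would expose it, e.g.
    // { operationType: "update", comment: "skip-kafka", ... }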

    1 vote

    0 comments  ·  Change Streams
  8. Allow mongosync to migrate only selected fields from source to destination for a collection

    Right now mongosync migrates all fields from source to destination, with or without a filter setting, but there is no way to move only a few of the fields for every document.

    The request: migrate only selected fields from source to destination for a collection, together with the default primary key field (_id).

    Requesting this feature in a new mongosync version.
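
    A hypothetical extension of mongosync's start request, to sketch the idea (the includeFields projection does not exist; as far as I know, only namespace-level filtering is available today):

    POST /api/v1/start
    {
      "source": "cluster0",
      "destination": "cluster1",
      "includeNamespaces": [ { "database": "mydb", "collections": [ "mycoll" ] } ],
      "includeFields": [ "_id", "fieldA", "fieldB" ]   // hypothetical per-field projection
    }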

    1 vote

    0 comments  ·  Replication
  9. Include or flag nodes seen more than once / duplicates when traversing with $graphLookup

    When traversing a graph with $graphLookup, duplicates are suppressed (in the case of cycles). It would be useful to have an option to flag a node that caused the traversal to stop due to duplication.
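
    A hypothetical option shape (markDuplicates is not an existing $graphLookup option; illustrative only):

    {
      $graphLookup: {
        from: "nodes",
        startWith: "$linkedTo",
        connectFromField: "linkedTo",
        connectToField: "_id",
        as: "graph",
        markDuplicates: "wasRevisited"   // hypothetical: flag nodes where traversal stopped on a cycle
      }
    }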

    2 votes

    1 comment  ·  Other
  10. Change Timeseries Bucket Memory Limit

    Currently, there's no way to set a memory threshold for bucket allocation in MongoDB's time series collections. As bucket sizes increase and more collections are opened day after day, a limiting mechanism triggers for open buckets, leading to cache pressure and the premature closing of buckets under high load. It would be beneficial for users to be able to set a memory threshold for the time series bucket memory limit (I believe the limit is around 3 GB), enabling us to prevent early bucket closure in production environments. Alternatively, providing the option to manually close buckets could help manage…
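
    A hypothetical knob for this (the parameter name below is invented for illustration):

    // Hypothetical server parameter capping memory used by open time series buckets
    db.adminCommand({ setParameter: 1, timeseriesBucketMemoryLimitMB: 6144 })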

    2 votes

  11. Add timestamps to user documents

    Most database technologies store this metadata by default.
    Because the expected data volume and change rate of this attribute will most probably be low, there is no reason not to store this information.
    Of course this information might already be available in audit files, but first, auditing isn't enabled by default.
    Second, most database users won't have access to this file/info, and third, most users won't expect this info in a separate file (reminder: MongoDB recommends storing data where it belongs when it comes to "data/schema modelling", so the metadata of a user document should also be…

    10 votes

    2 comments  ·  Security
  12. Export Backup Snapshots to GCP Bucket

    As an Atlas user, I want to export backup snapshots to a GCP bucket because I don't have an AWS subscription.

    1 vote

    0 comments  ·  Administration
  13. TTL index activity statistics

    Dear all,

    Presently, there is no visibility or tracking of TTL index activity: no data is available for the user to see how much data has been deleted, or how often. I suggest a separate "dictionary" collection with statistics for all TTL indexes, with a pre-defined data retention (e.g. the last month).

    Thank you for looking at that.

    Regards, Marina.
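
    For what it's worth, serverStatus already exposes two aggregate TTL counters, though nothing per-index or time-bucketed:

    // Global counters only: total documents deleted by the TTL monitor and number of passes
    db.serverStatus().metrics.ttl
    // e.g. { deletedDocuments: ..., passes: ... }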

    1 vote

    0 comments  ·  Indexes
  14. ARM support

    Can we get ARM packages for Debian 11? They are required for Bitnami to add ARM support to their MongoDB charts.

    21 votes

  15. Make $merge support DELETE operation

    Currently, $merge only supports insert/update/upsert/merge behaviour. It would be great to also support delete behaviour. A common use case would be in-place document deduplication/clean-up in a collection.
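
    A hypothetical shape ("delete" is not an existing whenMatched mode; today whenMatched accepts replace, keepExisting, merge, fail, or a custom pipeline):

    {
      $merge: {
        into: "mycoll",
        on: "_id",
        whenMatched: "delete",      // hypothetical: remove matched documents
        whenNotMatched: "discard"
      }
    }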

    1 vote

  16. Allow defining access to DBs/collections by prefix or pattern.

    Please extend ACLs to support prefixes (or regexes) in the database/collection name.
    Currently only ALL (when an empty string is provided) or exact db/collection matching is allowed.

    Use case: Several services use the same cluster but need to be isolated. Every service could get readWriteAnyDatabase, but only for databases with its own prefix.
    Services need to create new databases on the fly, so it is not possible to define a list of databases upfront.

    For example, rwRoleForService1 allows "update", "insert", and "remove" operations only on databases prefixed by "service1-" (service1-db1, service1-db2, ...):
    {
      role: "rwRoleForService1",
      privileges: [
        {…
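
    A hypothetical privilege shape for this (the current role API only accepts exact names or the empty string, so the prefix pattern below is purely illustrative):

    db.getSiblingDB("admin").createRole({
      role: "rwRoleForService1",
      privileges: [
        {
          resource: { db: "service1-*", collection: "" },   // hypothetical prefix pattern
          actions: [ "find", "update", "insert", "remove" ]
        }
      ],
      roles: []
    })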

    7 votes

    0 comments  ·  Security
  17. Sorting support on Array fields.

    Presently, I'm attempting to sort the documents in my collection by a key nested within an array of objects. However, sorting isn't functioning as expected in this scenario.

    A full explanation, with an example of what I mean, is here:
    https://www.mongodb.com/community/forums/t/how-to-sort-documents-in-collection-on-basis-of-array-fields/270867?u=samrat_n_a

    1 vote

  18. support parallel query executions include find(), aggregation()

    To make use of multi-core environments and enhance query performance with a large number of documents, we need parallel execution.
    Sharding or micro-sharding is not an alternative in this case.

    3 votes

  19. Raise maximum BSON document size bigger than 16 MB

    Per https://www.mongodb.com/docs/manual/reference/limits/#:~:text=The%20maximum%20BSON%20document%20size,MongoDB%20provides%20the%20GridFS%20API, the maximum BSON document size is 16 MB. I would like to request support for bigger sizes like 32 MB, 64 MB, or even larger.

    70 votes

    15 comments  ·  Other
  20. JSON

    I'm converting a lot of data for export from MongoDB -> PostgreSQL.

    I would like an aggregation operator to convert an object to JSON:

    {
      "$addFields": {
        "json": {
          "$convert": {
            "input": "$subdocument",
            "to": "json"   // Ideally, +1 for msgpack https://msgpack.org/
          }
        }
      }
    }

    Thus object.json would be a STRING: "{abc: true}".

    FYI: PostgreSQL supports JSON(B) in its field types: https://www.postgresql.org/docs/current/datatype-json.html
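
    A possible workaround today is the $function operator (MongoDB 4.4+), which can stringify a subdocument with server-side JavaScript where it is enabled; a sketch:

    db.mycoll.aggregate([
      {
        $addFields: {
          json: {
            $function: {
              body: "function(doc) { return JSON.stringify(doc); }",
              args: [ "$subdocument" ],
              lang: "js"
            }
          }
        }
      }
    ])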

    1 vote
