
Database

To report bugs, please use our SERVER JIRA project.


302 results found

  1. session

    The possibility to configure which networks or IPs a given user can connect to the cluster from, in MongoDB Atlas.
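
    For self-managed deployments, a similar control already exists through the authenticationRestrictions option of createUser; this request appears to ask for the same thing in Atlas. A minimal mongosh sketch, with an illustrative user name and CIDR range:

      db.getSiblingDB("admin").createUser({
        user: "reportingUser",                        // illustrative name
        pwd: passwordPrompt(),
        roles: [ { role: "read", db: "reporting" } ],
        authenticationRestrictions: [
          { clientSource: [ "10.0.0.0/24" ] }         // logins allowed only from this network
        ]
      })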

    1 vote

    0 comments  ·  Security
  2. Allow disabling auto removal for capped collections

    Capped collections work as a circular buffer - when adding new documents after the size limit is reached, old documents are removed.

    I would like an option on the collection where this behavior can be changed: instead of removing old documents, the insert should fail with a specific error (e.g. "collection full"). This would allow implementing log rotation by creating a new collection whenever the old one is full.
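
    For context, this is how a capped collection is created today in mongosh (collection name and sizes are illustrative); once the limit is reached, the oldest documents are removed silently rather than the insert failing, which is the behavior this request wants to make configurable:

      db.createCollection("events", { capped: true, size: 16 * 1024 * 1024, max: 100000 })
      // Inserts beyond the limit succeed and evict the oldest documents;
      // there is currently no option to make them fail instead.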

    My motivation:
    I have two services, one creating documents (events), and another one processing them. I'm looking for a reliable way to:
    - process all documents from start, without…

    1 vote

  3. Collection which stores last login date_time for the users

    Could you please store the last login date/time for users that exist in either the admin database or the $external database, in a collection in the admin database of that cluster (or in the opsmanager database that manages the clusters)?

    My requirement is to find the users who haven't logged in for 60 days and revoke their roles, and ultimately to delete the users who don't have any roles attached after a fixed period of time.

    I do understand you store the login details in audit logs. But that would be a tedious process at…
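
    A possible interim workaround, assuming the application records logins itself into a hypothetical admin.lastLogin collection with documents of the form { user, lastLogin }; the revoke/drop helpers used are standard shell methods:

      const cutoff = new Date(Date.now() - 60 * 24 * 3600 * 1000);   // 60 days ago
      const adminDb = db.getSiblingDB("admin");
      adminDb.lastLogin.find({ lastLogin: { $lt: cutoff } }).forEach(u => {
        const info = adminDb.getUser(u.user);
        if (info && info.roles.length > 0) {
          adminDb.revokeRolesFromUser(u.user, info.roles);   // revoke roles after 60 idle days
        } else if (info) {
          adminDb.dropUser(u.user);                          // later, drop users left with no roles
        }
      });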

    3 votes

    0 comments  ·  Other
  4. 1 vote

    0 comments  ·  Change Streams
  5. placement of mongosh, mongo tools and mongo binary inside core bin

    Hi Team,
    Hope you are doing great!
    I don't understand the purpose of keeping mongosh, the mongo tools, and the mongo binary separately. Can't these be placed in a single rpm/tar file so that the installation process is easier?
    As per the current layout:
    Mongo binary location :

    /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-6.0.8-ent/bin

    Mongo Tools location :

    /var/lib/mongodb-mms-automation/mongodb-database-tools-linux-x86_64-100.7.4/bin

    Mongosh location :

    mongosh-1.10.1-linux-x64.tgz needs to be kept in a separate location in order to access mongosh.

    Thank you

    1 vote

    0 comments  ·  Other
  6. When was the last backup/mongodump run on a database, or database backup history

    How can we get details of the last backup taken, using a query? It's very hard to dig into the log file to see the history of mongodump runs. Just like with other database tools, mongodump history should be retrievable via a query instead of from the mongod log file.

    I know the details can be obtained from Ops Manager, but only with the Enterprise edition; what if we are using the Community edition?

    If you include this feature, it will be very helpful for DBAs to get these details, just like with other database tools available on the market.

    1 vote

    1 comment  ·  Other
  7. Allow rotation of Arbiter TLS certificates when authentication is enabled

    Mongo arbiters do not support using db.rotateCertificates(), because they do not possess the internal table of user and role mappings used for authentication.

    Add functionality to enable rotation of these certificates.
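
    For reference, this is the rotation command (MongoDB 5.0+) that currently works on data-bearing members but not on arbiters when authentication is enabled:

      db.adminCommand({ rotateCertificates: 1 })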

    1 vote

    0 comments  ·  Security
  8. Exclude PDB files from installation

    mongod.pdb is nearly 1.0 GB, and mongos.pdb about 0.5 GB.

    Since these files are not necessary for the server to function properly, they should be optional, e.g. provide a checkbox to exclude them during installation.

    1 vote

    0 comments  ·  Other
  9. Tool to score a data model

    It would be great to have a tool which can score the data model of a specific database. I could let this tool scan a database and score its data model based on best practices, patterns, and anti-patterns. It could also generate a list of problems and suggestions for improvement.

    This tool could be used in CI pipelines alongside the unit tests; the main goal is to avoid releasing new features that rely on a bad data model.

    2 votes

    0 comments  ·  Data Models
  10. Flatten arrays in group stage

    Have group operators to flatten document arrays into a single one with or without repeated elements.
    So, given
    doc1 = { arr: [1, 2, 3, 4], gr: "group" }, doc2 = { arr: [5, 6, 7, 8], gr: "group" }
    a stage such as
    { $group: { _id: "$gr", arrays: { $***: "$arr" } } }
    would produce
    { _id: "group", arrays: [1, 2, 3, 4, 5, 6, 7, 8] }

    2 votes

  11. DB Snapshot triggers when RDS Snapshot starts.

    Looking for a feature regarding DB Snapshots, so that we can configure a Snapshot Policy to be triggered whenever a specific AWS RDS snapshot starts (for the daily schedule).

    Thank you in advance.

    1 vote

    0 comments  ·  Administration
  12. Vector Search

    Provide Vector Search capability in MongoDB enterprise edition for On-premise deployments.

    1 vote

    0 comments  ·  Other
  13. $group all fields

    $group should have the ability to allow specifying all fields in a document (without explicitly defining them all, which can lead to duplicating dozens of lines just to do "key: $first")

    This will help users that use $unwind and then want to $group the results without having to do a subsequent $lookup and $mergeObjects (or similar) to get the final document structure they're looking for.
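
    A common workaround today is to keep the whole document with $first: "$$ROOT" and promote it back with $replaceRoot, instead of repeating $first for every field; a sketch with illustrative collection and field names:

      db.orders.aggregate([
        { $unwind: "$items" },
        // ... stages that filter or reshape the unwound documents ...
        { $group: {
            _id: "$_id",
            doc: { $first: "$$ROOT" },        // keep every field without listing them
            items: { $push: "$items" }        // rebuild the unwound array
        } },
        { $addFields: { "doc.items": "$items" } },
        { $replaceRoot: { newRoot: "$doc" } }
      ])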

    1 vote

  14. Allow live migration using private interface

    There should be an option to live migrate data via a private interface instead of a public one. With the VPCs being peered, it would be a great option to avoid exposing the database on the public interface (even if only to a specific set of IPs). This would resolve concerns for many customers who follow, or need to follow, some sort of compliance on their end.

    1 vote

    0 comments  ·  Administration
  15. An aggregation stage to load into DRAM only the fields that are requested

    When we use a $project stage, the whole document is loaded from disk into memory (if it is not already in the working set). Because of this, when we create a data model, we have to create a separate collection if a field is not required in frequent data access. Creating a view is an option, but what if $project itself, or $project with some argument, or a new pipeline stage or operator were introduced that fetches from disk only the specified fields instead of loading the whole document?

    With a memory-mapped file, retrieving only the specified fields would not simply be possible…
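
    The closest existing mechanism is a covered query, where find() can be answered from an index without fetching the documents at all, although this does not extend to arbitrary $project stages; field and collection names below are illustrative:

      db.events.createIndex({ deviceId: 1, ts: 1 })
      db.events.find(
        { deviceId: "sensor-42" },
        { _id: 0, deviceId: 1, ts: 1 }      // projection limited to indexed fields
      )
      // Because every requested field is in the index, the server can serve
      // the result from the index alone instead of loading whole documents.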

    1 vote

    0 comments  ·  Performance
  16. Nanosecond timestamp support

    I've put this under the time series category since that's where it's most applicable, but it's really a data model / BSON issue.

    The topic of higher-resolution timestamps has been surfaced from time to time for at least a decade (https://jira.mongodb.org/browse/SERVER-1460), and usually prompts a response like "just use integers". With the addition of time series collections, however, where the concept of time is integral to database functionality, I think it's time to reconsider adding a type with at least nanosecond-precision timestamp support. Date's millisecond resolution is woefully inadequate for a number of relevant use cases,…
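
    The "just use integers" workaround mentioned above typically looks like the sketch below: a 64-bit count of nanoseconds since the Unix epoch stored alongside (or instead of) the millisecond-resolution Date; field names are illustrative:

      db.readings.insertOne({
        ts: ISODate("2024-01-01T00:00:00Z"),            // millisecond resolution only
        tsNanos: NumberLong("1704067200000000000")      // nanoseconds since the Unix epoch
      })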

    9 votes

  17. add IO throughput related fields to 'serverStatus' output

    There are no IO throughput related fields in the result of serverStatus; this is only available in FTDC, in the disk metrics.
    We need it in the serverStatus output so that we can monitor it.
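
    For comparison, serverStatus already exposes network throughput counters; the request is for an analogous disk/IO section:

      db.adminCommand({ serverStatus: 1 }).network     // bytesIn / bytesOut exist today,
                                                       // but there is no disk-throughput counterpart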

    3 votes

  18. $merge

    Report number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged doc results as a pipeline result to pass to additional stages.
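
    For reference, a typical $merge today (names illustrative) runs as the final stage and returns neither a cursor of merged documents nor any matched/merged/skipped counts, which is what this request asks for:

      db.sales.aggregate([
        { $group: { _id: "$region", total: { $sum: "$amount" } } },
        { $merge: { into: "sales_summary", on: "_id",
                    whenMatched: "merge", whenNotMatched: "insert" } }
      ])   // returns no per-document statistics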

    3 votes

  19. Cascading delete for DBRefs

    Since transactions were added in 2018, and they work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client (see the sketch after the steps below):

    1. start a transaction
    2. fetch document
    3. look for dbref fields
    4. fetch those docs
    5. continue at 2 until all docs have been found, stopping at branches when a doc has already been fetched
    6. go back in reverse and delete all of them
    7. commit transaction

    If this is possible to do…
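
    A rough client-side sketch of those steps in mongosh, assuming a single database named "app" and that DBRef values surface with .collection and .oid properties (as in recent bson/mongosh versions); all names are illustrative and this is not an official implementation:

      function cascadeDelete(rootColl, rootId) {
        const session = db.getMongo().startSession();
        session.startTransaction();                         // step 1
        try {
          const appDb = session.getDatabase("app");
          const seen = new Set(), order = [];
          const stack = [ { coll: rootColl, id: rootId } ];
          while (stack.length) {                            // steps 2-5: walk the DBRef graph
            const { coll, id } = stack.pop();
            const key = coll + ":" + id;
            if (seen.has(key)) continue;                    // stop at already-visited branches
            seen.add(key);
            const doc = appDb.getCollection(coll).findOne({ _id: id });
            if (!doc) continue;
            order.push({ coll, id });
            for (const v of Object.values(doc)) {           // look for DBRef-shaped fields
              if (v && v.collection && v.oid) stack.push({ coll: v.collection, id: v.oid });
            }
          }
          for (const { coll, id } of order.reverse())       // step 6: delete in reverse order
            appDb.getCollection(coll).deleteOne({ _id: id });
          session.commitTransaction();                      // step 7
        } catch (e) {
          session.abortTransaction();
          throw e;
        } finally {
          session.endSession();
        }
      }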

    2 votes

  20. Extend schema validation to be able to enforce referential integrity between collections

    Where a relational database uses two tables to store a 1:many "parent - child" relationship between entities, MongoDB mostly stores the child documents in an array field as part of the parent document. This automatically ensures referential integrity, in that
    - a child document cannot be inserted or updated to refer to a non-existent parent, and
    - a parent document cannot be deleted such that it leaves "orphaned" child documents

    However, there are situations where the number and/or size of the child documents makes embedding them all in their parent unworkable, due to the 16 megabyte document size limit if…
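
    For context, today's schema validation is strictly per-collection: it can require that a reference field exists and has the right type, but it cannot check that the referenced _id actually exists in another collection, which is the gap described here. A sketch with illustrative names:

      db.runCommand({
        collMod: "children",
        validator: { $jsonSchema: {
          bsonType: "object",
          required: [ "parentId" ],
          properties: { parentId: { bsonType: "objectId" } }   // shape only; no cross-collection check
        } },
        validationAction: "error"
      })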

    4 votes

    0 comments  ·  Data Models