
Ops Tools



  1. On demand snapshots in Ops Manager

    Allow the possibility of performing an on-demand snapshot.

    After configuration changes to the backup job (e.g. changing the block size), the next scheduled snapshot is usually too far away in time. For testing purposes it would make sense to allow a snapshot to be taken on demand, so it can be generated immediately and further testing/tuning can proceed if required.

    82 votes
    8 comments  ·  Backup  ·  Admin →
  2. Allow Ops Manager users to move/migrate backup job snapshots from one S3 bucket to a different S3 bucket

    Ops Manager users with S3 blockstores may need to move snapshots and backup jobs to a new S3 bucket. For MongoDB blockstores, this is accomplished using a groom.

    Move Blocks to a Different Blockstore
    https://docs.opsmanager.mongodb.com/current/core/administration-interface/#groom-priority-page

    This feature request is to provide the same capability for S3 blockstores: grooming backup snapshots/jobs to a new bucket.

    45 votes
    3 comments  ·  Backup  ·  Admin →

    Hi All,


    We have developed a solution directly in Ops Manager that supports transitioning your S3-compatible snapshot storage without terminating the backup. Ops Manager now allows you to update the S3 snapshot store ID in the backup job document; after the update, the next scheduled snapshot will be a full snapshot taken in the new S3 snapshot store. At this time this applies to S3 stores only.


    This feature is included in the Ops Manager 8.0.6 release that came out on April 3rd, 2025. 


    Information about this feature can be found in these documents:


    Release Notes: https://www.mongodb.com/docs/ops-manager/current/release-notes/application/


    UI Configuration: https://www.mongodb.com/docs/ops-manager/current/admin/backup/jobs-page/#std-label-transition-s3


    API Information: https://www.mongodb.com/docs/ops-manager/current/reference/api/backup/update-backup-config/
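
    As a rough illustration of the API route, here is a minimal Python sketch against the Update Backup Configuration endpoint linked above. The host, project/cluster IDs, API keys, and in particular the payload field carrying the S3 snapshot store ID are placeholders/assumptions, not confirmed names; check the linked API documentation for the exact request body.

```python
# Hypothetical sketch: point a backup job at a new S3 snapshot store via the
# Ops Manager public API (Update Backup Configuration endpoint). The host,
# IDs, keys, and the "snapshotStoreId" field name are assumptions; verify the
# exact payload in the API documentation before use.
import requests
from requests.auth import HTTPDigestAuth

OPS_MANAGER = "https://opsmanager.example.com"   # assumed base URL
PROJECT_ID = "<project-id>"
CLUSTER_ID = "<cluster-id>"
AUTH = HTTPDigestAuth("<public-api-key>", "<private-api-key>")

url = f"{OPS_MANAGER}/api/public/v1.0/groups/{PROJECT_ID}/backupConfigs/{CLUSTER_ID}"

# Illustrative field name only; the real attribute for the S3 snapshot store
# ID is documented in the Update Backup Configuration reference linked above.
payload = {"snapshotStoreId": "<new-s3-snapshot-store-id>"}

resp = requests.patch(url, json=payload, auth=AUTH)
resp.raise_for_status()
print(resp.json())
```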



  3. Deploy MongoDB across different Kubernetes clusters

    The MongoDB Operator can only deploy and manage MongoDB in a single Kubernetes cluster. However, for disaster recovery and globally distributed apps, it is important to be able to deploy a single database across multiple Kubernetes clusters.

    44 votes

    The MongoDB Enterprise Operator now supports multi-Kubernetes-cluster replica set deployments.


    Find out more


    Multi-Kubernetes-cluster Ops Manager support is in progress right now with a likely delivery date towards the end of this year (2023). Sharding support (across multiple Kubernetes clusters) will follow.


    If you're an Enterprise Advanced customer and interested in this, please feel free to reach out to me at dan.mckean@mongodb.com.
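
    To make the shape of such a deployment concrete, below is a sketch of a multi-Kubernetes-cluster replica set resource, generated as YAML from Python. The MongoDBMultiCluster kind and clusterSpecList layout follow the Enterprise Operator documentation, but the cluster names, member counts, version, and the referenced credentials Secret and project ConfigMap are placeholders to adapt to your environment.

```python
# Sketch of a MongoDBMultiCluster resource for the Enterprise Operator.
# Cluster names, member counts, version, and referenced Secret/ConfigMap
# names are placeholders; check the operator docs for your version.
import yaml

multi_cluster_rs = {
    "apiVersion": "mongodb.com/v1",
    "kind": "MongoDBMultiCluster",
    "metadata": {"name": "multi-replica-set", "namespace": "mongodb"},
    "spec": {
        "type": "ReplicaSet",
        "version": "6.0.5-ent",
        "credentials": "my-om-credentials",              # Ops Manager API key Secret
        "opsManager": {"configMapRef": {"name": "my-project"}},
        "duplicateServiceObjects": False,
        # One entry per member Kubernetes cluster and the replica set members it hosts.
        "clusterSpecList": [
            {"clusterName": "cluster-1.example.com", "members": 2},
            {"clusterName": "cluster-2.example.com", "members": 2},
            {"clusterName": "cluster-3.example.com", "members": 1},
        ],
    },
}

with open("mongodb-multicluster.yaml", "w") as f:
    yaml.safe_dump(multi_cluster_rs, f, sort_keys=False)
# Then: kubectl apply -f mongodb-multicluster.yaml (against the central/operator cluster)
```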

  4. Ability to turn on audit log compression/deletion

    Ops Manager currently allows audit logs to be rotated based on the threshold settings in the Update MongoDB Log Settings modal, but audit.log files are not compressed/deleted the way mongodb.log files are from the same modal. We would like the ability to toggle compression/deletion of audit.log either in that modal or in a separate one. We think a separate modal would be better, since audit.log files may be used for security forensics and require a longer retention period.

    28 votes
    6 comments  ·  Automation  ·  Admin →

    We are pleased to announce that Cloud Manager and Ops Manager (5.0.8) now have the ability to set up a different configuration for rotation of MongoDB Log and MongoDB Audit Log files. This depends on a feature available in MongoDB Enterprise Server 5.0 and up.


    Documentation:

    OM: https://docs.opsmanager.mongodb.com/current/tutorial/view-logs/index.html#configure-log-rotation

    CM: https://docs.cloudmanager.mongodb.com/tutorial/view-logs/#configure-log-rotation
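
    For teams driving this through the API instead of the UI, a rough sketch follows. The GET/PUT automationConfig endpoints are real; the per-process "auditLogRotate" block and its field names are assumptions modelled on the regular logRotate settings, so confirm them against the log-rotation documentation linked above.

```python
# Hypothetical sketch: give audit logs their own rotation policy via the
# Automation Config API. The automationConfig GET/PUT endpoints exist; the
# per-process "auditLogRotate" field and its keys are assumptions modelled on
# the regular logRotate settings - verify against the docs linked above.
import requests
from requests.auth import HTTPDigestAuth

OPS_MANAGER = "https://opsmanager.example.com"
PROJECT_ID = "<project-id>"
AUTH = HTTPDigestAuth("<public-api-key>", "<private-api-key>")
URL = f"{OPS_MANAGER}/api/public/v1.0/groups/{PROJECT_ID}/automationConfig"

config = requests.get(URL, auth=AUTH).json()

for process in config.get("processes", []):
    # Leave the existing mongodb.log rotation alone; add a longer-retention
    # policy for audit.log on every managed process.
    process["auditLogRotate"] = {
        "sizeThresholdMB": 1000,
        "timeThresholdHrs": 24,
        "numTotal": 30,   # keep more audit files than regular log files
    }

resp = requests.put(URL, json=config, auth=AUTH)
resp.raise_for_status()
```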

  5. 15 votes
  6. 14 votes
    completed  ·  Andrey responded

    Operator v1.7.0 will have full LDAP support.

  7. 11 votes
    2 comments  ·  Ops Manager  ·  Admin →
  8. Support Any MongoDB configuration option in MongoDB Custom Resource

    Support all MongoDB configuration options in the Kubernetes CRD so that it is possible to deploy a fine-tuned cluster with Kubernetes resources.

    11 votes
  9. Add Backup configuration to MongoDB Custom Resource

    Allow configuration of backup settings in the MongoDB custom resource.
    This includes management of backed-up resources when updating, moving, or deleting clusters.

    11 votes
  10. Define current limits of Kubernetes Operator

    1.) What is the limiting factor of the Operator? Is it number of Pods, number of Custom Resources (e.g. MongoDB, MongoDBUser) or something else?

    2.) What does the number of "Clusters" refer to? Does it differ for Standalone, ReplicaSet and ShardedCluster?

    3.) How many instances of the "Clusters" in 2.) are supported per MongoDB Operator?

    Is there any way to add this sort of data to our documentation?

    Thanks

    9 votes

    Current scale recommendations are 20-50 deployments (ReplicaSet/ShardedCluster/Standalone). 


    While the Operator can handle hundreds, the limiting factor is API calls to Ops Manager/Cloud Manager. Updating one deployment at a time is fine; the issue arises when making concurrent changes to many deployments simultaneously, where reconciliation will be slow for deployments later in the queue - for example during a DR scenario.


    We have work planned for later in 2023 to start removing Ops Manager as a prerequisite for many of the basic operations, and we expect that to greatly improve these limits.

  11. mongodb-atlas-kubernetes operator needs to adjust instanceSize/disk for autoscaled clusters based on the instanceSize range

    Scenario:

    Assumptions:
    autoscaling = true
    current instanceSize = M40
    minInstanceSize = M30
    maxInstanceSize = M70

    If we want to increase the minInstanceSize from M40 to M50, the operator will return an error because our current instance size is less than M50, so we need to go to the UI, increase the current instance size to (at least) M50, and then apply the change to the Kubernetes object (AtlasDeployment).

    We expect that if the current instanceSize falls outside the range, the operator automatically raises the instanceSize to minInstanceSize, removing the extra step.


    We expect the same logic for increasing/decreasing - minInstance/maxInstance and…

    7 votes
    1 comment  ·  Other  ·  Admin →

    Resolved in version 1.4 of the Atlas Operator


    It now allows the user to update the autoscaling config (min and max) to be outside the current instance size, and Atlas will update the instance to the closest boundary. For example, with a current instance size on Atlas of M10 and a min and max of M10 and M30, the min and max could be changed to M30 and M80, and the instance size would then be auto-scaled up to M30 (as the boundary nearest the old instance size). Side note: if the instance size in the config is left unchanged (for example at M10), you will get a warning in the logs that the spec contains an instance size outside of the current min and max, but that won't block the change or the auto-scaling.


    Similarly, you can also now change the fixed instance size in…
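
    For orientation, here is a sketch of the relevant autoscaling portion of an AtlasDeployment resource, rendered as YAML from Python. The field paths (providerSettings.autoScaling.compute and instanceSizeName) are assumptions based on the Atlas Operator CRD around that release; check the CRD reference for your operator version.

```python
# Sketch of the compute-autoscaling section of an AtlasDeployment resource.
# Field paths are assumptions based on the Atlas Operator CRD of that era
# (providerSettings.autoScaling.compute); verify against your operator version.
import yaml

deployment = {
    "apiVersion": "atlas.mongodb.com/v1",
    "kind": "AtlasDeployment",
    "metadata": {"name": "my-deployment", "namespace": "mongodb-atlas"},
    "spec": {
        "projectRef": {"name": "my-project"},
        "deploymentSpec": {
            "name": "my-cluster",
            "providerSettings": {
                "providerName": "AWS",
                "regionName": "EU_WEST_1",
                "instanceSizeName": "M10",   # may lag behind; Atlas scales it to the nearest boundary
                "autoScaling": {
                    "compute": {
                        "enabled": True,
                        "scaleDownEnabled": True,
                        # With Atlas Operator 1.4+, moving this window above the
                        # current size auto-scales the cluster up to the new minimum.
                        "minInstanceSize": "M30",
                        "maxInstanceSize": "M80",
                    },
                },
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```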

  12. Do not delete backups of deleted deployments

    Backups normally protect against accidental deletion of a database. In a DevOps environment it can happen that an MDB resource gets deleted by a Kubernetes deployment. At the moment, Ops Manager then deactivates the backup and deletes all snapshots. We would like the snapshots to be kept for as long as their retention limit specifies, or at least until the project itself is deleted in Ops Manager.

    That way, if a developer notices the mistake and re-deploys the MDB resource(s), the database(s) can be restored from the backups.

    7 votes
    3 comments  ·  Ops Manager  ·  Admin →
    completed  ·  Andrey responded

    We do not delete the Ops Manager project when a backup is present.
    This has been addressed.

  13. Automation Agent Client Certificate Validation

    Many customers require that the MMS automation agent present a valid x509 certificate for TLS communications with Ops Manager. With the Kubernetes Operator this is not currently possible, so these customers cannot use the Operator to deploy MongoDB instances within their environments.

    This feature would improve the security of communications between agents and Ops Manager and meet the security requirements of many of our customers who cannot move to services like Atlas.

    7 votes
    completed  ·  Andrey responded

    A new spec option, spec.agent.startupOptions, is available. It can be used to configure client certificates.
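
    A minimal sketch of how that option might be used, assuming agent startup flags for client certificates (the flag names below, such as tlsMMSServerClientCertificate and httpsCAFile, are assumptions taken from MongoDB Agent settings and should be verified against the agent documentation):

```python
# Sketch: pass client-certificate flags to the MongoDB Agent through the
# MongoDB resource's spec.agent.startupOptions map. The flag names are
# assumptions based on MongoDB Agent settings; confirm them in the agent docs.
import yaml

replica_set = {
    "apiVersion": "mongodb.com/v1",
    "kind": "MongoDB",
    "metadata": {"name": "my-replica-set", "namespace": "mongodb"},
    "spec": {
        "type": "ReplicaSet",
        "members": 3,
        "version": "6.0.5-ent",
        "credentials": "my-om-credentials",
        "opsManager": {"configMapRef": {"name": "my-project"}},
        "agent": {
            # Each key/value becomes a startup option for the automation agent.
            "startupOptions": {
                "tlsMMSServerClientCertificate": "/var/lib/mongodb-mms-automation/agent-client.pem",
                "httpsCAFile": "/var/lib/mongodb-mms-automation/ca.pem",
            },
        },
    },
}

print(yaml.safe_dump(replica_set, sort_keys=False))
```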

  14. MongoDB Agent (Automation Module): don't attempt to auth with `__system` (SCRAM) user when `security.clusterAuthMode` is set to `x509`

    Problem Statement:
    What is the problem? The MongoDB Agent (Automation Module) attempts to authenticate with the __system (SCRAM) user when security.clusterAuthMode is set to x509.

    Why is this a problem? MongoDB Server process logs are flooded with unnecessary noise from these failed MongoDB Agent (Automation Module) authentication attempts.

    Example:
    {"t":{"$date":"2021-05-10T11:08:02.115+0000"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn115","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","principalName":"__system","authenticationDatabase":"local","client":"10.10.10.10:46765","result":"AuthenticationFailed: ###"}}

    Proposal:
    * Don't attempt to authenticate with the __system (SCRAM) user when security.clusterAuthMode is set to x509 in MongoDB Server.

    6 votes
    1 comment  ·  Automation  ·  Admin →
  15. Allow S3 Oplog Store to be defined and configured using the Operator.

    Ops Manager can utilize S3 storage for the Oplog Store. It should be possible to define and configure an S3 Oplog Store from the Operator.

    4 votes
  16. Enhance security by leveraging PodSecurityPolicies

    PodSecurityPolicies are a way to enhance security in a k8s cluster.

    Currently the Kubernetes Operator and the Helm chart do not offer a way to integrate PSPs. If an administrator wants to enforce PSPs for the cluster where the MongoDB Kubernetes Operator is deployed, they would need to do this manually, which means additional manual steps (e.g. editing the Operator role to allow the "use" verb on "podsecuritypolicies", as sketched below).

    Please introduce a way to secure the MongoDB Management (Ops Manager, Operator) and Workload (MongoDB custom resources) with PSPs in the Kubernetes Operator / Helm ecosystem.

    4 votes
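
    For reference, a minimal sketch of the kind of RBAC rule an administrator currently has to add by hand, rendered as YAML from Python. The PSP name and Role name are placeholders; note that PodSecurityPolicies were removed in Kubernetes 1.25, so this applies to clusters still running them.

```python
# Sketch of the manual step described above: a Role rule granting "use" on a
# specific PodSecurityPolicy. The PSP and Role names are placeholders.
# (PSPs were removed in Kubernetes 1.25.)
import yaml

psp_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "mongodb-operator-psp", "namespace": "mongodb"},
    "rules": [
        {
            "apiGroups": ["policy"],
            "resources": ["podsecuritypolicies"],
            "verbs": ["use"],
            "resourceNames": ["mongodb-restricted-psp"],   # placeholder PSP name
        }
    ],
}

print(yaml.safe_dump(psp_role, sort_keys=False))
# Bind this Role to the operator's (and the database pods') service accounts
# with a RoleBinding so admission via the PSP succeeds.
```
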
  17. Use the TLS options instead of the SSL options in Automation Config of MongoDB v4.2

    The SSL options have been deprecated since MongoDB v4.2, but Ops Manager Automation still uses the SSL options in the automation configuration for MongoDB v4.2. It would be best for Ops Manager v4.2+ to use the TLS options in the Automation Config of its managed MongoDB v4.2 deployments.

    4 votes
    1 comment  ·  Automation  ·  Admin →
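
    To illustrate the requested change, a small sketch of the option renaming involved. The server-side mapping from the deprecated net.ssl options to their net.tls equivalents is documented in the MongoDB 4.2 configuration reference; how Ops Manager Automation would express this in the automation config is the change being requested.

```python
# Deprecated mongod config options (MongoDB 4.2+) and their TLS equivalents.
# The server-side mapping is documented; writing the TLS forms into the
# automation config for each managed process is the requested change.
SSL_TO_TLS = {
    "net.ssl.mode": "net.tls.mode",   # e.g. requireSSL -> requireTLS
    "net.ssl.PEMKeyFile": "net.tls.certificateKeyFile",
    "net.ssl.PEMKeyPassword": "net.tls.certificateKeyFilePassword",
    "net.ssl.CAFile": "net.tls.CAFile",
    "net.ssl.allowConnectionsWithoutCertificates": "net.tls.allowConnectionsWithoutCertificates",
    "net.ssl.allowInvalidHostnames": "net.tls.allowInvalidHostnames",
}

for old, new in SSL_TO_TLS.items():
    print(f"{old:50s} -> {new}")
```
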
  18. Ops Manager should support SCRAM-SHA-256 authentication mechanism when connecting to Backing Databases

    Currently, Ops Manager does not provide support for the SCRAM-SHA-256 authentication mechanism when connecting to the Backing Database.
    This is because Ops Manager 4.2.0 uses version 3.6.4 of the MongoDB Java Driver.
    SCRAM-SHA-256 is only supported by the Java Driver from version 3.8.

    4 votes
    completed  ·  0 comments  ·  Ops Manager  ·  Admin →
  19. Add K8S namespace as a tag in Ops Manager project

    Add the K8s namespace as a tag on the Ops Manager project so it is easier to identify which project belongs to which namespace.

    3 votes
  20. Add spec.externalAccess.externalDomain to running deployments

    The following option:

    https://www.mongodb.com/docs/kubernetes-operator/master/reference/k8s-operator-specification/#spec.externalAccess.externalDomain

    • spec.externalAccess.externalDomain

    is tremendously useful, as we would no longer have to worry about terminating TLS connections at a proxy and then re-establishing a TLS connection for internal communication for security reasons.

    This would remove a point of failure for self-hosted Kubernetes environments and save resources.

    However, you cannot do this for existing replica sets:

    WARNING

    Specifying this field changes how Ops Manager registers mongod processes. You can specify this field only for new replica set deployments starting in Kubernetes Operator version 1.19. You can’t change the value of this field or any processes[n].hostname fields in…

    2 votes

    We've just tested this and confirmed that it does in fact work fine for existing replica sets.


    This was an incorrect inference made while documenting this feature and we've raised a ticket to have our docs amended.
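
    For completeness, a sketch of where the field sits in a MongoDB replica set resource, rendered as YAML from Python. spec.externalAccess.externalDomain is the documented field referenced above; the domain, names, version, and other values are placeholders.

```python
# Sketch of a MongoDB replica set resource using the documented
# spec.externalAccess.externalDomain field. Names, version, and the domain
# are placeholders; per the response above this also works for existing
# replica sets despite the earlier docs warning.
import yaml

replica_set = {
    "apiVersion": "mongodb.com/v1",
    "kind": "MongoDB",
    "metadata": {"name": "my-replica-set", "namespace": "mongodb"},
    "spec": {
        "type": "ReplicaSet",
        "members": 3,
        "version": "6.0.5-ent",
        "credentials": "my-om-credentials",
        "opsManager": {"configMapRef": {"name": "my-project"}},
        "externalAccess": {
            # Members are registered in Ops Manager using hostnames under this domain.
            "externalDomain": "mongodb.example.com",
        },
    },
}

print(yaml.safe_dump(replica_set, sort_keys=False))
```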
