Ops Tools

  1. 8 votes · planned · 0 comments · Kubernetes Operator
  2. Support Any MongoDB configuration option in MongoDB Custom Resource

    Support all MongoDB configuration options in the Kubernetes CRD so that it is possible to deploy a fine-tuned cluster with Kubernetes resources.
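
    As a hedged sketch of what this could look like from the client side, the snippet below patches a MongoDB custom resource with arbitrary mongod options using the official Kubernetes Python client. The `additionalMongodConfig` field name and the nested option structure are assumptions for illustration, not a confirmed operator API.

    ```python
    # Hypothetical sketch: patching a MongoDB custom resource with arbitrary
    # mongod options via the official Kubernetes Python client.
    # The spec fields below are illustrative, not a confirmed operator API.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    api = client.CustomObjectsApi()

    patch = {
        "spec": {
            # The request: accept any mongod option here, mirroring the
            # mongod configuration-file structure.
            "additionalMongodConfig": {
                "storage": {"wiredTiger": {"engineConfig": {"cacheSizeGB": 4}}},
                "operationProfiling": {"mode": "slowOp"},
            }
        }
    }

    api.patch_namespaced_custom_object(
        group="mongodb.com", version="v1", namespace="mongodb",
        plural="mongodb", name="my-replica-set", body=patch,
    )
    ```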

    7 votes · under review · 1 comment · Kubernetes Operator
  3. 6 votes · under review · 1 comment · Kubernetes Operator
  4. Ability to turn on audit log compression/deletion

    Ops Manager can rotate audit logs based on the threshold settings in the Update MongoDB Log Settings modal, but audit logs are not compressed or deleted the way the mongodb logs are from that same modal. We would like the ability to toggle compression/deletion of the audit log, either in that modal or in a separate one. A separate modal would be better, since audit logs may be used for security forensics and require a longer retention period.
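
    For context, a minimal pymongo sketch of the primitive involved: the logRotate admin command can rotate the server log and, on deployments with auditing enabled, the audit log separately. The compression/deletion retention policy for the rotated audit files is the part being requested.

    ```python
    # A minimal sketch, assuming a mongod with auditing enabled: trigger log
    # rotation manually via the logRotate admin command. Retention of the
    # rotated audit files (compress/delete) is what this idea asks Ops
    # Manager to manage.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")

    client.admin.command("logRotate")             # rotate the server log
    client.admin.command({"logRotate": "audit"})  # rotate only the audit log
    ```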

    5 votes · 2 comments · Automation
  5. Deploy MongoDB across different Kubernetes clusters

    The MongoDB Operator can only deploy and manage MongoDB within a single Kubernetes cluster. However, it is important to be able to deploy a single database across multiple Kubernetes clusters to support disaster recovery and globally distributed applications.

    5 votes · 0 comments · Kubernetes Operator
  6. Automation Agent Client Certificate Validation

    Many customers require that the MMS Automation Agent present a valid x509 certificate for TLS communication with Ops Manager. This is not currently possible with the Kubernetes Operator, so these customers cannot use the Operator to deploy MongoDB instances in their environments.

    This feature would improve the security of communications between the agents and Ops Manager and meet the security requirements of many of our customers who cannot move to services like Atlas.

    5 votes · planned · 0 comments · Kubernetes Operator
  7. Allow configuring `maxTimeMS` for commands executed from Ops Manager's Data Explorer

    What is the problem that needs to be solved? Allow configuring maxTimeMS for MongoDB commands executed from Ops Manager's Data Explorer.

    Why is it a problem? (the pain) A) Ops Manager's Data Explorer cannot work with [views](https://docs.mongodb.com/manual/core/views/index.html) if the view takes more than 15000 ms to load. B) Data Explorer cannot work with [find](https://docs.mongodb.com/manual/reference/command/find/index.html) operations if the find takes more than 15000 ms to complete.
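
    At the driver level, the behavior being requested corresponds to setting maxTimeMS on the operation, as in this pymongo sketch (database and collection names are illustrative):

    ```python
    # Sketch of the requested behavior: maxTimeMS bounds server-side execution
    # time. Data Explorer appears to hard-code ~15000 ms; the idea is to make
    # this limit configurable.
    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    client = MongoClient("mongodb://localhost:27017")
    coll = client["reporting"]["slow_view"]  # illustrative names

    try:
        # Equivalent of running the find with { maxTimeMS: 60000 }
        docs = list(coll.find({}).max_time_ms(60000))
    except ExecutionTimeout:
        print("Server-side execution exceeded maxTimeMS")
    ```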

    4 votes · 0 comments · Ops Manager
  8. Add Ops Manager check to prevent making backups of itself (Backing databases - AppDB, Oplog, Blockstore)

    Otherwise, an Out Of Memory condition may result and disable Ops Manager.

    4 votes · 1 comment · Ops Manager
  9. SNMP traps for `AUTOMATION_AGENT_DOWN`, `MONITORING_AGENT_DOWN`, `BACKUP_AGENT_DOWN` alert types do not contain hostname information

    What is the problem that needs to be solved? SNMP traps for the AUTOMATION_AGENT_DOWN, MONITORING_AGENT_DOWN, and BACKUP_AGENT_DOWN alert types do not contain hostname information in the .1.3.6.1.4.1.41138.1.1.1.4 (.iso.org.dod.internet.private.enterprises.mms.server.serverMIBObjects.mmsAlertObject.mmsAlertHostAndPort) OID.

    Why is it a problem? (the pain) The user cannot act quickly on the alert and identify the host where Ops Manager's Automation/Monitoring/Backup Agent is down. The missing <HOSTNAME>:<PORT> information at the .1.3.6.1.4.1.41138.1.1.1.4 SNMP OID makes it impossible to map AUTOMATION_AGENT_DOWN, MONITORING_AGENT_DOWN, and BACKUP_AGENT_DOWN alerts to a particular hostname.

    4 votes · 0 comments · Ops Manager
  10. Automated restores between multiple Ops Manager deployments

    Ability to utilize API and/or UI to restore a snapshot downloaded from one Ops Manager deployment to a separate distinct Ops Manager deployment.

    Some environments require separation of Production and Staging systems, including Ops Manager deployments. It is desirable to use Production data in some testing scenarios. Currently the data has to be loaded or restored manually.
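
    A hedged sketch of the API half of this request: fetch the latest snapshot from the Production deployment and create a restore job for it. Endpoint paths follow the Ops Manager public API v1.0; the URL, credentials, IDs, and the ability to target a different Ops Manager deployment (the missing piece) are illustrative.

    ```python
    # Sketch only: list snapshots on the source deployment and request an
    # HTTP-delivery restore job. Restoring into a separate Ops Manager
    # deployment is the capability being requested, not something the API
    # does today.
    import requests
    from requests.auth import HTTPDigestAuth  # the Ops Manager API uses digest auth

    PROD = "https://opsmanager-prod.example.com/api/public/v1.0"  # illustrative
    auth = HTTPDigestAuth("user@example.com", "prod-api-key")
    project, cluster = "PROJECT-ID", "CLUSTER-ID"

    snaps = requests.get(
        f"{PROD}/groups/{project}/clusters/{cluster}/snapshots", auth=auth
    ).json()["results"]
    latest = snaps[0]

    job = requests.post(
        f"{PROD}/groups/{project}/clusters/{cluster}/restoreJobs",
        auth=auth,
        json={"snapshotId": latest["id"], "delivery": {"methodName": "HTTP"}},
    ).json()
    print(job)
    ```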

    4 votes · 0 comments · Ops Manager
  11. Allow assignment of Backup Resources before starting Backup Job

    For very large clusters, ideally backup resources should be assignable before the backup begins. For each shard and config server, assignment of the following would assist in scaling Ops Manager backups:

    • Snapshot Store
    • Oplog Store
    • Backup Daemon (if using FCV <= 4.0)

    Starting the backup could trigger an email to the Backup Administrator who could then assign these resources on the Admin page.

    4 votes · 0 comments · Backup
  12. Support Arbiters with MongoDB Kubernetes Operator

    Support arbiters with the MongoDB Kubernetes Operator so that replica sets can be deployed in a PSA (Primary-Secondary-Arbiter) configuration.
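
    Purely illustrative: if the CRD grew an arbiter count, a PSA deployment might be requested as below via the official Kubernetes Python client. The `arbiters` field is hypothetical, not a confirmed operator API.

    ```python
    # Hypothetical sketch: 2 data-bearing members + 1 arbiter (PSA).
    # The `arbiters` field does not exist in the CRD; it illustrates the ask.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    api.patch_namespaced_custom_object(
        group="mongodb.com", version="v1", namespace="mongodb",
        plural="mongodb", name="psa-replica-set",
        body={"spec": {"members": 2, "arbiters": 1}},
    )
    ```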

    4 votes · under review · 0 comments · Kubernetes Operator
  13. 4 votes · under review · 0 comments · Kubernetes Operator
  14. Allow Ops Manager users to move/migrate backup job snapshots from one S3 bucket to a different S3 bucket

    Ops Manager users with S3 blockstores may need to move snapshots and backup jobs to a new S3 bucket. For MongoDB blockstores, this is accomplished using a groom.

    Move Blocks to a Different Blockstore
    https://docs.opsmanager.mongodb.com/current/core/administration-interface/#groom-priority-page

    This feature request is to provide the same capability for S3 blockstores: grooming backup snapshots/jobs to a new bucket.

    3 votes · 0 comments · Backup
  15. Add `serverStatus.uptime` counter info into Metrics

    What is the problem that needs to be solved? We already collect the serverStatus.uptime counter from each and every MongoDB Server process, so it only needs to be added to Metrics so that changes in serverStatus.uptime can be tracked over time.

    Why is it a problem? (the pain) If you want to calculate MongoDB Server process availability, i.e. for how long your MongoDB Server process(es) was/were up and running, you need to analyze the MongoDB Server process logs (if they are even available for the required period of time) to see the last time MongoDB…
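
    A minimal pymongo sketch of the counter in question: serverStatus.uptime is the number of seconds since the mongod process started, so a value that decreases between polls indicates a restart. That is exactly the signal this idea asks to have stored as a first-class metric.

    ```python
    # Read serverStatus.uptime directly; it resets to ~0 after a restart.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    status = client.admin.command("serverStatus")
    print(f"uptime: {status['uptime']} s")
    ```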

    3 votes · 0 comments · Ops Manager
  16. Provide recurring/daily reporting on backup status from Ops Manager

    Ops Manager should generate a recurring/daily report of the status of all backups. This report should include at least a list of successful snapshots, a list of unsuccessful snapshots (over the configured reporting period), and the latest successful snapshot for each deployment being backed up. Additionally, this report may include resource availability such as storage available for future snapshots.
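
    A rough sketch of what such a report could aggregate, using per-cluster snapshot listings from the public API (paths and field names as in the Ops Manager public API v1.0; the URL, credentials, and project ID are illustrative):

    ```python
    # Sketch: print the latest snapshot per cluster in a project. A real
    # report would also track failures over the reporting period and
    # blockstore capacity, per the request above.
    import requests
    from requests.auth import HTTPDigestAuth

    BASE = "https://opsmanager.example.com/api/public/v1.0"  # illustrative
    auth = HTTPDigestAuth("user@example.com", "api-key")
    project = "PROJECT-ID"

    clusters = requests.get(f"{BASE}/groups/{project}/clusters", auth=auth).json()
    for c in clusters["results"]:
        snaps = requests.get(
            f"{BASE}/groups/{project}/clusters/{c['id']}/snapshots", auth=auth
        ).json()["results"]
        latest = snaps[0]["created"]["date"] if snaps else "no snapshots"
        print(f"{c['clusterName']}: latest snapshot {latest}")
    ```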

    3 votes · 0 comments · Backup
  17. Add support for collections with a default collation

    At present, if a collection has a default collation configured, sharding that namespace via Ops Manager fails with the following symptom:

    ```
    <myCluster_mongos_131> [13:21:17.050] Plan execution failed on step ShardCollections as part of move ShardCollections : <myCluster_mongos_131> [13:21:17.050] Failed to apply action. Result = <nil> : <myCluster_mongos_131> [13:21:17.050] Error calling shardCollection on sh.myColl with key = [[a 1]] : <myCluster_mongos_131> [13:21:14.994] Error executing WithClientFor() for cp=mubuntu:27017 (local=false) connectMode=AutoConnect : <myCluster_mongos_131> [13:21:14.993] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{shardCollection sh.myColl} {key [{a 1}]} {unique false}]) : result={} identityUsed=mms-automation@admin[[MONGODB-CR/SCRAM-SHA-1]][24] : (BadValue) Collection has default collation: collation: { locale: "fr", caseLevel:
    ```
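
    The underlying server rule is that shardCollection on a collection with a default collation must explicitly pass the simple collation. A pymongo sketch of the call the automation would need to issue (names taken from the log above):

    ```python
    # shardCollection with collation: { locale: "simple" }, required when the
    # collection has a default collation (here locale "fr").
    from pymongo import MongoClient

    client = MongoClient("mongodb://mubuntu:27017")
    client.admin.command(
        "shardCollection",
        "sh.myColl",
        key={"a": 1},
        collation={"locale": "simple"},
    )
    ```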

    2 votes · 0 comments · Ops Manager
  18. Add Ops Manager's Org ID/Name into all SNMP Alert Traps

    What is the problem that needs to be solved? Ops Manager's Org ID/Name is not included in any of the SNMP Alert Traps sent from Ops Manager's Application Server.

    Why is it a problem? (the pain) The operator watching the Monitoring System (the one that receives SNMP Alert Traps from Ops Manager) needs to see Ops Manager's Organization ID/Name in order to quickly understand what that alert relates to. The Monitoring System has to do additional work for each SNMP Alert Trap received (via GET /groups/{PROJECT-ID} and GET /orgs/{ORG-ID}).

    2 votes · 0 comments · Ops Manager
  19. Allow configuring a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps

    What is the problem that needs to be solved? Allow configuring a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps.

    Why is it a problem? (the pain) As of now (2020-03-24) there is no way to configure a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps (snmp.community controls both the Heartbeat and the Alert Traps sent from Ops Manager's Application Server). Some SNMP monitoring teams require different SNMP v2C communities for different sets of SNMP v2C Traps, to separate Heartbeat Traps from Alert Traps.

    2 votes · 0 comments · Ops Manager
  20. Allow Point-In-Time restores going back a configured number of hours/days

    What is the problem that needs to be solved? Allow Point-In-Time restores going back a configured number of hours/days (similar to Ops Manager).

    Why is it a problem? (the pain) Oplogs are captured for the last 24 hours only, and sometimes the requirement is to be able to execute Point-In-Time restores further back than 24 hours (48 hours, etc., as defined by the customer's project or goals).

    2 votes · 0 comments · Cloud Manager