Ops Tools

  1. Add the ability to trigger alerts for testing purposes

    It would be useful to have a "Test Alert" button for each configured alert in order to integrate and test alerts with third-party systems. Otherwise, it is difficult, if not impossible, to determine what the alert will look like until it is actually triggered.
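
    In the meantime, one rough way to exercise a third-party receiver is to post a hand-built sample payload to it directly; a minimal sketch, where the receiver URL and the JSON shape are placeholders rather than the actual Ops Manager alert format:

    curl -X POST "https://example.internal/alert-receiver" \
      -H "Content-Type: application/json" \
      -d '{"eventTypeName": "TEST", "status": "OPEN", "humanReadable": "Test alert from Ops Manager"}'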

    13 votes
    0 comments  ·  Ops Manager
  2. Prevent the MongoDB Automation Agent's stats-collection queries from triggering alerts

    We just had a support case about alerts being raised on our cluster because the MongoDB Automation Agent, while collecting stats on some collections, runs unindexed queries that push the "Scanned Objects / Returned" ratio over 1000.

    It would be really nice to at least not raise alerts when it is the MongoDB Automation Agent that triggered them. We monitor our alerts closely, and these seem to be false positives we can't do anything about, other than creating all the indexes the agent needs, which might change over time; we have no guarantee of which indexes it needs.

    Another alternative…

    9 votes
    1 comment  ·  Automation
  3. Sharded Cluster Snapshot Restores - Throttling and Src/Dst Mapping

    When restoring a sharded cluster snapshot, provide a means of mapping the source shard replica set names to the target shard replica set names. This will allow users to predictably restore large/small shards to the appropriate target hosts.

    Currently, this can be done for sharded clusters with fewer than 10 shards by naming the shards in a predictable shard## pattern. However, more than ~10 shards leads to an alphabetical => alphanumeric restore plan (e.g. shardA10 restores to shardB2).
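
    A minimal illustration of why lexicographic ordering breaks down past ten shards (shard10 sorts between shard1 and shard2; zero-padded names such as shard01, shard02 keep the order stable):

    printf 'shard%s\n' 1 2 10 | sort
    # shard1
    # shard10
    # shard2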

    Restoring large sharded clusters can also overwhelm networks where MongoDB Agents are downloading snapshots from Ops Manager(s) at the same time. Please…

    6 votes
    0 comments  ·  Backup
  4. Integration with Microsoft Teams

    Add third-party service integration for Microsoft Teams, as we do for Slack.
    Most likely the following can be leveraged to achieve the integration: https://docs.microsoft.com/en-us/graph/teams-proactive-messaging
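
    Until native support exists, a possible stopgap is posting alert text to a Teams incoming-webhook URL from your own tooling; a minimal sketch, where the webhook URL is a placeholder and the simple {"text": ...} payload is only illustrative:

    curl -X POST "https://<your-teams-incoming-webhook-url>" \
      -H "Content-Type: application/json" \
      -d '{"text": "Ops Manager alert: replication lag above threshold on rs0"}'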

    17 votes
    1 comment  ·  Ops Manager
  5. Add support for virtualization volume management in Ops Manager Backup snapshots and restores.

    Streaming snapshots during a restore from a blockstore out to the MongoDB Agent causes the RTO (recovery time objective) to grow linearly with the compressed data size in the largest shard/replica set of the snapshot. The RTO of an Ops Manager restore could be significantly improved by leveraging volume management infrastructure (such as from VMware) to restore previously acquired snapshots as virtual filesystems (volumes).

    14 votes
    0 comments  ·  Backup
  6. Add the ability for the backup daemon to download and validate snapshots

    In order to test snapshots automatically, create a new job type that allows the Backup Daemon to download the snapshot from the snapshot store, then run validate on each collection.

    If any collection fails validation, send an alert to the Backup Admin with the list of corrupted data.
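
    A rough sketch of what such a job could do, assuming the snapshot has already been downloaded to a local dbpath (the path and port below are placeholders): start a throwaway mongod on it and run validate on every collection.

    mongod --dbpath /restore/snapshot --port 27100 --fork --logpath /tmp/validate.log
    mongo --port 27100 --quiet --eval '
      db.getMongo().getDBNames().forEach(function(dbName) {
        var d = db.getSiblingDB(dbName);
        d.getCollectionNames().forEach(function(coll) {
          var res = d.getCollection(coll).validate(true);
          if (!res.valid) { print("CORRUPT: " + dbName + "." + coll); }
        });
      });
    '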

    7 votes
    0 comments  ·  Ops Manager
  7. Project Alert Integration With Rocket.Chat

    Integration with Rocket.Chat, like the one provided for Slack.

    5 votes
    1 comment  ·  Ops Manager
  8. Restore Backup Snapshots to Sharded Clusters via mongos

    Migrating large sharded clusters to a different cluster pre-sharded with different shard keys requires a significant amount of time to balance post-restore.

    This is a feature request to restore a snapshot on a per document basis through a mongos. The desired result is a completed restore with collections/documents residing on their target shards.

    4 votes
    0 comments  ·  Backup
  9. Add details to the "Query Targeting: Scanned / Returned" alert to identify what triggered it

    If you want to catch full collection scans, you can set up the alert "Query Targeting: Scanned / Returned > 1".
    Ops Manager does not provide any details about where the alert was triggered.
    It would be nice to get the database/collection/query that caused the issue. The query can still be fast (for example, there are not many documents in the collection yet, or the hardware is idle/fast at the moment) and so will not show up in the slow-operations group.
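
    As a manual workaround today, the offending namespaces and queries can sometimes be found with the database profiler; a rough sketch, assuming profiling is acceptable on the target database and using the 1000 ratio to mirror a typical alert threshold:

    mongo --quiet myDatabase --eval '
      // assumes db.setProfilingLevel(2) was enabled earlier and the workload has run
      db.system.profile.find(
        { nreturned: { $gt: 0 }, docsExamined: { $gt: 1000 } },
        { ns: 1, docsExamined: 1, nreturned: 1, command: 1 }
      ).forEach(function(p) {
        if (p.docsExamined / p.nreturned > 1000) { printjson(p); }
      });
    '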

    12 votes
    1 comment  ·  Ops Manager
  10. Deploy MongoDB across different Kubernetes clusters

    The MongoDB Operator can only deploy and manage MongoDB in a single Kubernetes cluster. However, it is important to be able to deploy a single database across multiple Kubernetes clusters to support DR and globally distributed apps.

    26 votes
    2 comments  ·  Kubernetes Operator

    We are going to start a POC in July to figure this out. Our goal is to form a single cluster that can fail over between different K8s clusters.

  11. Ability to make use of new S3 buckets without having to terminate backups

    We would like to see the ability to "migrate" from an existing S3 storage to new S3 storage. Currently, when we create new S3 buckets for storing snapshots, we can only make use of them once we terminate the existing backup stored in the "old" S3 bucket. This means we delete all existing backups before we can use the new S3 snapshot storage, which is considered very high risk. We must keep backups for up to a few months and cannot delete them just to move to a new S3 backup storage. It would be very good to be able to…

    4 votes
    0 comments  ·  Backup
  12. Add Alert for Projects which are not in Goal State

    Add an Alert type that is triggered if a project has not reached the goal state for a certain amount of time.
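
    A rough external check that could feed such an alert today, assuming the Ops Manager public API's automationStatus endpoint and jq are available (host, group ID, and credentials are placeholders):

    curl --silent --digest -u "{PUBLIC_KEY}:{PRIVATE_KEY}" \
      "https://ops-manager.example.com/api/public/v1.0/groups/{GROUP_ID}/automationStatus" |
      jq -r '.goalVersion as $goal
             | .processes[]
             | select(.lastGoalVersionAchieved != $goal)
             | .name + " has not reached goal version " + ($goal | tostring)'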

    4 votes
    0 comments  ·  Ops Manager
  13. Improve backup process with automatic repair of broken jobs

    Enhance Ops Manager's detection of broken backup processes, validating their sanity, so that Ops Manager can "un-break" the processes and resume regular backups without any manual intervention.

    3 votes
    0 comments  ·  Ops Manager
  14. Support migrations to different snapshot stores

    Currently it is not possible to transition between snapshot store types.

    There are currently two options when transitioning to a new one:

    1. Terminate backups (deleting all previous snapshots)

    2. Create a new project and abandon the previous one to allow automated restores at a later time

    Both of these options are difficult to manage for large deployments. The first option requires you to store the snapshots elsewhere and disallows automated restores. The second option requires many operations and clutters the Ops Manager project list.

    Ideally we should be able to transition from any store location/type to any other location/type. One of…

    7 votes
    0 comments  ·  Ops Manager
  15. Add 4.4 tools to PATH

    Since MongoDB 4.4, the tools are located in a different folder that is not included in the automation agent bash profile:

    [root@n1 mongodb-mms-automation]# cat /etc/profile.d/mongodb-mms-automation-agent.sh
    export PATH=/var/lib/mongodb-mms-automation/bin:${PATH}

    It would be nice to also have the MongoDB tools in the PATH.
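
    One possible workaround is to append the database-tools directory to the same profile script; the path below is an assumption and depends on where the 4.4+ tools package is installed on your hosts:

    echo 'export PATH=/opt/mongodb-database-tools/bin:${PATH}' \
      >> /etc/profile.d/mongodb-mms-automation-agent.sh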

    4 votes
    0 comments  ·  Ops Manager
  16. Ability to configure all destinations for SNMPv2c Alert Traps in a single place

    What is the problem that needs to be solved? Ops Manager needs the ability to configure all destinations for SNMPv2c Alert Traps in a single place (so that a single place needs to be updated instead of dozens of individual Ops Manager alerts).

    Why is it a problem? (the pain) If there is a change in SNMPv2c Alert Trap destination(s), it takes effort to change the respective hosts for each of the alerts. This process takes time (unless the customer scripts it via Ops Manager's API) when the number of configured Ops Manager alerts is high, and the process itself…

    2 votes
    0 comments  ·  Monitoring
  17. Replica Set Election - Even Number of Nodes

    We have a 4-node replica set, and when we removed 2 of the nodes at the same time the election process was unable to elect a new primary. We would like the election process to accommodate an even number of replica set nodes.

    2 votes
    0 comments  ·  Ops Manager
  18. Headless Ops Manager deployment

    Currently, deploying Ops Manager via the CRD still requires configuration through the GUI, which is a manual step. An option to completely define all Ops Manager settings and the organization declaratively via YAML would be great for building fully automated CI/CD pipelines.
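
    The deployment half is already declarative via the MongoDBOpsManager custom resource; a minimal sketch is shown below (field names may differ between operator versions, and the admin-user secret plus any org setup would also need declarative equivalents to be fully headless):

    kubectl apply -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: ops-manager
    spec:
      replicas: 1
      version: "4.4.0"
      adminCredentials: ops-manager-admin-secret   # Secret holding the first admin user
      applicationDatabase:
        members: 3
    EOF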

    7 votes
    3 comments  ·  Kubernetes Operator
  19. Create AppDB user with backup role to allow execution of mongodump

    For the purpose of regularly performing backups of the AppDB using mongodump --oplog.
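
    A minimal sketch of what that could look like, assuming direct access to the AppDB replica set (the host, user name, and password below are placeholders):

    mongo "mongodb://appdb-0.appdb-svc:27017/admin" --eval '
      db.createUser({ user: "appdb-backup", pwd: "changeMe", roles: ["backup"] })
    '
    mongodump --host appdb-0.appdb-svc --port 27017 \
      --username appdb-backup --password changeMe \
      --authenticationDatabase admin --oplog --out /backups/appdb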

    3 votes
    under review  ·  0 comments  ·  Kubernetes Operator
  20. Automation should handle multiple hostname aliases for each server

    In order to separate replication, client and administrative traffic, servers may have multiple network interfaces using different IP and hostname aliases associated with them.

    According to the requirements described at https://docs.opsmanager.mongodb.com/current/tutorial/provisioning-prep/#server-networking-access, Automation can currently use only the server hostname reported by hostname -f and cannot use any of the other aliases that map to the machine's other IP addresses.

    Please add some way to customize which host alias Automation should use as a configuration parameter for the Agent.

    4 votes
    0 comments  ·  Automation