Ops Tools

  1. Send an alert over all/selected alert channels if all/selected alerts can't be delivered

    What is the problem that needs to be solved? Send an alert over all/selected alert channels if all/selected alerts can't be delivered by Ops Manager's Application Server.

    Why is it a problem? (the pain) We can miss an alert (or multiple alerts) from Ops Manager's Application Server if a configured alert channel becomes unavailable, since all Ops Manager alerts are sent fire-and-forget, with no check or mechanism to confirm that an alert was delivered (some alert types, e.g. SNMP Alert Traps, cannot offer that guarantee at all).

    A flow diagram (Ops Manager - Alerting Framework.png) is attached to this Feature Request…

    4 votes  ·  1 comment  ·  Ops Manager
  2. Allow configuring options for automation agent logs

    Currently there is no way in the Kubernetes Operator to configure how long automation/backup/monitoring agent logs should be kept; they can easily occupy all the space in a pod.

    2 votes  ·  planned  ·  0 comments  ·  Kubernetes Operator
  3. Add support for sharding collections with a default collation

    At present, if a collection has a default collation configured, sharding that namespace via Ops Manager fails with the following symptom:

    <myCluster_mongos_131> [13:21:17.050] Plan execution failed on step ShardCollections as part of move ShardCollections :
    <myCluster_mongos_131> [13:21:17.050] Failed to apply action. Result = <nil> :
    <myCluster_mongos_131> [13:21:17.050] Error calling shardCollection on sh.myColl with key = [[a 1]] :
    <myCluster_mongos_131> [13:21:14.994] Error executing WithClientFor() for cp=mubuntu:27017 (local=false) connectMode=AutoConnect :
    <myCluster_mongos_131> [13:21:14.993] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{shardCollection sh.myColl} {key [{a 1}]} {unique false}]) : result={} identityUsed=mms-automation@admin[[MONGODB-CR/SCRAM-SHA-1]][24] :
    (BadValue) Collection has default collation: collation: { locale: "fr", caseLevel:
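
    For context, the server can shard such a collection when the simple collation is supplied explicitly: per the shardCollection documentation, the command must include collation: { locale: "simple" } when the collection has a non-simple default collation. Below is a minimal pymongo sketch of that manual workaround (hostname and namespace are taken from the error above and are otherwise illustrative; it assumes sharding is already enabled for the database). The request is for Ops Manager's automation to handle this itself.

        # Sketch of the documented manual workaround; not what Ops Manager
        # automation does today. Hostname and namespace are illustrative.
        from pymongo import MongoClient

        client = MongoClient("mongodb://mubuntu:27017")  # connect to a mongos

        # The shard key needs a supporting index built with the simple collation.
        client["sh"]["myColl"].create_index([("a", 1)],
                                            collation={"locale": "simple"})

        # shardCollection succeeds once the simple collation is passed explicitly.
        client.admin.command(
            "shardCollection",
            "sh.myColl",
            key={"a": 1},
            collation={"locale": "simple"},
        )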

    4 votes  ·  0 comments  ·  Ops Manager
  4. Add Ops Manager's Org ID/Org Name/Project Name to Project/Global Alerts API calls & Alert Webhooks

    What is the problem that needs to be solved? Ops Manager's Org ID/Org Name/Project Name attributes need to be added to the Project (GET /groups/{PROJECT-ID}/alerts) and Global (GET /globalAlerts) Alerts API calls and to Alert Webhooks.

    Why is it a problem? (the pain) Ops Manager's Org ID/Org Name/Project Name attributes are currently missing from the Project (GET /groups/{PROJECT-ID}/alerts) and Global (GET /globalAlerts) Alerts API calls and from Alert Webhooks. An operator watching the monitoring system (the one that receives Ops Manager alerts) needs to see Ops Manager's Organization ID/Organization Name/Project Name in order to quickly understand to…
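
    For illustration, here is a minimal sketch of the extra lookups an integration has to make today to recover those names. The alerts, /groups/{PROJECT-ID}, and /orgs/{ORG-ID} endpoints are from the Ops Manager public API; the base URL, credentials, and project ID are placeholders.

        # Enrich alerts with org/project names, since the alert payload only
        # carries groupId. Placeholders: BASE, the API key pair, project_id.
        import requests
        from requests.auth import HTTPDigestAuth

        BASE = "https://opsmanager.example.com/api/public/v1.0"  # placeholder
        AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")     # placeholder
        project_id = "<PROJECT-ID>"                              # placeholder

        alerts = requests.get(f"{BASE}/groups/{project_id}/alerts", auth=AUTH).json()

        # Two extra round trips, just to recover names the alert payload
        # could have carried itself:
        project = requests.get(f"{BASE}/groups/{project_id}", auth=AUTH).json()
        org = requests.get(f"{BASE}/orgs/{project['orgId']}", auth=AUTH).json()

        for alert in alerts["results"]:
            print(alert["id"], org["id"], org["name"], project["name"])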

    4 votes  ·  1 comment  ·  Ops Manager
  5. Add Ops Manager's Org ID/Name to all SNMP Alert Traps

    What is the problem that needs to be solved? Ops Manager's Org ID/Name is not included in any of the SNMP Alert Traps sent from Ops Manager's Application Server.

    Why is it a problem? (the pain) An operator watching the monitoring system (the one that receives SNMP Alert Traps from Ops Manager) needs to see Ops Manager's Organization ID/Name in order to quickly understand what the alert relates to. The monitoring system needs to do additional work for each SNMP Alert Trap received (via GET /groups/{PROJECT-ID} and GET /orgs/{ORG-ID}

    8 votes  ·  0 comments  ·  Ops Manager
  6. Allow configuring a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps

    What is the problem that needs to be solved? Allow configuring a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps.

    Why is it a problem? (the pain) As of now (2020-03-24) there is no way to configure a separate SNMP v2C community for SNMP v2C Heartbeat Traps and SNMP v2C Alert Traps (snmp.community controls both the Heartbeat and the Alert Traps sent from Ops Manager's Application Server). Some SNMP monitoring teams require different SNMP v2C communities for different sets of SNMP v2C Traps (to separate Heartbeat Traps from Alert Traps).

    2 votes  ·  0 comments  ·  Ops Manager
  7. Allow Point-in-Time restores going back a configured number of hours/days

    What is the problem that needs to be solved? Allow Point-in-Time restores going back a configured number of hours/days (similar to Ops Manager).

    Why is it a problem? (the pain) Oplogs are captured for the last 24 hours only, and sometimes the requirement is to be able to execute Point-in-Time restores further back than 24 hours (48 hours, etc., to be defined by the customer's project/goals).

    2 votes  ·  0 comments  ·  Backup
  8. Snapshots taken by Ops Manager Backups should include a config file

    When restoring a snapshot through Ops Manager, automation will create the config file for you. But if you're restoring a snapshot manually, there's no config file! Surely we can include a sample mongod.conf with sufficient information filled out to help a user get up and running again.

    5 votes  ·  0 comments  ·  Ops Manager
  9. Collect hardware metrics even if there's no managed mongo process

    Have the Automation Agent collect hardware metrics on unmanaged mongo hosts.

    The Automation Agent doesn't collect hardware metrics unless there's a managed mongo process. This means we can't provide centralized system monitoring for a heterogeneous environment, where some clusters are running on their own and others are under automation, or for any non-managed host.

    4 votes  ·  0 comments  ·  Monitoring
  10. MMS Alert for Balancer Down status

    Please provide an option in Ops Manager to monitor the balancer status and add it to alerts, so we will know if the balancer is not running. A minimal external check is sketched after the notes below.

    Note:
    Normally the balancer would be disabled during backups and during a scheduled balancing window.
    I believe the balancer has a duty cycle of either 10 seconds when there has been nothing to balance recently, or 1 second when there is a lot of balancing to do.
    Any alert would need to account for these:
    • the changelog shows chunk moves commanded
    • the actionlog shows balancer state change history
    • the settings collection has the balancer state
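
    As a stopgap, a minimal sketch of such an external check (the connection string is a placeholder; balancerStatus is a documented admin command on mongos, and config.settings holds the persisted balancer state mentioned above):

        # Poll the balancer state from a mongos and flag when it is off.
        from pymongo import MongoClient

        mongos = MongoClient("mongodb://mongos.example.com:27017")  # placeholder

        status = mongos.admin.command("balancerStatus")
        if status["mode"] == "off":
            # Consult config.settings to distinguish a deliberate stop
            # (backup, balancing window) from an unexpected one before alerting.
            settings = mongos.config.settings.find_one({"_id": "balancer"}) or {}
            print("balancer is off; persisted settings:", settings)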

    7 votes  ·  1 comment  ·  Monitoring
  11. Allow honorSystemUmask to be set on Ops Manager HEADDBs

    If honorSystemUmask is set to false, new files created by MongoDB have permissions set to 600, which gives read and write permissions only to the owner. New directories have permissions set to 700.

    As a result, it is difficult to read the HEADDB logs in some environments. Making honorSystemUmask configurable would allow customers to choose permissions based on their own security policies.

    2 votes  ·  0 comments  ·  Backup
  12. Allow configuring `maxTimeMS` for commands executed from Ops Manager's Data Explorer

    What is the problem that needs to be solved? Allow configuring maxTimeMS for MongoDB commands executed from Ops Manager's Data Explorer.

    Why is it a problem? (the pain) Ops Manager's Data Explorer cannot work with views (https://docs.mongodb.com/manual/core/views/index.html) if the view takes >15000 ms to load, and cannot work with find (https://docs.mongodb.com/manual/reference/command/find/index.html) operations if the find takes >15000 ms to complete.
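
    For context, maxTimeMS is a per-operation server option that drivers already expose; the request is for the equivalent value used by Data Explorer's internally issued commands to be configurable rather than fixed at 15000 ms. A minimal pymongo illustration (connection string and namespace are placeholders):

        # Let a find against a slow view run for up to 60 s before the
        # server aborts it, instead of a fixed 15 s budget.
        from pymongo import MongoClient

        coll = MongoClient("mongodb://host.example.com:27017")["mydb"]["myView"]
        docs = list(coll.find({}).max_time_ms(60000))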

    5 votes  ·  0 comments  ·  Monitoring
  13. List shards in Deployment > Metrics' shard list in alphabetical order

    List shards in Deployment > Metrics' shard list in alphabetical order in Cloud Manager UI.

    2 votes  ·  0 comments  ·  Monitoring
  14. Add an Ops Manager check to prevent it from backing up its own backing databases (AppDB, Oplog, Blockstore)

    Otherwise, an Out Of Memory condition may result and disable Ops Manager.

    4 votes  ·  1 comment  ·  Backup
  15. Add Global MongoDB Agent Upgrade ability

    Add the ability to upgrade all MongoDB Agents across all Projects at the same time instead of clicking on the banner for each Project.

    4 votes  ·  0 comments  ·  Ops Manager
  16. Allow Ops Manager users to move/migrate backup job snapshots from one S3 bucket to a different S3 bucket

    Ops Manager users with S3 blockstores may need to move snapshots and backup jobs to a new S3 bucket. For MongoDB blockstores, this is accomplished using a groom.

    Move Blocks to a Different Blockstore
    https://docs.opsmanager.mongodb.com/current/core/administration-interface/#groom-priority-page

    This feature request is to provide the same grooming capability for S3 blockstores, moving backup snapshots/jobs to a new bucket.

    4 votes  ·  0 comments  ·  Backup
  17. Add `serverStatus.uptime` counter info to Metrics

    What is the problem that needs to be solved? We already collect the serverStatus.uptime counter from each and every MongoDB Server process, so we just need to add it to Metrics so that serverStatus.uptime changes can be tracked over time.

    Why is it a problem? (the pain) If you'd like to calculate MongoDB Server process availability, i.e. for how long your MongoDB Server process(es) was/were up and running, you'll need to analyze MongoDB Server process logs (if they are even available for the required period of time) to see the last time MongoDB…
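
    For context, the counter in question is already returned by the serverStatus command; the request is only to surface it as a chartable metric. Reading it directly looks like this (connection string is a placeholder):

        # serverStatus.uptime is the process uptime in seconds.
        from pymongo import MongoClient

        client = MongoClient("mongodb://host.example.com:27017")  # placeholder
        uptime = client.admin.command("serverStatus")["uptime"]
        print(f"process up for {uptime / 3600:.1f} hours")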

    5 votes  ·  0 comments  ·  Monitoring
  18. Add Group Host Mapping IPs by default to Programmatic API Key IP Whitelisting

    As part of the Programmatic API Keys introduced in MongoDB Ops Manager 4.2, it would be good to have the IPs listed in the Host Mappings of a Project's Deployment added as default whitelisted IPs when setting up a Programmatic API Key for the same Project.

    2 votes  ·  0 comments  ·  Ops Manager
  19. Single Project Programmatic API Keys should not require Org User Admin role for IP Whitelists update

    When setting up a Programmatic API Key in MongoDB Ops Manager for a given Project, it seems Project Owners are unable, after creating the Key, to update its IP Whitelist, as they require the Organization User Admin role to perform that action (screenshot attached).

    I guess this makes sense if the same Programmatic API Key is shared between multiple Projects inside the same Organization, but not really if the Key applies to one single Project (i.e. Project Owners should be able to amend the IP Whitelisting of their own API Keys).

    I wonder if this could be enhanced in further…

    2 votes  ·  1 comment  ·  Ops Manager
  20. SNMP traps for `AUTOMATION_AGENT_DOWN`, `MONITORING_AGENT_DOWN`, `BACKUP_AGENT_DOWN` alert types do not contain hostname information

    What is the problem that needs to be solved? SNMP traps for the AUTOMATION_AGENT_DOWN, MONITORING_AGENT_DOWN, and BACKUP_AGENT_DOWN alert types do not contain hostname information in the .1.3.6.1.4.1.41138.1.1.1.4 (.iso.org.dod.internet.private.enterprises.mms.server.serverMIBObjects.mmsAlertObject.mmsAlertHostAndPort) OID.

    Why is it a problem? (the pain) The user is blocked from acting quickly on the alert and identifying the host where Ops Manager's Automation/Monitoring/Backup Agent is in the DOWN state. The missing <HOSTNAME>:<PORT> information at the .1.3.6.1.4.1.41138.1.1.1.4 SNMP OID does not allow the user to map AUTOMATION_AGENT_DOWN, MONITORING_AGENT_DOWN, or BACKUP_AGENT_DOWN alerts to a particular hostname; the lookup this forces is sketched below.
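
    A sketch of that forced workaround: after an agent-down trap arrives without a hostname, cross-reference the project's agents through the public API's agents-by-type resource to find candidate hosts. Base URL, credentials, and project ID are placeholders.

        # List the project's automation agents to recover candidate hostnames
        # that the trap itself should have carried.
        import requests
        from requests.auth import HTTPDigestAuth

        BASE = "https://opsmanager.example.com/api/public/v1.0"  # placeholder
        AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")     # placeholder
        project_id = "<PROJECT-ID>"                              # placeholder

        agents = requests.get(
            f"{BASE}/groups/{project_id}/agents/AUTOMATION", auth=AUTH
        ).json()
        for agent in agents["results"]:
            print(agent["hostname"])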

    11 votes  ·  1 comment  ·  Ops Manager