Ops Tools
-
mongomirror compatibility with SRV strings
It would be ideal if the mongomirror utility could accept SRV-based connection strings for the source and destination clusters. The inability to do this can cause pain for customers. For example, this does not work:
--destination "mongodb+srv://mgo-aura-dgs-prdsrv-tk-pl-0.yrmiy.mongodb.net"
So users are stuck doing this:
--destination "atlas-nkaylx-shard-0/pl-0-ap-northeast-1.yrmiy.mongodb.net:1036,pl-0-ap-northeast-1.yrmiy.mongodb.net:1037,pl-0-ap-northeast-1.yrmiy.mongodb.net:1038"
Two issues with the seed-list form:
1. It's painful to look up the hosts/ports and the Atlas-defined replica set name.
2. The destination info is unreadable by most humans. The service name (i.e. the SRV-record connection string) includes the cluster name that the customer defined, not the random hash that Atlas generates.
2 votes -
Configure MongoDB Automation Agent collecting stats on some collection to not trigger alerts
We just had a support case about alerts being raised on our cluster because the MongoDB Automation Agent, while collecting stats on some collections, runs queries without an index, which pushed the "Scanned Objects / Returned" ratio over 1000 and triggered the alert.
It would be really nice to at least not raise alerts when it's the MongoDB Automation Agent that triggered them. We monitor our alerts closely, and these are false positives we can't do anything about, it seems, other than creating all the indexes the agent needs, which might change over time. We have no guarantee of which indexes it needs.
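For context, the ratio in question corresponds to documents examined versus documents returned, which can be checked per query with explain(). A minimal mongosh sketch (the collection and filter are hypothetical):
// An unindexed query examines far more documents than it returns:
const stats = db.someCollection.find({ status: "open" }).explain("executionStats").executionStats
printjson({ examined: stats.totalDocsExamined, returned: stats.nReturned })
// The Ops Manager alert tracks this same ratio across the deployment and fires above the configured threshold.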
Another alternative…
9 votes -
index review before sharding with Ops Manager
Currently, when you use Ops Manager to shard a collection, it automatically creates a foreground index that exactly matches the shard key when no such index exists. Even when the shard key field(s) already form the prefix of an existing index, that existing index is not considered sufficient.
This is dangerous in Live environments because the whole database is blocked for a long time (sharded collections are usually big collections).
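As a stopgap today, the required index can be built manually at a quiet time before sharding, so that Ops Manager finds it already in place. A minimal mongosh sketch (database, collection and key names are hypothetical):
// Build the shard-key index yourself, when it suits you:
db.getSiblingDB("shop").orders.createIndex({ customerId: 1 })
// Then shard; an index on the shard key already exists, so none is created:
sh.enableSharding("shop")
sh.shardCollection("shop.orders", { customerId: 1 })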
So, several features could exist:
- before continuing with sharding, Ops Manager warns that it needs to create this foreground index first. You can stop if you don't agree (and create this index yourself first).
- …
5 votes -
avoid generating an alert with an error message if one oplog node in the replica set gets rebooted
Currently, if one of the nodes in the appdb/oplogdb goes down for any reason (for example, a Linux patch rebooting the node), Ops Manager generates the alert:
"Ops Manager was unable to connect to this database and run the ping command. The database could be down, unreachable, or running with authentication and Ops Manager does not have adequate permissions."
There are still 2 other running nodes in the replica set, so this alert is misleading and generates false alarms.
2 votes -
Add compound indexes support for Ops Manager managed Sharding
What is the problem that needs to be solved? Ops Manager Automation does not take into account compound indexes (https://docs.mongodb.com/manual/core/index-compound/), e.g. if a { a: 1, b: 1 } index already exists, Ops Manager will still create an { a: 1 } index for a Shard Key on field a.
Why is it a problem? (the pain) This creates unnecessary indexes with a performance impact on the MongoDB Server process.
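A minimal mongosh illustration of the prefix case described above (the namespace is hypothetical):
// An existing compound index whose prefix matches the intended shard key:
db.getSiblingDB("test").events.createIndex({ a: 1, b: 1 })
// The server accepts this index for sharding because { a: 1 } is a prefix of
// { a: 1, b: 1 }, which makes an extra { a: 1 } index created by Automation redundant:
sh.shardCollection("test.events", { a: 1 })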
2 votes -
S3 Snapshot Store Speed Test
It is often quite difficult to diagnose latency, bandwidth, or generally slow S3 storage. It would be useful if Ops Manager could run a short test to show:
- How fast a single large object can be PUT and GET
- How well parallel PUTs and GETs against test objects perform
- How much latency there is between Ops Manager and S3
1 vote -
Allow creating custom roles
Allow creating custom roles for Atlas/CM/OM.
5 votes -
Option to clear deleted alerts
Deleted alert definitions pile up in the "deleted alerts" tab of Ops Manager.
This information may be useful for auditing purposes, but in the long run, the number of deleted alerts may grow too large, especially in our use case, where alert configurations are deployed through a script that deletes and recreates all alerts.
Feature suggestion: add an action to clear all deleted alerts (or better: clear all deleted alerts older than N days).
2 votes -
Support kubernetes taints and tolerations
I believe Kubernetes taints and tolerations are not supported by the operator, yet I find them a much-needed capability.
1 vote -
mongorestore from metadata
Hi,
When mongorestore starts, data is restored first, so performance is poor and recovery takes a long time.
Would you please change it so that the restore proceeds from the metadata (indexes) first and can complete quickly.
As-is: data > metadata (indexes)
Regards,
Park
1 vote -
connection-pool monitoring
Hi,
Would you please support connection-pool monitoring.
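For reference, the server already exposes connection counters that such monitoring could build on. A minimal mongosh sketch (exact fields vary by server version):
// Server-side connection counters from serverStatus:
printjson(db.serverStatus().connections)
// e.g. { current: ..., available: ..., totalCreated: ..., active: ... }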
Regards,
Park
1 vote -
Add Timezone support to Ops Manager Application logs
All of our hosts are in the same TZ. We would like to be able to set the timestamps in the Ops Manager related logs to our local TZ.
12 votes -
Allow informational alerts to be sent to PagerDuty
Right now informational alerts can't go to PagerDuty, OpsGenie, or VictorOps. Ideally all alert notifications should go to the same endpoint, even if those alerts are "informational" and no acknowledgement is required.
2 votes -
Headless Ops Manager deployment
Currently, Ops Manager CRD deployment requires configuration through the GUI, which is a manual step. An option to completely define all Ops Manager settings / Org in a declarative manner via YAML would be great for building completely automated CI/CD pipelines.
20 votes -
Never delete the most recent backup snapshot
When backup snapshots cannot be completed as scheduled, and they are behind, Ops Manager really should NEVER delete the most recent snapshot. For example, if the snapshot retention policy is to take a backup every 24 hours and keep daily backups for 2 days, and weekly backups for 2 weeks, a new snapshot should not automatically expire after 2 days. If backups are not completing for some reason, and it goes 2 days without a successful new snapshot, the most recent daily backups are deleted and we are left with the most recent weekly backup instead. There really should never…
3 votes -
MongoDB Operator Deployment Env Variables Push Down
This is a feature request to have custom environment variables, configured in the MongoDB Operator's Deployment manifest, push down or propagate to all resources created by the Operator.
For example, it may be desired to add environment variables with context. A more specific example could include setting a TZ timezone environment variable that is automatically added to all pod containers created by the Operator.
2 votes -
arm64 support for Cloud Manager Agent
We would like to install the Cloud Manager Agent on arm64-based Linux machines.
8 votes -
Drag and drop
This is strictly cosmetic, but in Ops Mgr Deployment->Processes->deployment, it would be nice to be able to drag-and-drop the order of the servers. I believe it shows them in the order they were added to the replica set, the same as rs.status() or rs.config() would show, but for our deployments, we typically have 2 "main" systems, and then a third "DR" system. It would be great if I could always have our main systems as the first 2 systems, and then our "DR" system last, regardless of how they were added to the replica set. The order they were added…
1 vote -
Restore Backup Snapshots to Sharded Clusters via mongos
Migrating large sharded clusters to a different cluster pre-sharded with different shard keys requires a significant amount of time to balance post-restore.
This is a feature request to restore a snapshot on a per document basis through a mongos. The desired result is a completed restore with collections/documents residing on their target shards.
5 votes -
The Project Global Admin Agent logLevel settings are redundant and confusing
As explained here:
https://docs.opsmanager.mongodb.com/current/tutorial/manage-project-settings/index.html#admin-project-settings
The Ops Manager global admins can override the logLevel setting on a project for Automation and Monitoring Agent modules using the webUI.
This will cause the Agent to override the local config logLevel setting to what's set in the webUI.
Please note that this global admin project setting is redundant as the logLevel of the Agent modules can be also set by using the "Edit custom configuration" window in the project agent settings page as explained here:
https://docs.opsmanager.mongodb.com/current/reference/mongodb-agent-settings/#configuration-file-settings-locations
1 vote