Ops Tools
-
Attain granularity on the Ops Manager backups & restore
Backups: Under a project, a single cluster may host multiple databases that belong to different applications which are somehow interlinked in terms of their functionality. Different databases require different backup strategies and retention periods, and thus different snapshot schedules. However, the level of granularity of Ops Manager "Continuous Backup" doesn't match this requirement.
Restore: How could we restore a single collection, or a single database, without touching the other databases/collections within a cluster? The restore process we have today seems to restore the whole cluster rather than a single database/collection. Queryable backup snapshot seems to be…
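Until such granularity exists, the usual manual workaround is to pull just the namespace you need out of a downloaded snapshot; a hedged sketch (paths, ports, and the app1.orders namespace are placeholders):
# 1. Start a temporary mongod on the extracted snapshot files.
mongod --port 27099 --dbpath /tmp/snapshot-extract --fork --logpath /tmp/snap.log
# 2. Dump only the namespace you care about.
mongodump --port 27099 --db app1 --collection orders --out /tmp/partial-dump
# 3. Restore that single namespace into the live cluster.
mongorestore --host prod-cluster.example.com --nsInclude "app1.orders" /tmp/partial-dump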
6 votes -
adminCredentials secret should always be the source of truth for Ops Manager
The secret is only taken into account by Ops Manager initially, when Ops Manager is deployed. As soon as the password of this user is changed in Ops Manager, this secret is out of sync.
From the docs: "Use these credentials to log in to Ops Manager for the first time. Once Ops Manager is deployed, you should change the password or remove this secret."
https://docs.mongodb.com/kubernetes-operator/v1.4/tutorial/plan-om-resource/#prerequisites
Option 1: This secret should be kept in sync with the Ops Manager database. Preferably the sync should flow from the k8s secret to the Ops Manager database.
Option 2: Create a CRD "MongoDBOpsManagerUser" that handles User/Password management for OpsManager similar…
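For context, a sketch of how the admin credentials secret is created today; the key names (Username, Password, FirstName, LastName) follow the Enterprise Operator docs, while the namespace and values are examples:
kubectl -n mongodb create secret generic ops-manager-admin-secret \
  --from-literal=Username="admin@example.com" \
  --from-literal=Password="a-strong-password" \
  --from-literal=FirstName="Ops" \
  --from-literal=LastName="Admin"
# Today, editing this secret after deployment has no effect on the user stored
# in the Ops Manager application database - the sync this idea asks for.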
6 votes -
Snapshots taken by Ops Manager Backups should include config file
When restoring a snapshot through Ops Manager, automation will create the config file for you. But if you're restoring a snapshot manually, there's no config file! Surely we can include a sample mongod.conf with sufficient information filled out to help a user get up and running again.
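A hedged sketch of the kind of pre-filled mongod.conf that could ship with a snapshot; every value below is a placeholder assumption:
cat > /etc/mongod.conf <<'EOF'
storage:
  dbPath: /data/db                  # path the snapshot was extracted to
net:
  port: 27017
  bindIp: 127.0.0.1
replication:
  replSetName: rs0                  # must match the snapshot's replica set
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
EOF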
6 votes -
Allow configuring `maxTimeMS` for commands executed from Ops Manager's Data Explorer
What is the problem that needs to be solved? Allow configuring maxTimeMS for MongoDB commands which are executed from Ops Manager's Data Explorer.
Why is it a problem? (the pain) A) Ops Manager's Data Explorer cannot work with views if the view takes more than 15,000 ms to load. Data Explorer cannot work with find operations if the find operation takes more than 15,000 ms to complete.
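For reference, the equivalent of what Data Explorer would need to expose, shown here via mongosh (connection string and the slow_view name are examples):
mongosh "mongodb://host:27017/app1" --eval '
  // allow the view read to run for up to 60s instead of the fixed 15s
  db.slow_view.find().maxTimeMS(60000).toArray()
'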
6 votes -
Add Global MongoDB Agent Upgrade ability
Add the ability to upgrade all MongoDB Agents across all Projects at the same time instead of clicking on the banner for each Project.
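Until that lands, one hedged way to script this against the public API; the updateAgentVersions endpoint name is an assumption drawn from the Ops Manager API docs (verify for your version), and jq plus an API key are required:
OM_HOST="https://opsmanager.example.com"
# Iterate over every project and ask Automation to move its agents to the latest version.
for project in $(curl -s -u "user:apiKey" --digest \
    "$OM_HOST/api/public/v1.0/groups" | jq -r '.results[].id'); do
  curl -s -u "user:apiKey" --digest -X POST \
    "$OM_HOST/api/public/v1.0/groups/$project/automationConfig/updateAgentVersions"
done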
6 votes
We are going to expose this via mongocli first, as that is probably the fastest solution.
-
Identify the snapshot of each project and its size in S3 blockstore
We need to calculate the storage size consumed by snapshots for each project's deployments individually in the S3 blockstore. However, in the S3 snapshot store the data is stored in the format below, which does not include any project ID to identify the specific project.
s3://bucket_name/0E3AA1971D5CF1CA52F9AF22A4228F10293AE9804D43FBF7EB5DDE38DB06B74A/5b27b0e4083826088f259f28_A
s3://bucket_name/1860B12165FB7ED336DDAB9D306EF38E18FCBD36BF695904C497B825F83581DC/5b27b0e4083826088f259f28_A
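In the meantime, per-prefix usage can be approximated directly from the bucket; a sketch (bucket name as in the example above, and keys are assumed not to contain spaces):
# Sum object sizes by the top-level prefix of each key.
aws s3 ls --recursive s3://bucket_name/ \
  | awk '{ split($4, p, "/"); size[p[1]] += $3 }
         END { for (prefix in size) printf "%s\t%.1f GiB\n", prefix, size[prefix]/2^30 }'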
This feature would help customers understand and segregate the storage consumption of each deployment.
5 votes -
Supporting the installation of mongosh in Ops Manager Local Mode
Ops Manager v6.0+ supports installing the new mongo shell (mongosh) to the deployment nodes. This feature is not supported in Ops Manager Local Mode.
It would be convenient for the Ops Manager Automation user to upload the mongosh binary to the Ops Manager Versions Directory and have Ops Manager install the binary on the deployment nodes, similar to the MongoDB binary and the MongoDB Database Tools.
Currently, customers using Local Mode need to manage the installation of mongosh outside of Ops Manager Automation.
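The out-of-band install mentioned above is typically a package-manager step on each deployment node; an RHEL-flavored example, assuming the MongoDB yum repo is already configured:
sudo yum install -y mongodb-mongosh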
6 votes -
Custom defined roles in Ops Manager
We need a custom defined role to perform specific functions in Ops Manager.
For example: we need a custom defined role which can perform a subset of functions from the Project Automation Admin role + Project Read Only role + rs.stepDown() functionality.
Project Automation Admin Role:
View deployments.
Provision machines.
Edit configuration files.
Download the MongoDB Agent.
+ Project Read Only role.
Project Read Only Role:
Activity
Operational data
Ops Manager Users
Ops Manager User roles.
This feature is very useful for containing access to certain privileges and for having the flexibility of tailor-made privileges instead of giving the…
5 votes -
Multi Region S3 BlockStore
On an S3 blockstore, we are currently limited to backups in a single region, because the S3 snapshot store does not support versioning and replication. For a highly available system, a backup also needs to exist in multiple regions.
It would be good if we could add the following:
1. Configure multiple backups in a project, because an S3 snapshot store can only be used in one region.
5 votes -
Load balancing functionality for Ops Manager
When Ops Manager is under heavy load from many active database services, e.g. while handling backup snapshots, the recommendation from the support portal is to put a load balancer in front of Ops Manager.
It would be great if Ops Manager had this as integrated functionality, forwarding connections from agents to additional Ops Manager instances.
Background: our nTSE confirmed that we could set a different Ops Manager URL in the agent config, but the data transferred from that agent is still sent to the initially configured Ops Manager instance.
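For reference, a hedged sketch of the external load balancer that support recommends today, as an HAProxy fragment to merge into haproxy.cfg (hostnames and port are examples):
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend om_front
    bind *:8080
    mode http
    default_backend om_back
backend om_back
    mode http
    balance roundrobin
    server om1 om1.example.com:8080 check
    server om2 om2.example.com:8080 check
EOF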
5 votes -
Reset duplicates button for Ops Manager Admin System overview page
Sometimes customers want to clean up the System Overview page and/or the application database because:
- They dismissed one or more Ops Manager servers running application server or Backup Daemon components
- Their hostnames / domains have changed over time
- Their hostnames have changed from lowercase to capital letters or vice versa, because upgrading Ops Manager brought a JVM that returns different values for getHostname() (this often happens in Windows environments).
It would be nice to have a button, or at least an API call similar to the project reset duplicates, that forces Ops Manager to refresh the…
5 votes -
Add ability to configure Pod Disruption Budget for STS
During maintenance work, EKS admins may need to evict nodes. This should not cause an outage for a MongoDB cluster/replica set running on those nodes. We can create a PDB for the STS manually, but it would be nice to have the option to do it as part of the MongoDB Kubernetes Operator.
5 votes
Supporting Pod Disruption Budget natively is something we hope to do at some point.
But for now it is still possible by creating the PodDisruptionBudget resource and targeting the deployment using labels (as per https://kubernetes.io/docs/tasks/run-application/configure-pdb/). A sketch follows.
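A minimal sketch of that interim approach; the namespace, resource name, and label selector are assumptions - match the selector to the labels on your StatefulSet's pods:
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-replica-set-pdb
  namespace: mongodb
spec:
  minAvailable: 2                 # keep a majority of a 3-member replica set up
  selector:
    matchLabels:
      app: my-replica-set-svc     # example label; check your pods' actual labels
EOF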
-
Index review before sharding with Ops Manager
Currently, when you use Ops Manager to shard a collection, it automatically creates a foreground index that exactly matches the shard key when no such index exists. Even when the shard key column(s) already prefix an existing index, Ops Manager does not consider that sufficient.
This is dangerous in live environments because the whole database is blocked for a long time (sharded collections are usually big collections). The manual mitigation today is to create the index yourself before sharding, as sketched below.
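A sketch of that manual mitigation (connection string, namespace, and key are examples):
mongosh "mongodb://mongos.example.com:27017" --eval '
  // build the shard-key index yourself before asking Ops Manager to shard;
  // since MongoDB 4.2 index builds no longer hold an exclusive lock for the
  // whole build
  db.getSiblingDB("app1").orders.createIndex({ customerId: 1 })
'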
So, several features could exist:
- Before continuing with sharding, Ops Manager warns that it needs to create this foreground index first. You can stop if you don't agree (and create this index by yourself first).
-…
5 votes -
Restore Backup Snapshots to Sharded Clusters via mongos
Migrating large sharded clusters to a different cluster pre-sharded with different shard keys requires a significant amount of time to balance post-restore.
This is a feature request to restore a snapshot on a per-document basis through a mongos. The desired result is a completed restore with collections/documents residing on their target shards.
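The closest manual equivalent today is a logical restore routed through a mongos, so documents land on their target shards; a sketch (host and dump path are placeholders, and this is far slower than the snapshot restore this idea asks for):
mongorestore --host mongos.example.com --port 27017 \
  --numInsertionWorkersPerCollection 8 /backups/dump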
5 votes -
Project Alert Integration With Rocket.Chat
Integration with Rocket.Chat, like the existing Slack integration.
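One hedged interim path is pointing an Ops Manager webhook alert at a Rocket.Chat incoming webhook, which accepts simple JSON payloads (the URL below is a placeholder):
curl -X POST -H 'Content-Type: application/json' \
  -d '{"text": "Ops Manager alert: replica set has no primary"}' \
  "https://chat.example.com/hooks/TOKEN"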
5 votes -
Add 4.4 tools to PATH
Since MongoDB 4.4 the database tools are located in a separate folder that is not included in the automation agent bash profile:
[root@n1 mongodb-mms-automation]# cat /etc/profile.d/mongodb-mms-automation-agent.sh
export PATH=/var/lib/mongodb-mms-automation/bin:${PATH}
It would be nice to also have the database tools in the path.
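A hedged sketch of a workaround addition to the same profile script; the glob is an assumption about where Automation unpacks the versioned tools on this host:
TOOLS_BIN=$(ls -d /var/lib/mongodb-mms-automation/mongodb-database-tools-*/bin 2>/dev/null | head -1)
[ -n "$TOOLS_BIN" ] && export PATH="$TOOLS_BIN:${PATH}"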
5 votes -
Support __exec setting for the MongoDB deployment in Automation
A deployment that uses the __exec setting cannot be imported into Automation, and Automation cannot deploy a deployment that has this setting.
5 votes -
Automation should handle multiple hostname aliases for each server
In order to separate replication, client and administrative traffic, servers may have multiple network interfaces using different IP and hostname aliases associated with them.
According to the requirements described at https://docs.opsmanager.mongodb.com/current/tutorial/provisioning-prep/#server-networking-access, Automation can currently use only the server hostname reported by
hostname -f
and cannot use any of the other aliases that map to the machine's other IP addresses.
Please add some way to customize which host alias Automation should use, as a configuration parameter for the Agent.
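An illustration of the situation described, on a typical Linux host (hostnames are examples):
hostname -f    # e.g. db1-admin.example.com (the only name Automation uses today)
hostname -A    # e.g. db1-admin.example.com db1-repl.example.com db1-client.example.com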
5 votes -
Binaries should be provided via Docker images
To implement this in a Kubernetes-native way for offline deployments, the binaries should be provided as Docker images and distributed through the Docker repositories that are already present in the k8s environment.
Option 1 (intermediate): Allow mounting Docker images in the MongoDBOpsManager resource at the path defined in automation.versions.directory. This would allow us to pre-package the needed binaries and bring up Ops Manager in a Kubernetes-native way. Everything else (e.g. the agent downloading the tgz) would stay the same and still be the "Ops Manager way".
Option 2 (long term): The MongoDB TGZ package is provided as a Docker image by…
5 votes -
Make the option of "security.transitionToAuth" available through Ops Manager Advanced Configuration Options
Currently the "security.transitionToAuth" option is not available in Ops Manager, as transitionToAuth is automatically added to each node in a rolling fashion by the Automation agent and then removed once authentication is finally turned on for all nodes.
Allowing this option through Ops Manager would let mongod accept and create both authenticated and non-authenticated connections to and from the connected clients. Clients could use this feature to avoid downtime on their end while connection settings are updated to use the appropriate user to connect to mongod.
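For reference, a sketch of what the setting looks like when applied by hand outside Automation; the keyfile path is a placeholder, and internal authentication must already be configured for transitionToAuth to be accepted:
cat >> /etc/mongod.conf <<'EOF'
security:
  transitionToAuth: true
  keyFile: /etc/mongodb-keyfile
EOF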
5 votes