Ops Tools
-
Allow the Kubernetes Operator to delete a project
Currently it is not possible to delete a project via a kubectl command.
As the Kubernetes Operator allows one to create a project (ConfigMap) and deploy a replica set, we would expect it to also allow deleting a project, so that we can fully automate the solution.
10 votes
While deletion of a deployment is possible via Kubernetes, deleting a MongoDB resource doesn't remove it from the Ops Manager UI. You must remove the resource from Ops Manager manually. To learn more, see Remove a Process from Monitoring.
Deleting a MongoDB resource for which you enabled backup doesn’t delete the resource’s snapshots. You must delete snapshots in Ops Manager.
Work is planned to remove Ops Manager as a prerequisite (though its use will still be optional and supported), and as part of that we hope to address this deletion aspect.
-
Support Service Binding Specification for Kubernetes
Service Binding Specification for Kubernetes standardizes exposing backing service secrets to applications. The spec is available here: https://github.com/servicebinding/spec
This blog post would be helpful: https://muthukadan.net/kubernetes/binding/support-service-binding-specification-for-kubernetes/
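For illustration, a binding under that spec could look roughly like the following; the names are made up, the API version may differ, and referencing a MongoDB resource as spec.service would only work once the Operator implements the spec's Provisioned Service contract, which is what this request is about:

apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-app-mongodb
spec:
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # the application that should receive the binding secret
  service:
    apiVersion: mongodbcommunity.mongodb.com/v1
    kind: MongoDBCommunity
    name: my-mongodb          # would need to expose its connection secret per the spec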
1 vote
Low customer demand. Potentially in the future if we hear sufficient demand.
-
Allow disabling Blockstore for assignment through the Ops Manager CRD
By default, when enabling backups and configuring a Blockstore for an Ops Manager custom object, the specified Blockstore will be set as "Assignment enabled" in the UI.
It would be helpful to expose the enable/disable setting for the blockstore through the CRD, since disabling it through the UI results in the parameter being reverted every time the operator reconciles. This is useful when more than a single store is configured and, as a user, you would like to disable the blockstore to make it unavailable for new backup jobs.
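A rough sketch of what this could look like on the MongoDBOpsManager resource; the assignmentEnabled field shown is hypothetical (it does not exist today), while the surrounding layout follows the existing spec.backup.blockStores structure:

spec:
  backup:
    enabled: true
    blockStores:
      - name: blockstore-1
        mongodbResourceRef:
          name: blockstore-db
        # hypothetical field: keep this store out of new backup job assignments
        assignmentEnabled: false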
3 votes -
Enable S3 Snapshot Storage via Kubernetes Operator with IAM role
Configuring S3 snapshot storage with IAM roles is only possible via the Ops Manager UI or API.
It would be great to be able to do this configuration via the MongoDB Kubernetes Operator.
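For illustration only, the request is roughly to support something like the following, where no access-key Secret is referenced because the Backup Daemon assumes an IAM role; the irsaEnabled-style flag shown is hypothetical:

spec:
  backup:
    enabled: true
    s3Stores:
      - name: s3store1
        mongodbResourceRef:
          name: s3-metadata-db
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: backup-bucket
        # hypothetical: authenticate to S3 via an IAM role instead of s3SecretRef
        irsaEnabled: true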
1 vote -
Add ability to configure Pod Disruption Budget for STS
During maintenance work, EKS admins may need to evict nodes. This should not cause an outage for the MongoDB cluster/replica set running on those nodes. We can create a PDB for the STS manually, but it would be nice to have an option to do it as part of the MongoDB Kubernetes Operator.
2 votes
Supporting Pod Disruption Budget natively is something we do hope to do at some point.
For now it is still possible by creating a PodDisruptionBudget resource and targeting the deployment's pods using labels (as per https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
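For example, a minimal PodDisruptionBudget along those lines; the label selector assumes the replica set's pods carry an app: my-replica-set-svc label, so adjust it to whatever labels your StatefulSet's pods actually have:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-replica-set-pdb
spec:
  maxUnavailable: 1             # never evict more than one member at a time
  selector:
    matchLabels:
      app: my-replica-set-svc   # label on the MongoDB pods (verify in your cluster)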
-
Sharding
The Community Operator should provide a sharding feature.
4 votes -
Provide support to update version manifest to Ops Manager that uses local mode
With Ops Manager Local Mode on Kubernetes, the version manifest must be updated manually via the UI or API.
It would be best practice to support updating the version manifest via a command to the Operator or the Ops Manager Pods.
1 vote -
Ops Manager and Backup infrastructure Disaster Recovery support with K8s Operator
We have carried out tests with MongoDB v1.5.5 K8s Operator and Ops Manager 4.2.18 with Backup infrastructure (S3 Snapshots) in an Openshift 3.11 environment (MongoDB Support case attached).
In this case, a "Disaster Recovery" simulation was carried out. However, several components created by the Operator had to be restored to reach a state similar to the one before the "disaster".
Furthermore, it is very likely that the S3 Snapshots will be lost if the process is not completed in a certain manner.
It would be great to have an official approach to deploy/restore an OM resource using MongoDB K8s…
4 votes
There is no currently supported mechanism for backing up Ops Manager in a way that guarantees the data. As Ops Manager is itself a backup tool, it's challenging to maintain the integrity of the data in DR scenarios.
For this reason we recommend multi-site high availability for OM and the AppDB. This is already possible when running OM on hardware or in VMs, but it is not currently supported in Kubernetes (unless a Kubernetes cluster spans sites).
Later this year (2023) we hope to support OM deployments across multiple Kubernetes clusters, as we already do (in beta) for Replica Sets (full release in April 2023, with Sharded cluster support in May/June 2023). Doing so will reduce the criticality of an OM/AppDB backup solution within Kubernetes.
-
Add ability to have systemLog redirected to stdout (just have to remove systemLog.destination and path)
To have MongoDB logs redirected to stdout, and thus picked up by GKE Cloud Logging, we should NOT configure a systemLog.destination nor a systemLog.path.
In the 0.6.0 release, systemLog.destination and systemLog.path are hardcoded and cannot be left unset.
See automationconfigbuilder.go at line 208:
...
// Destination and Path are hardcoded to a log file; they cannot be omitted to log to stdout.
process.SetSystemLog(SystemLog{
    Destination: "file",
    Path:        path.Join(DefaultAgentLogPath, "/mongodb.log"),
})
...
1 vote -
Support kubernetes taints and tolerations
I believe kubernetes taints and tolerations are not supported by the operator, yet I find it a much needed capability.
1 vote -
MongoDB Operator Deployment Env Variables Push Down
This is a feature request to have custom environment variables, configured in the MongoDB Operator's Deployment manifest, push down or propagate to all resources created by the Operator.
For example, it may be desired to add environment variables with context. A more specific example could include setting a TZ timezone environment variable that is automatically added to all pod containers created by the Operator.
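There is no such push-down today; as a per-resource workaround, environment variables can usually be set through the pod template override on each MongoDB resource, for example (this sketch assumes the Enterprise Operator's spec.podSpec.podTemplate override and its database container name, so verify both for your version):

spec:
  podSpec:
    podTemplate:
      spec:
        containers:
          - name: mongodb-enterprise-database   # assumed container name
            env:
              - name: TZ                        # example variable only
                value: "Europe/Berlin"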
2 votes -
Headless Ops Manager deployment
Currently, deploying Ops Manager via the CRD requires configuration through the GUI, which is a manual step. An option to completely define all Ops Manager settings / organizations in a declarative manner via YAML would be great for building completely automated CI/CD pipelines.
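Part of this is already possible through spec.configuration on the MongoDBOpsManager resource, which pre-populates conf-mms.properties-style settings declaratively; the property names below are only examples, and organizations/projects still cannot be declared this way:

spec:
  configuration:
    mms.fromEmailAddr: "ops-manager@example.com"
    mms.adminEmailAddr: "admin@example.com"
    automation.versions.source: local   # e.g. for Local Mode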
17 votes -
Allow pinning a specific MongoDB Agent version to be used
What is the problem that needs to be solved? In some rare situations, where an upgrade of Cloud Manager's MongoDB Agent to the latest version leads to a Golang panic (or any other critical issue), there is no way for a Cloud Manager user to roll back the MongoDB Agent version if the environment is run by the Kubernetes Operator. The script which launches the MongoDB Agent uses the latest version from the Cloud Manager project, without any option to change it other than editing the script itself, which is not possible in a Kubernetes pod.
Why is it a problem? (the pain) If after Cloud Manager's…
2 votes
We're currently planning work to avoid pulling mongod and the agent from Ops Manager. This is expected to give the ability to manually control the version of the agent in use.
-
Assignment labels in YAML for Snapshot storage
Currently, if you want to assign a snapshot store to a certain project, it is required to access the Admin view and configure the "Assignment Labels" property under Backup > Snapshot Storage with the name of the corresponding project.
AFAIK, it is not possible to assign this configuration in the Ops Manager's YAML. E.g.:
s3Stores:
  - mongodbResourceRef:
      name: s3-metadata-db
    mongodbUserRef:
      name: s3-meta-store-user
    name: s3store1
    pathStyleAccessEnabled: false
    s3BucketEndpoint: endpoint1.corp
    s3BucketName: backup1-bucket
    s3SecretRef:
      name: s3-credentials
  - mongodbResourceRef:
      name: s3-metadata-db
    mongodbUserRef:
      name: s3-meta-store-user
    name: s3store2
    pathStyleAccessEnabled: false
    s3BucketEndpoint: backup2.corp
    s3BucketName: backup2-bucket
    s3SecretRef:
      name: second-credentials
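What is being requested is roughly one extra field per store; the assignmentLabels field below is the hypothetical part, carrying the project name used for assignment:

s3Stores:
  - name: s3store1
    mongodbResourceRef:
      name: s3-metadata-db
    s3BucketEndpoint: endpoint1.corp
    s3BucketName: backup1-bucket
    s3SecretRef:
      name: s3-credentials
    # hypothetical field: restrict this store to the named project
    assignmentLabels:
      - "my-project"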
1 vote -
Create AppDB user with backup role to allow execution of mongodump
For the purpose of regularly performing backups of the AppDB using mongodump --oplog.
4 votes -
Operator automatically provision an Ops Manager programmatic API key
The Operator should automatically provision an Ops Manager programmatic API key; the current instructions require human intervention to create an API key.
1 vote -
Allow using a port other than 8080 (or 8443) when deploying Ops Manager
The default port is 8080, or 8443 for HTTPS, and cannot be changed.
1 vote -
Introduce Helm Chart for MongoDB, MongoDBUser and secrets
Provide a helm chart that deploys MongoDB, MongoDBUser, secrets and all other resources needed.
The goal is to simplify the deployment of a MongoDB instance and everything that comes with it down to a helm one-liner.
9 votes
We have an example of a Helm chart that can deploy all resources.
We will be working on adding more refined charts.
https://github.com/mongodb/mongodb-enterprise-kubernetes/tree/master/helm_chart -
adminCredentials secret should always be source of truth for OpsManager
The secret is only taken into account by OpsManager initially when OpsManager is deployed. As soon as the password of this user is changed in OpsManager, this secret is out of sync.
From the docs: "Use these credentials to log in to Ops Manager for the first time. Once Ops Manager is deployed, you should change the password or remove this secret."
https://docs.mongodb.com/kubernetes-operator/v1.4/tutorial/plan-om-resource/#prerequisites
Option 1: This secret should be in sync with the OpsManager database. Preferably the sync should be from the k8s secret to the OpsManager database.
Option 2: Create a CRD "MongoDBOpsManagerUser" that handles User/Password management for OpsManager similar…
6 votes -
Set the number of backup daemons in ops-manager.yaml
In ops-manager.yaml, can we define the initial number of backup daemons?
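If the intent is simply the number of Backup Daemon pods, newer Operator versions expose a member count under spec.backup on the MongoDBOpsManager resource; this is worth verifying against the CRD version in use:

spec:
  backup:
    enabled: true
    members: 2   # number of Backup Daemon pods (verify this field for your operator version)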
5 votes