Ops Tools
23 results found
-
Headless Ops Manager deployment
Currently, an Ops Manager CRD deployment requires configuration through the GUI, which is a manual step. An option to completely define all Ops Manager settings and the Organization declaratively via YAML would be great for building fully automated CI/CD pipelines.
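A minimal sketch (abridged) of what this could look like. The adminCredentials and configuration fields exist on the MongoDBOpsManager resource today; the organizations section is purely hypothetical and only illustrates the declarative bootstrap being requested:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  replicas: 1
  version: "6.0.0"
  adminCredentials: ops-manager-admin-secret
  configuration:
    mms.ignoreInitialUiSetup: "true"
  # Hypothetical: declarative Org/Project bootstrap (does not exist today)
  organizations:
    - name: my-org
      projects:
        - name: my-project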
17 votes -
Support Arbiters with MongoDB Kubernetes Operator
Support arbiters with the MongoDB Kubernetes Operator so that replica sets can be deployed in a PSA (Primary-Secondary-Arbiter) configuration.
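A sketch of how this might be expressed, assuming a top-level arbiters field on the resource (the field name is an assumption, not a confirmed API):
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: psa-replica-set
spec:
  type: ReplicaSet
  members: 2   # data-bearing Primary + Secondary
  arbiters: 1  # assumption: the Arbiter in PSA
  version: "6.0.5"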
12 votes -
Authentication mode MONGODB-OIDC
Support for authentication: MONGODB-OIDC
security:
  authentication:
    enabled: true
    modes:
      - "MONGODB-OIDC"
Currently we get the following error with Kubernetes operator 1.26.0, Ops Manager 7.0.7 and RS 7.0.11:
Unsupported value: "MONGODB-OIDC"
9 votes
We're considering this for inclusion in Q4 (by end of January), but we're currently reviewing priorities for a number of projects competing for attention.
-
Show Kubernetes resources in Ops Manager
Show some of the Kubernetes resources in Ops Manager
1. Show namespaces in projects
2. Show the list of resources in the cluster view
7 votes -
adminCredentials secret should always be the source of truth for OpsManager
The secret is only taken into account when Ops Manager is initially deployed. As soon as this user's password is changed in Ops Manager, the secret is out of sync.
From the docs: "Use these credentials to log in to Ops Manager for the first time. Once Ops Manager is deployed, you should change the password or remove this secret."
https://docs.mongodb.com/kubernetes-operator/v1.4/tutorial/plan-om-resource/#prerequisites
Option 1: This secret should be kept in sync with the Ops Manager database, preferably syncing from the k8s secret to the Ops Manager database.
Option 2: Create a CRD "MongoDBOpsManagerUser" that handles User/Password management for OpsManager similar…
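A hypothetical sketch of the CRD proposed in Option 2 (no such resource exists today; all field names are illustrative):
apiVersion: mongodb.com/v1
kind: MongoDBOpsManagerUser
metadata:
  name: om-admin
spec:
  opsManagerRef:
    name: ops-manager
  username: admin@example.com
  passwordSecretRef:
    name: om-admin-password  # the k8s secret stays the source of truth
    key: password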
6 votes -
Add Global MongoDB Agent Upgrade ability
Add the ability to upgrade all MongoDB Agents across all Projects at the same time instead of clicking on the banner for each Project.
6 votes
We are going to expose this via mongocli first, as that is probably the fastest solution.
-
To set up the number of backup daemons in ops-manager.yaml
In ops-manager.yaml, can we define the number of initial backup daemons?
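A sketch of how this could look, assuming a members count under spec.backup on the MongoDBOpsManager resource (verify the field against your operator version before relying on it):
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  version: "6.0.0"
  replicas: 1
  adminCredentials: ops-manager-admin-secret
  backup:
    enabled: true
    members: 2  # assumption: number of backup daemon pods to create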
5 votes -
Add ability to configure Pod Disruption Budget for STS
During maintenance work, EKS admins may need to evict nodes. This should not cause an outage for the MongoDB cluster/replica set running on these nodes. We can create a PDB for the STS manually, but it would be nice to have an option to do it as part of the MongoDB Kubernetes Operator.
4 votes
Supporting Pod Disruption Budget natively is something we do hope to do at some point.
But for now it is still possible by creating the PodDisruptionBudget resource and targeting the deployment using labels (as per https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
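A minimal sketch of that workaround, assuming the operator-created pods carry an app: my-replica-set-svc label (check the actual labels with kubectl get pods --show-labels):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-replica-set-pdb
spec:
  maxUnavailable: 1  # allow at most one member down during voluntary evictions
  selector:
    matchLabels:
      app: my-replica-set-svc  # assumption: matches the operator's pod labels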
-
Ops Manager and Backup infrastructure Disaster Recovery support with K8s Operator
We have carried out tests with the MongoDB v1.5.5 K8s Operator and Ops Manager 4.2.18 with Backup infrastructure (S3 snapshots) in an OpenShift 3.11 environment (MongoDB Support case attached).
In this case, a "Disaster Recovery" simulation has been carried out. However, several components created by the Operator had to be restored to obtain a similar state to the one before the "disaster".
Furthermore, it is very likely that the S3 Snapshots will be lost if the process is not completed in a certain manner.
It would be great to have an official approach to deploy/restore an OM resource using MongoDB K8s…
4 votes
There is no currently supported mechanism for backing up Ops Manager in a way that guarantees the data. As Ops Manager is itself a backup tool, it's challenging to maintain the integrity of the data in DR scenarios.
For this reason we recommend multi-site high availability for OM and the AppDB. This is already possible when running OM on hardware or in VMs, but it is not currently supported in Kubernetes (unless a Kubernetes cluster spans sites).
Later this year (2023) we hope to support OM deployments across multiple Kubernetes clusters, as we already support (in beta) for Replica Sets (full release in April 2023, with Sharded Cluster support in May/June 2023). Doing so will reduce the criticality of an OM/AppDB backup solution within Kubernetes.
-
Create AppDB user with backup role to allow execution of mongodump
For the purpose of regularly performing backups of the AppDB using mongodump --oplog.
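A hypothetical sketch using the shape of the operator's MongoDBUser resource to grant the built-in backup role. Today MongoDBUser targets operator-managed deployments rather than the AppDB, which is exactly the gap this request describes:
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: appdb-backup-user
spec:
  username: appdb-backup
  db: admin
  passwordSecretKeyRef:
    name: appdb-backup-password
    key: password
  roles:
    - db: admin
      name: backup  # allows running mongodump --oplog against the AppDB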
4 votes -
MongoDB Kubernetes operator - follow recommended Kubernetes object labeling
Hi, I would like to thank you first for this operator, good job 👍. It works well.
Did you consider using this label convention for objects (StatefulSet, svc, Secrets): https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/?
Currently in my cluster I'm trying to follow these recommended labels for objects. While forwarding Kubernetes logs using EFK, I cannot store logs in Elasticsearch because the mapping treats the
kubernetes.labels.app
field as an object, not a concrete value. Right now there is a hard-coded service selector https://github.com/mongodb/mongodb-kubernetes-operator/blob/1aa7093d2cc977bc3b1f5a5fa7e1e902d37768c8/controllers/replica_set_controller.go#L455 which expects pods to be labeled with app=<serviceName>
Example labels following the conventions for a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels: …
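For reference, a sketch of the recommended labels from the linked Kubernetes documentation (all values are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/instance: my-replica-set
    app.kubernetes.io/component: database
    app.kubernetes.io/managed-by: mongodb-kubernetes-operator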
2 votes
No current plans, but under consideration for inclusion on the roadmap in the future.
-
MongoDB Operator Deployment Env Variables Push Down
This is a feature request to have custom environment variables, configured in the MongoDB Operator's Deployment manifest, push down or propagate to all resources created by the Operator.
For example, it may be desired to add environment variables with context. A more specific example could include setting a TZ timezone environment variable that is automatically added to all pod containers created by the Operator.
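A sketch of the desired behavior under an assumed propagation rule. Setting the variable on the operator's Deployment is possible today; copying it into operator-created containers is the feature being requested:
# On the operator's own Deployment (possible today):
spec:
  template:
    spec:
      containers:
        - name: mongodb-kubernetes-operator
          env:
            - name: TZ
              value: America/New_York
# Requested: the operator would propagate TZ into every pod container it creates.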
2 votes -
Operator crashes when it doesn't have permissions to watch a namespace
If one of many namespaces does not grant the operator permission to watch it, the operator throws exceptions and goes into a CrashLoopBackOff state.
This is clearly a bug. One misconfigured namespace should never be able to take the operator down with it.
1 vote
This is expected behavior and common among operators; the operator can't function and do what's needed if it lacks the permissions it requires.
I know we have an open support case around this to try and understand more about your use case, and we're hoping that we'll be able to offer some guidance to avoid this problem and still achieve what you need. It may even be a new use case that we look to support.
-
Disable point-in-time restores
It would be nice to have the ability to set the parameter "Allow point-in-time restores going back" to zero (disabling PIT restores). This could be useful in situations where a database produces a lot of oplog and the DBA wants to avoid saturating the oplog store. In other words: "I want to maintain snapshot backup functionality, but deactivate PIT functionality."
1 vote -
EmptyDir as data-volume and log-volume
spec:
  members: 1
  type: ReplicaSet
  version: "4.4.5"
  statefulSet:
    spec:
      template:
        spec:
          volumes:
            - name: data-volume
              emptyDir: {}
            - name: log-volume
              emptyDir: {}
This type of override would be very helpful for automated testing pipelines: the pipeline spins up a single MongoDB instance, populates data, and proceeds with application testing. For that, we don't need persistent volumes; we need a clean directory on each invocation.
1 vote -
Support Service Binding Specification for Kubernetes
Service Binding Specification for Kubernetes standardizes exposing backing service secrets to applications. The spec is available here: https://github.com/servicebinding/spec
This blog post would be helpful: https://muthukadan.net/kubernetes/binding/support-service-binding-specification-for-kubernetes/
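A sketch of what a binding could look like under that spec, assuming the MongoDB resource implemented the spec's Provisioned Service contract (which is what this request asks for):
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: app-mongodb
spec:
  service:
    apiVersion: mongodbcommunity.mongodb.com/v1
    kind: MongoDBCommunity
    name: my-replica-set  # would need to expose a binding secret per the spec
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app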
1 vote
Low customer demand. Potentially in the future if we hear sufficient demand.
-
Enable S3 Snapshot Storage via Kubernetes Operator with IAM role
Configuring S3 snapshot storage with IAM roles is only possible via the Ops Manager UI or API.
It would be great to be able to do this configuration via the MongoDB Kubernetes Operator.
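A hypothetical sketch on the MongoDBOpsManager resource. The s3Stores section exists on the resource, but the IAM-role flag shown is an assumption that only illustrates the request:
spec:
  backup:
    enabled: true
    s3Stores:
      - name: s3-snapshot-store
        s3BucketName: om-snapshots
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        irsaEnabled: true  # assumption: use an IAM role instead of access keys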
1 vote -
Provide support to update the version manifest for Ops Manager in Local Mode
With Ops Manager Local Mode on Kubernetes, the version manifest must be updated manually via the UI or API.
It would be best practice to support updating the version manifest via a command to the Operator or OM pods.
1 vote -
Support Kubernetes taints and tolerations
I believe Kubernetes taints and tolerations are not supported by the operator, yet I find this a much-needed capability.
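If the operator passes the pod template through unchanged, tolerations may already be expressible via the statefulSet override; a sketch under that assumption:
spec:
  statefulSet:
    spec:
      template:
        spec:
          tolerations:
            - key: "dedicated"   # illustrative taint key
              operator: "Equal"
              value: "mongodb"
              effect: "NoSchedule"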
1 vote