Currently, the Operator does not enable the Backup function for the AppDB, though it does enable Monitoring. For every other (non-AppDB) deployment, the Operator enables both the Backup and Monitoring functions, even if backup is not configured.
At the moment, none of these functions are configurable through the Operator; the only way to disable the Backup or Monitoring function is through the Ops Manager UI.
Requesting the ability to manage these functions through the Operator.
3 votes
AppDB is automatically monitored by Ops Manager when installed via the Enterprise Operator.
But backup of the AppDB is not available via Ops Manager. There is no official backup and restore mechanism for Ops Manager itself, as preserving data integrity cannot be ensured. Instead, we recommend running Ops Manager with high availability and resilience. This can be done on VMs and will soon be available via the Operator.
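For reference, the Operator does already expose a toggle for the backup infrastructure itself on the MongoDBOpsManager resource; this controls the Backup Daemon, not the per-deployment Backup/Monitoring functions requested above. An abridged sketch (names and versions illustrative):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  replicas: 1
  version: "4.4.10"        # Ops Manager version (illustrative)
  applicationDatabase:
    members: 3             # the AppDB replica set managed by the Operator
  backup:
    enabled: true          # toggles the Backup Daemon infrastructure only;
                           # per-deployment Backup/Monitoring stay Ops Manager-side
```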
A customer would like the ability to create a snapshot "right now", and mark it to be preserved.
The scenario is for a system being upgraded or changed or for auditing purposes. The benefits are:
- the snapshot can be restored beyond the default 24-hour point-in-time window.
- recovering the snapshot is quick, because no oplog entries have to be replayed.
- auditing, if you are in a vertical where it is required.
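Until an explicit "snapshot now and keep it" control exists, the preservation half may be reachable through the Ops Manager Public API, which lets you list a cluster's snapshots and flag one to be kept past the retention window. A sketch (hostname, credentials, and IDs are placeholders; confirm your Ops Manager version exposes these backup endpoints and the `doNotDelete` field before relying on them):

```shell
# List the snapshots for a backed-up cluster (Ops Manager Public API, digest auth).
curl --user "{USERNAME}:{API-KEY}" --digest \
  "https://opsmanager.example.com/api/public/v1.0/groups/{PROJECT-ID}/clusters/{CLUSTER-ID}/snapshots"

# Flag one snapshot so the retention policy does not delete it.
curl --user "{USERNAME}:{API-KEY}" --digest \
  --request PATCH --header "Content-Type: application/json" \
  --data '{"doNotDelete": true}' \
  "https://opsmanager.example.com/api/public/v1.0/groups/{PROJECT-ID}/clusters/{CLUSTER-ID}/snapshots/{SNAPSHOT-ID}"
```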
Since the Ops Manager API appears to be very similar to (if not the same as) the Atlas API, and since the Atlas Terraform provider runs against the Atlas API, it would be very nice to have a Terraform provider for Ops Manager.
2 votes
Thank you for the submission. We do not have any plans to create a Terraform provider for Ops Manager.
MongoDB Kubernetes Operator can manage OpsManager Resources including Backup infrastructure.
When users want to disable the backup infrastructure, the Operator does not remove the BackupDaemon StatefulSet or disable the backup configuration.
This request is for the Operator to clean up the backup configuration in Ops Manager, delete the corresponding Kubernetes resources, and reconfigure Ops Manager accordingly.
2 votes
Some users may want these to remain in case they decide to re-enable backup. Since deletion and creation take time, users can manually delete these resources if they need to.
Unfortunately this isn't viable. Including the AppDB configuration in the Ops Manager CRD already adds to its complexity, and the AppDB options are deliberately quite simple. The AppDB is also not a MongoDB CR, whereas the BlockStore and OplogStore are, as they need more configuration.
We are going to look into making backup easier to set up, but this won't be the way we achieve it.
Currently not, and we have no current plans to implement it. (Increased user demand may change that).
Is there a way to migrate existing non-Kubernetes MongoDB clusters to the MongoDB Kubernetes Operator?
1 vote
The recommendation is migration of the data from a non-Kubernetes deployment to a newly created Kubernetes deployment.
For now, no more automated method is planned.
We're developing a Microservices-based product that is based on MongoDB and Kafka. In this context, we're currently aiming at implementing most of our DevOps-related activities in a GitOps way. Setup, rolling upgrades and scaling the number of replicas can be achieved with the Operator today, but it would be great also if activities like index creation and sharding of collections could be done via the Operator.
A similar approach has been taken for Kafka, where cluster installation, rolling upgrades and scaling out is handed by the operator (Strimzi), but also topic management: https://strimzi.io/docs/operators/latest/overview.html#overview-concepts-topic-operator-str
We have developed an internal tool to…
1 vote
For now at least we don't offer management of the MongoDB data plane via the Enterprise Operator.
We may reconsider this stance based on ongoing customer feedback.
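In the meantime, data-plane tasks such as index creation can still be kept declarative alongside the other manifests, for example as a Kubernetes Job that runs mongosh against the deployment from a CI pipeline. A sketch (image, service DNS name, database, and index are all illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-orders-index
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: mongosh
          image: mongo:6.0   # assumption: an image that ships mongosh
          command:
            - mongosh
            - "mongodb://my-replica-set-svc.mongodb.svc.cluster.local:27017/app"  # hypothetical service DNS
            - --eval
            - "db.orders.createIndex({ customerId: 1 })"
```

Re-running the Job is harmless, since createIndex is a no-op when the index already exists.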
Currently, for release 1.9.0, the Helm chart has no way to explicitly pin a specific version of the following images:
This is not ideal and is hard to scale in the enterprise. Requesting the ability to pin versions for all images from the values file.
1 vote
We can't pin them, as the user's choice of Ops Manager version requires different tags to be pulled.
The Ops Manager version determines the tag of the image we are going to pull.
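Concretely, the tags follow the version fields in the custom resources rather than Helm values; a sketch (field values illustrative):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  version: "4.4.10"           # the Operator pulls the ops-manager image tag matching this
  applicationDatabase:
    members: 3
    version: "4.2.11-ent"     # likewise selects the AppDB image tag
```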
Currently, there is no way to enable LDAP auth for the Ops Manager users on my Kubernetes Ops Manager pods using manifests.
This essentially means that one cannot use LDAP and CI/CD simultaneously with Ops Manager and the Enterprise Kubernetes Operator.
MongoDB Enterprise Support has confirmed that, in the event of disaster recovery or the deployment of a new cluster, manual steps must be taken to enable LDAP during a CI/CD deployment.
In an enterprise solution, it should not be expected that anyone signs in and manually does anything in a web GUI. It is simply not…
1 vote
We don't manage Ops Manager users, with the exception of the first local user created to enable admin access.
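If this were supported, the natural place would be the pass-through spec.configuration block of the MongoDBOpsManager resource, which sets conf-mms.properties values. A hypothetical sketch (property names from Ops Manager's LDAP settings, values illustrative; as the request notes, the switch to LDAP auth itself currently still requires manual steps in the UI):

```yaml
spec:
  configuration:
    mms.ldap.url: "ldaps://ldap.example.com:636"
    mms.ldap.bindDn: "cn=svc-mms,ou=service,dc=example,dc=com"
    mms.ldap.bindPassword: "<from-a-secret>"          # illustrative; don't inline real credentials
    mms.ldap.user.baseDn: "ou=people,dc=example,dc=com"
```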
Have the MongoDB Operator support K8s route sharing for connecting to replicas, to simplify the network and avoid using NodePort.
It's still very new, and kube-router is the only "free" implementation that supports it. But this would greatly simplify the network configuration needed to connect to replicas and shards.
Route sharing is currently supported on Red Hat OpenShift 4.4+.
1 vote
We currently support NodePort and LoadBalancer but have no plans to extend that, due to the current lack of customer interest.
So that multiple projects can leverage a single Operator to provision MongoDB. https://github.com/mongodb/mongodb-enterprise-kubernetes/issues/164
1 vote
Setting it to watch all namespaces by default may result in conflicts and unexpected behaviour in environments where more than one instance of the Operator is running.
As a result, it's safer to default to a single namespace and allow users to manually choose all or several namespaces.
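The namespace scope is already a user choice at install time via the Helm chart; a sketch (assuming the chart's operator.watchNamespace value, which should be checked against the chart's documented values):

```yaml
# values.yaml for the mongodb-enterprise chart
operator:
  watchNamespace: "*"   # "*" watches all namespaces; the default is the release namespace
```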
In order to speed up deployments, please add support for "podManagementPolicy" in the MongoDB and Ops Manager CRs, to allow parallel pod deployment of StatefulSets.
1 vote
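For context, this is the standard StatefulSet field the request refers to; with Parallel, pods are created and deleted concurrently instead of one at a time. A plain Kubernetes sketch (names and image illustrative, not an Operator-generated manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-replica-set
spec:
  podManagementPolicy: Parallel   # default is OrderedReady (one pod at a time)
  serviceName: my-replica-set-svc
  replicas: 3
  selector:
    matchLabels:
      app: my-replica-set
  template:
    metadata:
      labels:
        app: my-replica-set
    spec:
      containers:
        - name: mongod
          image: mongo:6.0        # illustrative
```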