Ops Tools
-
Deploy MongoDB across different Kubernetes clusters
The MongoDB Operator can only deploy and manage MongoDB within a single Kubernetes cluster. For disaster recovery and globally distributed applications, however, it is important to be able to deploy a single database across multiple Kubernetes clusters.
44 votes
MongoDB Enterprise Operator now supports multi-Kubernetes-cluster replica set deployments.
Multi-Kubernetes-cluster Ops Manager support is in progress right now with a likely delivery date towards the end of this year (2023). Sharding support (across multiple Kubernetes clusters) will follow.
If you're an Enterprise Advanced customer and interested in this, please feel free to reach out to me at dan.mckean@mongodb.com.
-
14 votes
Operator v1.7.0 will have full LDAP support.
-
Support Any MongoDB configuration option in MongoDB Custom Resource
Support all MongoDB configurations in Kubernetes CRD so that it is possible to deploy a fine-tuned cluster with Kubernetes resources
11 votes -
Add Backup configuration to MongoDB Custom Resource
Allow configuration of backup settings in MongoDB Custom resource.
This includes management of backed-up resources when updating, moving, or deleting clusters.
11 votes
We have feature parity between the backup capabilities of Ops Manager and the Enterprise Operator.
Docs: https://www.mongodb.com/docs/kubernetes-operator/master/tutorial/back-up-mdb-resources/
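With that parity in place, enabling backup is a matter of a few lines in the custom resource. A minimal sketch, assuming a replica set managed through Ops Manager (resource names and version are placeholders):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set        # placeholder name
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5"            # any supported server version
  opsManager:
    configMapRef:
      name: my-project        # ConfigMap pointing at the Ops Manager project
  credentials: my-credentials # Secret with Ops Manager API credentials
  backup:
    mode: enabled             # continuous backup via Ops Manager
```

See the linked docs for the authoritative field list.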
-
Define current limits of Kubernetes Operator
1.) What is the limiting factor of the Operator? Is it number of Pods, number of Custom Resources (e.g. MongoDB, MongoDBUser) or something else?
2.) What does the number of "Clusters" refer to? Does it differ for Standalone, ReplicaSet and ShardedCluster?
3.) How many instances of the "Clusters" in 2.) are supported per MongoDB Operator?
Is there any way to add this sort of data to our documentation?
Thanks
9 votes
Current scale recommendations are 20-50 deployments (ReplicaSet/ShardedCluster/Standalone).
While the Operator can handle hundreds, the limiting factor is API calls to Ops Manager/Cloud Manager. Updating one deployment at a time is fine; the issue arises when making concurrent changes to many deployments simultaneously, where reconciliation will be slow for those later in the queue - for example during a DR scenario.
We have work planned for later in 2023 to start removing Ops Manager as a prerequisite for many of the basic operations, and we expect that to greatly improve these limits.
-
Automation Agent Client Certification Validation
Many customers require that the MMS automation agent present a valid x509 certificate for TLS communications with Ops Manager. With the Kubernetes Operator this is not currently possible, so these customers cannot use the Operator to deploy MongoDB instances within their environments.
This feature would improve the security of communications between agents and the Ops Manager and meet the security requirements of many of our customers who cannot move to services like Atlas.
7 votes
A new spec option, spec.agent.startupOptions, is available. It can be used to configure client certificates.
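As a sketch, the startup options map passes flags straight to the automation agent; the certificate-related option names and path below are assumptions and should be verified against the automation agent documentation:

```yaml
spec:
  agent:
    startupOptions:
      # assumed agent flags for TLS client certificates - verify the
      # exact names in the automation agent documentation
      tlsRequireValidMMSServerCertificates: "true"
      tlsMMSServerClientCertificate: "/mongodb-automation/agent.pem"
```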
-
Allow S3 Oplog Store to be defined and configured using the Operator.
Ops Manager can utilize S3 storage for the Oplog Store. It should be possible to define and configure an S3 Oplog Store from the Operator.
4 votes
We do now support this, but it's not yet covered in our docs (a ticket is open but not yet completed.)
We do however have a public example of setting this up, which should enable you to use it.
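Pending the docs, a hedged sketch of what the MongoDBOpsManager resource accepts (bucket, endpoint, and secret names are placeholders):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  backup:
    enabled: true
    s3OpLogStores:
      - name: s3-oplog-store
        s3BucketName: my-oplog-bucket          # placeholder bucket
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        pathStyleAccessEnabled: true
        s3SecretRef:
          name: my-s3-credentials              # Secret with accessKey/secretKey
```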
-
Enhance security by leveraging PodSecurityPolicies
PodSecurityPolicies are a way to enhance security in a k8s cluster.
Currently, neither the Kubernetes Operator nor the Helm Chart offers a way to integrate PSPs. If an administrator wants to enforce PSPs for the cluster where the MongoDB Kubernetes Operator is deployed, they would need to do so manually, which leads to additional manual steps (e.g. editing the Operator role to allow "use" on "psp").
Please introduce a way to secure the MongoDB Management (Ops Manager, Operator) and Workload (MongoDB custom resources) with PSPs in the Kubernetes Operator / Helm ecosystem.
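The manual step described above amounts to granting the Operator's service account the "use" verb on a PSP. A sketch (role, namespace, and PSP names are examples; note that PSPs were removed in Kubernetes 1.25 in favor of Pod Security Admission):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-enterprise-operator-psp   # example name
  namespace: mongodb                      # example namespace
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]         # example PSP name
    verbs: ["use"]
```

Bind this Role to the Operator's service account with a corresponding RoleBinding.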
4 votes -
Add K8S namespace as a tag in Ops Manager project
Add K8S namespace as a tag in Ops Manager project so it is easier to identify what project belongs in what namespace
3 votes -
Add spec.externalAccess.externalDomain to running deployments
The following option:
- spec.externalAccess.externalDomain
Is tremendously useful as we would no longer have to worry about terminating TLS connections through a proxy and then re-establishing a TLS connection for internal communication due to security reasons.
This would remove a point of failure for self-hosted Kubernetes environments and save resources.
However, you cannot do this for existing replica sets:
WARNING
Specifying this field changes how Ops Manager registers mongod processes. You can specify this field only for new replica set deployments starting in Kubernetes Operator version 1.19. You can’t change the value of this field or any processes[n].hostname fields in…
2 votes
We've just tested this for existing deployments and confirmed that this does in fact work fine for existing replica sets.
This was an incorrect inference made while documenting this feature and we've raised a ticket to have our docs amended.
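For reference, the field in question sits on the MongoDB resource; the domain below is a placeholder:

```yaml
spec:
  externalAccess:
    # placeholder domain; processes register as <pod-name>.<externalDomain>
    externalDomain: cluster.example.com
```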
-
Fine-tune RBAC rules for mongodb.com resources
Right now, the default RBAC rules for the mongodb-enterprise-operator role/clusterrole are:
apiGroups:
  - mongodb.com
resources:
  - mongodb
  - mongodb/finalizers
  - mongodb/status
  - mongodbusers
  - mongodbusers/status
  - opsmanagers
  - opsmanagers/finalizers
  - opsmanagers/status
verbs:
  - "*"
This doesn't work well with privilege-escalation prevention, because it won't work for service accounts that individually list the allowed verbs.
For example, my service account has permissions for everything (create, delete, deletecollection, get, list, patch, update, watch), but it fails with "(...) is attempting to grant RBAC permissions not currently held" because those verbs are not equal to "*". The proposed change is…
2 votes
We have since fine-tuned the RBAC as much as possible.
The updated RBAC requirements can be seen in https://github.com/mongodb/mongodb-enterprise-kubernetes/blob/master/mongodb-enterprise.yaml
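In practice, fine-tuned rules enumerate the verbs explicitly instead of using "*", along these lines (see the linked manifest for the authoritative list):

```yaml
apiGroups:
  - mongodb.com
resources:
  - mongodb
  - mongodb/finalizers
  - mongodb/status
  - mongodbusers
  - mongodbusers/status
  - opsmanagers
  - opsmanagers/finalizers
  - opsmanagers/status
verbs:
  # explicit verbs instead of "*", so escalation checks against
  # individually-listed verbs succeed
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
```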
-
Allow customizing mongod port in kubernetes
The additionalMongodConfig feature was a great addition to the Operator.
Setting the spec.additionalMongodConfig.net.port to a value other than the default 27017 is not working as expected. The default port is still used despite the custom port value appearing in the MongoDB resource description/manifest. A common security compliance checklist often includes running services on non-default ports.
Please consider allowing the net.port to be set to a custom value; this may have implications with the services that are automatically created in the cluster.
2 votes
This is now confirmed to work as expected.
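A minimal sketch of the working configuration (the port value is an example):

```yaml
spec:
  additionalMongodConfig:
    net:
      port: 27018   # example non-default port
```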
-
MongoDB CR should support topologySpreadConstraints
As PodAntiAffinity does not really give enough flexibility in achieving High Availability and enforcing distribution across nodes, it should be possible to add topologySpreadConstraints to the podSpec (of both ShardedCluster and other deployment types). As of now topologySpreadConstraints are ignored by the Operator.
2 votes
This is now confirmed to work as expected.
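A sketch of what that looks like on the pod template; the label selector is an assumption and should be matched to the labels the Operator puts on your pods:

```yaml
spec:
  podSpec:
    podTemplate:
      spec:
        topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: my-replica-set-svc   # assumed pod label
```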
-
Release K8S Ops Manager image when Ops Manager release is out
Currently there is a time lag between the Ops Manager version releases and the availability of K8S images to be used with the MongoDB Kubernetes Operator.
It would be nice if they are released at the same time.
2 votes
We now update Ops Manager images on the same day as Ops Manager releases.
-
Allow to configure options for automation agent logs
Currently there is no way in the Kubernetes Operator to configure how long automation/backup/monitoring agent logs should be stored; they can easily occupy all available space in the pod.
2 votes -
Support Helm Chart for operator
Provide Helm charts for MongoDB Enterprise operator
2 votes
We have support for deploying the Operator with Helm.
We do not currently cover deploying and managing the custom resources for deployments etc using Helm. Please raise a separate feedback item if this is of interest.
-
OpsManager in Kubernetes Deployment
Deploy MongoDB Ops Manager in Kubernetes with the Operator, allowing MongoDB clusters to be run and managed entirely from the Kubernetes platform.
2 votes -
Add new SCRAM Authz to MongoDBUser CR
Support SCRAM authentication for MongoDB Users
2 votes -
Custom Pod Annotations
This is regarding usage of service mesh / policy agent automations for stateful sets.
1 vote
This is already possible actually! Though it's definitely an area where we need to improve our docs.
In any valid working MongoDBCommunity deployment you'd need to specify:
spec:
  statefulSet:
    spec:
      template:
        metadata:
          annotations:
            example.com/annotation-1: "value-1"
            example.com/annotation-2: "value-2"
And obviously alter the annotations according to your needs!
-
Allow creation of Custom Roles in AtlasDeployment
I'm missing a feature to create custom roles from deployment level for Atlas operator (as already possible in community operator: https://github.com/mongodb/mongodb-kubernetes-operator/blob/b232901f5c6e4f9c1ab04bc9725458ca70a19930/config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml#L173-L258)
1 vote
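For comparison, the community CRD linked above lets roles be declared inline on the resource; a sketch (role, database, and action names are examples):

```yaml
spec:
  security:
    roles:
      - role: myCustomRole     # example role name
        db: admin
        privileges:
          - resource:
              db: products     # example database
              collection: ""   # all collections in the db
            actions:
              - find
              - insert
        roles: []              # no inherited roles
```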