Ops Tools


  1. Add spec.externalAccess.externalDomain to running deployments

    The following option:

    https://www.mongodb.com/docs/kubernetes-operator/master/reference/k8s-operator-specification/#spec.externalAccess.externalDomain

    • spec.externalAccess.externalDomain

    is tremendously useful, as we would no longer have to worry about terminating TLS connections through a proxy and then re-establishing a TLS connection for internal communication for security reasons.

    This would remove a point of failure for self-hosted Kubernetes environments and save resources.

    However, you cannot do this for existing replica sets:

    WARNING

    Specifying this field changes how Ops Manager registers mongod processes. You can specify this field only for new replica set deployments starting in Kubernetes Operator version 1.19. You can’t change the value of this field or any processes[n].hostname fields in…

    2 votes


    We've just tested this for existing deployments and confirmed that this does in fact work fine for existing replica sets. 


    This was an incorrect inference made while documenting this feature and we've raised a ticket to have our docs amended.
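
    For reference, a minimal sketch of how this field is set on a MongoDB resource, based on the linked specification; the resource name, version, and domain below are placeholders:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-replica-set           # placeholder name
    spec:
      type: ReplicaSet
      members: 3
      version: "6.0.5"               # placeholder server version
      externalAccess:
        externalDomain: example.com  # placeholder domain used for the members' external hostnames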

  2. Custom Pod Annotations

    This is regarding the use of service mesh / policy agent automation for stateful sets.

    1 vote


    This is already possible actually! Though it's definitely an area where we need to improve our docs.


    In any valid working MongoDBCommunity deployment you'd need to specify:

    spec:
      statefulSet:
        spec:
          template:
            metadata:
              annotations:
                example.com/annotation-1: "value-1"
                example.com/annotation-2: "value-2"


    And obviously alter the annotations according to your needs!

  3. mongodb-atlas-kubernetes operator needs to adjust instanceSize/disk for autoscaled clusters based on the instanceSize range

    Scenario:

    Assumptions:
    autoscaling = true
    current instanceSize = M40
    minInstanceSize = M30
    maxInstanceSize = M70

    If we want to increase minInstanceSize from M40 to M50, the operator will return an error because the current instance size is less than M50, so we have to go to the UI, increase the current instance size to at least M50, and then apply the change to the Kubernetes object (AtlasDeployment).

    We would expect that, if the current instanceSize is not in the range, the operator automatically raises it to minInstanceSize, removing the extra step.


    We expect the same logic for increasing/decreasing - minInstance/maxInstance and…

    7 votes

    1 comment · Other

    Resolved in version 1.4 of the Atlas Operator


    It now allows the user to update the autoscaling config (min and max) to outside of the current instance size, and Atlas will update the instance to the closest boundary. For example, with a current instance size on Atlas of M10 and a min and max of M10 and M30, the min and max could be changed to M30 and M80, and the instance size would then be auto-scaled up to M30 (the nearest boundary to the old instance size).

    Side note: if the instance size in the config is not changed and is left at, for example, M10, you will get a warning in the logs that the spec contains an instance size outside of the current min and max, but that won't block the change or the auto-scaling.


    Similarly, you can also now change the fixed instance size in…

  4. 1 vote

  5. k8s operator - Support different size shard configurations

    We would like to implement a hot-cold shard strategy: move cold, infrequently used data to a shard with more disk and less compute power, and keep hot data on a shard with more compute power. This strategy is described here: https://docs.mongodb.com/manual/tutorial/sharding-tiered-hardware-for-varying-slas/

    Currently the enterprise operator does not support different-sized shards; this request is to allow the operator to create them.

    1 vote


    We have recently released the ability to support different-sized shards, which enables the scenario described in this idea.


    This is supported in version 1.19.1 of the Operator; the release notes can be found here, and the relevant section is as follows:

    • Allows you to configure podSpec per shard in a MongoDB sharded cluster by specifying an array of podSpecs under the spec.shardSpecificPodSpec setting for each shard.
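
    As an illustration, a sketch of how differently sized shards might be declared, assuming the entries under spec.shardSpecificPodSpec follow the same schema as spec.podSpec; the container name, CPU/memory limits, and storage sizes are placeholders:

    spec:
      shardCount: 2
      shardSpecificPodSpec:
        - podTemplate:                 # shard 0: hot tier, more compute
            spec:
              containers:
                - name: mongodb-enterprise-database
                  resources:
                    limits:
                      cpu: "4"
                      memory: 8Gi
        - persistence:                 # shard 1: cold tier, more disk
            single:
              storage: 2Ti
          podTemplate:
            spec:
              containers:
                - name: mongodb-enterprise-database
                  resources:
                    limits:
                      cpu: "1"
                      memory: 2Gi
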
  6. Kubernetes Operator - Enable S3 Oplog store

    Currently, Replica Sets are the only way to deploy an Oplog Store with the Kubernetes Operator.
    This causes sizing issues for Ops Manager deployments managing a large number of projects.
    Enabling an S3 Oplog Store would help a lot.

    1 vote

  7. MongoDB Agent (Automation Module): don't attempt to auth with `__system` (SCRAM) user when `security.clusterAuthMode` is set to `x509`

    Problem statement:
    What is the problem? MongoDB Agent (Automation Module) attempts to auth with the __system (SCRAM) user when security.clusterAuthMode is set to x509.

    Why is this a problem? MongoDB Server process logs are flooded with unnecessary noise from these failed auth attempts by the MongoDB Agent (Automation Module).

    Example:
    {"t":{"$date":"2021-05-10T11:08:02.115+0000"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn115","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","principalName":"__system","authenticationDatabase":"local","client":"10.10.10.10:46765","result":"AuthenticationFailed: ###"}}

    Proposal:
    * Don't attempt to auth with the __system (SCRAM) user when security.clusterAuthMode is set to x509 in MongoDB Server
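
    For context, the server-side setting in question lives under security in the mongod configuration; a minimal sketch (the TLS file paths are placeholders):

    security:
      clusterAuthMode: x509
    net:
      tls:
        mode: requireTLS
        certificateKeyFile: /etc/tls/server.pem   # placeholder path
        CAFile: /etc/tls/ca.pem                   # placeholder path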

    6 votes

    1 comment · Automation

  8. Allow S3 Oplog Store to be defined and configured using the Operator.

    Ops Manager can utilize S3 storage for the Oplog Store. It should be possible to define and configure an S3 Oplog Store from the Operator.

    4 votes

  9. Fine-tune RBAC rules for mongodb.com resources

    Right now, the default RBAC rules for the mongodb-enterprise-operator role/clusterrole are:

    apiGroups:
      - mongodb.com
    resources:
      - mongodb
      - mongodb/finalizers
      - mongodb/status
      - mongodbusers
      - mongodbusers/status
      - opsmanagers
      - opsmanagers/finalizers
      - opsmanagers/status
    verbs:
      - "*"
    

    Available at https://github.com/mongodb/mongodb-enterprise-kubernetes/blob/b4c0a9b167f21114dc276cb163a1b207ae2f9359/helm_chart/templates/operator-roles.yaml#L90

    This doesn't work well with privilege escalation prevention, because it won't work for service accounts that list the allowed verbs individually.
    For example, my service account has permissions for everything (create, delete, deletecollection, get, list, patch, update, watch), but it fails with "(...) is attempting to grant RBAC permissions not currently held" because those verbs are not considered equal to "*".

    The proposed change is…
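
    The truncated proposal presumably amounts to replacing the wildcard with an explicit verb list; one possible form of such a rule (a sketch, not the actual proposed patch):

    apiGroups:
      - mongodb.com
    resources:
      - mongodb
      - mongodb/finalizers
      - mongodb/status
      - mongodbusers
      - mongodbusers/status
      - opsmanagers
      - opsmanagers/finalizers
      - opsmanagers/status
    verbs:
      - create
      - delete
      - deletecollection
      - get
      - list
      - patch
      - update
      - watch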

    2 votes

  10. Automatic labeling of pods by replicaset role (primary/secondary)

    Hi, I think it would be great if the Operator could watch and automatically mark individual pods of the stateful set with a label indicating whether the node is primary or secondary, so that a service can route traffic just to the primary instance (or load-balance secondary instances for read-only access on one IP).

    Currently I use a script that periodically checks roles and adds the label "mongodb-replicaset-role": "primary" or "secondary", and a service that uses this label as a selector.
    EDIT: I'm thinking about writing my own operator for this instead of a script; maybe that's the best way?
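
    For illustration, the kind of Service selector this approach implies; a sketch in which the app label and port are assumptions about the deployment:

    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-primary
    spec:
      selector:
        app: my-replica-set-svc            # assumed pod label from the StatefulSet
        mongodb-replicaset-role: primary   # label maintained by the external script
      ports:
        - port: 27017
          targetPort: 27017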

    Motivation: Linode (and possibly others') kubernetes…

    1 vote

  11. Pin image tags in Enterprise Kubernetes Operator values file

    We should be able to pin tags for images like mongodb-enterprise-appdb in https://github.com/mongodb/mongodb-enterprise-kubernetes/blob/master/helm_chart/values.yaml

    Currently, we are forced to use the latest tag, which has caused issues and broken disaster recovery for our project.

    If we had been able to use an older tag of the image, we could have recovered quickly. Since we were forced to use the latest tag, it took several days to recover. This does not seem acceptable for enterprise software.

    1 vote

  12. 1 vote

  13. Allow customizing mongod port in kubernetes

    The additionalMongodConfig feature was a great addition to the Operator.

    Setting spec.additionalMongodConfig.net.port to a value other than the default 27017 does not work as expected: the default port is still used even though the custom port value appears in the MongoDB resource description/manifest. Security compliance checklists commonly require running services on non-default ports.

    Please consider allowing net.port to be set to a custom value; this may have implications for the services that are automatically created in the cluster.
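
    For reference, the configuration being described looks roughly like this on the MongoDB resource (the port value is just an example):

    spec:
      additionalMongodConfig:
        net:
          port: 27018   # example non-default port; currently ignored per this report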

    2 votes

  14. MongoDB CR should support topologySpreadConstraints

    As PodAntiAffinity does not really give enough flexibility for achieving high availability and enforcing distribution across nodes, it should be possible to add topologySpreadConstraints to the podSpec (of both ShardedCluster and other deployment types). As of now, topologySpreadConstraints are ignored by the Operator.

    https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#comparison-with-podaffinity-podantiaffinity
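
    For reference, the standard Kubernetes pod-spec stanza the Operator would need to pass through; the label selector below is a placeholder:

    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-replica-set-svc   # placeholder pod label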

    2 votes

  15. Release K8S Ops Manager image when Ops Manager release is out

    Currently there is a time lag between the Ops Manager version releases and the availability of K8S images to be used with the MongoDB Kubernetes Operator.

    It would be nice if they were released at the same time.

    2 votes

    completed  ·  Andrey responded

    We now update Ops Manager images on the same day as Ops Manager releases.

  16. Do not delete backups of deleted deployments

    Backups normally protect against accidental deletion of a database. In a DevOps environment it can happen that an MDB resource gets deleted by a Kubernetes deployment. At the moment, Ops Manager then deactivates the backup and deletes all snapshots. We would like the snapshots to be kept for as long as their retention limit allows, or at least until the project itself is deleted in Ops Manager.

    That way, if a developer detects the mistake and re-deploys the MDB resource(s), someone (or the developer themselves) can restore the database(s) from the backups.

    7 votes

    3 comments · Ops Manager

    completed  ·  Andrey responded

    We do not delete the Ops Manager project when a backup is present.
    This has been addressed.

  17. OpsManager Pod should not have credentials in environment variables but store and retrieve from a k8s secret

    The Ops Manager database password is exposed as an environment variable.

    OM_PROP_mongo_mongoUri holds the credentials of the Ops Manager database.

    The Ops Manager Pod should not have credentials in environment variables; it should store them in, and retrieve them from, a k8s secret.
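
    For illustration, the standard Kubernetes pattern this asks for would source the value from a Secret instead of setting it inline; the Secret name and key below are hypothetical:

    env:
      - name: OM_PROP_mongo_mongoUri
        valueFrom:
          secretKeyRef:
            name: ops-manager-db-connection   # hypothetical Secret
            key: mongoUri                     # hypothetical key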

    1 vote

  18. Prevent users from importing a replica set or shard with the same name as other pre-existing replica sets/shards

    When a user imports a cluster with the same name as an existing cluster into a project, it causes issues such as breaking backups of the pre-existing clusters.

    Checking the replica set name against the names of the other replica sets would prevent having to terminate backups, remove the clusters, and re-import them, starting over.

    2 votes

    completed · 0 comments · Automation

  19. Define current limits of Kubernetes Operator

    1.) What is the limiting factor of the Operator? Is it number of Pods, number of Custom Resources (e.g. MongoDB, MongoDBUser) or something else?

    2.) What does the number of "Clusters" refer to? Does it differ for Standalone, ReplicaSet and ShardedCluster?

    3.) How many instances of the "Clusters" in 2.) are supported per MongoDB Operator?

    Is there any way to add this sort of data to our documentation?

    Thanks

    9 votes


    Current scale recommendations are 20-50 deployments (ReplicaSet/ShardedCluster/Standalone). 


    While the Operator can handle hundreds, the limiting factor is API calls to Ops Manager/Cloud Manager. Updating one deployment at a time is fine; the issue arises when making concurrent changes to many deployments simultaneously, where reconciliation will be slow for those later in the queue (for example, during a DR scenario).


    We have work planned for later in 2023 to start removing Ops Manager as a prerequisite for many of the basic operations, and we expect that to greatly improve these limits.

  20. Enhance security by leveraging PodSecurityPolicies

    PodSecurityPolicies are a way to enhance security in a k8s cluster.

    Currently, neither the Kubernetes Operator nor the Helm chart offers a way to integrate PSPs. If an administrator wants to enforce PSPs for the cluster where the MongoDB Kubernetes Operator is deployed, they would need to do this manually, which leads to additional manual steps (e.g. editing the Operator role to allow "use" on the PSP).

    Please introduce a way to secure the MongoDB Management (Ops Manager, Operator) and Workload (MongoDB custom resources) with PSPs in the Kubernetes Operator / Helm ecosystem.
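
    For context, the manual role edit mentioned above typically takes the form of an RBAC rule like the following (the PSP name is a placeholder):

    apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - mongodb-enterprise-psp   # placeholder PSP name
    verbs:
      - use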

    4 votes
