Ops Tools

47 results found

  1. EmptyDir as data-volume and log-volume

    spec:
      members: 1
      type: ReplicaSet
      version: "4.4.5"
      statefulSet:
        spec:
          template:
            spec:
              volumes:
                - name: data-volume
                  emptyDir: {}
                - name: log-volume
                  emptyDir: {}

    This type of override would be very helpful for automated testing pipelines: the pipeline spins up a single MongoDB instance, populates data, and proceeds with application testing. For that we don't need persistent volumes; we need a clean folder on each invocation.

    1 vote

  2. MongoDB Kubernetes Operator - follow recommended Kubernetes object labeling

    Hi, I would like to thank you first for this operator, good job 👍. It works well.

    Did you consider using this label convention for objects (StatefulSets, Services, Secrets): https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/?

    Currently, in my cluster I'm trying to follow these recommended labels while forwarding Kubernetes logs using EFK, but I cannot store logs in Elasticsearch because the kubernetes.labels.app field is mapped as an object rather than a concrete value. Right now there is a hard-coded service selector https://github.com/mongodb/mongodb-kubernetes-operator/blob/1aa7093d2cc977bc3b1f5a5fa7e1e902d37768c8/controllers/replica_set_controller.go#L455 which expects pods to be labeled with app=<serviceName>.

    Example labels following the convention for a StatefulSet:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
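        # The idea text is truncated at this point. For reference, the recommended
        # labels from the linked Kubernetes page look roughly like this
        # (values below are illustrative, not from the original post):
        app.kubernetes.io/name: mongodb
        app.kubernetes.io/instance: my-replica-set
        app.kubernetes.io/version: "4.4.5"
        app.kubernetes.io/component: database
        app.kubernetes.io/managed-by: mongodb-kubernetes-operator
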
    2 votes

  3. Support Service Binding Specification for Kubernetes

    The Service Binding Specification for Kubernetes standardizes exposing backing-service secrets to applications. The spec is available here: https://github.com/servicebinding/spec

    This blog post would be helpful: https://muthukadan.net/kubernetes/binding/support-service-binding-specification-for-kubernetes/
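
    For context, a minimal sketch of a ServiceBinding resource as defined by the linked spec; all names below are hypothetical:

    apiVersion: servicebinding.io/v1beta1
    kind: ServiceBinding
    metadata:
      name: my-app-mongodb            # hypothetical binding name
    spec:
      service:                        # the backing service exposing the secret
        apiVersion: mongodbcommunity.mongodb.com/v1
        kind: MongoDBCommunity
        name: my-replica-set          # hypothetical MongoDB resource
      workload:                       # the workload the binding is projected into
        apiVersion: apps/v1
        kind: Deployment
        name: my-app                  # hypothetical application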

    1 vote

  4. Kubernetes Operator - Prefix Annotations and Labels

    Labels and annotations added to Kubernetes resources by the MongoDB Enterprise Operator should include a prefix indicating that they were added by MongoDB. The lack of a prefix suggests the fields and values are private to the user.

    For example, the MongoDB statefulset and service selector should use a label prefixed with a MongoDB domain.

    https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
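
    For illustration, the difference in practice (the prefixed form below is hypothetical, not the operator's actual label):

    metadata:
      labels:
        app: my-replica-set-svc                  # current: looks user-owned
        mdb.mongodb.com/app: my-replica-set-svc  # hypothetical prefixed form: clearly operator-owned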

    4 votes


    We're gradually starting to prefix most annotations and labels with mdb. It's a work in progress.

  5. Enable S3 Snapshot Storage via Kubernetes Operator with IAM role

    Configuring S3 snapshot storage with IAM roles is currently only possible via the Ops Manager UI or API.

    It would be great to be able to do this configuration via the MongoDB Kubernetes Operator.
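
    A sketch of how this might be expressed on the MongoDBOpsManager resource, reusing the s3Stores shape shown in idea 10 below; the irsaEnabled flag is hypothetical:

    spec:
      backup:
        enabled: true
        s3Stores:
          - name: s3store1
            s3BucketEndpoint: s3.us-east-1.amazonaws.com
            s3BucketName: backup-bucket
            irsaEnabled: true   # hypothetical: use the pod's IAM role instead of s3SecretRef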

    1 vote

  6. Provide support for updating the version manifest in Ops Manager Local Mode

    With Ops Manager Local Mode on Kubernetes, the version manifest must be updated manually via the UI or API.

    It would be best to support updating the version manifest via a command to the Operator or the Ops Manager pods.

    1 vote

  7. Add ability to redirect systemLog to stdout (just remove systemLog.destination and path)

    To have MongoDB logs redirected to stdout, and thereby picked up by GKE Cloud Logging, we should NOT configure systemLog.destination or systemLog.path.

    In the 0.6.0 release, systemLog.destination and systemLog.path are hard-coded and cannot be unset.

    See automationconfigbuilder.go at line 208:

    ...
    process.SetSystemLog(SystemLog{
        Destination: "file",
        Path:        path.Join(DefaultAgentLogPath, "/mongodb.log"),
    })
    ...

    1 vote

  8. Allow disabling Blockstore for assignment through the Ops Manager CRD

    By default, when enabling backups and configuring a blockstore for an Ops Manager custom object, the specified blockstore is set to "Assignment enabled" in the UI.

    It would be helpful to expose the enable/disable toggle for the blockstore through the CRD, since disabling it through the UI results in the setting being reverted every time the operator reconciles. This is useful when more than a single store is configured and you want to disable a blockstore to make it unavailable for new backup jobs.
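
    A sketch of what such a field could look like; the assignmentEnabled field below is hypothetical:

    spec:
      backup:
        enabled: true
        blockStores:
          - name: blockstore1
            mongodbResourceRef:
              name: blockstore-metadata-db
            assignmentEnabled: false   # hypothetical: exclude this store from new backup jobs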

    3 votes

  9. Support kubernetes taints and tolerations

    I believe Kubernetes taints and tolerations are not supported by the operator, yet I find them a much-needed capability.
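
    For reference, expressed through the statefulSet override pattern from idea 1, support could look like this (taint key and value are illustrative):

    spec:
      statefulSet:
        spec:
          template:
            spec:
              tolerations:
                - key: "dedicated"      # illustrative taint key
                  operator: "Equal"
                  value: "mongodb"
                  effect: "NoSchedule"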

    1 vote

  10. Assignment labels in YAML for Snapshot storage

    Currently, if you want to assign a snapshot store to a certain project, you must open the Admin view and set the "Assignment Labels" property under Backup > Snapshot Storage to the name of the corresponding project.

    AFAIK, it is not possible to set this configuration in the Ops Manager resource's YAML. E.g.:

    s3Stores:
    - mongodbResourceRef:
        name: s3-metadata-db
      mongodbUserRef:
        name: s3-meta-store-user
      name: s3store1
      pathStyleAccessEnabled: false
      s3BucketEndpoint: endpoint1.corp
      s3BucketName: backup1-bucket
      s3SecretRef:
        name: s3-credentials
    - mongodbResourceRef:
        name: s3-metadata-db
      mongodbUserRef:
        name: s3-meta-store-user
      name: s3store2
      pathStyleAccessEnabled: false
      s3BucketEndpoint: backup2.corp
      s3BucketName: backup2-bucket
      s3SecretRef:
        name: second-credentials
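
    A sketch of the missing knob; the assignmentLabels field below is hypothetical:

    s3Stores:
    - mongodbResourceRef:
        name: s3-metadata-db
      name: s3store1
      s3BucketEndpoint: endpoint1.corp
      s3BucketName: backup1-bucket
      s3SecretRef:
        name: s3-credentials
      assignmentLabels:       # hypothetical: assign this store to the named project
      - "my-project"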
    
    1 vote

  11. Add ability to configure Pod Disruption Budget for STS

    During maintenance work, EKS admins may need to drain nodes. This should not cause an outage for a MongoDB cluster/replica set running on those nodes. We can create a PDB for the STS manually, but it would be nice to have the option to do it as part of the MongoDB Kubernetes Operator.
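
    For reference, a manually created PDB of the kind described, assuming the pods carry the app=<serviceName> label mentioned in idea 2 (names are illustrative):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: my-replica-set-pdb
    spec:
      minAvailable: 2               # keep a majority of a 3-member replica set
      selector:
        matchLabels:
          app: my-replica-set-svc   # illustrative; must match the StatefulSet's pod labels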

    4 votes

  12. MongoDB Operator Deployment Env Variables Push Down

    This is a feature request to have custom environment variables, configured in the MongoDB Operator's Deployment manifest, pushed down (propagated) to all resources created by the Operator.

    For example, it may be desirable to add environment variables carrying context. A more specific example: setting a TZ timezone environment variable that is automatically added to all pod containers created by the Operator.
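
    For illustration, today an env entry like the one below affects only the operator's own pod; the request is for it to propagate to every container the Operator creates:

    # operator Deployment, illustrative excerpt
    spec:
      template:
        spec:
          containers:
            - name: mongodb-kubernetes-operator
              env:
                - name: TZ
                  value: "Europe/London"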

    2 votes

  13. Operator should automatically provision an Ops Manager programmatic API key

    The Operator should automatically provision an Ops Manager programmatic API key. The current instructions require human intervention to create an API key.

    1 vote

  14. Ops Manager and Backup infrastructure Disaster Recovery support with K8s Operator

    We have carried out tests with the MongoDB K8s Operator v1.5.5 and Ops Manager 4.2.18 with backup infrastructure (S3 snapshots) in an OpenShift 3.11 environment (MongoDB support case attached).

    In this case, a "Disaster Recovery" simulation was carried out. However, several components created by the Operator had to be restored to reach a state similar to the one before the "disaster".

    Furthermore, it is very likely that the S3 Snapshots will be lost if the process is not completed in a certain manner.

    It would be great to have an official approach to deploy/restore an OM resource using MongoDB K8s…

    4 votes


    There is no currently supported mechanism for backing up Ops Manager in a way that guarantees the data. As Ops Manager is itself a backup tool, it's challenging to maintain the integrity of the data in DR scenarios.


    For this reason we recommend multi-site high availability for OM and the AppDB. This is already possible when running OM on hardware or in VMs, but it is not currently supported in Kubernetes (unless a Kubernetes cluster spans sites).


    Later this year (2023) we hope to support OM deployments across multiple Kubernetes clusters, as we already do (in beta) for replica sets (full release in April 2023, with sharded cluster support in May/June 2023). Doing so will reduce the criticality of an OM/AppDB backup solution within Kubernetes.

  15. Allow using a port other than 8080 (or 8443) when deploying Ops Manager

    The default port is 8080, or 8443 for HTTPS, and cannot be changed.

    1 vote

  16. Allow the Kubernetes Operator to delete a project

    Currently it is not possible to delete a project via a kubectl command.

    As the Kubernetes Operator allows one to create a project (ConfigMap) and deploy a replica set, we would expect it to also allow deleting a project, so that we can fully automate the solution.

    10 votes


    While deletion of a deployment is possible via Kubernetes, deleting a MongoDB resource doesn’t remove it from the Ops Manager UI. You must remove the resource from Ops Manager manually. To learn more, see Remove a Process from Monitoring.


    Deleting a MongoDB resource for which you enabled backup doesn’t delete the resource’s snapshots. You must delete snapshots in Ops Manager.




    Work is planned to remove Ops Manager as a prerequisite (though its use will still be optional and supported), and as part of that we hope to address this deletion aspect.

  17. Allow pinning the specific MongoDB Agent version to be used

    What is the problem that needs to be solved? In the rare situations where an upgrade of Cloud Manager's MongoDB Agent to the latest version leads to a Golang panic (or any other critical issue), there is no way for a Cloud Manager user to roll back the MongoDB Agent version if the environment is running under the Kubernetes Operator. The script which launches the MongoDB Agent uses the latest version from the Cloud Manager project, with no option to change it other than editing the script itself, which is not possible in a Kubernetes pod.

    Why is it a problem? (the pain) If after Cloud Manager's…

    2 votes


    We're currently planning work to avoid pulling mongod and the agent from Ops Manager. This is expected to give the ability to manually control the version of the agent in use.

  18. sharding

    The Community Operator should provide a sharding feature.

    4 votes

  19. cert-manager & external-dns integration

    Since the kube CA integration is deprecated, the operator should have an option to integrate with cert-manager and external-dns for automatic TLS certificates and DNS records.
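
    For context, a minimal cert-manager Certificate of the kind such an integration could request automatically (issuer and DNS names are illustrative):

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: my-replica-set-tls
    spec:
      secretName: my-replica-set-tls   # TLS Secret the operator would then consume
      issuerRef:
        name: my-issuer                # illustrative Issuer/ClusterIssuer
        kind: ClusterIssuer
      dnsNames:
        - "*.my-replica-set-svc.mongodb.svc.cluster.local"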

    1 vote

  20. Create AppDB user with backup role to allow execution of mongodump

    For the purpose of regularly performing backups of the AppDB using mongodump --oplog.
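
    For reference, mongodump --oplog needs a user with the built-in backup role; expressed in the operator's user-spec style it would look roughly like this (whether the Operator can create such a user on the AppDB is exactly what is being requested):

    users:
      - name: appdb-backup-user   # hypothetical user
        db: admin
        roles:
          - name: backup          # built-in role used by mongodump --oplog
            db: admin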

    4 votes
