JOSE
My feedback
7 results found
1 vote
JOSE shared this idea
2 votes
JOSE supported this idea
2 votes
JOSE supported this idea
4 votes
There is currently no supported mechanism for backing up Ops Manager in a way that guarantees the data. As Ops Manager is itself a backup tool, it is challenging to maintain the integrity of the data in DR scenarios.
For this reason we recommend multi-site high availability for OM and the AppDB. This is already possible when running OM on hardware or in VMs, but it is not currently supported in Kubernetes (unless a single Kubernetes cluster spans the sites).
Later this year (2023) we hope to support OM deployments across multiple Kubernetes clusters, as we already do (in beta) for Replica Sets (full release in April 2023, with Sharded Cluster support in May/June 2023). Doing so will reduce the criticality of an OM/AppDB backup solution within Kubernetes.
JOSE supported this idea
10 votes
While deletion of a deployment is possible via Kubernetes, deleting a MongoDB resource doesn’t remove it from the Ops Manager UI. You must remove the resource from Ops Manager manually. To learn more, see Remove a Process from Monitoring.
Deleting a MongoDB resource for which you enabled backup doesn’t delete the resource’s snapshots. You must delete snapshots in Ops Manager.
Work is planned to remove Ops Manager as a prerequisite (though its use will still be optional and supported), and as part of that we hope to address this deletion aspect.
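For illustration, a sketch of the current cleanup flow, assuming the Enterprise Operator's MongoDB resource (short name mdb); the resource name and namespace below are placeholders:

```
# Deleting the MongoDB custom resource removes the Kubernetes objects
# (StatefulSet, pods, services), but not the corresponding deployment in Ops Manager.
kubectl delete mdb my-replica-set -n mongodb

# The deployment still appears in the Ops Manager UI: remove it there manually
# (see "Remove a Process from Monitoring" in the Ops Manager documentation),
# and delete any backup snapshots from within Ops Manager as well.
```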
JOSE supported this idea
41 votes
JOSE commented: Even if you have an HA environment, a full restore may sometimes be needed, even if this means some downtime of the Ops Manager service.
JOSE supported this idea
4 votes
Supporting Pod Disruption Budget natively is something we do hope to do at some point.
But for now it is still possible by creating the PodDisruptionBudget resource yourself and targeting the deployment's pods with a label selector, as per https://kubernetes.io/docs/tasks/run-application/configure-pdb/.
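For example, a minimal PodDisruptionBudget for the pods of a replica set named my-replica-set might look like the sketch below (the name, namespace, and label are assumptions; check the labels the operator actually applies with kubectl get pods --show-labels):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-replica-set-pdb
  namespace: mongodb            # placeholder namespace
spec:
  # Allow at most one MongoDB pod to be voluntarily evicted at a time,
  # so the replica set keeps its voting majority during node drains.
  maxUnavailable: 1
  selector:
    matchLabels:
      app: my-replica-set-svc   # hypothetical label; match your pods' actual labels
```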
JOSE supported this idea
JOSE commented: 100% agree. This should be a task executed automatically by the operator for every new cluster/replica set, or at least there should be an option to configure whether it is created automatically.
PDBs help with the day-2 operation of a StatefulSet in Kubernetes and should be part of the standard setup.
We are trying to mimic what you can do in Atlas:
https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers
A standard replica set contains a primary node for reads and writes and two secondary nodes that are read only. Analytics nodes provide an additional read-only node that is dedicated to analytical reads.
We were able to add a new node to the replica set, tagged, with no votes and lower priority, and potentially even with a different quality of persistent volumes, so we would like to have the connection string customized to include the readPreferenceTags.
We filed a support case, but this is not currently supported in the Operator/MongoDBUser.
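To illustrate what we are after, the desired connection string would look something like this (hostnames, database name, and the nodeType:ANALYTICS tag are placeholders; readPreferenceTags only applies with a non-primary read preference):

```
mongodb://host1:27017,host2:27017,host3:27017,analytics-host:27017/mydb?replicaSet=my-replica-set&readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS
```

An application using this string would route its reads to secondaries whose tags match nodeType:ANALYTICS, mirroring the Atlas analytics-node behaviour.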