Ops Tools
-
More fine-grained Ops Manager roles for API CRUD operations
In order to generate API keys, users need a pretty powerful role: https://docs.opsmanager.mongodb.com/current/reference/api/org-api-keys/. The same applies to project API keys: https://docs.opsmanager.mongodb.com/current/reference/api/project-api-keys/.
Our understanding is that a user who can create API keys could also promote themselves to super admin, which is something we don't want and which would be a security concern for us. (And admins normally have access to far more than just user management.)
Additionally, it would be beneficial to pass in a desired API key, e.g. for initial provisioning, and to give admins the chance to reset/rotate an…
2 votes -
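For context, creating an organization API key today goes through the org API keys endpoint referenced above. A minimal sketch with Python and HTTP Digest auth is shown below; the base URL, organization ID, credentials, and the ORG_OWNER role are placeholders/assumptions, so check the exact role requirements against your Ops Manager version.

    import requests
    from requests.auth import HTTPDigestAuth

    # Placeholders -- replace with your Ops Manager base URL, org ID and
    # the public/private key pair of an existing programmatic API key.
    BASE_URL = "https://opsmanager.example.com/api/public/v1.0"
    ORG_ID = "5f1a2b3c4d5e6f7a8b9c0d1e"

    resp = requests.post(
        f"{BASE_URL}/orgs/{ORG_ID}/apiKeys",
        auth=HTTPDigestAuth("publicKey", "privateKey"),
        json={
            "desc": "provisioning key",
            # Creating keys currently requires a broad org-level role,
            # which is exactly the pain point of this request.
            "roles": ["ORG_OWNER"],
        },
    )
    resp.raise_for_status()
    print(resp.json())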
Incorporate support case management into Ops Manager, i.e. open a case, upload traces to the case, etc.
Incorporate support case management into Ops Manager, i.e. open a case, upload required traces to the case, and maybe even automatically create a support case for some critical problems. At a minimum, I'd like to be able to upload required traces directly from the server(s) using wget/curl and attach the uploaded files to the existing support case(s).
2 votes -
Add MongoSparkHelper native support for pyspark
I went through the MongoDB Spark Connector Python documentation, but what I could not find there was the "MongoSpark Helper" section that is available in the Scala and Java documentation.
I was wondering if there is a way to use it in Python code after adding the MongoDB Spark Connector packages. Sample code to demonstrate what is available and what I am looking for (this code is in Scala; I am looking for similar behavior in Python):
Code in use:
    sparkSession.read.format("mongo").option("uri", "mongodb://dummymongo:27017")

Code looking for:

    MongoSpark.builder().connector(connectorwithcustomclientfactory)
      .sparkSession(sparkSession).readConfig(dummyReadConfig).build().toDF()

2 votes -
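For reference, the PySpark side of the connector currently only exposes the DataFrame reader/writer path; there is no Python counterpart of the MongoSpark helper/builder. A minimal sketch of what is available today, assuming a 2.x/3.x mongo-spark-connector package on the classpath and a placeholder URI:

    from pyspark.sql import SparkSession

    # Placeholder URI, database and collection.
    spark = (
        SparkSession.builder
        .appName("mongo-read-example")
        .config("spark.mongodb.input.uri", "mongodb://dummymongo:27017/test.coll")
        .getOrCreate()
    )

    # Equivalent of sparkSession.read.format("mongo") in the Scala snippet above;
    # custom MongoClientFactory / ReadConfig wiring has no PySpark equivalent.
    df = spark.read.format("mongo").load()
    df.printSchema()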
Allow configuring Ops Manager to ignore proxy for internal requests
A very common Enterprise HTTP Proxy configuration is to deny requests from local networks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
In hybrid mode, the backup daemon attempts to download the binaries from Ops Manager itself through the proxy. This leads to errors, as the proxy blocks the local traffic.
4 votes -
Add Ops Manager check to prevent making backups of itself (Backing databases - AppDB, Oplog, Blockstore)
Otherwise, an Out Of Memory condition may result and disable Ops Manager.
5 votes -
Add Global MongoDB Agent Upgrade ability
Add the ability to upgrade all MongoDB Agents across all Projects at the same time instead of clicking on the banner for each Project.
6 votes
We are going to expose this via mongocli first, as it is probably the fastest solution.
-
Ops Mgr "Insufficient oplog size" is confusing and prevents backups
When using the Ops Manager UI (I've not checked the API) to declare a MongoDB cluster to be backed up, Ops Manager tries to be a good citizen and checks whether the cluster's oplogs are large enough, based on their usage over the last 24 hours, to hold at least 3 hours' worth of data. If the check fails, the user is prevented from enabling backup and is shown the warning:
"Insufficient oplog size: The oplog window must be at least 3 hours over the last 24 hours for all members of replica set…
2 votes -
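For context, the oplog window this check evaluates can be computed from the first and last entries of local.oplog.rs on each member. A minimal PyMongo sketch (the connection URI is a placeholder):

    from pymongo import MongoClient

    # Placeholder URI -- point this at a replica set member.
    client = MongoClient("mongodb://localhost:27017")
    oplog = client.local["oplog.rs"]

    # Oldest and newest oplog entries, in natural (insertion) order.
    first = oplog.find().sort("$natural", 1).limit(1).next()
    last = oplog.find().sort("$natural", -1).limit(1).next()

    # "ts" is a BSON Timestamp; .time is seconds since the epoch.
    window_hours = (last["ts"].time - first["ts"].time) / 3600.0
    print(f"oplog window: {window_hours:.1f} hours")  # the check wants >= 3 hours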
View all clusters grouped by projects (like in Ops Manager)
There doesn't seem to be an equivalent of the Ops Manager All Clusters page in Cloud Manager. It is a good way to quickly glance over all cluster states, stats and versions.
Can it be added?
2 votes -
mongocli feature request: add an option to specify the path to the CA file in the mongocli configuration.
Currently, when using mongocli to connect to Ops Manager via HTTPS, one needs to add the public CA certificate to the local system's trusted certificate store. Please add a parameter in the configuration file where one can specify the path to the local CA file.
1 vote -
Cloud Manager should offer Feature Compatibility Version setting for new clusters
When creating new clusters in Cloud Manager, there isn't an option to create them with a specific FCV. We just had a need to build a 4.2 cluster with FCV 4.0, and had to do it manually while shutting down automation.
2 votes -
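For reference, the manual step this request refers to is pinning the FCV after the cluster is built, e.g. running setFeatureCompatibilityVersion against the deployment. A minimal PyMongo sketch (the URI is a placeholder):

    from pymongo import MongoClient

    # Placeholder URI -- connect to the primary (or a mongos) of the 4.2 cluster.
    client = MongoClient("mongodb://localhost:27017")

    # Pin the feature compatibility version to 4.0 on a 4.2 deployment.
    client.admin.command("setFeatureCompatibilityVersion", "4.0")

    # Verify the current FCV.
    result = client.admin.command("getParameter", 1, featureCompatibilityVersion=1)
    print(result["featureCompatibilityVersion"])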
Allow configuring `maxTimeMS` for commands executed from Ops Manager's Data Explorer
What is the problem that needs to be solved? Allow configuring maxTimeMS for MongoDB commands which are executed from Ops Manager's Data Explorer.
Why is it a problem? (the pain) A) Ops Manager's Data Explorer cannot work with views if the view takes more than 15000 ms to load. B) Data Explorer cannot work with find operations if the find operation takes more than 15000 ms to complete.
5 votes -
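For context, maxTimeMS is the per-operation, server-side time limit that drivers already expose; a sketch of what the request would like to control in Data Explorer, with a placeholder URI and namespace:

    from pymongo import MongoClient
    from pymongo.errors import ExecutionTimeout

    # Placeholder URI, database and collection.
    client = MongoClient("mongodb://localhost:27017")
    coll = client["test"]["slow_view"]

    try:
        # Allow this find to run for up to 60 seconds instead of the
        # fixed 15-second limit the Data Explorer currently applies.
        docs = list(coll.find({}).max_time_ms(60000))
        print(len(docs), "documents")
    except ExecutionTimeout:
        print("query exceeded maxTimeMS")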
Add support for collections with default locale
At present if a collection has a default collation configured, sharding such a namespace via Ops Manager results in a failure with the following symptom:
…
    <myCluster_mongos_131> [13:21:17.050] Plan execution failed on step ShardCollections as part of move ShardCollections :
    <myCluster_mongos_131> [13:21:17.050] Failed to apply action. Result = <nil> :
    <myCluster_mongos_131> [13:21:17.050] Error calling shardCollection on sh.myColl with key = [[a 1]] :
    <myCluster_mongos_131> [13:21:14.994] Error executing WithClientFor() for cp=mubuntu:27017 (local=false) connectMode=AutoConnect :
    <myCluster_mongos_131> [13:21:14.993] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{shardCollection sh.myColl} {key [{a 1}]} {unique false}]) : result={} identityUsed=mms-automation@admin[[MONGODB-CR/SCRAM-SHA-1]][24] : (BadValue) Collection has default collation: collation: { locale: "fr", caseLevel:

4 votes -
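For reference, the server rejects shardCollection on a collection that has a non-simple default collation unless the command explicitly requests the simple collation, which is what the automation currently omits. A manual workaround sketch with PyMongo, assuming sharding is already enabled on the database and using the namespace from the error above:

    from pymongo import MongoClient
    from bson.son import SON

    # Placeholder URI -- connect to a mongos of the sharded cluster.
    client = MongoClient("mongodb://localhost:27017")

    # shardCollection on a collection with a default collation must explicitly
    # pass the simple collation; the shard key index is then built with it.
    client.admin.command(SON([
        ("shardCollection", "sh.myColl"),
        ("key", SON([("a", 1)])),
        ("unique", False),
        ("collation", {"locale": "simple"}),
    ]))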
Job scheduling from Ops Manager
Please provide job scheduling from Ops Manager: it should be able to run database and non-database jobs, show job history, and offer options to purge job history.
2 votes -
Support Arbiters with MongoDB Kubernetes Operator
Support arbiters with the MongoDB Kubernetes Operator so that replica sets can be deployed in a PSA (Primary-Secondary-Arbiter) configuration.
12 votes -
Single Project Programmatic API Keys should not require Org User Admin role for IP Whitelists update
When setting up a Programmatic API Key in MongoDB Ops Manager for a given Project, it seems Project Owners are unable, after creating the Key, to update its IP Whitelist, as they require the Organization User Admin role to perform that action (screenshot attached).
I guess this makes sense if the same Programmatic API Key is shared between multiple Projects inside the same Organization, but not really if this is applied only to one single Project (i.e. Project Owners should be able to amend the IP Whitelisting of their own API Keys).
I wonder if this could be enhanced in further…
4 votes -
Atlas Open Service Broker - Roles
Is it possible to add support for user roles to the bind API for the Atlas Open Service Broker?
Customers currently using the OSB are limited in that they have to separately manage the creation and assignment of roles.
2 votes -
Provide a feature to monitor the Alert payload being sent out from Ops Manager to the configured Webhook endpoint, etc.
What is the problem that needs to be solved?
Provide a feature to monitor the Alert payloads being sent out from Ops Manager to the configured Webhook endpoint and other Alert Services endpoints.
Why is it a problem? (the pain)
Currently, Ops Manager does not log the successful/failed payloads of the Alerts being sent out to the configured Webhook endpoint. This makes it difficult to diagnose problems with the configured Alert Services.
2 votes -
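As a stopgap for diagnosing this today, one option is to point the webhook integration at a small HTTP receiver that logs whatever Ops Manager posts. A minimal sketch using only the Python standard library (host, port and path are arbitrary):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WebhookLogger(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read and log the raw alert payload that was posted.
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                print(json.dumps(json.loads(body), indent=2))
            except ValueError:
                print(body)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # Point the Ops Manager webhook integration at http://<host>:8000/
        HTTPServer(("", 8000), WebhookLogger).serve_forever()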
We have certain instances with more than one database, each for its own functionality, and we want to keep their backups separate
We have certain instances where we have more than one database, each for its own respective functionality, and we want to keep their backups separate from each other. Is a separate or individual backup/restore process available through Ops Manager?
2 votes -
When changing the PagerDuty service key at the cluster level, this key should automatically be used instead of requiring changes to each alert
When we migrate from one PagerDuty or Slack account to another and change the PD/Slack service key at the cluster level, shouldn't this key get used automatically instead of us going to each alert and recreating the PD/Slack entries? It defeats the purpose of setting the PD/Slack key at the cluster level. And if we have multiple clusters in prod, we pretty much have to modify alerts for all of them.
The ticket number for reference is Case #00675142
1 vote -
MongoDB University - Provide explicit shell address for databases
In different units, the text says:
"We are connected to the class Atlas cluster from Compass."
Could you please be explicit about what that address is? If one comes back after a couple of days' break and has forgotten how to connect with Compass or the Mongo Shell, having a link to that information makes it a lot faster to get back on track.
1 vote