Atlas
Make IP linking easy for Atlas customers deployed on Google Cloud
I know it may not seem like MongoDB's responsibility, but figuring out how to get IP addresses for my Google App Engine services so that I can add them to MongoDB (which for some reason wants IP addresses instead of URLs) is difficult. If MongoDB wants to sell to Google Cloud customers, it should make setup easy, not just billing.
1 vote -
Support for push-based logging integration with Google Cloud Storage
As of now, MongoDB Atlas supports log export exclusively to AWS S3. In addition to the existing AWS S3 bucket integration, it would be good to have the capability to push logs to GCP Cloud Storage for organisations that depend on GCP for their cloud infrastructure.
3 votes -
SIEM
Add audit log integration with enterprise SIEMs
6 votes -
MongoDB Atlas Network Peering with Oracle OCI (Similar to Azure and AWS)
Please add Atlas network peering capabilities for Oracle Cloud Infrastructure customers, similar to those available for Azure and AWS. We are currently using Azure, but would like to use OCI as well.
1 vote -
Propagation of NodeType via Prometheus metrics
Context: we rely on Prometheus metrics for various systems in the company. Until now our cluster setup was pretty standard: replica sets and sharded clusters with electable nodes only, i.e. clusters with three nodes per replica set or shard, whose numbers correspond to <clustername>-<shardnumber>-<(00|01|02)>. Now we have a fourth analytics node in the clusters, which by convention seems to get the next incremented number, 03.
The thing is we cannot rely on convention and we need to have a way to distinguish the node types from Prometheus metrics (or at least from the Atlas API)…
26 votes -
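The distinction the request above asks for could be sketched client-side, assuming the set of analytics node indices can be obtained from an authoritative source such as the Atlas Admin API. The hostname convention and the `analytics_indices` parameter here are illustrative assumptions, not an Atlas feature:

```python
def label_node_type(hostname: str, analytics_indices: set) -> str:
    """Classify an Atlas node by the index suffix in its hostname.

    Relies on the <clustername>-<shardnumber>-<nn> naming convention;
    the set of analytics indices must come from reliable metadata,
    not from assuming 03 is always the analytics node.
    """
    # Take the "<nn>" suffix before the first dot, e.g. "03" from
    # "mycluster-0-03.abcde.mongodb.net".
    index = int(hostname.rsplit("-", 1)[-1].split(".", 1)[0])
    return "analytics" if index in analytics_indices else "electable"
```

Such a label could then be attached to scrape targets via Prometheus relabeling, but a first-class node-type label exposed by Atlas would remove the dependence on naming conventions entirely.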
TGW support for Atlas
Can you please provide Transit Gateway (TGW) support for Atlas? We are using Confluent Kafka with a TGW and found ourselves in a situation where Kafka has no direct access to Atlas: both our EKS cluster and Kafka are attached to the TGW, but there is no way for Kafka to reach Atlas, because EKS and Atlas are connected over VPC peering. We are not interested in PrivateLink. Our Kafka needs direct access to Atlas because we are running the MongoDB source connector. The only viable solution now is to run the source connector on our EKS cluster, which works but defeats the purpose…
2 votes -
Prevent exposure of Azure Vault or KMS
Today, MongoDB's communication with the BYOK key goes over the public internet, so public IPs must be allowed:
https://learn.microsoft.com/en-us/azure/key-vault/general/private-link-service?tabs=portal
https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/key-vault-security-baseline
1 vote -
Parity of metrics between Atlas UI and Prometheus integration
It would be great if all metrics available in the Atlas UI were also available via the Prometheus integration, especially since the metrics in the Atlas UI become coarse-grained after a while.
For example, in a recent incident it took us a while to do some preliminary investigations, and we wanted to confirm by looking at historical values for the metric "Operation Execution Time".
Unfortunately, the chart in the Atlas UI was already too coarse-grained, and the metric also does not seem to be exposed via the Prometheus integration.
6 votes -
DataDog integration: add metrics for any current resource limits
We're using the DataDog integration to bring MongoDB metrics into DataDog. We want to re-create a DataDog monitor equivalent to the Atlas alert "Connections % of configured limit has gone above <>". However, we cannot do that relying solely on metrics, since the only connection-related metric we found is "mongodb.atlas.connections.current".
It looks like the only way to solve this is to create a DataDog monitor based on the event received from Atlas.
1 vote -
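If Atlas exposed the configured connection limit as a metric, the percent-of-limit calculation behind that alert would be trivial to reproduce in DataDog. A sketch of the arithmetic, with hypothetical per-tier limits (check the Atlas documentation for real values):

```python
# Hypothetical per-tier connection limits; real values vary by cluster tier.
TIER_CONNECTION_LIMITS = {"M10": 1500, "M20": 3000, "M30": 3000}

def connections_pct_of_limit(current: int, tier: str) -> float:
    """Percentage of the configured connection limit currently in use."""
    return 100.0 * current / TIER_CONNECTION_LIMITS[tier]
```

With a companion limit metric from Atlas, the same ratio could be computed directly in a DataDog monitor formula instead of hardcoding limits per tier.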
Exponential Backoff
Data comes in from AWS EventBridge, and connection issues occurred. Exponential backoff would have helped these connection issues be recognized and recovered from automatically, so the customer would not have had to intervene manually.
There was no alarm once the trigger was suspended; automatic alerting would have been helpful.
1 vote -
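For reference, the retry behaviour being requested is usually capped exponential backoff, optionally with jitter. A minimal sketch (the function name and parameters are illustrative, not a Trigger API):

```python
import random

def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=6, jitter=False):
    """Delays in seconds for capped exponential backoff, with optional
    "full jitter": each delay is drawn uniformly from [0, computed delay]."""
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * factor ** attempt)
        if jitter:
            delay = random.uniform(0.0, delay)
        delays.append(delay)
    return delays
```

With base 1 s and factor 2, the delays are 1, 2, 4, 8, … capped at 60 s, so transient EventBridge connection issues would be retried without manual intervention.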
Datadog integration for AP1 region
I am using the Datadog AP1 (Japan) region and learned that Atlas doesn't support this region for the integration.
It would be really helpful if the integration supported the AP1 region.
3 votes -
Private endpoint termination protection
Similar to the Cluster Termination Protection, it would be nice to have Private Endpoint termination protection to prevent accidental deletes of private endpoints, which could very likely result in application connectivity loss and downtime.
5 votes -
Integration with Percona Monitoring & Management
Please release an integration with Percona Monitoring & Management. Thank you.
1 vote -
Map IDP groups to Atlas teams
At the moment, Atlas does not support mapping IDP groups to existing Atlas teams. We would like the integration to support that. For example:
okta group "devops" --- mapped to ---> Atlas team "devops"
Each time the customer adds a user to this IDP group, the user is given the proper permissions in Atlas.
10 votes -
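The desired behaviour can be sketched as a pure sync step: given IDP group memberships and a group-to-team mapping, compute the Atlas team memberships they imply (all names here are illustrative, not an Atlas or Okta API):

```python
def desired_team_members(idp_groups, group_to_team, existing_teams):
    """Merge IDP group members into the Atlas teams they map to,
    returning the desired team -> members mapping."""
    desired = {team: set(members) for team, members in existing_teams.items()}
    for group, team in group_to_team.items():
        desired.setdefault(team, set()).update(idp_groups.get(group, set()))
    return desired
```

An integration would then reconcile the actual team memberships against this desired state on each login or on a schedule.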
Add EventBridge integration for Serverless instances
Currently, the EventBridge integration doesn't work for serverless instances.
2 votes -
AWS ME Central Region Support
I have an ECS cluster in AWS me-central-1 region and I am not able to establish a peering connection with my AWS VPC as me-central-1 region is missing in the options. When will this option be available?
3 votes -
Control KMIP Key Rotation Timing
We need the ability to change the automatic key rotation time for our cluster. Currently the automatic rotation is set at 90 days but we cannot specify the time at which the rotation should occur. We want to avoid automatic key rotation activity during business hours.
2 votes -
Webhook templates
Add the ability to specify a template for the webhook being sent. Currently the payload is static, and it would be better if it were customizable. This would also make integrations with unsupported third parties easier, since one wouldn't have to create a man-in-the-middle service that translates the Atlas webhook into another format.
1 vote -
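A template mechanism like the one requested could be as simple as placeholder substitution over the alert fields. A sketch using Python's `string.Template` (the field names are made up, not Atlas's actual webhook schema):

```python
from string import Template

def render_webhook(template: str, alert: dict) -> str:
    """Fill a user-supplied template with alert fields; unknown
    placeholders are left intact rather than raising an error."""
    return Template(template).safe_substitute(alert)
```

For example, a user-defined template like `"[$severity] $event on $cluster"` would render each alert into whatever shape the third-party receiver expects.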
Improve OpsGenie alert mis-categorization
We noticed that alerts sent to Opsgenie via the integration use the "https://api.opsgenie.com/v2/alerts" URL, which causes Opsgenie to mis-categorize the alerts for all customers, according to Opsgenie themselves.
Issue reproduction steps:
* Create a team in Opsgenie or use an existing team (use a team that is NOT used for the Default API integration)
* Create a MongoDB integration in that team and take note of the API key
* Create an Opsgenie Integration in MongoDB and use that API key
* Generate an alert or Test the integration
* Verify in Opsgenie that the test alert did…
1 vote