Atlas
62 results found
-
Add EventBridge integration for Serverless instances
Currently, the EventBridge integration doesn't work for serverless instances.
2 votes -
Send Atlas logs to S3
I would like to automatically send my cluster logs to my S3 bucket. I could then use Atlas Data Lake to query them and Charts to create visualizations on them, or inspect them with other tools.
68 votes -
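Until something like this exists natively, one stopgap is to pull logs through the Atlas Admin API on a schedule and push them to S3 yourself. A minimal sketch, assuming an Atlas API key pair and AWS credentials are configured; the project ID, hostname, and bucket name are placeholders:

```python
import boto3
import requests
from requests.auth import HTTPDigestAuth

# Placeholders -- substitute your own project (group) ID, cluster host,
# Atlas API key pair, and S3 bucket.
GROUP_ID = "<project-id>"
HOSTNAME = "<cluster-host.mongodb.net>"
BUCKET = "my-atlas-logs"
auth = HTTPDigestAuth("<public-key>", "<private-key>")

# Download the compressed mongod log for one host via the Atlas Admin API.
url = (f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
       f"/clusters/{HOSTNAME}/logs/mongodb.gz")
resp = requests.get(url, auth=auth,
                    headers={"Accept": "application/gzip"}, timeout=60)
resp.raise_for_status()

# Push the archive to S3, keyed by host, where Data Lake / Charts can reach it.
boto3.client("s3").put_object(
    Bucket=BUCKET,
    Key=f"atlas-logs/{HOSTNAME}/mongodb.gz",
    Body=resp.content,
)
```

Run from cron or a Lambda, this approximates the request, though a native push would avoid the polling and the extra credentials.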
Allow customers to specify the number of service attachments for PSC
To connect applications to MongoDB Atlas clusters/projects via Google Private Service Connect, the documentation says we need to reserve 50 IP addresses in our subnet:
https://www.mongodb.com/docs/atlas/security-private-endpoint/
Each private endpoint in Google Cloud reserves an IP address within your Google Cloud VPC and forwards traffic from the endpoints' IP addresses to the service attachments. You must create an equal number of private endpoints to the number of service attachments. The number of service attachments defaults to 50.
We would like the ability not to have to reserve 50 IP addresses per project, as we have limited internal subnets. We would…
6 votes -
Prometheus metrics formatting to match Grafana
Building on "Export metrics to Prometheus" (marked as completed) https://feedback.mongodb.com/forums/924145-atlas/suggestions/40463872-export-metrics-to-prometheus
I have configured the Prometheus metrics integration to make metrics available to Grafana Cloud, which has a MongoDB integration built in. I was disappointed to see that most of the metric names don't match the ones expected by the MongoDB integration in Grafana.
The MongoDB documentation doesn't define a standard metric name set, so I appreciate that this is nominally correct behaviour, but it would be good to see some consistency between the MongoDB Atlas integration and the metrics expected by Grafana.
2 votes -
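As a stopgap, the names can be rewritten between Atlas and Grafana. A rough sketch of a renaming proxy; the target URL and the mapping itself are assumptions, since the actual mismatches depend on the Grafana integration version:

```python
import re
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical Atlas Prometheus target; in practice the integration hands
# Prometheus a discovery endpoint with per-node targets.
ATLAS_METRICS_URL = "http://atlas-target.example:9216/metrics"

# Hypothetical mapping -- fill in the names Grafana's MongoDB
# integration actually expects.
RENAMES = {"mongodb_connections_current": "mongodb_connections"}

class RenamingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the Atlas exposition text and rewrite the metric names.
        body = requests.get(ATLAS_METRICS_URL, timeout=10).text
        for old, new in RENAMES.items():
            body = re.sub(rf"\b{re.escape(old)}\b", new, body)
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 9217), RenamingProxy).serve_forever()
```

Prometheus's own metric_relabel_configs can do the same renaming at scrape time without a separate process, but either way the mapping has to be maintained by hand, which is the pain point behind this request.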
Enable Private Link for Azure Data Factory
Private Link for Azure Data Factory is not currently supported. This appears to be the most secure way for ADF to connect to Atlas / MongoDB, and support for this feature would be ideal for our use-case.
14 votes -
Webhook templates
Have the ability to specify a template for the webhook being sent. Currently this is static information, and it would be better if it were possible to customize it. This would also make integrations with unsupported third parties much easier, without having to create a man-in-the-middle service that translates the Atlas webhook into some other format (roughly the shim sketched below).
1 vote -
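For reference, this is approximately the translation shim the request wants to make unnecessary: a tiny service that accepts the fixed Atlas payload and reposts it in the target format. A minimal sketch using Flask; the target URL and the output field names are hypothetical:

```python
import requests
from flask import Flask, request

app = Flask(__name__)

# Hypothetical receiver expecting a format Atlas can't produce directly.
TARGET_URL = "https://thirdparty.example/hooks/alerts"

@app.route("/atlas-webhook", methods=["POST"])
def translate():
    alert = request.get_json(force=True)
    # Re-shape the fixed Atlas payload into whatever the third party
    # expects; the mapping here is an assumption.
    payload = {
        "title": alert.get("eventTypeName", "Atlas alert"),
        "severity": alert.get("status", "UNKNOWN"),
        "details": alert,  # keep the original payload for debugging
    }
    requests.post(TARGET_URL, json=payload, timeout=10)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

A template option in Atlas itself would collapse all of this into configuration.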
Fix OpsGenie alert mis-categorization
We noticed that alerts sent to Opsgenie via the integration use the "https://api.opsgenie.com/v2/alerts" URL, which causes Opsgenie to mis-categorize the alerts for all customers, according to Opsgenie themselves.
Issue reproduction steps:
* Create a team in Opsgenie or use an existing team (use a team that is NOT used for the Default API integration)
* Create a MongoDB integration in that team and take note of the API key
* Create an Opsgenie Integration in MongoDB and use that API key
* Generate an alert or Test the integration
* Verify in Opsgenie that the test alert did…
1 vote -
Allow Atlas integration with Jira to create tickets directly based on Atlas alerts
Atlas alert setup should be able to create a ticket directly on the respective team's Jira board. This would remove the burden of checking email alerts for all the configured alerts.
Instead, we could configure critical alerts so that they directly create a Jira ticket (along the lines of the bridge sketched below).
Thanks,
Rohan Kumar
3 votes -
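Until a native integration exists, the same effect can be approximated by pointing an Atlas webhook alert at a small bridge that calls Jira's REST issue-creation API. A minimal sketch; the Jira site, project key, and credentials are placeholders:

```python
import requests
from flask import Flask, request

app = Flask(__name__)

JIRA_URL = "https://yourcompany.atlassian.net"      # placeholder site
JIRA_AUTH = ("bot@yourcompany.com", "<api-token>")  # placeholder credentials

@app.route("/atlas-alert", methods=["POST"])
def create_ticket():
    alert = request.get_json(force=True)
    issue = {
        "fields": {
            "project": {"key": "OPS"},     # hypothetical project key
            "issuetype": {"name": "Task"},
            "summary": f"Atlas alert: {alert.get('eventTypeName', 'unknown')}",
            "description": str(alert),
        }
    }
    # Create the ticket via Jira's REST API.
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=issue, auth=JIRA_AUTH, timeout=10)
    resp.raise_for_status()
    return "", 204

if __name__ == "__main__":
    app.run(port=8081)
```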
New Relic integration
We have an Atlas account where all of our databases are, and we use New Relic for monitoring and alerting.
It would be great if I could show the status of our MongoDB in New Relic dashboards, because all our other dashboards are already there.
2 votes -
Allow more than one DataDog API Key
When I enable the DataDog integration in Atlas, I can add just one DataDog API key. It would be great if I could add two or more.
2 votes -
AppFlow Connector (AWS)
Customer "Midland Credit Management, Inc." is using AWS AppFlow with Marketo:
Amazon AppFlow is a fully managed integration service that lets you securely transfer data between Software-as-a-Service (SaaS) applications and AWS services. Use Amazon AppFlow to automate your data transfers in just a few minutes. No coding is required.
Customer would like to use MongoDB Atlas as a destination in AppFlow.
1 vote -
Add database-specific API metrics for MongoDB Atlas
A while back we ran into an issue where we ran out of connections in MongoDB Atlas. It took quite a while to determine which database was being overrun with bad connections. It would be very nice if the API, which we use to pull metrics into another monitoring platform, allowed us to get certain metrics at the database level instead of the overall system level. The metrics I am suggesting are:
Active Connections
Response Time
Failed Connections
Number of requests
Operation Execution Time
These would help us more quickly discover the source of the problem and trace it back…
4 votes -
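For context, the closest thing today is the process-level measurements endpoint, where a metric like CONNECTIONS comes back as a single number for the whole mongod with no per-database breakdown. A minimal sketch of that call; the project ID, process name, and key pair are placeholders:

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<project-id>"                     # placeholder
PROCESS = "<cluster-host.mongodb.net>:27017"  # placeholder host:port
auth = HTTPDigestAuth("<public-key>", "<private-key>")

# Process-level measurements: CONNECTIONS is one number for the whole
# mongod, which is exactly why per-database attribution is hard.
url = (f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
       f"/processes/{PROCESS}/measurements")
params = {"granularity": "PT1M", "period": "PT1H", "m": "CONNECTIONS"}
resp = requests.get(url, auth=auth, params=params, timeout=30)
resp.raise_for_status()
for m in resp.json()["measurements"]:
    print(m["name"], m["dataPoints"][-1])
```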
Add per-user connection limit
To prevent one user/application from exhausting available connections, it would be extremely helpful to have the ability to set per-user connection limits.
This connection limit would only prevent the specified user from connecting once their limit is reached and still allow other users to connect.
3 votes -
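There's no enforcement mechanism today, but the offending user can at least be identified by grouping current operations by authenticated user. A rough pymongo sketch, assuming a user with privileges to run $currentOp across all users:

```python
from pymongo import MongoClient

client = MongoClient("<connection-string>")  # placeholder

# Count connections per authenticated user with the $currentOp stage
# (requires privileges to view other users' operations).
pipeline = [
    {"$currentOp": {"allUsers": True, "idleConnections": True}},
    {"$unwind": "$effectiveUsers"},
    {"$group": {"_id": "$effectiveUsers.user", "conns": {"$sum": 1}}},
    {"$sort": {"conns": -1}},
]
for row in client.admin.aggregate(pipeline):
    print(row["_id"], row["conns"])
```

This only observes; an actual per-user limit would still have to be enforced server-side, which is the feature being requested.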
Link two different clusters to the same Vercel app with the Vercel Integration
At the moment, one can only link one cluster to a Vercel app (which creates a MONGODB_URI variable in Vercel). It would be nice to be able to link another cluster (e.g. a dev cluster) to the same Vercel app by customising the name of the MONGODB_URI variable created (e.g. MONGODB_URI_DEV).
1 vote -
Have EventBridge send an error message to Atlas to stop flow of Trigger data at the source
Currently, once a Mongo trigger makes a successful hand-off to EventBridge, it has no insight into EventBridge issues landing the data in its destination. You have to set up separate code to process failed EventBridge events using the dead-letter queue, and another process to feed that data from the dead-letter queue back into the EventBridge destination. If the requested functionality existed, the trigger restart process alone would handle any failure in the data stream from Mongo to the EventBridge destination, and there would be a single place to monitor it.
1 vote -
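For reference, this is the separate code path the request wants to eliminate: a job that drains the dead-letter queue and replays events onto the bus. A minimal boto3 sketch; the queue URL, bus name, and event source/detail-type tags are placeholders:

```python
import boto3

sqs = boto3.client("sqs")
events = boto3.client("events")

DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/trigger-dlq"  # placeholder
BUS = "atlas-trigger-bus"  # placeholder event bus name

# Drain the dead-letter queue and replay each failed event onto the bus.
while True:
    msgs = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=5).get("Messages", [])
    if not msgs:
        break
    for msg in msgs:
        events.put_events(Entries=[{
            "Source": "atlas.trigger.replay",      # hypothetical source tag
            "DetailType": "ReplayedTriggerEvent",  # hypothetical detail type
            "Detail": msg["Body"],
            "EventBusName": BUS,
        }])
        # Only delete the DLQ message once it has been replayed.
        sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
```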
Add Tailscale peering
In lieu of opening network access to 0.0.0.0/0, it'd be great to peer with Tailscale so that anyone authorized on our organization's Tailscale VPN can connect effortlessly.
3 votes -
Custom metric option on Datadog
Currently, there is no option to use the custom metric option when using the Atlas Datadog integration.
We would like to use this functionality in Atlas: https://docs.datadoghq.com/integrations/guide/mongo-custom-query-collection
12 votes -
Need network whitelisting of API key for CI and Terraform
Hi great Mongo people.
The API key under organizational settings operates under a whitelisting model. There is currently no way (I can see) to open the key to 0.0.0.0/0.
But in use cases where you make calls to Atlas to manage infrastructure through Terraform (like I do) and use a CI SaaS tool like GitLab (like I do) that is built on a cloud (like GCP), there is an insane amount of whitelisting required. Also, about every third run I have to come in and add another whitelisted IP so that my Terraform can run.
Could you…
1 vote -
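Until unrestricted keys are allowed, the access-list churn can at least be automated: before Terraform runs, add the runner's current egress IP to the API key's access list via the Admin API. A hedged sketch; the org ID, key ID, and the ipify lookup are placeholders/assumptions:

```python
import requests
from requests.auth import HTTPDigestAuth

ORG_ID = "<org-id>"          # placeholder organization ID
API_KEY_ID = "<api-key-id>"  # placeholder programmatic API key ID
auth = HTTPDigestAuth("<public-key>", "<private-key>")

# Discover the CI runner's current egress IP (any echo service would do).
ip = requests.get("https://api.ipify.org", timeout=10).text.strip()

# Add it to the API key's access list so the Terraform run that follows
# is allowed through.
url = (f"https://cloud.mongodb.com/api/atlas/v1.0/orgs/{ORG_ID}"
       f"/apiKeys/{API_KEY_ID}/accessList")
resp = requests.post(url, auth=auth, json=[{"ipAddress": ip}], timeout=30)
resp.raise_for_status()
```

Run as a pre-apply CI step, this removes the manual whitelisting, at the cost of an ever-growing access list unless stale entries are pruned.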
AWS Graviton2
Hello,
In 2020, AWS announced the Graviton2 processor for EC2 servers. Do you support that architecture? If you don't, do you have any plans for this?
1 vote