Atlas
62 results found
Please bring Atlas to digital ocean PaaS as a first class offering!
Hey there,
I know it's hard for big companies to get along, but I'd love to have a seamless experience for using MongoDB alongside DO's new PaaS offering. Can you all strike a deal please?
2 votes
"Native" Azure integration (Costing, Resource Groups)
I would love to do away with CosmosDB's MongoDB API on Azure, simply because:
a) the CosmosDB emulator is troublesome to install (for local development),
b) MongoDB feature support is limited, and
c) it is difficult to get bulk inserts/upserts to work well in CosmosDB's MongoDB API (due to rate limiting and bugs with IsUpsert=true).
Currently we have 18 CosmosDB databases. Unfortunately, the main barrier to entry for my organization in adopting Atlas is its Azure integration - specifically:
a) Atlas resources are not visible/manageable via Azure Resource Groups - this makes our existing Azure DevOps practices/infrastructure as code obsolete for this (versus all our other…
2 votes
Domain Mapping for Federated Authentication
Improvements to Domain Mapping for Federated Authentication.
1. Update the documentation (https://www.mongodb.com/docs/atlas/security/manage-domain-mapping/) to clearly state if the TXT/HTML record should persist or if they can be deleted after a domain has been verified.
2. If the record has to persist, consider changing the name of the key. TXT records are public, so security researchers can map all companies using Atlas by looking for TXT records containing "mongodb-site-verification".
1 vote
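The mapping concern can be illustrated with a short sketch: given the TXT records returned for a domain, anyone only has to look for the well-known verification key to tell an Atlas customer apart. The record values below are hypothetical examples, not real data.

```python
# Sketch: why a well-known TXT key makes Atlas customers enumerable.
# The records below are hypothetical examples of what a DNS TXT query
# for a verified domain might return.
VERIFICATION_KEY = "mongodb-site-verification"

def uses_atlas_federation(txt_records):
    """Return True if any TXT record carries the Atlas verification key."""
    return any(r.startswith(VERIFICATION_KEY + "=") for r in txt_records)

records = [
    "v=spf1 include:_spf.example.com ~all",      # ordinary SPF record
    "mongodb-site-verification=abc123example",   # hypothetical value
]
print(uses_atlas_federation(records))  # → True
```

A researcher scanning popular domains with this predicate needs nothing more than public DNS, which is exactly why a persistent, recognizable key name is a fingerprint.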
logs - pull from s3 over private link
When reviewing the push-to-S3 feature, our security teams raised concerns about a third party having the ability to write into our application team's S3 bucket (push model). The existing pull model (pulling logs via API) transmits logs over the internet and additionally requires HA log agents to be deployed at our end. As a middle ground, if Atlas could expose logs over an S3 API consumable via private link, it would address the security concerns while reducing the complexity involved in pulling logs over the Atlas control-plane API.
1 vote
Make IP linking easy for Atlas customers deployed on Google Cloud
I know it may not seem like MongoDB's responsibility, but figuring out how to get IP addresses for my Google App Engine services so that I can put them in MongoDB (which for some reason wants IP addresses instead of URLs) is difficult. If MongoDB wants to sell to Google Cloud customers, it should make setup easy, not only billing.
1 vote
Prevent exposure of Azure Vault or KMS
Today, MongoDB's communication with the BYOK key goes over the internet, so it is necessary to allow public IPs:
https://learn.microsoft.com/en-us/azure/key-vault/general/private-link-service?tabs=portal
https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/key-vault-security-baseline
1 vote
DataDog integration: add metrics for any current resource limits
We're using the DataDog integration to bring MongoDB metrics into DataDog. We want to re-create a DataDog monitor equivalent to the Atlas alert "Connections of configured limit has gone above <>". However, we cannot do that relying solely on metrics, since the only connection-related metric we found is "mongodb.atlas.connections.current".
It looks like the only way to solve this issue is to create a DataDog monitor based on the event received from Atlas.
1 vote
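Without a limit metric, the configured limit has to be hard-coded on the monitoring side next to the current-connections metric. A minimal sketch of that workaround, assuming the cluster's connection limit is known out of band (the 500 here is a made-up number, not a real tier limit):

```python
# Sketch: computing "% of configured connection limit" client-side,
# because only the current-connections metric is exported to DataDog.
HARDCODED_LIMIT = 500  # must be maintained by hand per cluster tier

def connection_utilization(current: int, limit: int = HARDCODED_LIMIT) -> float:
    """Percentage of the configured connection limit currently in use."""
    return 100.0 * current / limit

# e.g. the latest value read from mongodb.atlas.connections.current
print(connection_utilization(450))  # → 90.0
```

Exporting the limit itself as a metric would remove this hand-maintained constant, which silently goes stale whenever a cluster is resized.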
Exponential Backoff
Exponential Backoff:
Data comes from AWS EventBridge, and connection issues happened.
Exponential backoff would have helped ensure these connection issues were recognized, so the customer would not have had to intervene manually.
There was no alarm once the trigger was suspended - automatic alerting would have been helpful.
1 vote
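For reference, the retry behaviour being requested is usually exponential backoff, often with jitter; a minimal sketch (the base, factor, and cap values are illustrative, not anything Atlas documents):

```python
import random

def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=6, jitter=False):
    """Yield the wait time before each retry: base * factor**n, capped at cap.

    With jitter=True, each delay is drawn uniformly from [0, computed delay]
    ("full jitter"), which avoids retry storms when many clients fail at once.
    """
    for n in range(attempts):
        delay = min(cap, base * factor ** n)
        yield random.uniform(0, delay) if jitter else delay

print(list(backoff_delays()))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Pairing this with an alert once the retry budget is exhausted (i.e. when the trigger would be suspended) covers both halves of the request above.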
Integration with Percona Monitoring & Management
Please release Integration to Percona Monitoring & Management. Thank you.
1 vote
webhook
Have the ability to specify a template for the webhook being sent. Currently this is static information, and it would be better if it were possible to customize it. This would also make integrations with unsupported third parties easier, since one wouldn't have to create a man-in-the-middle service that translates the Atlas webhook into some other format.
1 vote
Improve OpsGenie alert mis-categorization
We noticed that alerts sent to Opsgenie via the integration use the "https://api.opsgenie.com/v2/alerts" URL, which causes Opsgenie to mis-categorize the alerts for all customers, according to Opsgenie themselves.
Issue re-production steps:
* Create a team in Opsgenie or use an existing team (use a team that is NOT used for the Default API integration)
* Create a MongoDB integration in that team and take note of the API key
* Create an Opsgenie Integration in MongoDB and use that API key
* Generate an alert or Test the integration
* Verify in Opsgenie that the test alert did…
1 vote
AppFlow Connector (AWS)
Customer "Midland Credit Management, Inc." is using Marketo with AWS AppFlow:
Amazon AppFlow is a fully managed integration service that lets you securely transfer data between Software-as-a-Service (SaaS) applications and AWS services. Use Amazon AppFlow to automate your data transfers in just a few minutes; no coding is required.
The customer would like to use MongoDB Atlas as a destination in AppFlow.
1 vote
Link two different clusters to the same Vercel app with the Vercel Integration
At the moment, one can only link one cluster to a Vercel app (which creates a MONGODBURI variable in Vercel). It would be nice to be able to link another cluster (e.g. a dev cluster) to the same Vercel app by customising the name of the MONGODBURI variable created (e.g. MONGODBURIDEV).
1 vote
Have EventBridge send an error message to Atlas to stop flow of Trigger data at the source
Currently, once a Mongo Trigger makes a successful hand-off to EventBridge, it doesn't have insight into EventBridge issues landing the data in its destination. Currently, you have to set up separate code to process failed EventBridge events using the DeadLetter queue. You also need to set up a process to integrate this data from the DeadLetter queue back into the EventBridge destination. If the requested functionality existed, you would only need the Trigger restart process to handle any failure in the data stream from Mongo to EventBridge destination. This will also enable a single place to monitor the data stream.
1 vote
AWS Graviton2
Hello,
in 2020 AWS announced the Graviton2 processor for EC2 servers. Do you support that architecture? If not, do you have any plans for it?
1 vote
Need network whitelisting of API key for CI and Terraform
Hi great Mongo people.
The API key under organization settings operates under a whitelisting model. There is currently no way (that I can see) to open the key to 0.0.0.0/0.
But in use cases where you make calls to Atlas to manage infrastructure through Terraform (like I do) and use a CI SaaS tool like GitLab (like I do) that is built on a cloud (like GCP), an enormous amount of whitelisting is required. Also, about every third run I have to come in and add another whitelisted IP so that my Terraform can run.
Could you…
1 vote
Import IP ranges from cloud-provider list
Automatic rules based on cloud-provider list (region/zones selected by user).
Source for AWS IP ranges: https://docs.aws.amazon.com/pt_br/general/latest/gr/aws-ip-ranges.html
1 vote
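AWS publishes its ranges as a JSON document with a `prefixes` array of `{ip_prefix, region, service}` entries (downloadable from https://ip-ranges.amazonaws.com/ip-ranges.json). A sketch of the filtering such an import would perform, run here against an inline sample mirroring that structure rather than a live download:

```python
import json

# Inline sample with the same shape as AWS's published ip-ranges.json;
# the entries are examples, not a current snapshot of the real list.
SAMPLE = json.loads("""
{
  "prefixes": [
    {"ip_prefix": "3.5.140.0/22",   "region": "ap-northeast-2", "service": "AMAZON"},
    {"ip_prefix": "52.95.110.0/24", "region": "eu-west-1",      "service": "EC2"},
    {"ip_prefix": "52.94.76.0/22",  "region": "us-west-2",      "service": "AMAZON"}
  ]
}
""")

def ranges_for(doc, region, service="AMAZON"):
    """CIDR blocks for one region/service pair, as a user would select them."""
    return [p["ip_prefix"] for p in doc["prefixes"]
            if p["region"] == region and p["service"] == service]

print(ranges_for(SAMPLE, "us-west-2"))  # → ['52.94.76.0/22']
```

An Atlas-side import would fetch the live document on a schedule and refresh the access-list rules whenever the selected region/service prefixes change.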
Discord Integration
Hi, I would like to know if there is a possibility to deploy Discord (webhook) integration like Slack.
1 vote
Zabbix integration
It would be interesting if Atlas could also be monitored via SNMP, so that it can be monitored by Zabbix.
Many companies have their own CommandCenter, where they centralize monitoring in a single tool.
1 vote