
Atlas

Share your idea. To help us prioritize, please include the following information:

  1. A brief description of what you are looking to do
  2. How you think this will help
  3. Why this matters to you


59 results found

  1. Add Datadog integration to Atlas Serverless

    Datadog (and other monitoring tools) integrates well with Atlas, and it would be awesome to bring the same integration to Atlas Serverless. Without it, we can't monitor Atlas Serverless with our standard toolset. In short, please bring third-party monitoring integration to Atlas Serverless! (A sketch of how the existing dedicated-tier integration is configured today follows below.)

    9 votes

    1 comment  ·  Integrations
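
    For context, the sketch below shows how the existing Datadog integration can be enabled on a dedicated-tier project through the Atlas Administration API; the ask is for the same capability on Serverless. This is a minimal sketch, assuming the v1.0 third-party integrations endpoint, with placeholder keys and project ID.

```python
# Sketch: configure the Datadog integration on a dedicated-tier project via the
# Atlas Administration API (assumes the v1.0 third-party integrations endpoint).
import requests
from requests.auth import HTTPDigestAuth

ATLAS_PUBLIC_KEY = "<public-key>"    # placeholder Atlas programmatic API key
ATLAS_PRIVATE_KEY = "<private-key>"
GROUP_ID = "<project-id>"            # placeholder Atlas project (group) ID

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/integrations/DATADOG",
    auth=HTTPDigestAuth(ATLAS_PUBLIC_KEY, ATLAS_PRIVATE_KEY),
    json={"type": "DATADOG", "apiKey": "<datadog-api-key>", "region": "US"},
)
resp.raise_for_status()
print(resp.json())  # integration settings returned by the API
```
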
  2. Prometheus metrics formatting to match Grafana

    Building on "Export metrics to Prometheus" (marked as completed): https://feedback.mongodb.com/forums/924145-atlas/suggestions/40463872-export-metrics-to-prometheus

    I have configured the Prometheus metrics integration to make metrics available to Grafana Cloud, which has a MongoDB integration built in. I was disappointed to see that most of the metric names don't match the ones expected by the MongoDB integration in Grafana.

    The MongoDB documentation doesn't define a standard metric name set, so I appreciate that this is nominally correct behaviour, but it would be good to see some consistency between the metrics the MongoDB Atlas integration exports and the metrics the Grafana integration expects.

    1 vote

    0 comments  ·  Integrations
  3. AppFlow Connector (AWS)

    Customer "Midland Credit Management, Inc." is using Marketo to AWS Appflow:
    Amazon AppFlow is a fully managed integration service that lets you securely transfer data between Software-as-a-Service (SaaS) applications and AWS services. Use Amazon AppFlow to automate your data transfers in just a few minutes. No coding is required.

    Customer would like to use MongoDB Atlas as a destination in AppFlow.

    1 vote

    0 comments  ·  Integrations
  4. Link two different clusters to the same Vercel app with the Vercel Integration

    At the moment, one can only link one cluster to a Vercel app (which creates a MONGODB_URI variable in Vercel). It would be nice to be able to link another cluster (e.g. a dev cluster) to the same Vercel app by customising the name of the variable created (e.g. MONGODB_URI_DEV).

    1 vote

  5. Allow Atlas integration with Jira to create tickets directly based on Atlas alerts

    Atlas alert setup should be able to create a ticket directly on the respective team's Jira board. This would remove the burden of checking email alerts for all the configured alerts.

    Instead, we could set up the critical alerts in such a way that they directly create a Jira ticket (a webhook-based sketch follows below).

    Thanks,
    Rohan Kumar

    1 vote

    0 comments  ·  Integrations
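
    Until a native Jira integration exists, one workaround is to point an Atlas alert at a webhook and have a small service file the Jira issue. A minimal sketch, assuming Flask, the Jira Cloud REST API, and illustrative alert payload field names; the project key, URLs, and credentials are placeholders.

```python
# Sketch of a workaround: receive Atlas alert webhooks and open a Jira issue.
# Field names read from the alert payload are illustrative, not authoritative.
import os
import requests
from flask import Flask, request

app = Flask(__name__)

JIRA_BASE = os.environ["JIRA_BASE_URL"]          # e.g. https://yourcompany.atlassian.net
JIRA_AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
JIRA_PROJECT_KEY = os.environ.get("JIRA_PROJECT_KEY", "OPS")  # placeholder project key

@app.route("/atlas-alert", methods=["POST"])
def atlas_alert():
    alert = request.get_json(force=True)
    summary = f"Atlas alert: {alert.get('eventTypeName', 'UNKNOWN_EVENT')}"
    description = alert.get("humanReadable") or str(alert)

    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        auth=JIRA_AUTH,
        json={
            "fields": {
                "project": {"key": JIRA_PROJECT_KEY},
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Task"},
            }
        },
    )
    resp.raise_for_status()
    return {"created": resp.json().get("key")}, 201

if __name__ == "__main__":
    app.run(port=8000)
```
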
  6. New Relic integration

    We have an Atlas account where all of our databases are, and we use New Relic for monitoring and alerting.
    It would be great if I could show the status of our MongoDB in New Relic dashboards, because all our other dashboards are already there.

    2 votes

    1 comment  ·  Integrations
  7. Allow more than one DataDog API Key

    When I enable the DataDog integration in Atlas, I can add just one DataDog API key. It would be great if I could add two or more DataDog API keys.

    2 votes

  8. Allow customers to specify the number of service attachments for PSC

    To connect applications to MongoDB Atlas clusters/project via Google Private Service Connect, the documentation says we need to reserve 50 IP addresses in our subnet:

    https://www.mongodb.com/docs/atlas/security-private-endpoint/

    Each private endpoint in Google Cloud reserves an IP address within your Google Cloud VPC and forwards traffic from the endpoints' IP addresses to the service attachments. You must create an equal number of private endpoints to the number of service attachments. The number of service attachments defaults to 50.

    We would like the ability to not have to reserve 50 IP addresses per project as we have limited internal subnets. We would…

    6 votes

    1 comment  ·  Integrations
  9. Have EventBridge send an error message to Atlas to stop flow of Trigger data at the source

    Currently, once a MongoDB Trigger makes a successful hand-off to EventBridge, it has no insight into EventBridge issues landing the data in its destination. Today you have to set up separate code to process failed EventBridge events using the dead-letter queue, plus a process to feed that data from the dead-letter queue back into the EventBridge destination (a sketch of that reprocessing follows below). If the requested functionality existed, the Trigger restart process alone would handle any failure in the data stream from MongoDB to the EventBridge destination, and it would give a single place to monitor the data stream.

    1 vote

    0 comments  ·  Integrations
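
    For reference, this is roughly the separate reprocessing code the request is trying to eliminate: a sketch that drains an SQS dead-letter queue attached to the EventBridge target and replays the events onto the bus. The queue URL, bus name, and event field handling are assumptions.

```python
# Sketch: drain an SQS dead-letter queue attached to an EventBridge target and
# replay the failed events onto the bus. Queue URL and bus name are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
events = boto3.client("events")

DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/trigger-dlq"  # placeholder
EVENT_BUS = "atlas-trigger-bus"                                           # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10, WaitTimeSeconds=2)
    messages = resp.get("Messages", [])
    if not messages:
        break

    for msg in messages:
        original = json.loads(msg["Body"])  # the event that failed delivery
        events.put_events(Entries=[{
            "EventBusName": EVENT_BUS,
            "Source": original.get("source", "replay"),
            "DetailType": original.get("detail-type", "replayed-event"),
            "Detail": json.dumps(original.get("detail", original)),
        }])
        sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
```
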
  10. AWS Graviton2

    Hello,

    In 2020 AWS announced the Graviton2 processor for EC2 servers. Do you support that architecture? If you don't, do you have any plans for it?

    1 vote

    0 comments  ·  Integrations
  11. Add specific database API Metrics for MongoDB Atlas.

    A while back we ran into an issue where we ran out of connections in MongoDB Atlas. It took quite a while to determine which database was being overrun with bad connections. It would be very nice if the API, which we use to pull metrics into another monitoring platform, allowed us to get certain metrics at the database level instead of the overall system level (a sketch of the process-level metrics available today follows below). The metrics I am suggesting are:
    Active Connections
    Response Time
    Failed Connections
    Number of requests
    Operation Execution Time

    These would help us more quickly discover the source of the problem and trace it back…

    3 votes

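
    For contrast, a minimal sketch of what the Administration API exposes today at the process level (the request is for the same granularity per database), assuming the v1.0 measurements endpoint; keys, project ID, and hostname are placeholders.

```python
# Sketch: pull process-level connection metrics from the Atlas Administration API.
# Today granularity stops at the process; the request is for per-database metrics.
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"    # placeholders
GROUP_ID = "<project-id>"
PROCESS = "cluster0-shard-00-00.abcde.mongodb.net:27017"     # placeholder host:port

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/processes/{PROCESS}/measurements",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    params={"granularity": "PT1M", "period": "PT1H", "m": "CONNECTIONS"},
)
resp.raise_for_status()
for series in resp.json().get("measurements", []):
    print(series["name"], series["dataPoints"][-1])  # latest data point per metric
```
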
  12. Add per-user connection limit

    To prevent one user or application from exhausting the available connections, it would be extremely helpful to have the ability to set per-user connection limits (a monitoring stopgap is sketched below).

    This connection limit would only prevent the specified user from connecting once their limit is reached and still allow other users to connect.

    1 vote

    0 comments  ·  Integrations
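
    There is no per-user limit today; as a monitoring stopgap, here is a minimal sketch that counts open connections per authenticated user with the $currentOp aggregation stage, assuming the connecting user has sufficient privileges to run it and a placeholder connection string.

```python
# Sketch of a monitoring stopgap: count current connections per authenticated user
# using the $currentOp aggregation stage (requires sufficient privileges).
from collections import Counter
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")  # placeholder URI

per_user = Counter()
ops = client.admin.aggregate([
    {"$currentOp": {"allUsers": True, "idleConnections": True, "idleSessions": True}},
])
for op in ops:
    for user in op.get("effectiveUsers", []):
        per_user[user.get("user", "unknown")] += 1

for user, count in per_user.most_common():
    print(f"{user}: {count} connections")
```
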
  13. Add Tailscale peering

    Instead of opening network access to 0.0.0.0/0, it would be great to be able to peer with Tailscale so that anyone who is authorized on our organization's Tailscale VPN can connect effortlessly.

    3 votes

  14. Enable Private Link for Azure Data Factory

    Private Link for Azure Data Factory is not currently supported. This appears to be the most secure way for ADF to connect to Atlas / MongoDB, and support for this feature would be ideal for our use-case.

    14 votes

  15. Description in SSO group mapping

    Add a description field in the Atlas UI in the group mapping page when SSO is enabled. Today only an object ID is available.

    2 votes

  16. Need network whitelisting of API key for CI and Terraform

    Hi great Mongo people.

    The API key under organizational settings operates under a whitelisting model. There is currently no way (that I can see) to open the key to 0.0.0.0/0.

    But in use cases where you call Atlas to manage infrastructure through Terraform (like I do) and use a CI SaaS tool like GitLab (like I do) that is built on a cloud (like GCP), an enormous amount of whitelisting is required. Also, about every third run I have to come in and add another whitelisted IP so that my Terraform can run (a sketch of automating this from the CI job follows below).

    Could you…

    1 vote

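
    As a stopgap, some teams automate the whitelisting from the CI job itself: look up the runner's current egress IP and add it to the programmatic API key's access list before Terraform runs. A minimal sketch, assuming the org API key access-list endpoint shown below, a bootstrap key that is already allowed to call the API from the runner, and placeholder IDs.

```python
# Sketch: add the CI runner's current egress IP to an Atlas programmatic API key's
# access list before running Terraform. Endpoint path and IDs are assumptions/placeholders.
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"  # key already allowed to make this call
ORG_ID = "<org-id>"
TARGET_API_KEY_ID = "<api-key-id>"                         # the key Terraform will use

runner_ip = requests.get("https://api.ipify.org").text.strip()  # runner's egress IP

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/orgs/{ORG_ID}/apiKeys/{TARGET_API_KEY_ID}/accessList",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    json=[{"ipAddress": runner_ip}],
)
resp.raise_for_status()
print(f"Added {runner_ip} to the API key access list")
```
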
  17. Support for customer-managed keys (CMK) on the volume-level instead of Encryption-at-Rest

    For some customers, managing a set of low-latency workloads is crucial, so volume-level encryption using their own KMS keys is preferred over the encryption-at-rest feature of the WiredTiger storage engine: https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
    Support is required for clusters, their backups, and Atlas Data Lake.

    3 votes

  18. Send Atlas logs to S3

    I would like to automatically send my cluster logs to my S3 bucket. I could then use Atlas Data Lake to query them and Charts to create visualizations on them, or inspect them with other tools. (A pull-based workaround using the current API is sketched below.)

    63 votes

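
    Until a push-to-S3 option exists, a pull-based workaround is possible: download a host's compressed mongod log through the Administration API and copy it to S3. A minimal sketch, assuming the v1.0 logs endpoint; hostname, bucket, and keys are placeholders.

```python
# Sketch of a pull-based workaround: fetch a host's compressed mongod log from the
# Atlas Administration API and upload it to S3. Names and IDs are placeholders.
import time
import boto3
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"
GROUP_ID = "<project-id>"
HOSTNAME = "cluster0-shard-00-00.abcde.mongodb.net"   # placeholder host
BUCKET = "my-atlas-logs"                              # placeholder bucket

end = int(time.time())
start = end - 3600  # last hour of logs

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters/{HOSTNAME}/logs/mongodb.gz",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    headers={"Accept": "application/gzip"},
    params={"startDate": start, "endDate": end},
)
resp.raise_for_status()

boto3.client("s3").put_object(
    Bucket=BUCKET,
    Key=f"atlas-logs/{HOSTNAME}/{end}-mongodb.log.gz",
    Body=resp.content,
)
```
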
  19. Custom metric option on Datadog

    Currently, there is no way to use custom metrics with the Atlas Datadog integration.

    We would like to use this functionality in Atlas: https://docs.datadoghq.com/integrations/guide/mongo-custom-query-collection

    12 votes

  20. Import IP ranges from cloud-provider list

    Automatic access-list rules based on the cloud provider's published IP range list (regions/zones selected by the user); a sketch of doing this via the API today follows below.

    Source for AWS IP ranges: https://docs.aws.amazon.com/pt_br/general/latest/gr/aws-ip-ranges.html

    1 vote

    0 comments  ·  Integrations
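
    A minimal sketch of the manual version of this today: pull AWS's published ranges, filter to one region and service, and add them to the project IP access list through the Administration API. The region, service, and IDs are placeholders, and in practice the filter must be aggressive because the access list holds a limited number of entries.

```python
# Sketch: import AWS-published IP ranges for one region/service into the Atlas
# project IP access list. Region, service, and IDs are placeholders; filter hard,
# because the access list holds a limited number of entries.
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"
GROUP_ID = "<project-id>"
REGION, SERVICE = "sa-east-1", "EC2"   # placeholders chosen for illustration

ranges = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json").json()
cidrs = sorted({
    p["ip_prefix"]
    for p in ranges["prefixes"]
    if p["region"] == REGION and p["service"] == SERVICE
})

entries = [{"cidrBlock": cidr, "comment": f"AWS {SERVICE} {REGION}"} for cidr in cidrs]

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/accessList",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    json=entries,
)
resp.raise_for_status()
print(f"Submitted {len(entries)} CIDR blocks")
```
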