Atlas
-
Support Google IdP for OIDC Workforce Federation
Atlas supports federated login with external Identity Providers via OIDC (https://www.mongodb.com/docs/atlas/workforce-oidc/) for authenticating human users in tools like mongosh or MongoDB Compass.
Unfortunately, OIDC login doesn't work with the GCP IdP: OAuth2 clients in Google's IdP always have a client secret (even clients considered "public"). There is no way to specify the client secret in the Atlas UI in the Workforce Federation configuration, which leads to an "invalid_request (client_secret is missing.)" error returned from the IdP, as it always expects a client secret to be present.
The support of an optional client secret in…
15 votes -
Manage authentication tokens in account overview
When using the Atlas CLI, you need to authenticate your account so that you can access the organisation/cluster. Unfortunately, there is no way to manage a list of previous authentications in your account settings.
This is important in case you are working on a machine that you have no control over and don't have a chance to start the logout process from the Atlas CLI on the machine you logged in on.
A central UI that would allow you to revoke previously granted access would be very helpful.
1 vote -
Allow an API key with project owner rights to update project API keys
We would like to use the Terraform provider resource mongodbatlas_access_list_api_key to maintain the access list of our existing API keys.
We don't have an API key with organization owner rights. We have only an API key with project owner rights.
When I make the changes via the web UI, project owner rights are enough. I don't understand why the Terraform provider needs organization owner rights.
In my understanding, it should be possible to apply the mongodbatlas_access_list_api_key resource with project owner rights as well (a sketch of the intended usage follows this item).
4 votes -
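For reference, a minimal sketch of how this resource is typically declared (values are placeholders; the argument names follow my reading of the provider docs and should be treated as approximate):

    resource "mongodbatlas_access_list_api_key" "this" {
      org_id     = var.org_id      # organization that owns the programmatic API key
      api_key_id = var.api_key_id  # the project API key whose access list is maintained
      ip_address = "203.0.113.10"  # placeholder entry for the key's access list
    }

Applying a change like this currently requires organization owner credentials, even though the same change made through the web UI only needs project owner rights.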
Use Semantic Versioning
Hi,
My problem is that the terraform provider doesn't use semantic versioning.
This has caused me quite a few problems.
Firstly - it's difficult when scrolling through your version releases to understand what's breaking and what's not (I lost an hour today having to check all the releases for updates, and then applying every couple of versions from an outdated provider to make sure there were no breaking changes).
Secondly, it means I have to pin a specific version of the provider rather than leaving it to auto-update to the latest minor version with "~> 1.0" (see the sketch after this item).
Lastly, it makes using…
1 vote -
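To illustrate the pinning trade-off described above, a sketch of the provider requirements block (version numbers are placeholders):

    terraform {
      required_providers {
        mongodbatlas = {
          source = "mongodb/mongodbatlas"
          # With semantic versioning, a pessimistic constraint could safely track
          # non-breaking releases:
          # version = "~> 1.0"
          # Without it, an exact pin is the only safe choice:
          version = "1.12.0"
        }
      }
    }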
Two problems in the Atlas metrics web UI
There are two annoying bugs on the metrics page in the Atlas web UI, related to the browser's refresh button:
1) Rearrange the metric graphs (panels) into anything other than the default order (for example, move them up or down). When you hit refresh, their order is reset.
2) Open two different pages with graphs of any replica sets (or shards). They can be the same or different. Then, on one of them, change the zoom pulldown menu for a different setting (like 1 hour to 8 hours). Wait for the graphs to update to…
1 vote -
Configuring provider with shared credentials file for secrets manager
Currently the provider allows configuring AWS Secrets Manager as the source of the API key; however, it looks like only static AWS credentials can be used, which requires assuming a role first and exporting environment variables. It would be much cleaner if you could support shared profiles, much like the AWS provider does: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#profile-1.
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
Specifically, https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/internal/provider/credentials.go#L49 uses static credentials; it would be great to add a profile option as well (see the sketch after this item).
8 votes -
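A sketch of the requested configuration; the profile argument is hypothetical, and the existing Secrets Manager argument names below are approximate rather than authoritative:

    provider "mongodbatlas" {
      # Requested (hypothetical): resolve AWS credentials from a shared
      # credentials/config profile, like the hashicorp/aws provider does.
      # profile = "atlas-secrets"

      # Today (approximate argument names): the Secrets Manager lookup of the
      # Atlas API key needs static credentials from an assumed role.
      assume_role {
        role_arn = var.role_arn
      }
      secret_name           = "atlas-api-key"   # placeholder secret name
      region                = "us-east-1"
      aws_access_key_id     = var.aws_access_key_id
      aws_secret_access_key = var.aws_secret_access_key
      aws_session_token     = var.aws_session_token
    }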
Include “Tag” Field in the Cost Explorer API Response
We would like to request an enhancement to the API functionality to better support our reporting needs. Specifically, we are asking for a “tag” field to be included in the response from the following endpoint:
https://cloud.mongodb.com/api/atlas/v2/orgs/%s/billing/costExplorer/usageBackground
Our team generates detailed monthly reports on departmental usage costs using the API. Each month, we provide information such as:
• Organization ID
• Project ID
• Cluster Name
To categorize costs, we use tags to assign each cluster to specific internal department categories. These tags are already included in the CSV file available at billing > invoices > download > csv, allowing…
4 votes -
Create One Rolling Index in Terraform
This is a request to add the Create One Rolling Index endpoint to the Terraform provider.
References:
* https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Rolling-Index/operation/createRollingIndex
Benefits:
Many teams interact with Atlas via automation using Terraform. This endpoint has been highlighted as one of the important ones to have in Terraform (a hypothetical sketch of what the resource could look like follows this item).
2 votes -
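Purely as an illustration, loosely mirroring the createRollingIndex API parameters; the resource name and arguments below are hypothetical and do not exist in the provider today:

    resource "mongodbatlas_rolling_index" "orders_by_customer" {
      project_id   = var.project_id
      cluster_name = var.cluster_name
      db_name      = "sales"    # placeholder database
      collection   = "orders"   # placeholder collection

      # Index key specification, built with a rolling (node-by-node) build.
      keys = [
        { customer_id = 1 },
        { created_at  = -1 },
      ]

      options = {
        name = "customer_id_1_created_at_-1"
      }
    }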
Update the regex used to split a database user import id to match the database name constraint
Hello,
While doing a terraform import of MongoDB database users, I'm facing an issue with the MongoDB Terraform provider.
The database name contains an underscore, so my import ID is 5ceClusterId-username-my_database.
I get the following error when I launch my terraform import (see also the import sketch after this item):
Error: error splitting database User info from ID
│ import format error: to import a Database User, use the format {project_id}-{username}-{auth_database_name}
Indeed, the MongoDB Terraform provider uses a regex to split this ID and doesn't allow characters in the database name other than $ and a-z.
=> https://github.com/mongodb/terraform-provider-mongodbatlas/blob/ebb67f86165e0a364e486e769678377db507f005/internal/service/databaseuser/resource_database_user.go#L349
Is it possible to update the regex to allow others…
6 votes -
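For reference, the failing import expressed as a Terraform 1.5+ import block, reusing the placeholder ID from the post:

    import {
      to = mongodbatlas_database_user.app_user
      # Format per the provider error: {project_id}-{username}-{auth_database_name}.
      # The underscore in "my_database" is what the current regex rejects.
      id = "5ceClusterId-username-my_database"
    }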
Database Prices
Several clients from Brazil frequently complain that MongoDB is expensive. One of the reasons is the need to create separate clusters (even M10-sized clusters) for development and/or staging for each production cluster.
One alternative they are using to reduce MongoDB costs is to share DEV and/or staging clusters among multiple teams. However, this introduces another challenge: how to split these costs internally between different departments.
To address this demand, which has been raised by several clients—including Ambev, Anhanguera, DASA, and Digibee—my suggestion is for Atlas to provide a way for customers to see the cost per database within a cluster.…
1 vote -
Atlas GovCloud Archival
Please add the Online Archive feature for GovCloud.
1 vote -
Add single link between workload identity provider and organization via Terraform
We currently have multiple Terraform workspaces for different environments that each set up their own MongoDB Atlas workload identity provider (AKS cluster). In order to link these providers to the organization, you have to manage a mongodbatlas_federated_settings_org_config resource and pass a list of ALL identity provider IDs. The Terraform workspace only knows (or rather: should only know) about its own identity provider, so it would be nice to have a single Terraform resource that manages a single workload identity provider <-> organization link (a hypothetical sketch follows this item).
1 vote -
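A hypothetical sketch of the requested per-provider link resource; neither the resource name nor its arguments exist in the provider today:

    resource "mongodbatlas_federated_settings_org_identity_provider_link" "aks" {
      federation_settings_id = var.federation_settings_id
      org_id                 = var.org_id
      # Only the identity provider owned by this workspace is referenced here.
      identity_provider_id   = var.workload_identity_provider_id
    }

Each workspace could then manage just its own link instead of every workspace having to pass the full list of identity provider IDs.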
Support auth token from service accounts
Support auth tokens from service accounts in the provider configuration, in addition to API keys: https://www.mongodb.com/docs/atlas/configure-api-access/#make-an-api-request.
1 vote -
API - Version 2
We saw that the API version is now v2 for some resources (such as clusters): https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/
We tried to change some App Services functions from v1 to v2, but ended up with some errors (or needed to add more parameters than in version 1).
Using version 1, we only send what we need to change ("instanceSize") plus the providerName. To use version 2, we need to send all of these parameters (if it's a replica set; if it is sharded we also need to send the numShards):
"replicationSpecs":[{"regionConfigs":[{"electableSpecs":{"instanceSize":"M10","nodeCount":"3"},"priority":"7","providerName":"GCP","regionName":"CENTRAL_US"}]}]}What I need to change is only the…
38 votes -
Expose PSC connection details in Atlas
Allow users to view the details of PSC connections (such as GCP project ID, VPC, and subnets) once the PSC is in the available state. This would help with troubleshooting if there is any issue on the cloud provider's side.
1 vote -
Allow users to download queries in Query Insights, and add a search bar to find certain queries
Allowing users to download queries in Query Insights would be nice, as would a search bar in Query Insights to find certain queries. Clicking through each operation to look at each query takes too much time. I want to be able to search for queries matching specific criteria so I don't have to click on each one to check for a collection scan, index scan, duration, etc.
1 vote -
Export Aggregation Results as Metrics to Prometheus
Add support for exporting MongoDB aggregation results as Prometheus metrics. This would allow users to track custom queries and dynamic data, enabling more granular and meaningful monitoring and alerting in Prometheus and Grafana.
8 votes -
Improvement to Atlas API - return the cluster name in the json
Hi,
We use the Atlas Administration API extensively in some admin apps we have created on top of Atlas. For example, we use Get All Processes in a Project to get all the node names in a project and then iterate over them to perform various operations. For some of the operations, we also need the cluster name.
Until recently, we could count on the userAlias part of the returned JSON to get the cluster name. It is an anti-pattern to rely on string manipulation, but it was working. Now, when some clusters are migrated from serverless and/or flex and stay with the shared…
1 vote -
When migrating shared/serverless/flex to dedicated - name the nodes correctly
We recently had a few clusters migrated from serverless and flex to dedicated clusters. In some of them, the nodes of the new dedicated cluster kept the automatic, hashed names of the previous type.
I think it would be a super great idea to create the new nodes in the same way they are created for new dedicated clusters, with the cluster name as their prefix and not ac-XXXXX like we got.
Thanks,
Oren
1 vote -
General CPU class for M400 and M600
Currently, when selecting M400 or M600, only the "Low CPU" class is available. Our only option to increase CPU count is going back to M300 with the "General" class to get 96 vCPUs. We would benefit a lot from "General" options with 128 and 160 vCPUs.
8 votes