Atlas
Expose just the server name (without the rest of the connection string) as a cluster attribute
There are many connection_strings values available, but all of them are full URIs. Since the canonical way to connect is with the login and password in the URI, I always need to parse the value of a connection_strings entry, insert the login info, and then reformat it. I'd like to just have the server name available as an attribute.
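For illustration, a rough sketch of the parse-and-reformat step described above, in Python; the connection string value and the credentials are placeholders:

```python
from urllib.parse import urlparse, quote_plus

# Placeholder for a full-URI connection string as exposed today.
connection_string = "mongodb+srv://cluster0.ab1cd.mongodb.net/?retryWrites=true&w=majority"

# The server name has to be recovered by parsing the URI; the request is to
# expose it directly as its own attribute instead.
host = urlparse(connection_string).hostname

# Splice placeholder credentials back in and reassemble the URI.
username, password = "appUser", "examplePassword"
uri = f"mongodb+srv://{quote_plus(username)}:{quote_plus(password)}@{host}/?retryWrites=true&w=majority"

print(host)  # cluster0.ab1cd.mongodb.net
print(uri)
```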
1 vote
Support Google IdP for OIDC Workforce Federation
Atlas supports federated login with external Identity Providers via OIDC (https://www.mongodb.com/docs/atlas/workforce-oidc/) for authenticating human users in tools like mongosh or MongoDB Compass.
Unfortunately, the OIDC login doesn't work with the GCP IdP: OAuth2 clients in Google IdP always have a client secret (even clients considered "public"). There is no way to specify the client secret in the Atlas UI in the Workforce Federation configuration, and this leads to an "invalid_request (client_secret is missing.)" error returned from the IdP, as it always expects a client secret to be present.
The support of an optional client secret in…
11 votes
As a DBA I need to be able to create/delete a single backup policy via CLI/API
As a DBA I need to be able to create/delete a single backup policy via CLI/API.
For some projects, I would like to be able to create a single hourly policy.
Currently this is impossible. Default policies are created, and via the API I cannot delete the other policies one by one to leave just the hourly one. I also can't delete them all and add a single one. I would like this functionality to be available both in the API and the CLI (a sketch of reading the current schedule via the API follows the list below).
Currently you can only
- delete all policies
- update an existing policy by policyId
I need to be able to delete a…
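For orientation, a rough sketch of reading the current schedule via the Admin API, assuming the documented Cloud Backup schedule endpoint (IDs and keys are placeholders, and the per-policy delete at the end is purely hypothetical; it does not exist today):

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: project (group) ID, cluster name, and a programmatic API key pair.
GROUP_ID, CLUSTER = "<groupId>", "<clusterName>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}

# Read the cluster's current Cloud Backup schedule, including all policy items.
url = f"https://cloud.mongodb.com/api/atlas/v2/groups/{GROUP_ID}/clusters/{CLUSTER}/backup/schedule"
schedule = requests.get(url, headers=HEADERS, auth=AUTH)
schedule.raise_for_status()
for policy in schedule.json().get("policies", []):
    for item in policy.get("policyItems", []):
        print(item["id"], item["frequencyType"], item["retentionUnit"], item["retentionValue"])

# What this idea asks for, expressed as a hypothetical (non-existent) call:
# requests.delete(f"{url}/policyItems/{item['id']}", headers=HEADERS, auth=AUTH)
```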
1 vote
Improve Atlas cost utilization tracking by time and label
Currently, for any Atlas instance, cost granularity is limited to the cluster. It would be a significant improvement to enable cost utilization tracking by database for storage and compute. Specifically, storage and cost records should be granular in time and label, where labels can be assigned to databases. That is, as is common with GCP, for example, costs should be tracked by the second and, in the billing data, aggregated to the hour by label.
This will allow chargeback, showback, and improved allocation methods for modern FinOps practices. As it is, MongoDB Atlas falls well short of any ability to…
1 vote
Control shard balancing window with Terraform
This documentation page talks about how to manage shard balancing - https://www.mongodb.com/docs/manual/tutorial/manage-sharded-cluster-balancer/#schedule-the-balancing-window
but it requires connecting to the database first; I see no way to manage this through the Atlas GUI or Terraform.
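For reference, the db-level workaround from that tutorial can be scripted, e.g. with PyMongo; this is only a sketch (the connection string and window times are placeholders), and the idea here is to expose the same setting through Terraform or the Atlas UI instead:

```python
from pymongo import MongoClient

# Placeholder connection string; the client must connect to the sharded cluster (mongos).
client = MongoClient("mongodb+srv://user:examplePassword@cluster0.ab1cd.mongodb.net")

# Per the linked tutorial, the balancing window is stored in config.settings;
# here the balancer may only run between 23:00 and 06:00 (cluster local time).
client["config"]["settings"].update_one(
    {"_id": "balancer"},
    {"$set": {"activeWindow": {"start": "23:00", "stop": "06:00"}}},
    upsert=True,
)
```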
We have had some instances recently where shard balancing has caused a large resource usage spike on our cluster and affected our services. We would really like to be able to set a shard balancing window using Terraform to prevent this from happening in the middle of the day.
2 votes
Improve Admin API for API keys rotation
We have security mandates that require rotating API keys for an organization every 365 days. It would be ideal if calling getApiKey (https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Programmatic-API-Keys/operation/getApiKey) from the Admin API returned the created date. This way we can rotate keys programmatically and maintain our security posture. It would also be a plus if there were a way to just refresh the API secret rather than generate a new key.
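A sketch of what the requested check could look like: the getApiKey path below follows the linked operation, but the createdAt field is exactly what this idea asks to add (it is not returned today), and the IDs and keys are placeholders:

```python
from datetime import datetime, timedelta, timezone

import requests
from requests.auth import HTTPDigestAuth

ORG_ID, API_KEY_ID = "<orgId>", "<apiKeyId>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v2/orgs/{ORG_ID}/apiKeys/{API_KEY_ID}",
    headers=HEADERS,
    auth=AUTH,
)
resp.raise_for_status()
key = resp.json()

# "createdAt" is the hypothetical field this idea requests; it is not in the
# response today, which is why key age cannot be checked programmatically.
created = datetime.fromisoformat(key["createdAt"].replace("Z", "+00:00"))
if datetime.now(timezone.utc) - created > timedelta(days=365):
    print("Key is older than 365 days - rotate it.")
```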
7 votes
Option to prevent backup restores into a particular cluster
Today it is possible to accidentally restore a backup into a live production database. There is an "I confirm" dialog in the UI, but it is not foolproof, and when using the Terraform provider you could still do yourself harm.
It would be nice to have an option similar to Termination Protection, where I could mark a cluster so that it is impossible to restore a backup into it. If I wanted to restore a backup into an existing cluster, I would have to turn the restore protection off. Or I could restore into a new cluster.
Another way to do it would be to have…
3 votes
The oplog configuration is a bit confusing
The oplog configuration is a bit confusing. I think it is not clear that if we set the minimum oplog window, we do not also need to set the maximum oplog size. I believe this configuration should be more explicit.
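A minimal sketch of the two settings involved, assuming the cluster advanced-configuration (processArgs) endpoint of the Admin API; IDs, keys, and values are placeholders, and only the window is set here:

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID, CLUSTER = "<groupId>", "<clusterName>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}

# The oplog window and oplog size live side by side in the cluster's advanced
# configuration; the poster's point is that setting the window alone should be
# clearly sufficient, without also setting a maximum size.
resp = requests.patch(
    f"https://cloud.mongodb.com/api/atlas/v2/groups/{GROUP_ID}/clusters/{CLUSTER}/processArgs",
    json={"oplogMinRetentionHours": 48},  # oplogSizeMB deliberately left unset
    headers=HEADERS,
    auth=AUTH,
)
resp.raise_for_status()
```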
1 vote
Could we increase the vCPUs of an Atlas cluster without increasing its memory (RAM)?
For example: suppose right now we have 2 vCPUs with 8 GB RAM, but we are facing high CPU utilization. Can we change the vCPUs to 4 and keep the RAM as it is, i.e. 8 GB?
1 vote
Since this week, cloud.mongodb.com often cannot be opened.
1 vote
Request for Addition of Seoul Region to Reduce Latency
A brief description of what you are looking to do
I would like to request the addition of the Seoul region, because there is currently only the Japan region, which results in a 2-second delay for requests.
How you think this will help
Adding the Seoul region will reduce the latency and improve the overall performance of our application.
Why this matters to you
This matters to me because the current delay is affecting the user experience and efficiency of our services. Reducing latency is crucial for maintaining high performance and user satisfaction.
17 votes
Allow admin users to move jumbo chunks
As an administrator I want to be able to move jumbo chunks by myself. Currently we are not allowed to clear the jumbo flag, so we need Support to do this.
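For context, the operation in question looks roughly like this with PyMongo and the clearJumboFlag admin command (the namespace, shard-key value, and connection string are placeholders); today Atlas users cannot run it themselves, which is what this idea asks to change:

```python
from pymongo import MongoClient

# Placeholder connection string; must target the sharded cluster via mongos.
client = MongoClient("mongodb+srv://admin:examplePassword@cluster0.ab1cd.mongodb.net")

# clearJumboFlag clears the jumbo marker on the chunk containing the given
# shard-key value; on Atlas this currently has to be done by Support.
client.admin.command(
    "clearJumboFlag",
    "mydb.mycollection",          # placeholder namespace
    find={"customerId": 12345},   # placeholder shard-key value inside the jumbo chunk
)
```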
3 votes
Migrate users and roles between Atlas Projects
It would be great to be able to transfer/copy users with their credentials and permissions from one Project to another.
In the longer term, from one Org to another.
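A partial workaround can be scripted against the Database Users endpoints of the Admin API; this is only a sketch (group IDs, keys, and the new password are placeholders), and it highlights the gap: passwords cannot be read back, so credentials cannot truly be transferred:

```python
import requests
from requests.auth import HTTPDigestAuth

SRC_GROUP, DST_GROUP = "<sourceGroupId>", "<targetGroupId>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}
BASE = "https://cloud.mongodb.com/api/atlas/v2/groups"

# List database users in the source project.
users = requests.get(f"{BASE}/{SRC_GROUP}/databaseUsers",
                     headers=HEADERS, auth=AUTH).json().get("results", [])

for user in users:
    # Roles and scopes can be copied, but the password cannot be read back,
    # so each user needs a new credential in the target project.
    payload = {
        "databaseName": user["databaseName"],
        "username": user["username"],
        "roles": user["roles"],
        "scopes": user.get("scopes", []),
        "password": "<newPassword>",  # placeholder - must be set manually
        "groupId": DST_GROUP,
    }
    requests.post(f"{BASE}/{DST_GROUP}/databaseUsers",
                  json=payload, headers=HEADERS, auth=AUTH).raise_for_status()
```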
2 votes
Live Migration with granular permissions
Today, to carry out migrations you need to be the Project Owner of the project. However, we keep permissions on MongoDB projects very restricted so that people use the company's stack, which is Terraform. The DBAs have read-only permissions on the projects so that they do not modify items in the projects through the console.
The idea is to create an intermediate permission so that DBAs can carry out cluster migrations without necessarily needing to be a Project Owner. Is there a way to do this without breaking permissions, and is it possible to request this feature?
1 vote
"Atlas Onboarding"
Allow user to escape from onboarding without having to edit the url.
When the system doesn't accept the input, the user cannot escape and see their dashboard.
1 vote
Ability to create granular auditing
I want to be able to create granular auditing for a specific cluster inside a project. Right now, the audit configuration is applied to every cluster inside a project. Some clusters need more auditing than others, and their performance should not be affected by the configuration of other clusters.
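To make the project-wide scope concrete, here is a rough sketch of how auditing is configured today, assuming the project-level auditing endpoint of the Admin API (the filter, IDs, and keys are placeholders); the request is to allow the same configuration per cluster:

```python
import json

import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<groupId>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}

# Today the audit filter is attached to the whole project, so every cluster
# in the project pays the auditing overhead; there is no per-cluster knob.
audit_filter = {"atype": "authenticate", "param": {"db": "admin"}}  # placeholder filter
resp = requests.patch(
    f"https://cloud.mongodb.com/api/atlas/v2/groups/{GROUP_ID}/auditLog",
    json={"enabled": True, "auditFilter": json.dumps(audit_filter)},
    headers=HEADERS,
    auth=AUTH,
)
resp.raise_for_status()
```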
1 vote
IP address, Mongo Network Issue
Why is it that we always have to provide our IP address in the Network Access section? And that's not even the issue; the fact that we have to add three or four IP addresses every time we open Atlas is the main issue. Every time I log in, I delete IPs and add them again, or log out and add them again; it's a repeating process until I get my connection going. Please resolve this issue; it has been going on for a while.
1 vote
Support Online Archive in AWS Hong Kong region
Request to make Online Archive available in AWS Hong Kong region
Customers are using MongoDB Atlas in the AWS HK region to store data that is required to be located in the same region.
2 votes
Activity feed filters - allow filtering by cluster node name
We need to be able to filter the activity feed based on the name of the cluster's nodes, not just the name of the cluster itself.
9 votes
Billing
What am I paying $60 a month for??? We only set it up and wired it to our Azure services; there is no data being run through it. We only hit it 2-3 times to test the APIs and make sure they work - it has been sitting dormant for 5 months, and I've gifted you guys over $300 as a result. HELP ME TO LOWER THE COST!!! It seems like a huge waste of money.
1 vote