Atlas
secrets versioning for database users
We would like to have secrets versioning on the Atlas end so that secret rotation is easier. Since rotation is a periodic procedure in most companies, it would be great to be able to point an existing user at a new secret version instead of recreating the user.
Also, considering that https://feedback.mongodb.com/forums/924145-atlas/suggestions/44283477-vault-should-return-users-only-once-they-can-be-us is sadly not going to be handled, it would be cool to at least make rotation easier.
1 vote -
There is a grammatical error in the button text.
The button says "Do not me show again"; it should say "Do not show me again".
Thank you,
Alpesh
1 vote -
disabling mongo user
Instead of completely removing a MongoDB database user, it would be better to have an option to disable the user, so that if there is any untoward incident we can enable it again. After observing for some time, we can delete or re-enable the user based on organizational requirements. This is useful in cases where, for example, a MongoDB user returns to the same department after some time.
3 votes -
Change shortcut for GOTO
The current shortcut (CTRL + SPACE + TAB) for the GOTO functionality is incredibly annoying because it triggers a tab switch in Chrome, and it is also a slow combination to reach for a shortcut.
A suggestion would be to use CTRL + K, which in Chrome is usually "Search on Google" but is also used by GitHub for their version of GOTO, so presumably a whole lot of people are already used to it.
1 vote -
mongoexport - generate multiple compress files as output
This is a feature request for https://github.com/mongodb/mongo-tools/blob/master/mongoexport/main/mongoexport.go
- similar to s3.format.maxFileSize parameter in ADL $out, cut a new file every xxxMB
- similar to s3.format.columnCompression parameter in ADL $out, use gzip/zstd to compress the output BSON or JSON
- support write to locally-mounted file system (EBS/NFS) and cloud object store S3/GCS
Similar logic has probably been implemented in the ADL $out codebase already; it would just need to be ported over to mongoexport.
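To make the file-rotation part concrete, here is a minimal sketch of what that logic could look like in Go (the 64 MB threshold, the file-name pattern, and the fact that this would hang off a new mongoexport flag are all illustrative assumptions, not existing options):
```go
package main

import (
	"compress/gzip"
	"fmt"
	"os"
)

// rollingGzipWriter cuts a new gzip-compressed part file once maxSize
// uncompressed bytes have been written, approximating the proposed
// "cut a new file every N MB" behaviour.
type rollingGzipWriter struct {
	prefix  string
	maxSize int64
	written int64
	part    int
	file    *os.File
	gz      *gzip.Writer
}

func (w *rollingGzipWriter) rotate() error {
	if w.gz != nil {
		w.gz.Close()
		w.file.Close()
	}
	w.part++
	f, err := os.Create(fmt.Sprintf("%s.%04d.json.gz", w.prefix, w.part))
	if err != nil {
		return err
	}
	w.file, w.gz, w.written = f, gzip.NewWriter(f), 0
	return nil
}

func (w *rollingGzipWriter) Write(p []byte) (int, error) {
	if w.gz == nil || w.written+int64(len(p)) > w.maxSize {
		if err := w.rotate(); err != nil {
			return 0, err
		}
	}
	n, err := w.gz.Write(p)
	w.written += int64(n)
	return n, err
}

func (w *rollingGzipWriter) Close() error {
	if w.gz == nil {
		return nil
	}
	if err := w.gz.Close(); err != nil {
		return err
	}
	return w.file.Close()
}

func main() {
	// 64 MB threshold and file names are illustrative only.
	w := &rollingGzipWriter{prefix: "export", maxSize: 64 << 20}
	defer w.Close()
	// In mongoexport, each exported JSON document would be written here.
	fmt.Fprintln(w, `{"_id": 1}`)
}
```
The object-store upload (S3/GCS) could presumably be layered behind the same io.Writer, which is why porting the ADL logic seems feasible.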
1 vote -
Allow access to Admin API using AWS IAM role
It is possible to authenticate to a database using an AWS IAM role. The same should be possible for the Admin API.
The problem with API keys is that they can be stolen and used elsewhere, so they pose an additional risk in an AWS-integrated environment.
This also relates to the upcoming CloudFormation resources, where the extension needs to store the API key in AWS Secrets Manager. The resource already has a role that could simply be configured to be trusted on the Atlas side.
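For reference, the database-level IAM authentication we are referring to looks roughly like this with the Go driver (the cluster hostname is a placeholder); we are asking for an equivalent for Admin API calls instead of static API keys:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// MONGODB-AWS picks up credentials from the environment or the instance/task
	// role, so no long-lived secret is stored in the application.
	// The hostname below is a placeholder.
	uri := "mongodb+srv://cluster0.example.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS"

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	if err := client.Ping(ctx, nil); err != nil {
		log.Fatal(err)
	}
	fmt.Println("authenticated to the database with the AWS IAM role")
}
```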
2 votes -
Implement SCRAM-SHA-256 mechanism for Atlas or make a SCRAM-SHA-1 as default
The main problem is the inconsistency between the default authentication mechanism of the clients and that of Atlas.
This causes clients to fail over from SCRAM-SHA-256 to SCRAM-SHA-1 on authentication failure.
The result is a huge number of errors in the "Database Access History", which makes it useless.
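As a client-side workaround, most drivers let you pin the mechanism so the failed SCRAM-SHA-256 attempt never happens; a sketch with the Go driver (hostname and credentials are placeholders, and whether pinning SCRAM-SHA-1 is acceptable depends on your security requirements):
```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Pinning the mechanism means the driver never attempts SCRAM-SHA-256,
	// so no failed attempts end up in the Database Access History.
	cred := options.Credential{
		AuthMechanism: "SCRAM-SHA-1",
		Username:      "appUser", // placeholder
		Password:      os.Getenv("MONGODB_PASSWORD"),
	}

	// Placeholder hostname.
	opts := options.Client().ApplyURI("mongodb+srv://cluster0.example.mongodb.net").SetAuth(cred)
	client, err := mongo.Connect(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	if err := client.Ping(ctx, nil); err != nil {
		log.Fatal(err)
	}
}
```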
1 vote -
Add option to "Explain" a query from the "Profiler" page.
The profiler page shows a list of queries with metadata about them. It would be great to have the option to "explain" any query from this view, or in the slide-out view after clicking on a query.
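In the meantime, a query copied from the Profiler has to be explained by hand; a rough sketch with the Go driver of what we do today (connection string, database, collection, and filter are placeholders):
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder connection string.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb+srv://user:pass@cluster0.example.mongodb.net"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Wrap the query copied from the Profiler in an explain command.
	// Database, collection, and filter below are placeholders.
	res := client.Database("mydb").RunCommand(ctx, bson.D{
		{Key: "explain", Value: bson.D{
			{Key: "find", Value: "orders"},
			{Key: "filter", Value: bson.D{{Key: "status", Value: "pending"}}},
		}},
		{Key: "verbosity", Value: "executionStats"},
	})

	var plan bson.M
	if err := res.Decode(&plan); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%v\n", plan["executionStats"])
}
```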
2 votes -
Carbon Footprint Calculator
A widget or calculator in the Atlas UI where the user can see the projected yearly carbon footprint of a deployment while they are configuring it, and/or a live counter of the carbon footprint for each deployment, project, or organization.
17 votes -
show the reason why you can't increase oplog
In Settings => Additional Settings => More Configuration Options => Set Oplog Size, show the reason why this parameter cannot be increased (for example, insufficient free space).
2 votes -
When adding a new CKM key/role, Atlas should validate that it can safely change the existing CKM key
When we create a new CKM key with a new role and update the credentials at the project level, Atlas validates that the new role can read the new key, but it does not validate that the new role can read the existing CKM key.
When Atlas then starts re-encrypting an existing cluster, the first node goes down and cannot be restarted because the new role cannot read the old key. There is no way to restore or roll back this change other than raising a ticket with MongoDB support.
Suggestion: when we update the credentials/role/KMS key in the UI, Atlas should validate that it can complete the change BEFORE applying it to the nodes.
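A sketch of the kind of pre-flight check we have in mind, using the AWS SDK for Go v2 (the role-assumption setup is omitted, the key ARN is a placeholder, and DescribeKey stands in for the decrypt permission Atlas actually needs on the old key):
```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func main() {
	ctx := context.Background()

	// Load the credentials of the NEW role (assume-role setup omitted).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := kms.NewFromConfig(cfg)

	// Placeholder: the key that currently encrypts the cluster.
	oldKeyID := "arn:aws:kms:us-east-1:111111111111:key/old-key-id"

	// If the new role cannot even describe the old key, re-encryption
	// would strand the nodes, so fail fast instead.
	if _, err := client.DescribeKey(ctx, &kms.DescribeKeyInput{KeyId: &oldKeyID}); err != nil {
		log.Fatalf("new role cannot access the existing CKM key, aborting rotation: %v", err)
	}
	log.Println("pre-flight check passed: existing key is readable with the new role")
}
```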
2 votes -
Support NVMe for Atlas Azure
Support NVMe disk for MongoDB Atlas Azure.
3 votes -
Atlas Administration API: Expose more information on cluster status
We are using the Atlas Administration API to automate scaling of a production database cluster deployment in a more fine-grained way than the existing autoscaling offering. The specific database cluster is deployed as a replica set, so there are a number of electable nodes (one primary plus 2, 4, or 6 secondaries) and any number of read-only (RO) nodes.
The goal is to be able to tune the number of nodes in a specific region and to monitor the progress of the change being rolled out. Currently, the API offerings are too limited to do this effectively,…
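For context, the status signal available today is essentially the cluster-level stateName; a sketch of polling it with the Atlas Go client over a digest-auth transport (project ID, cluster name, and the key environment variables are placeholders); per-node and per-region rollout progress is exactly the part that is missing:
```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/mongodb-forks/digest"
	"go.mongodb.org/atlas/mongodbatlas"
)

func main() {
	ctx := context.Background()

	// API keys come from the environment; project ID and cluster name are placeholders.
	t := digest.NewTransport(os.Getenv("ATLAS_PUBLIC_KEY"), os.Getenv("ATLAS_PRIVATE_KEY"))
	httpClient, err := t.Client()
	if err != nil {
		log.Fatal(err)
	}
	client := mongodbatlas.NewClient(httpClient)

	for {
		cluster, _, err := client.Clusters.Get(ctx, "PROJECT_ID", "cluster0")
		if err != nil {
			log.Fatal(err)
		}
		// stateName is IDLE, CREATING, UPDATING, DELETING, REPAIRING, ...
		// but it says nothing about which node or region is still rolling.
		log.Printf("cluster state: %s", cluster.StateName)
		if cluster.StateName == "IDLE" {
			return
		}
		time.Sleep(30 * time.Second)
	}
}
```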
1 vote -
Atlas UI enhancement
Hi,
The Atlas UI now displays "Database Deployments"; earlier this used to be "Cluster" or "Clusters".
- When we hit the Create button, everything still talks about "CLUSTERS > CREATE NEW CLUSTER"
- In the top right corner, it still says "All Clusters"
- The Terraform resources/attributes still talk about clusters
- The home page is the only place where we see "Database Deployments"
Our customers are confusing "Database Deployments" with the databases inside the clusters. Customers think they need to create more database deployments if they want more databases for their application, which is not true; they can create multiple databases in…
2 votes -
Copy/Duplicate Cluster configuration
It would be really useful to create a new cluster based on an existing configuration.
Snapshots are supposed to be loaded into clusters of the same configuration, but having to set up the configuration manually is prone to human error. Having an option to copy the cluster configuration would be helpful.
Support suggested the following workarounds, but the option should really exist in the UI, given that snapshots can be taken and loaded there.
- use the API to create all clusters so that the config exists in code
- use Terraform to create clusters
3 votes -
Add ability to choose the new primary node when you perform a test election
There is a "test failover" option, which shuts down the primary and forces an election. However, you have no control over which node will become the new primary (only the preferred region), and this sometimes does not work as expected.
It would be really useful to be able to manually choose which node gets elected as the new primary.
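Today the best we can do is trigger the test failover and then check which member actually won the election, for example via the hello command; a sketch with the Go driver (the connection string is a placeholder):
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder connection string.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb+srv://user:pass@cluster0.example.mongodb.net"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// The hello command reports which replica set member is currently primary,
	// so at least the outcome of the test failover can be verified.
	var hello bson.M
	if err := client.Database("admin").RunCommand(ctx, bson.D{{Key: "hello", Value: 1}}).Decode(&hello); err != nil {
		log.Fatal(err)
	}
	fmt.Println("current primary:", hello["primary"])
}
```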
6 votes -
m10 for gcp santiago zone
I would love to be able to select an M10 rather than an M30 in the Santiago region.
1 vote -
Don't overwrite an explicitly granted role with the default organisation role configured in the identity provider
We have a user created in Atlas at the organisation level with a specific org role. If we configure an identity provider and set a default role, then after the first login this user receives the default role and the original role is lost. So even after disabling the identity provider, the user keeps the default role.
Suggestion: explicitly granted roles should be kept untouched, and simply not applied while an identity provider is active.
1 vote -
Allow maintenance window definition per cluster
Currently maintenance windows are defined per project. With multiple clusters in a project, maintenance will be performed at the very same time.
We would like to space out maintenance windows per cluster in the same project, to prevent replica set elections on multiple clusters from taking place simultaneously.
12 votes -
Copying GBs of collection data from one project to another project
Copying a 100 GB collection from one project to another is currently a highly time-consuming process, and there is no faster, proper way of doing it. This feature is highly needed, because each environment resides in a different project. If someone wants to copy only a handful of collections from one project to another (dev to prod and vice versa), it is a nightmare. We tried the mongomirror and mongodump tools, but those are not quite impressive.
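Until something native exists, the options are a dump/restore or a small hand-rolled copy job; a rough sketch of the latter with the Go driver (both URIs, the namespace, and the batch size are placeholders, and unlike mongodump/mongorestore this does not copy indexes):
```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Placeholder URIs: the source and destination clusters live in different projects.
	src, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb+srv://user:pass@source.example.mongodb.net"))
	if err != nil {
		log.Fatal(err)
	}
	defer src.Disconnect(ctx)

	dst, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb+srv://user:pass@dest.example.mongodb.net"))
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Disconnect(ctx)

	// Placeholder namespace.
	cur, err := src.Database("appdb").Collection("orders").Find(ctx, bson.D{})
	if err != nil {
		log.Fatal(err)
	}
	defer cur.Close(ctx)

	out := dst.Database("appdb").Collection("orders")
	batch := make([]interface{}, 0, 1000) // placeholder batch size

	// Unordered inserts keep going past duplicate-key errors on re-runs.
	flush := func() {
		if len(batch) == 0 {
			return
		}
		if _, err := out.InsertMany(ctx, batch, options.InsertMany().SetOrdered(false)); err != nil {
			log.Printf("insert error: %v", err)
		}
		batch = batch[:0]
	}

	for cur.Next(ctx) {
		var doc bson.M
		if err := cur.Decode(&doc); err != nil {
			log.Fatal(err)
		}
		batch = append(batch, doc)
		if len(batch) == cap(batch) {
			flush()
		}
	}
	if err := cur.Err(); err != nil {
		log.Fatal(err)
	}
	flush()
}
```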
1 vote