Atlas
-
Implement SCRAM-SHA-256 mechanism for Atlas or make SCRAM-SHA-1 the default
The main problem is the inconsistency between the default authentication mechanism of clients and that of Atlas.
On authentication failure, clients fall back from SCRAM-SHA-256 to SCRAM-SHA-1.
This floods the "Database Access History" with a huge number of errors, making it useless.
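Until the defaults are aligned, a client can pin the mechanism explicitly so no SCRAM-SHA-1 fallback attempt is logged. A minimal sketch with PyMongo, assuming a hypothetical cluster URI and credentials:

from pymongo import MongoClient

# Hypothetical Atlas connection string; replace host and credentials.
# Pinning authMechanism stops the driver from retrying with SCRAM-SHA-1
# after a SCRAM-SHA-256 failure, keeping the access history clean.
client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/",
    authMechanism="SCRAM-SHA-256",
)
print(client.admin.command("ping"))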
1 vote -
Atlas Administration API: Expose more information on cluster status
We are using the Atlas Administration API to automate scaling a production database cluster deployment in a more fine-grained way than the existing autoscaling offers. The specific database cluster is deployed as a replica set, so there are a number of electable nodes (one primary plus 2, 4 or 6 secondaries) and any number of read-only (RO) nodes.
The goal is to be able to tune the number of nodes in a specific region and to monitor the progress of the change being rolled out. Currently, the API offerings are too limited to do this effectively,…
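For reference, the closest the current API comes is polling the cluster resource and watching its coarse stateName; a sketch against the v1.0 Administration API with digest auth (the group ID, cluster name, and API keys below are placeholders):

import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "5f0000000000000000000000"  # placeholder project (group) ID
CLUSTER = "prod-cluster"               # placeholder cluster name
auth = HTTPDigestAuth("public-key", "private-key")  # placeholder API keys

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters/{CLUSTER}",
    auth=auth,
)
resp.raise_for_status()
# stateName stays UPDATING while nodes are added or removed and returns to
# IDLE when done, but it exposes no per-node rollout progress.
print(resp.json()["stateName"])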
1 vote -
Don't overwrite explicitly granted role with default organisation role configured in identity provider
We have a user created in Atlas at the organisation level with a specific org role. If we configure an identity provider and set some default role, then after the first login this user receives the default role and the original role is lost. So even after disabling the identity provider, the user keeps the default role.
Suggestion: explicitly granted roles should be kept untouched, even if they are not applied while an identity provider is active.
1 vote -
Copying GBs of collection data from one project to another project
Copying a 100 GB collection from one project to another is currently a highly time-consuming process, and there is no faster, proper way to do it. This feature is badly needed, because each environment resides in a different project: if someone wants to copy only a handful of collections from one project to another (dev to prod and vice versa), it's a nightmare. We tried the mongomirror and mongodump tools, but neither is very impressive.
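In the meantime the only scriptable route is streaming documents between clusters yourself; a rough PyMongo sketch (both connection strings and the database/collection names are placeholders), which still pulls every byte through the client:

from pymongo import MongoClient

# Placeholder connection strings for clusters in the two projects.
src = MongoClient("mongodb+srv://user:pass@dev-cluster.example.mongodb.net/")
dst = MongoClient("mongodb+srv://user:pass@prod-cluster.example.mongodb.net/")

# Stream the collection in batches; indexes must be recreated separately.
batch, BATCH_SIZE = [], 1000
for doc in src["app"]["orders"].find():
    batch.append(doc)
    if len(batch) == BATCH_SIZE:
        dst["app"]["orders"].insert_many(batch)
        batch = []
if batch:
    dst["app"]["orders"].insert_many(batch)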
1 vote -
M10 for GCP Santiago region
I would love to be able to select an M10 rather than an M30 in the Santiago region.
1 vote -
Automatically detect location for Global Clusters
Currently, Global Clusters require a location field on documents for zone targeting. Could the user get an option to have the location auto-detected from the incoming connection? The reason is that not all app services are able to include a location, so it would be good to auto-detect it.
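For context, zone targeting today means the application supplies a country code on every write; an illustrative PyMongo sketch (the URI, names, and document shape are hypothetical):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@global.example.mongodb.net/")
# A Global Cluster shard key starts with a location code that the app must
# fill in itself today; auto-detection would remove this burden for
# services that cannot know the caller's region.
client["app"]["users"].insert_one({"location": "DE", "userId": 42})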
1 vote -
Multiple connection URLs for Atlas clusters
Allow generation of multiple application-specific connection strings/URIs to the same Atlas cluster. This would ease migration of databases to different clusters over time: consolidating smaller databases to lower costs, or separating busier databases to isolate performance impact.
Multiple connection strings would let us redirect connections seamlessly when migrating databases, without requiring developers to re-deploy changes to application connection strings, minimizing downtime and coordination.
1 vote -
Need "Inactive" sessions "Terminated" connected via Atlas
We need a feature that would automatically timeout individual user "inactive" sessions after x number of minutes, connected via Atlas. Thanks
1 vote -
Add filters for easily finding projects that have encryption/audit enabled in the UI
The https://cloud.mongodb.com/v2#/clusters page already has certain filters for finding specific Atlas clusters. Can we add a couple more filters that make it easy to identify whether a project has encryption at rest enabled or auditing enabled/disabled? This would help us easily generate reports for CIS 20 compliance.
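Until such filters exist, this report can be scripted against the Administration API; a hedged sketch (the API keys are placeholders, and the exact response fields should be checked against the current docs) that walks each project and reads its encryption-at-rest and auditing settings:

import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
auth = HTTPDigestAuth("public-key", "private-key")  # placeholder API keys

# Walk every project visible to the API key (first page only, for brevity).
for p in requests.get(f"{BASE}/groups", auth=auth).json()["results"]:
    enc = requests.get(f"{BASE}/groups/{p['id']}/encryptionAtRest", auth=auth).json()
    audit = requests.get(f"{BASE}/groups/{p['id']}/auditLog", auth=auth).json()
    # encryptionAtRest reports an enabled flag per KMS provider.
    aws = enc.get("awsKms", {}).get("enabled", False)
    print(p["name"], "| encryption at rest (AWS KMS):", aws,
          "| auditing:", audit.get("enabled"))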
1 vote -
Allow the --removeAutoIndexId option for Atlas live migration by default
Apparently Atlas live migration does not support the --removeAutoIndexId option, so collections that have autoIndexId set to false prevent live migration from completing successfully. Please add the --removeAutoIndexId option to live migration.
1 vote -
API for Granting Infrastructure Access to MongoDB Support for 24 Hours
As per the documentation at https://www.mongodb.com/docs/atlas/security-restrict-support-access/#grant-infrastructure-access-to-mongodb-support-for-24-hours, access can currently be granted to MongoDB Support for 24 hours only via the UI. It would be very useful to be able to do this via the Atlas API.
1 vote -
Trigger runtime is in the US by default, which is not GDPR compliant for EU countries
The Atlas Trigger UI should let the user select the region for the Realm deployment created in the background. At the moment it's not fair because (1) at no point are you told that the runtime will be in the US (which matters a lot for EU countries; it's not GDPR compliant) and (2) it makes the otherwise nice Atlas Trigger UI useless for EU countries...
We're forced to create a Realm app manually to be able to select a local deployment. We appreciated the 'Triggers' section on the Atlas side; it's more integrated and makes following trigger operations easier. Now trigger…
1 vote -
Data migration from Cloud manager to Atlas
With reference to case https://support.mongodb.com/case/00918578, we experienced a situation where the migration was stuck indefinitely. If the validation checks fail, it is only fair and logical to fail the migration with a relevant error rather than have the status falsely claim that the shard migration is in progress.
Also, emit status updates on the migration, such as pre-validation checks passed, syncing data, and % complete on the data migration or initial sync, rather than just a green bar; report anything else that happens during migration as well, such as whether indexes are being built. Basically, make the status as readable and logical as…
1 vote -
MFA painful to use due to need for frequent logins
Even though I use the same computer, I need to log in again every single day (possibly more often), which is even more painful when using MFA.
There should be an option to stay logged in longer on the same machine/IP address; perhaps make it a configurable option
1 vote -
Provide a resume/pause option for MongoDB export & import
MongoDB export and import, when used for uploading large or very large files, are susceptible to interruptions.
It would be great if we could have a resume option while importing or exporting files, like Oracle's expdp offers.
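A resume can be approximated today by checkpointing the last exported _id and restarting the query from there; a rough PyMongo sketch (the URI and file names are placeholders, and rewriting the checkpoint per document is deliberately naive):

import os
from bson import ObjectId
from bson.json_util import dumps
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net/")
coll = client["app"]["events"]

# Resume from the last _id written by a previous, interrupted run.
last_id = None
if os.path.exists("checkpoint"):
    last_id = ObjectId(open("checkpoint").read().strip())

query = {"_id": {"$gt": last_id}} if last_id else {}
with open("export.json", "a") as out:
    for doc in coll.find(query).sort("_id", 1):
        out.write(dumps(doc) + "\n")
        with open("checkpoint", "w") as ck:
            ck.write(str(doc["_id"]))

1 vote -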
Provide "Password Age" for Atlas password auth users via Atlas API
Similar to the "Last Login Date" suggestion, add the age of the password for SCRAM users. I see that https://jira.mongodb.org/browse/SERVER-3197 was closed as "Won't Fix", but this would at least allow reporting, auditing, external maintenance, etc.
1 vote -
Add an option to include sample data during cluster build
When working through tutorials or University courses, you often need to build a new cluster and then add sample data to it. It would be nice if you could just check a box on the cluster creation page to have the cluster brought up with sample data already provisioned, combining two commonly used steps into one.
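Until that checkbox exists, the two steps can at least be scripted back to back: the Administration API has a sample-dataset endpoint that can be called once the new cluster is ready (the group ID, cluster name, and API keys below are placeholders):

import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "5f0000000000000000000000"  # placeholder project (group) ID
auth = HTTPDigestAuth("public-key", "private-key")  # placeholder API keys

# Kick off the sample-dataset load on an existing, idle cluster.
resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    "/sampleDatasetLoad/Cluster0",
    auth=auth,
)
resp.raise_for_status()
print(resp.json()["state"])  # e.g. WORKING, then COMPLETED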
1 vote -
Enable and disable balancing on a collection
The shell commands sh.enableBalancing() and sh.disableBalancing() are not permitted on Atlas-hosted MongoDB. Is it possible to grant this permission?
Currently they fail with this error:
…
uncaught exception: Error: command failed: {
"ok" : 0,
"errmsg" : "not authorized on config to execute command { update: \"collections\", ordered: true, writeConcern: { w: \"majority\", wtimeout: 60000.0 }, lsid: { id: UUID(\"ed83ed26-e07b-4f34-a90a-a145bbe58a48\") }, $clusterTime: { clusterTime: Timestamp(1646788928, 17), signature: { hash: BinData(0, 347249E35ED43BDADAE12FB0F794C091CB86206E), keyId: 7052082196881866761 } }, $db: \"config\" }",
"code" : 13,
"codeName" : "Unauthorized",
"operationTime" : Timestamp(1646788928, 18),
"$clusterTime" : {
"clusterTime" : Timestamp(1646788928, 18),
"signature" : {
"hash" : BinData(0,"NHJJ417UO9ra4S+w95TAkcuGIG4="),
"keyId"1 vote -
Separate Data Lake Administrative Permissions into Roles
Currently Project Owner permission is required to create and manage data lake clusters. This requires dangerously elevated privileges simply to manage Data Lake.
I would simply like to either use existing project roles or have new roles specific to Data Lake with similar duty segregation: Data Lake Manager (similar to Project Cluster Manager), Read-Only, Read-Write, etc.
Project Owner should not be required to administer or use Data Lake features. Non-granular roles are fine for this urgent need; we simply need reasonable coarse-grained roles that would satisfy usage in any security-minded enterprise.
1 vote -
Bad experience
Sorry to say, but your service is very bad: when I upload a data collection, after a few hours my data gets automatically jumbled.
Please tell me why this is happening.
1 vote