Atlas
-
Allow --removeAutoIndexId option for Atlas live migration by default
Apparently Atlas live migration does not support the --removeAutoIndexId option, so collections that have autoIndexId set to false prevent live migration from completing successfully. Please add the --removeAutoIndexId option to live migration.
1 vote -
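Not part of the original request, but a minimal pymongo sketch (connection string is a placeholder) for finding which collections were created with autoIndexId set to false, so they can be identified before attempting a live migration:

```python
# Sketch: list collections created with autoIndexId:false on the source cluster.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder source URI

for db_name in client.list_database_names():
    db = client[db_name]
    for info in db.list_collections():
        # Collection creation options are reported under "options".
        if info.get("options", {}).get("autoIndexId") is False:
            print(f"{db_name}.{info['name']} has autoIndexId:false")
```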
Support migration between Atlas clusters using Atlas API
Support live migration between Atlas clusters through the Atlas API. If we need to migrate, it would be good to be able to do so through the API.
2 votes -
Add Atlas users to existing teams even though the Atlas user is still pending invite
Currently, when we send invite emails for the organization, we have to wait for the users to accept the invite before we are able to configure project-level permissions. This often creates confusion and a bad user experience; when users first sign in, they expect to see their projects, but instead they see nothing until configuration is finalized.
Please make it possible to add users who are still pending invite acceptance to teams, so the user experience is more seamless and less confusing.
8 votes -
API for Granting Infrastructure Access to MongoDB Support for 24 Hours
As per the documentation at https://www.mongodb.com/docs/atlas/security-restrict-support-access/#grant-infrastructure-access-to-mongodb-support-for-24-hours, one can grant infrastructure access to support for 24 hours only via the UI. It would be very useful to be able to do this via the Atlas API.
1 vote -
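To make the ask concrete, here is a sketch of what an API equivalent of the 24-hour support-access grant could look like. The endpoint path and payload are hypothetical (no such resource is confirmed by the request); only the digest-auth pattern mirrors the existing Atlas Admin API, and the keys and project ID are placeholders.

```python
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "public-key"    # placeholder API key pair
PRIVATE_KEY = "private-key"
GROUP_ID = "5f1d2c3b4a000000000000"  # placeholder project ID

resp = requests.post(
    # Hypothetical path; the real API does not expose this yet per the request above.
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/supportAccess",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    json={"durationHours": 24},  # hypothetical payload
)
resp.raise_for_status()
```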
Deletion Protection
Customer suggested protection from cluster deletion, similar to AWS RDS:
https://www.amazonaws.cn/en/new/2018/amazon-rds-now-provides-database-deletion-protection/
TL;DR: when a user tries to delete a cluster, show a notification that "this cluster is protected; if you need to delete it, please visit the console to enable deletion."
This would be a critical piece of functionality where roles/permissions are not sufficient.
6 votes -
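Until such a flag exists in Atlas, a client-side stopgap along these lines is possible; this is only a sketch, the protected list is an assumption, and the keys, project ID, and cluster names are placeholders.

```python
# Refuse to delete clusters on a local "protected" list before calling the
# Atlas Admin API delete endpoint. Real deletion protection would need to live
# in Atlas itself; this only guards scripted deletions that go through this helper.
import requests
from requests.auth import HTTPDigestAuth

PROTECTED_CLUSTERS = {"prod-cluster-eu", "prod-cluster-us"}  # placeholder names

def delete_cluster(public_key, private_key, group_id, cluster_name):
    if cluster_name in PROTECTED_CLUSTERS:
        raise RuntimeError(
            f"{cluster_name} is protected; remove it from PROTECTED_CLUSTERS to delete."
        )
    url = (
        f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{group_id}"
        f"/clusters/{cluster_name}"
    )
    resp = requests.delete(url, auth=HTTPDigestAuth(public_key, private_key))
    resp.raise_for_status()
```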
Add Atlas instance in Azure Sweden
We host data that we would prefer to keep in Sweden. Do you have plans to expand to the newly added Azure Sweden data centers?
2 votes -
Set MongoDB Production Support Employee Access to Atlas Backend Infrastructure at Project Level
The "Restrict MongoDB Production Support Employee Access to Atlas Backend Infrastructure" setting is currently applied at the organization level. This should also be possible at the project level. Secondly, add the possibility to grant access indefinitely until revoked.
12 votes -
Trigger runtime is in the US by default, which is not GDPR compliant for EU countries
The Atlas Trigger UI should let the user select the region for the Realm deployment created in the background. At the moment this is problematic because (1) you are never notified that the runtime will be in the US (which matters a lot for EU countries, as it is not GDPR compliant) and (2) it makes the otherwise nice Atlas Trigger UI useless for EU countries...
We are forced to create a Realm app manually to be able to select a local deployment. We appreciate the 'Triggers' section on the Atlas side; it is more integrated and makes following trigger operations easier. Now trigger…
1 vote -
Add the In-Memory Storage Engine to Atlas
Enable the use of the in-memory storage engine in Atlas shards. That way Atlas users can get the same level of functionality as Enterprise. Atlas is a great platform, but not being able to use an in-memory storage engine replica set in a shard is a huge letdown (at least for me).
3 votes -
Data migration from Cloud manager to Atlas
With reference to case https://support.mongodb.com/case/00918578, we experienced a situation where the migration was stuck indefinitely. If the validation checks fail, it is only fair and logical to fail the migration with a relevant error, rather than having the status falsely report that the shard migration is in progress.
Also, emit status updates during the migration, such as pre-validation checks passed, syncing data, and percent complete on data migration or initial sync, rather than just a green bar, plus anything else relevant such as whether indexes are being built. Basically, make the status as readable and logical as…
1 vote -
Separate Data Lake Administrative Permissions into Roles
Currently Project Owner permission is required to create and manage data lake clusters. This requires dangerously elevated privileges simply to manage Data Lake.
I would simply like to either use existing project roles or create new roles specific to Data Lake with similar duty segregation: Data Lake Manager (similar to Project Cluster Manager), Read-Only, Read-Write, etc.
Project Owner should not be required to administer or use data lake features. Non-granular roles are fine for this urgent need, we simply need reasonable coarse-grained roles that would satisfy usage in any security-minded enterprise.
1 vote -
MFA Painful to use due to need for frequent logins
Even though I use the same computer, I need to log in again every single day (possibly more frequently) - which is even more painful when using MFA.
There should be an option to stay logged in longer on the same machine/IP address - perhaps make it a configurable option.
1 vote -
Provide a resume/pause option for MongoDB export & import
MongoDB export and import, when used for uploading large or very large files, are susceptible to interruptions.
It would be great if we could provide a resume option while importing or exporting files, like we have in Oracle's expdp.
1 vote -
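There is no built-in resume today; a rough approximation of the requested behavior, assuming a JSON-lines export (mongoexport's default output) and placeholder file names and URI, is a checkpointed batch import like the sketch below:

```python
# Sketch: resumable import by persisting a line-count checkpoint between runs.
import json
import os
from pymongo import MongoClient

URI = "mongodb+srv://user:pass@cluster.example.mongodb.net"  # placeholder
CHECKPOINT = "import.checkpoint"
BATCH = 1000

coll = MongoClient(URI)["mydb"]["mycoll"]  # placeholder namespace
start = int(open(CHECKPOINT).read()) if os.path.exists(CHECKPOINT) else 0

batch, done = [], start
with open("export.jsonl") as f:
    for i, line in enumerate(f):
        if i < start:
            continue  # already imported on a previous run
        batch.append(json.loads(line))
        if len(batch) == BATCH:
            coll.insert_many(batch)
            done += len(batch)
            batch.clear()
            with open(CHECKPOINT, "w") as cp:
                cp.write(str(done))
if batch:
    coll.insert_many(batch)
    with open(CHECKPOINT, "w") as cp:
        cp.write(str(done + len(batch)))
```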
bad experience
Sorry to say, but your service is very bad, because when I upload a data collection, after a few hours my data automatically gets jumbled.
Please send me the reason why this is happening.
1 vote -
Make BI Connector Settings Permanent
Allow the "SET GLOBAL mongodbmaxvarcharlength = n" to be permanent after a Mongosqld restart. Currently, the setting is ephmeral and must be set everytime the mongosqld daemon is restarted. Currently, our production environment mongosqld can restart without our knowledge at anytime. This results in the mongodbmaxvarharlength variable being reset to zero, which can lead to a production outage. Is there anyway this can be automated, maybe through database trigger?
39 votes -
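As an automation workaround (not a fix for the underlying request), the setting could be re-applied over the MySQL wire protocol that the BI Connector speaks, for example from a cron job or a restart hook. Host, port, credentials, and the value below are placeholders; the variable name is copied from the request above.

```python
# Sketch: reapply the SET GLOBAL against mongosqld after a restart.
import pymysql

conn = pymysql.connect(host="bi-connector.example.com", port=3307,
                       user="admin", password="secret")  # placeholders
try:
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL mongodbmaxvarcharlength = 1024")  # placeholder value
    conn.commit()
finally:
    conn.close()
```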
max database users not visible
We need more than 100 database users, and have had the limit raised for existing projects, but new projects require us to have the limit increased. There is no way for us to verify which projects have the default 100 database users and which ones have had the limit raised. It would also be nice to track the percentage or number of database users available as a metric.
16 votes -
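As a stopgap for the missing metric, the counts can be pulled per project from the Atlas Admin API; this is a sketch under the assumption that the projects and database-users resources behave as described in the API docs, with placeholder keys and the default limit of 100 taken from the request above.

```python
# Sketch: report database-user counts per project against the default limit of 100.
import requests
from requests.auth import HTTPDigestAuth

AUTH = HTTPDigestAuth("public-key", "private-key")  # placeholders
BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

projects = requests.get(f"{BASE}/groups", auth=AUTH).json().get("results", [])
for p in projects:
    users = requests.get(f"{BASE}/groups/{p['id']}/databaseUsers",
                         auth=AUTH, params={"itemsPerPage": 500}).json()
    count = users.get("totalCount", 0)
    print(f"{p['name']}: {count} database users ({count / 100:.0%} of the default 100)")
```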
Warm up the cache automatically when there are server/node restart events
Usually it takes an hour or so for the cache to warm up, depending on the data size, when a server restart occurs (e.g. when increasing the tier/storage). There are alternatives, like running cache warm-up queries to bring the nodes up to speed.
However, this cache warm-up should be inherent and automatic for server restart scenarios.
3 votes -
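For reference, the "cache warm-up queries" workaround mentioned above could look roughly like the sketch below: after a restart, touch every collection so its documents are pulled back into the WiredTiger cache. The URI is a placeholder, and a full scan may be far too heavy for very large collections, so this is only an illustration of the manual approach.

```python
# Sketch: naive cache warm-up by scanning user collections after a restart.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder

for db_name in client.list_database_names():
    if db_name in ("admin", "local", "config"):
        continue  # skip system databases
    db = client[db_name]
    for coll_name in db.list_collection_names():
        # A natural-order scan pulls documents (and the _id index) into cache.
        for _ in db[coll_name].find({}, batch_size=1000):
            pass
```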
Provide "Password Age" for Atlas password auth users via Atlas API
Similar to "Last Login Date" suggestion, add the age of the password for SCRAM users. I see that https://jira.mongodb.org/browse/SERVER-3197 was closed as "Won't Fix", but this will at least allow reporting, audit, external maintenance, etc.
1 vote -
Prevent Federated Users to gain access to other Projects
Hi,
We have set up federated authentication and role mapping to a project. This role mapping gives Project Owner rights to a particular project. Project Owners have the right to invite other people into their project.
Role mapping is only applied when a user logs in. However, if the user receives an invitation to a project while logged in (say to Project ***) and accepts an invitation (say to Project YYY, to which they should not have any access), they will receive the Atlas role in that project (Project YYY) designated by the invitation, allowing them to perform any actions provisioned by that role.
…
2 votes -
Add an option to include sample data during cluster build
When working through tutorials or University courses, you often need to build a new cluster and then add sample data to it. It would be nice if you could just check a box on the cluster creation page to have the cluster brought up with sample data already provisioned, thus combining two commonly used steps into one.
1 vote
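The two steps can already be chained programmatically; the sketch below assumes the Admin API cluster-creation payload and a sample-dataset load resource as I understand them (both should be verified against the current API docs), with placeholder keys, project ID, and cluster spec.

```python
# Sketch: create a cluster, wait until it is ready, then trigger the sample-data load.
import time
import requests
from requests.auth import HTTPDigestAuth

AUTH = HTTPDigestAuth("public-key", "private-key")  # placeholders
BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP = "5f1d2c3b4a000000000000"  # placeholder project ID
NAME = "tutorial-cluster"

requests.post(f"{BASE}/groups/{GROUP}/clusters", auth=AUTH, json={
    "name": NAME,
    "providerSettings": {"providerName": "AWS", "regionName": "US_EAST_1",
                         "instanceSizeName": "M10"},  # placeholder spec
}).raise_for_status()

# Poll until the cluster reports IDLE, then load the sample data set.
while requests.get(f"{BASE}/groups/{GROUP}/clusters/{NAME}",
                   auth=AUTH).json().get("stateName") != "IDLE":
    time.sleep(30)

requests.post(f"{BASE}/groups/{GROUP}/sampleDatasetLoad/{NAME}",
              auth=AUTH).raise_for_status()
```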