Atlas
Support hourly backup every 24 hours, or support Point-In-Time restore without an hourly backup
To have Point-In-Time restore, an hourly backup policy is required.
However, the maximum interval for an hourly backup is every 12 hours.
I would like to take a backup only once every 24 hours but still have Point-In-Time restore available. So please either support PIT restore without an hourly backup policy in place, or support an hourly backup policy with a 24-hour interval (a read-only sketch of the current backup schedule resource follows this entry).
2 votes -
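For reference, the hourly policy item in question can be inspected (and edited within the current 1–12 hour limits) through the Cloud Backup schedule resource of the Atlas Admin API. A minimal read-only sketch in Python, assuming the `requests` library, programmatic API keys, and placeholder project/cluster values:

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: substitute your own project (group) ID, cluster name and API keys.
GROUP_ID = "<projectId>"
CLUSTER_NAME = "<clusterName>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")

# Cloud Backup schedule resource (Atlas Admin API v1.0).
url = (
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/clusters/{CLUSTER_NAME}/backup/schedule"
)
schedule = requests.get(url, auth=AUTH).json()

# Print the hourly policy item, whose frequencyInterval is currently capped at 12 hours.
for policy in schedule.get("policies", []):
    for item in policy.get("policyItems", []):
        if item.get("frequencyType") == "hourly":
            print("hourly item:", item)
```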
Allow changing defaultWriteConcernSource to global to avoid flipping the flag
Customers experience the following error messages due to a defaultWriteConcernSource mismatch between the source and target cluster when restoring a snapshot:
Error: Can not restore a snapshot with custom default write concern 1 to a cluster that does not have a custom default write concern.
Error: Can not restore a snapshot with custom default write concern majority to a cluster that does not have a custom default write concern.
The ability to change defaultWriteConcernSource directly, without flipping between w:1 and w:majority, would be convenient for users (a sketch of the flip workaround follows this entry).
Option location: Cluster > Configuration > Additional Settings > More Configuration Options > Default…
2 votes -
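Until this is supported natively, the flip that this idea wants to avoid can at least be scripted. A hedged sketch, assuming the cluster Advanced Configuration Options (processArgs) endpoint still accepts a defaultWriteConcern field; IDs and keys are placeholders:

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders for project ID, cluster name and programmatic API keys.
GROUP_ID, CLUSTER_NAME = "<projectId>", "<clusterName>"
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
URL = (
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/clusters/{CLUSTER_NAME}/processArgs"
)

def set_default_write_concern(value: str) -> None:
    """PATCH the cluster's default write concern (assumed field name)."""
    resp = requests.patch(URL, auth=AUTH, json={"defaultWriteConcern": value})
    resp.raise_for_status()

# The flip this idea wants to make unnecessary: toggle away and back
# so the restore target ends up with a matching custom default write concern.
set_default_write_concern("1")
set_default_write_concern("majority")
```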
SIEM
Add audit log integration with enterprise SIEMs (a pull-and-forward sketch of the current DIY approach follows this entry)
10 votes -
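In the meantime, the usual DIY approach is to pull the compressed audit log for each host via the Admin API and forward the entries to the SIEM yourself. A rough sketch, assuming audit logging is enabled; the hostname, keys and SIEM collector URL are placeholders:

```python
import gzip
import io
import time

import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<projectId>"
HOSTNAME = "<cluster-shard-00-00.xxxxx.mongodb.net>"  # one process per request
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")
SIEM_URL = "https://siem.example.com/ingest"  # hypothetical SIEM HTTP collector

# Download the last hour of audit log entries (gzip-compressed JSON lines).
end = int(time.time())
resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/clusters/{HOSTNAME}/logs/mongodb-audit-log.gz",
    params={"startDate": end - 3600, "endDate": end},
    auth=AUTH,
    headers={"Accept": "application/gzip"},
)
resp.raise_for_status()

# Forward each audit event to the SIEM collector.
with gzip.open(io.BytesIO(resp.content), "rt") as log:
    for line in log:
        requests.post(SIEM_URL, data=line.encode(), timeout=5)
```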
Improve Metric Correlation
It would be nice to better correlate metrics, server events, and individual operations.
For example, it would be helpful to have an indication in the Profiler of automatic cluster scale-up/down events, in order to easily correlate the operations that triggered them over the last 24h.
1 vote -
Metric Grouping
A huge improvement when it comes to metrics would be the ability to query them by grouping (e.g. by database user). That way, if you use a dedicated database user per service, you could see how much load each specific service is putting on the database.
Any form of implementation would be helpful; one example could be adding labels to the Prometheus metrics per user, replica/shard, etc.
2 votes -
Profiler: expose the dashboard ID and chart ID from which the query is originating
My MongoDB analytics node performance is decreasing.
Looking at the profiler it seems that some specific queries are super slow.
I can see the full query, I can see that they originate from MongoDB charts ("appName": "atlas-data-federation|app-services|xxxxx").
But I cannot see which dashboard ID and which specific chart ID this query is originating from.
So I cannot, for example, easily find which chart to delete. Is this something you could help with, or is there any workaround? (A partial workaround sketch follows this entry.)
Thanks!
1 vote -
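There is no mapping back to a dashboard or chart ID today, but as a partial workaround the in-flight operations can at least be filtered by that appName. A sketch using PyMongo's database-level aggregation with $currentOp; the connection string and appName pattern are placeholders:

```python
from pymongo import MongoClient

# Placeholder connection string for a user allowed to run $currentOp.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")

# $currentOp must be the first stage of an admin-database aggregation.
pipeline = [
    {"$currentOp": {"allUsers": True, "idleConnections": False}},
    {"$match": {"appName": {"$regex": "app-services"}, "secs_running": {"$gte": 5}}},
    {"$project": {"appName": 1, "secs_running": 1, "command": 1}},
]
for op in client.admin.aggregate(pipeline):
    # The 'command' field shows the full query, but nothing ties it back
    # to a Charts dashboard or chart ID, which is exactly the gap in this request.
    print(op["secs_running"], op.get("appName"))
```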
BI connector host name via API call
The BI Connector hostnames shown in the Atlas UI (https://www.mongodb.com/docs/atlas/bi-connection/) are not available via the API. Currently, customers have to copy and store these values manually so they can be reused by other applications. Customers are hoping to get these values in an automated way, for example via API calls or an SDK.
5 votes -
All Clusters permission for ORG level roles
Today, to be able to use the All Clusters view you need permissions on each project; your user has to be included in every project.
When we use federation, we generally don't grant permissions on each individual project to a team of administrators; we grant an org-level role such as ORGOWNER instead. By doing this, we lose access to All Clusters. The idea is to grant All Clusters access to org-level roles that can read project resources, for example ORGOWNER or ORGREADONLY.
9 votes -
MongoDB Atlas Network Peering with Oracle OCI (similar to Azure and AWS)
Please add Atlas network peering capabilities for Oracle Cloud Infrastructure customers, similar to those available for Azure and AWS. We are currently using Azure, but would like to use OCI as well.
2 votes -
AtlasProject CRD status fields
Hi, we created the Atlas project via Terraform and use the MongoDB operator for cluster and user management. For now, we use a sidecar to set the status.id field in the AtlasProject CRD, because we use the skip-sync flag (so the operator will not change the project config) and the operator therefore will not sync this field for us.
If possible, add a parameter to the spec where I could set this ID at resource creation (so if we have the skip flag it will automatically set status.id), so we can get rid of the sidecar (the sidecar was implemented…
1 vote -
View Epoch in Compass
I use epoch times in almost every collection for created time, last modified, etc. Currently I have to copy the time value and paste it into epochconverter.com (a one-line conversion sketch follows this entry). It would be very handy if the time showed as a string in a bubble when you hovered over it in Compass.
1 vote -
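Until Compass shows this natively, the conversion done on epochconverter.com is essentially a one-liner. A trivial sketch that handles both second- and millisecond-precision values:

```python
from datetime import datetime, timezone

def epoch_to_string(value: int) -> str:
    """Render an epoch value (seconds or milliseconds) as a UTC timestamp string."""
    seconds = value / 1000 if value > 1e11 else value  # heuristic: millisecond values are much larger
    return datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat()

print(epoch_to_string(1700000000))     # seconds
print(epoch_to_string(1700000000000))  # milliseconds
```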
Larger disk for smaller cluster tier (instance type)
We have TB-scale data in production, and we also have several databases of the same size for our development environments.
Since Atlas's smaller cluster tiers such as M20 and M30 don't support large disks, we have to pay a lot for our staging environment too, even though we don't need much CPU or RAM.
We hope Atlas will support more varied storage options on the smaller cluster tiers.
2 votes -
A new role for security auditing purposes
Currently MongoDB Atlas provides two read-only roles at the project level ("Project Read Only" and "Project Data Access Read Only").
"Project Data Access Read Only" seems to also allow access to the data, while the "Project Read Only" role does not allow access to the logs. (https://www.mongodb.com/docs/atlas/reference/user-roles/)
Security officers (internal/external) need access to the logs (audit, access, etc.) and need to review the configuration, but they don't need access to the data.
Therefore, I would like to request a new project-level role for security officers with the following capabilities:
- access to "Download Logs"
- access…
7 votes -
Set up the Atlas web user's team membership as part of the invitation.
Doing so removes the extra step of assigning the team after the user has accepted the invitation.
2 votes -
Add storage class configuration option to Cloud Backup snapshot Export to S3
Allow setting the desired value of the x-amz-storage-class header on the S3 PUT Object requests used for snapshot export, so that exported snapshots can be written directly to the desired storage class without any custom lifecycle rules or waiting time on the S3 side (a lifecycle-rule workaround sketch follows this entry).
6 votes -
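For comparison, this is roughly the lifecycle-rule workaround the request wants to make unnecessary: transition the exported objects after upload instead of having Atlas write them to the target storage class directly. A sketch with boto3; the bucket name and key prefix are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Transition exported snapshot objects to Glacier once lifecycle evaluation runs,
# instead of writing them there directly via x-amz-storage-class.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-snapshot-export-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "exported-snapshots-to-glacier",
                "Filter": {"Prefix": "exported_snapshots/"},  # placeholder prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```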
Introduce Serverless for AWS Frankfurt region
Company policy (speaking for a customer) requires data to be deployed in Frankfurt. Serverless would be very convenient to reduce pause/resume and scale-up times.
2 votes -
Disable Query Targeting alert notifications for $changeStream operations
Change streams require scanning of the oplog and cannot make use of indexes, meaning that $changeStream operations frequently trigger Query Targeting: Scanned Objects / Returned alerts that cannot be actioned. It would be ideal to allow $changeStream operations to be filtered from the Query Targeting alerts.
22 votes -
Investigation Buckets (dropdown menu) to enable predefined set of MongoDB & Hardware Metrics to investigate certain Data Layer SLAs
We have many metrics under "MongoDB Metrics" & "Hardware Metrics". A user needs to have a good idea of which metrics to enable while troubleshooting a particular requirement.
E.g. a user is investigating current lag in the data layer; they have to select, one by one, all the metrics that would help them get the data they are looking for.
We could help the customer a bit more by providing buckets like:
Options (select one):
1. Investigate Lags
2. Investigate IOPS
3. Investigate Replication
4. Investigate Search Index
... etc. This will pre-select certain metrics from the list of MongoDB & Hardware metrics, which will give them all…
1 vote -
Export mongo dump to AWS S3 Bucket
It would be good to have the ability to export mongodump and mongoexport files directly from Atlas to an S3 bucket. This kind of feature already exists for backup snapshots. In our particular case, we want to delete a cluster, but before that we want to export all its data to S3. The backup snapshot takes about 1.6 TB and the mongodump file is around 650 GB. Of course, I could take a dump to my local machine and upload it to the S3 bucket, but the problem is that my laptop SSD is only 512 GB, so it is impossible to export a full dump locally (a streaming workaround sketch follows this entry).
Thank…
2 votes -
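Until a managed export exists, the local-disk limitation can be worked around by streaming the dump archive straight into S3 instead of writing it to the laptop first. A sketch with mongodump and boto3; the URI, bucket and key are placeholders:

```python
import subprocess

import boto3

MONGO_URI = "mongodb+srv://user:pass@cluster0.example.mongodb.net"  # placeholder
BUCKET, KEY = "my-dump-bucket", "dumps/cluster0.archive.gz"          # placeholders

# --archive with no filename writes the dump to stdout; --gzip compresses it in-flight.
dump = subprocess.Popen(
    ["mongodump", f"--uri={MONGO_URI}", "--archive", "--gzip"],
    stdout=subprocess.PIPE,
)

# upload_fileobj streams the pipe to S3 as a multipart upload, so nothing hits local disk.
boto3.client("s3").upload_fileobj(dump.stdout, BUCKET, KEY)

if dump.wait() != 0:
    raise RuntimeError("mongodump exited with a non-zero status")
```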
Propagation of NodeType via Prometheus metrics
Context: we rely on Prometheus metrics for various systems in the company. The setup of our clusters until now was pretty standard: replica sets and sharded clusters with electable nodes only, i.e. three nodes per replica set or shard, with hostnames of the form <clustername>-<shardnumber>-(00|01|02). Now we have a fourth, analytics node in the clusters, which by convention seems to get the next incremented number, 03.
The thing is, we cannot rely on convention; we need a way to distinguish the node types from the Prometheus metrics (or at least from the Atlas API)… (a starting-point sketch follows this entry)
33 votes
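As a starting point today, the processes resource of the Admin API lists each node with a typeName (primary/secondary), though to my knowledge it does not flag analytics nodes specifically, which is exactly the gap this idea describes. A sketch with placeholder IDs and keys:

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<projectId>"  # placeholder
AUTH = HTTPDigestAuth("<publicKey>", "<privateKey>")

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/processes",
    auth=AUTH,
)
resp.raise_for_status()

# Each entry carries hostname, replicaSetName and a typeName such as
# REPLICA_PRIMARY / REPLICA_SECONDARY; an "analytics" indicator is what is missing.
for process in resp.json().get("results", []):
    print(process["hostname"], process.get("replicaSetName"), process.get("typeName"))
```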