Atlas
Create an AtlasRole custom resource
Currently roles are configured as a list inside the AtlasProject resource. I propose managing these as individual AtlasRole custom resources instead.
Consider a typical microservice-based application with several microservices deployed by individual Helm charts, each sharing the same project and cluster. In this scenario the app developer may want to create custom roles for each app or group of apps with access to certain collections, e.g. to stop them from being able to read collections with sensitive data.
Currently it is possible for apps to create their own users, but those users must assume one or more existing roles.…
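For illustration only, here is a rough sketch of what such a resource might look like, created with the Kubernetes Python client. The AtlasRole kind, its API group/version, and all of the field names below are hypothetical, since this resource does not exist in the Atlas Operator today:

```python
# Hypothetical sketch: "AtlasRole" is the proposed resource, not an existing
# Atlas Operator kind; the group/version and field names are assumptions.
from kubernetes import client, config

config.load_kube_config()

atlas_role = {
    "apiVersion": "atlas.mongodb.com/v1",      # assumed API group/version
    "kind": "AtlasRole",                       # proposed kind
    "metadata": {"name": "orders-readonly"},
    "spec": {
        "projectRef": {"name": "my-atlas-project"},   # shared AtlasProject
        "roleName": "ordersReadOnly",
        "actions": [
            {
                "action": "FIND",
                "resources": [{"database": "shop", "collection": "orders"}],
            }
        ],
    },
}

# Each Helm chart could ship its own AtlasRole instead of patching the
# role list inside the shared AtlasProject resource.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="atlas.mongodb.com",
    version="v1",
    namespace="default",
    plural="atlasroles",
    body=atlas_role,
)
```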
1 vote -
Incorporate Resource Tags in Atlas alerts
Enhance the alert content by incorporating the resource tags of the entity that triggered the alert.
4 votes -
SimplifiedJson format
The Kafka source connector has a SimplifiedJson output format option, which is easier for downstream applications to parse. MongoDB utilities and Atlas features do not currently support this as an output format.
mongoexport does offer canonical Extended JSON output, which can be matched to the Kafka connector's Extended JSON format, but that requires additional translation by third-party applications.
Is there already a feature to have Atlas $out, mongoexport, or the dump utilities output data in SimplifiedJson format? If not, can this be looked into?
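For reference, the extra translation step described above can be done with the bson package that ships with PyMongo, re-serializing canonical Extended JSON (as produced by mongoexport) into the relaxed form, which is close to the simplified shape downstream applications want; the sample line below is illustrative:

```python
# Convert a canonical Extended JSON line (e.g. a mongoexport record) into
# relaxed JSON, which drops the $numberInt/$numberDouble wrappers.
from bson import json_util

canonical_line = '{"qty": {"$numberInt": "5"}, "price": {"$numberDouble": "9.99"}}'

doc = json_util.loads(canonical_line)
relaxed_line = json_util.dumps(doc, json_options=json_util.RELAXED_JSON_OPTIONS)
print(relaxed_line)  # {"qty": 5, "price": 9.99}
```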
4 votes -
Data Usage Reporting & Improved Profiler
The Current Problem:
The existing issue stems from MongoDB's inability to provide concrete evidence supporting your data usage charges. This becomes especially troublesome when your system typically operates below a threshold of 100 GB of data usage daily, then over a span of 7 days you are billed for usage exceeding 1,000-2,000 GB daily, only to revert to less than 100 GB daily afterwards. The absence of substantiating evidence leaves you in a quandary, unsure whether the issue lies with your system or is a reporting error on MongoDB's part. MongoDB Support relies on the slowest…
1 vote -
Change global default compressor setting
Hello,
I wanted to request a feature that would enable us to change the global default compressor setting ourselves, something that could possibly be exposed on the Advanced Configuration settings page in Atlas. We're looking to re-compress our existing collections using zstd instead of the default snappy option, and we want future collections to be created with the zstd block compressor automatically. Today we have to create a support ticket and coordinate with Support. It would be great if we could have this feature in the Atlas portal so that we can monitor the progress and trigger it whenever we want to…
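In the meantime, a per-collection workaround is to request zstd explicitly at creation time, as in this PyMongo sketch (connection string and names are placeholders); it does not help with re-compressing existing collections, which still requires the support ticket:

```python
# Per-collection workaround: ask for the zstd block compressor explicitly
# instead of relying on the global default (snappy).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
db = client["appdb"]  # illustrative database name

db.create_collection(
    "events",
    storageEngine={"wiredTiger": {"configString": "block_compressor=zstd"}},
)
```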
1 vote -
Ability to have UI Users with access to list Indexes but not browse collection documents
We'd like to have read-only users who can see metrics and list indexes but cannot run operations or browse collection documents.
This is important so application developers can operate their databases without needing read access to the contents.
1 vote -
Customize CPU threshold for Autoscaling
Currently, to autoscale a cluster to the next tier, the "Average CPU Utilization must have exceeded 75% of available resources for the past hour."
Problem:
Some workloads remain near the 75% CPU threshold by nature. This can cause constant scaling events, which creates unnecessary noise.
We don't want to turn autoscale off because this is a beneficial feature to keep enabled on the cluster.
Solution:
We want to be able to configure the CPU % threshold for cluster tier scaling. For example, rather than 75%, we'd like the option to set the threshold to…
9 votes -
AWS gp3 Decouple IOPs from Disk Size
With AWS EBS gp3 volumes, IOPS and throughput can be provisioned separately from EBS storage size. While Atlas now uses gp3 volumes and provides a baseline of 3,000 IOPS, higher throughput is still directly tied to disk size.
We run IO-intensive workloads that require high throughput, and we have to severely over-size disks to get the needed throughput. We don't require the extra-low latency of provisioned IOPS (which are much more expensive than over-provisioning storage).
According to the linked AWS documentation below, if the M60 instance size is running on an EC2 m5.4xlarge instance type, it can support a…
2 votes -
Incremental backup on Serverless Instances
Hey,
I would like to know when read/write access to the "oplog" collection in the "local" database will be enabled on Serverless instances. I need this access to implement incremental backup/restore of the database. The documentation does not explain the reason for this limitation and I would like to understand why.
There are mainly two reasons for my request. The first is local backup/restore for dev purposes: I don't need or want to restore all the databases locally each time, due to the size and the time the restore takes. The second one is that if I need to…
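For context, this is a minimal sketch of the oplog-based incremental read the request depends on; it works where the "local" database is readable (dedicated clusters) and is exactly what Serverless currently blocks. The connection string, the starting timestamp, and the archive() helper are placeholders:

```python
# Resume reading the oplog from the last timestamp that was backed up.
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
oplog = client["local"]["oplog.rs"]

last_ts = Timestamp(1700000000, 1)  # placeholder: load from your backup state store

cursor = oplog.find(
    {"ts": {"$gt": last_ts}},
    cursor_type=CursorType.TAILABLE_AWAIT,
)
for entry in cursor:
    archive(entry)          # hypothetical helper that writes the entry to backup storage
    last_ts = entry["ts"]   # persist this to resume the next incremental run
```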
1 vote -
Bulk explain() command
I want to be able to run the "explain" command for multiple database operations so that I can quickly estimate the RPUs and WPUs for my workload (instead of having to do it one by one for each operation).
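As a stop-gap, a loop like this PyMongo sketch can run the explain command over a batch of operations in one go (the connection string and workload list are illustrative):

```python
# Run explain with executionStats over several find operations at once.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
db = client["appdb"]  # illustrative database name

operations = [  # illustrative workload to estimate
    {"find": "orders", "filter": {"status": "open"}},
    {"find": "orders", "filter": {"customerId": 42}},
    {"find": "events", "filter": {"type": "click"}, "limit": 100},
]

for op in operations:
    plan = db.command({"explain": op, "verbosity": "executionStats"})
    stats = plan["executionStats"]
    print(op["find"], stats["totalKeysExamined"], stats["totalDocsExamined"], stats["nReturned"])
```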
2 votes -
Transit Gateway (TGW) support for Atlas
Can you please provide TGW support for Atlas? We are using confluent.io Kafka with a TGW, and found ourselves caught in a situation where Kafka has no direct access to Atlas: both our EKS and Kafka are directly connected to the TGW, but there's no way for Kafka to reach Atlas, because EKS and Atlas are connected over VPC peering. We are not interested in PrivateLink. Our Kafka needs direct access to Atlas because we are running the Mongo source connector. The only viable solution now is to run the Mongo source connector on our EKS, which, while it works, defeats the purpose…
1 vote -
Prevent exposure of Azure Vault or KMS
Today, MongoDB communication with the BYOK key goes over the public internet, so it is necessary to allow public IPs:
https://learn.microsoft.com/en-us/azure/key-vault/general/private-link-service?tabs=portal
https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/key-vault-security-baseline
1 vote -
Support OIDC as Authentication Protocol for access to Mongo Portal
Currently SAML is supported: https://www.mongodb.com/docs/atlas/security/federated-authentication/#configure-federated-authentication
It would be preferable if OIDC was supported.
1 vote -
Manage federated database views via Terraform
The mongodbatlas_federated_database_instance resource allows managing tables in the federated instance, but it does not allow us to manage views. Please update the provider so we can also create and manage views.
1 vote -
Yearly backup option is required
We are planning to take backups yearly, but currently the longest available interval is monthly. If I want yearly coverage I need to keep 12 monthly snapshots per year, which is expensive; our audit team requires 20 years of backup retention, which means 12 x 20 = 240 backups and is very expensive. If a yearly backup option were available, only 20 x 1 = 20 backups would be needed.
As of now only available below options:
Hourly
Daily
Weekly
Monthly
We have also raised a case in our support portal: 01198437
2 votes -
Migrate a specific collection / database from a replica set to a sharded cluster (not empty) without downtime
It would be great if MongoDB has a tool to migrate specific collections/databases from a replica set to sharded cluster and vice versa. Migrating data with this tool should not cause any downtime in both source and destination clusters.
2 votes -
Support for GCP Saudi Arabia/Dammam me-central2 for Atlas
GCP announced availability of services in me-central2. Can MongoDB please support this region in GCP?
3 votes -
Better index usage
Currently most of our indexes show a usage of <1/min, which isn't very useful. If I hover over them I can see the actual usage, which might say 20/day, but I would prefer not to have to hover.
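For anyone who needs the raw numbers today, the per-index counters behind that UI figure can be read with the $indexStats aggregation stage (namespace and connection string below are placeholders):

```python
# Print how many operations have used each index, and since when.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
coll = client["appdb"]["orders"]  # illustrative namespace

for stat in coll.aggregate([{"$indexStats": {}}]):
    accesses = stat["accesses"]
    print(f'{stat["name"]}: {accesses["ops"]} ops since {accesses["since"]}')
```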
1 vote -
Include "fields" and "options" in index build email notification
Hi, I'm wondering if it would be possible to include not only the database and collection names in the "Index build succeeded" email notification, but also the "fields" and "options" JSON objects.
As the database admin it's great to know what indexes are being built by the team, and since there isn't a "createdAt" timestamp on indexes in MongoDB, the emails would serve as a workaround for that as well.
In short, it would be a valuable addition for our team if the "fields" and "options" data was included in the "Index build succeeded" email notification.
Thanks in advance!
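Until the notification carries that data, a small PyMongo sketch like this can pull the fields and options for every index so they can be recorded alongside the email (names below are placeholders):

```python
# List each index's key pattern ("fields") and its remaining options.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
coll = client["appdb"]["orders"]  # illustrative namespace

for name, info in coll.index_information().items():
    key_pattern = info.pop("key")   # the indexed fields, in order
    print(name, key_pattern, info)  # remaining entries include options such as unique/sparse
```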
4 votes -
List user IDs in the Access screen (needed for the project owner ID when setting up a project in Pulumi)
As per the title: I want to easily find user IDs from the Atlas UI.
1 vote