Atlas
-
federated authentication to terraform provider
Allow OIDC authentication to the terraform provider to eliminate the need for secrets or static configuration
2 votes
Once Atlas itself supports this, we will implement it in the Terraform provider. We are in close contact with the PM who owns IAM and have alerted him to this request.
-
Node specific defaultMaxTimeMS
This new functionality in 8.0 is intended to set a fixed time limit on all nodes across the cluster.
Is there a possibility to set it differently for each node? Our non-operational (Analytics) nodes or secondaries may be able to tolerate higher limits than the primary.
Yes, 'maxTimeMS()' would override this, but adding it to all queries is not feasible, and occasionally the same deployed query may need to run a little longer to retrieve/scan more docs than originally expected. (See the sketch after this entry.)
2 votes -
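For context, a minimal sketch of today's cluster-wide behavior and the per-query override, assuming a pymongo client with privileges to set cluster parameters (the connection string and namespaces are placeholders; the per-node variant requested here does not exist):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")

# Cluster-wide default: applies the same 5s limit to read operations
# on every node; there is currently no per-node or per-tier variant.
client.admin.command({
    "setClusterParameter": {"defaultMaxTimeMS": {"readOperations": 5000}}
})

# Per-query override, which is what this request wants to avoid having
# to add to every deployed query.
docs = client["appdb"]["orders"].find({"status": "open"}).max_time_ms(30000)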
Make Atlas UI aggregation pipeline builder limitations configurable
There is currently an aggregation pipeline builder limitation on $group, $bucket, and $bucketAuto which cannot be changed or increased in the Atlas UI. Make the limit configurable, just like it is in MongoDB Compass.
2 votes -
Use Semantic Versioning
Hi,
My problem is that the terraform provider doesn't use semantic versioning.
This has caused me quite a few problems.
Firstly - it's difficult when scrolling through your version releases to understand what's breaking and what's not (I lost an hour today having to check all the releases for updates, and then applying every couple of versions from an outdated provider to make sure there were no breaking changes).
Secondly, it means I have to pin a specific version in my terraform provider rather than leaving it to auto-update to the latest minor version "~> 1.0".
Lastly, it makes using…
2 votes -
DB Growth
Hi Team,
Please create a new alert for DB growth. For example, if DB growth is more than 100% in the last 6 months, we need to get an alert. (A workaround sketch follows this entry.)
2 votes -
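Until such an alert exists, a rough workaround sketch based on periodic dbStats samples; the connection string, database name, and the stored six-month-old sample are all hypothetical, and persisting the samples is left out:

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")

def data_size_bytes(db_name: str) -> int:
    # dbStats reports the uncompressed size of the data in the database.
    return client[db_name].command("dbStats")["dataSize"]

# Compare the current size against a sample stored ~6 months ago.
size_six_months_ago = 120 * 1024**3   # hypothetical stored sample
size_now = data_size_bytes("appdb")

growth_pct = (size_now - size_six_months_ago) / size_six_months_ago * 100
if growth_pct > 100:
    print(f"DB grew {growth_pct:.0f}% in 6 months -- alert!")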
Create One Rolling Index in Terraform
This is a request to add the Create One Rolling Index API operation to the Terraform provider.
References:
* https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Rolling-Index/operation/createRollingIndex
Benefits:
Many teams interact with Atlas via automation using Terraform. This has been highlighted as one of the important operations to have in Terraform. (Until a resource exists, the API can be called directly; see the sketch after this entry.)
2 votes -
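A hedged sketch of calling the endpoint referenced above directly from Python; the API keys, group ID, cluster, and index key are placeholders, and the versioned Accept header should be checked against the linked spec:

import requests
from requests.auth import HTTPDigestAuth

# The Atlas Admin API uses HTTP digest auth with programmatic API keys.
auth = HTTPDigestAuth("PUBLIC_KEY", "PRIVATE_KEY")   # placeholder keys
group_id, cluster = "GROUP_ID", "Cluster0"           # placeholders

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v2/groups/{group_id}/clusters/{cluster}/index",
    json={
        "db": "appdb",
        "collection": "orders",
        "keys": [{"status": 1}],   # index is built one node at a time
    },
    headers={
        "Accept": "application/vnd.atlas.2023-01-01+json",
        "Content-Type": "application/vnd.atlas.2023-01-01+json",
    },
    auth=auth,
)
resp.raise_for_status()   # expect 202 Accepted on success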
take auto snapshot before upgrade
When upgrading to a new major version, automatically take a snapshot of the current cluster before completing the upgrade, to eliminate one more manual step in the process.
2 votes -
Add connection pooling metrics for sharded clusters
We recently ran into an issue where we hit the internal mongoS -> mongoD connection pool limit when reading from secondaries, requiring Atlas support to increase the value of ShardingTaskExecutorPoolMaxSize.
As a result, it would be great to be able to monitor the internal mongoS -> mongoD connection pool usage so we can set up alarms if it gets near the limit. (See the sketch after this entry.)
2 votes -
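In the meantime, the raw numbers are visible via the connPoolStats admin command when run against a mongos; a minimal sketch (the connection string is a placeholder):

from pymongo import MongoClient

# Connect through the mongos router, not a shard member.
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")

stats = client.admin.command("connPoolStats")
# Overall mongoS -> mongoD pool usage; per-host detail is under stats["hosts"].
print("in use:", stats["totalInUse"], "available:", stats["totalAvailable"])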
MongoDB::Atlas::DatabaseUser is missing Description
Please add the possibility to set a Description within MongoDB::Atlas::DatabaseUser.
2 votes -
fix example at https://www.mongodb.com/docs/atlas/operator/current/atlascustomrole-custom-resource/#basic-example
In the example, under spec:
- the "name" indentation is wrong
- the action should be specified as in the API example: connPoolStats should actually be "CONN_POOL_STATS" (with underscores; they seem to disappear here when posted)
- under inheritedRoles there is no "role" child field, it should be database
A correct manifest would be:
apiVersion: atlas.mongodb.com/v1
kind: AtlasCustomRole
metadata:
  name: tester
  namespace: tester
  labels: {}
  annotations:
    mongodb.com/atlas-resource-policy: keep
spec:
  projectRef:
    name: tester
    namespace: tester
  role:
    name: tester
    actions:
      - name: CONN_POOL_STATS
        resources:
          - cluster: true
            database: tester-database
            collection: tester-collection
    inheritedRoles:
      - name: operator-role-1
        database: tester-database
2 votes -
Expose hourly cost data as a metric for monitoring cluster cost
The hourly cost of a cluster is already available in the Atlas UI. Expose this same data as a metric for monitoring cluster cost. We understand it may not include data transfer and some other costs, but monitoring spikes or valleys over time for a given cluster is helpful when autoscaling is turned on. Then we could also set an alarm on the metric. (A stopgap sketch follows this entry.)
2 votes -
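As a stopgap, cost data can be pulled from the organization's pending invoice through the Admin API and filtered by cluster. A rough sketch, assuming the pending-invoice endpoint and its line-item field names (the API keys, org ID, and cluster name are placeholders; verify the fields against the Admin API spec):

import requests
from requests.auth import HTTPDigestAuth

auth = HTTPDigestAuth("PUBLIC_KEY", "PRIVATE_KEY")   # placeholder keys
org_id = "ORG_ID"                                    # placeholder

resp = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v2/orgs/{org_id}/invoices/pending",
    headers={"Accept": "application/vnd.atlas.2023-01-01+json"},
    auth=auth,
)
resp.raise_for_status()
invoice = resp.json()

# Line items carry per-cluster usage charges; summing them approximates
# the cluster's month-to-date cost (data transfer may appear separately).
total = sum(item.get("totalPriceCents", 0)
            for item in invoice.get("lineItems", [])
            if item.get("clusterName") == "Cluster0")
print(f"month-to-date: ${total / 100:.2f}")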
backups export to s3 bucket custom folder name
It would be very nice to have the possibility to change the exported_snapshots folder name to a custom name, and also to be able to set this via Terraform.
2 votes -
Premium Monitoring Granularity for lower tier clusters
CURRENT STATE
Premium Monitoring Granularity (10-second metrics) is only available on M40 clusters or higher
IMPACT
Lower-tier environments (such as testing and staging) cannot have 10-second metrics granularity. Some customers export metrics to third parties such as Datadog that only handle a homogeneous granularity of metrics.
When Datadog receives different granularities, e.g. 10-second granularity for PROD environments (M40+) and lower granularity for STAGE environments (lower than M40), it leads to poor data integration and dashboards failing to load data properly.
The customer does not have a reliable view into their data since some environments send 10 second…
2 votes -
Manage Organization Alerts in Terraform
We should be able to manage organization alerts through terraform, not only project level alerts.
I wanted to create a billing alert at organization level with terraform but was not able to do it, because the resource only allows the alert to be created at project level.
Doc: https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/alert_configuration
2 votes -
enforcing tagging
Hi Team,
The customer has requested a feature similar to enforced tagging (https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/implementing-and-enforcing-tagging.html), as offered on AWS. This is a critical need for their operations, as certain units do not consistently include tags on every cluster.
Implementing this feature would greatly enhance their ability to manage and scale their infrastructure effectively.
Thank you for considering this request, and I’m happy to provide further context if needed.
2 votes -
Backup Snapshot Distribution: allow to choose which policies should be copied to other region
Currently we can choose only a policy type (e.g. hourly, daily).
If we have multiple hourly policies with different retention times, we would like to choose which one should be replicated to another region. Currently it would replicate all the policies with an hourly schedule.
End goal:
We would like to set up a backup policy where the snapshots are retained in the secondary region for one day only (or less, if we had an hourly option), while the snapshots in the primary region are retained for 7 days, for the same hourly schedule.
2 votes -
Show warning when user is configured with access to invalid resources
When an invalid resource is specified in a data plane user's access policy ("grant access to"), authentication errors are misleading.
For example: if user "John" is limited to cluster "xo" but the cluster was created as "x0" by mistake, when John authenticates against "x0" we simply get a "user not found" error in the client.
If the Atlas UI could highlight the potential error (that cluster "xo" is not found, since clusters cannot be implicitly created), it would save valuable time for developers debugging entitlement issues.
2 votes -
Add an Atlas control plane in the UK region
Currently, Atlas control plane IP addresses originate from your AWS us-east-1 region. With our region being the UK, our security team does not like requests into our network originating from the US.
We request a control plane in the UK region.
2 votes -
Auto downscale - Scaling down during peak moments
Auto downscaling checks the average usage over the last 4 hours. But sometimes we have a cluster with little usage for almost all of the 4 hours which, in the last 30 or 15 minutes, starts to consume more CPU (higher than 60%, or even more).
Because autoscaling only checks the last 4 hours, it sometimes happens that a cluster auto-downscales when it shouldn't. A little while later it needs to scale back up because usage goes above 90% (or at least above 75%). (See the sketch after this entry.)
It would be much better if the…
2 votes -
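A toy illustration of the problem, using hypothetical per-minute CPU samples: the 4-hour average stays low enough to trigger a downscale even though the last 15 minutes are running hot.

# 4 hours of per-minute CPU% samples: quiet, then a spike at the end.
samples = [20.0] * 225 + [85.0] * 15          # hypothetical data

avg_4h = sum(samples) / len(samples)          # what autoscaling looks at
avg_15m = sum(samples[-15:]) / 15             # what is actually happening

print(f"4h average:  {avg_4h:.1f}%")   # ~24% -> looks safe to downscale
print(f"15m average: {avg_15m:.1f}%")  # 85%  -> cluster is actually busy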