Fulton
My feedback
14 results found
-
166 votes
Fulton supported this idea ·
-
79 votes
Thank you for your patience. Connecting Atlas clusters with GCP service accounts (workload identity federation) is in development and currently planned to be available in Q1 2024 (which is subject to change). The functionality will require MongoDB 7 in Atlas and initially will be supported in Java, Node, Python, C#/.NET, and Go drivers.
Fulton commented
The current prioritization is not ideal.
Generally we do not want individual users accessing the database, so why are you prioritizing Workforce Identity Federation for accessing clusters via SSO? If a user needs to access a cluster directly (in an emergency), it's maybe once every few years.
The highest imperative is for applications to access databases securely. The application has the highest access and security needs, so prioritizing Workload Identity Federation (IAM) is most valuable. IAM access makes it easier for developers to build new applications that work with the database (instead of touching the database directly).
Why is Atlas choosing to support Workforce Identity Federation first?
Fulton commented
Would be very nice to have this so we can use workload identity in GCP GKE clusters to eliminate yet another credential to distribute.
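For illustration only, a minimal sketch of how this could look from a GKE pod with Workload Identity enabled, assuming Atlas exposes GCP federation through an OIDC-style auth mechanism in the Python driver; the mechanism name, property names, and cluster URI below are assumptions, not a confirmed API:

```python
# Minimal sketch, not the official implementation. Assumes Atlas exposes GCP
# workload identity federation through an OIDC-style auth mechanism; the
# mechanism name, property names, and cluster host are placeholders.
from pymongo import MongoClient

# Inside a GKE pod with Workload Identity enabled, the driver could obtain a
# token from the metadata server itself -- no password or keyfile needs to be
# distributed with the application.
client = MongoClient(
    "mongodb+srv://example-cluster.mongodb.net/",    # placeholder cluster host
    authMechanism="MONGODB-OIDC",                    # assumed mechanism name
    authMechanismProperties={
        "ENVIRONMENT": "gcp",                        # assumed property values
        "TOKEN_RESOURCE": "https://example-audience",
    },
)
print(client.admin.command("ping"))
```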
Fulton supported this idea ·
-
66 votes
Fulton commented
VPC Native as well.
Would need a Private Link, VPC Native, and Public discovery endpoints. Or maybe some sort of scrape parameter to add to the request...
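As a point of reference for what exists today, a rough sketch of polling the public Prometheus discovery endpoint; the URL pattern, project ID, and credentials are placeholders based on the public integration, and the ask above is for Private Link and VPC Native equivalents of the same thing:

```python
# Rough sketch: poll the public Atlas Prometheus HTTP service-discovery
# endpoint. Group ID and credentials are placeholders; a Private Link /
# VPC Native variant would need an equivalent endpoint reachable from inside
# the VPC, which is what this request is about.
import requests

GROUP_ID = "<project-id>"  # placeholder Atlas project ID
DISCOVERY_URL = f"https://cloud.mongodb.com/prometheus/v1.0/groups/{GROUP_ID}/discovery"

resp = requests.get(
    DISCOVERY_URL,
    auth=("<prometheus-user>", "<prometheus-password>"),  # placeholder credentials
    timeout=10,
)
resp.raise_for_status()

# Entries follow the Prometheus http_sd format: {"targets": [...], "labels": {...}}
for target_group in resp.json():
    print(target_group["targets"], target_group.get("labels", {}))
```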
Fulton supported this idea ·
-
61 votes
Fulton commented
Priorities in terms of logs:
1. Database Instance Logs
2. Audit Logs
3. Activity Logs
In the end we _need_ to be able to collect any and all data logged in order to automate managing Atlas on a large scale.
Fulton commented
I would request that it support GCP Cloud Storage as well, and object storage APIs in general.
I would hope this feature would supersede https://feedback.mongodb.com/forums/924145-atlas/suggestions/39104293-ability-to-stream-logs-to-cloudwatch-logs-or-datad.
If you can get logs into an S3 or Cloud Storage bucket, there are a million ways to reliably ship them from there to a million other tools.
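As an illustration of the workaround this idea would remove, a hedged sketch of pulling compressed mongod logs through the Atlas Admin API and dropping them into a Cloud Storage bucket; the project ID, hostname, bucket name, and API keys are placeholders:

```python
# Sketch of today's workaround: pull compressed mongod logs from the Atlas
# Admin API and push them into a GCS bucket, where any log router or Big Data
# tool can pick them up. Project ID, hostname, bucket, and keys are placeholders.
import requests
from requests.auth import HTTPDigestAuth
from google.cloud import storage  # pip install google-cloud-storage

GROUP_ID = "<project-id>"
HOSTNAME = "<cluster-shard-00-00.example.mongodb.net>"
LOG_NAME = "mongodb.gz"

url = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters/{HOSTNAME}/logs/{LOG_NAME}"
resp = requests.get(
    url,
    auth=HTTPDigestAuth("<public-api-key>", "<private-api-key>"),
    timeout=60,
)
resp.raise_for_status()

bucket = storage.Client().bucket("<logs-bucket>")
bucket.blob(f"atlas/{HOSTNAME}/{LOG_NAME}").upload_from_string(resp.content)
```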
Fulton supported this idea ·
-
3 votes
Fulton commented
Another approach is to just tack a last-access date onto the user document, but access happens so frequently that this might not perform well.
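As a rough application-level analogue of that approach (the collection and field names are hypothetical), stamping a server-side date on every access would look something like this, which also shows why a write per access could get expensive:

```python
# Hypothetical sketch: stamp a last-access date on a user document on every
# access. Collection and field names are made up for illustration; doing this
# write on every access is the performance concern mentioned above.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
users = client["app"]["users"]

def record_access(user_id):
    # $currentDate uses the server's clock, avoiding client clock skew.
    users.update_one({"_id": user_id}, {"$currentDate": {"lastAccess": True}})
```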
Fulton shared this idea ·
-
12 votes
Fulton supported this idea ·
-
380 votes
Fulton supported this idea ·
-
56 votes
Fulton supported this idea ·
-
41 votes
Fulton supported this idea ·
-
5 votes
Fulton supported this idea ·
-
15 votes
Fulton supported this idea ·
-
37 votes
Fulton supported this idea ·
-
78 votes
Fulton supported this idea ·
-
66 votes
Fulton supported this idea ·
We believe shipping logs to S3-compatible object storage APIs such as AWS S3 or GCP Cloud Storage is the highest priority and would drive the most value. Many log routing frameworks support pulling logs from S3 or GCP Cloud Storage, so you could cover more customer needs this way.
We would like to see database logs (all of them) prioritized first so that our teams can easily view database status across all clusters and projects using our log provider (it's not Datadog/Cloudwatch/GCP).
The Atlas team should also consider https://feedback.mongodb.com/forums/924145-atlas/suggestions/43971369-send-atlas-logs-to-s3 when thinking about this functionality. Shipping to S3/GCP Storage would also let Big Data frameworks such as GCP BigQuery or AWS Datalake consume the logs, thereby serving the needs of internal data analysts as well as developers who need to view logs.
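To illustrate that downstream consumption, a hedged sketch of loading log files from a Cloud Storage bucket into BigQuery once they land there; the bucket, dataset, and table names are placeholders, and it assumes the logs are newline-delimited JSON:

```python
# Sketch: once Atlas ships logs into a Cloud Storage bucket, BigQuery can load
# them directly. Bucket, dataset, and table names are placeholders, and the
# logs are assumed to be newline-delimited JSON.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # infer the schema from the log documents
)

load_job = client.load_table_from_uri(
    "gs://<logs-bucket>/atlas/*.json",  # placeholder bucket/prefix
    "<project>.<dataset>.atlas_logs",   # placeholder destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
```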