Michael Gerlach
-
101 votes
Michael Gerlach commented
This is already supported. Scrape configs support a so-called target scheme:
`https://cloud.mongodb.com/prometheus/v1.0/groups/GROUP-ID/discovery?targetScheme=PRIVATE`
This returns a scrape config that references the metrics endpoint via a private FQDN of the cluster node you want to receive metrics from. This works for VPC-peering-enabled Atlas projects, but with AWS PrivateLink your mileage may vary.
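The discovery endpoint above plugs straight into Prometheus's HTTP service discovery. A minimal sketch of the corresponding scrape job follows; the GROUP-ID and the credentials are placeholders, and the exact auth setup depends on how the Prometheus integration is configured in your Atlas project:

```yaml
# Sketch: Prometheus job using Atlas's HTTP service discovery endpoint
# with targetScheme=PRIVATE. Credentials below are placeholders.
scrape_configs:
  - job_name: atlas
    scheme: https
    http_sd_configs:
      - url: https://cloud.mongodb.com/prometheus/v1.0/groups/GROUP-ID/discovery?targetScheme=PRIVATE
        refresh_interval: 60s   # re-resolve targets every minute
        basic_auth:
          username: prom_user   # integration user (placeholder)
          password: secret      # integration password (placeholder)
    basic_auth:                 # same credentials for the scrape itself
      username: prom_user
      password: secret
```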
Michael Gerlach supported this idea ·
-
12 votes
Michael Gerlach commented
This has been implemented for a long time now.
-
14 votes
Michael Gerlach supported this idea ·
Michael Gerlach commented
Does this mean the integrity of backups is not verifiable at all?
-
13 votes
Michael Gerlach supported this idea ·
-
16 votes
Michael Gerlach commented
This is a general problem with Atlas: there is simply no way to see the current effective limits or the available headroom.
We have been waiting for a per-organization Prometheus integration to scrape such metrics for quite a while now.
Michael Gerlach supported this idea ·
-
13 votes
Michael Gerlach supported this idea ·
-
14 votes
Michael Gerlach supported this idea ·
-
15 votes
Michael Gerlach supported this idea ·
-
16 votes
Michael Gerlach supported this idea ·
-
15 votes
Michael Gerlach supported this idea ·
-
15 votes
Michael Gerlach supported this idea ·
-
18 votes
Michael Gerlach commented
We have been waiting for this feature for more than two years now.
Maintenance gets scheduled in flight, and we are not happy with it. For Black Friday, we have always contacted support to effectively disable maintenance on all our Atlas projects.
Michael Gerlach supported this idea ·
Michael Gerlach shared this idea ·
-
22 votes
Michael Gerlach supported this idea ·
Michael Gerlach commented
According to the docs, this is already implemented.
Have a look at NETWORK_PERMISSION_ENTRY_ADDED, NETWORK_PERMISSION_ENTRY_REMOVED, and NETWORK_PERMISSION_ENTRY_UPDATED.
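The event types above can be wired into project alerts. A minimal sketch of the alert-configuration bodies, assuming the payload shape of the Atlas Administration API's alert-configuration resource (the notification settings here are placeholders, and each body would be POSTed to the project's `alertConfigs` endpoint):

```python
# Sketch: one alert-configuration body per network-access-list event type.
# Field names follow my understanding of the Atlas alert-configuration
# resource; notification values are placeholders.

EVENT_TYPES = [
    "NETWORK_PERMISSION_ENTRY_ADDED",
    "NETWORK_PERMISSION_ENTRY_REMOVED",
    "NETWORK_PERMISSION_ENTRY_UPDATED",
]

def alert_config(event_type: str) -> dict:
    """Build one alert-configuration body for the given event type."""
    return {
        "eventTypeName": event_type,
        "enabled": True,
        "notifications": [
            {
                "typeName": "GROUP",  # notify the project's alert recipients
                "intervalMin": 60,    # re-notify at most hourly
                "delayMin": 0,        # fire immediately
            }
        ],
    }

configs = [alert_config(e) for e in EVENT_TYPES]
```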
19 votes
Michael Gerlach supported this idea ·
-
26 votes
Michael Gerlach supported this idea ·
-
23 votes
Michael Gerlach supported this idea ·
-
23 votes
Michael Gerlach supported this idea ·
-
27 votes
Michael Gerlach supported this idea ·
-
25 votes
Michael Gerlach commented
This is also crucial for **break-glass access** when using e.g. Okta for Identity Federation.
Our DRPs foresee emergency access in an incident scenario affecting the identity provider.
Michael Gerlach supported this idea ·
-
34 votes
Michael Gerlach commented
I don't see how you end up with multiple CIDRs.
Every VPC has a single main CIDR block. Even with multiple subnets distributed across availability zones, your VPC still has a single primary CIDR block assigned to it.
> Currently there is needed public egress - and this can be issue in fully private clouds, where all public traffic is blocked.
I do agree.
A generic solution would be to expose the Admin API as a private endpoint through VPC peering. I think I saw such a feature request already.