Google Private Service Connect
Greeting from Fivetran!
This is somewhat related to https://feedback.mongodb.com/forums/924145-atlas/suggestions/45272014-allow-customers-to-specify-the-number-of-service-a . Having 50 service attachments is not scalable for us, since each PSC connection then requires 50 IP addresses. We have a large customer base, and having each customer create their own PSC connection would consume a large number of IP addresses and quickly exhaust our subnets.
From the support case, it seems the decision to use 50 PSC service attachments stems from the fact that the GCP load balancer does not allow more than one backend pool per service attachment, and that ports are passed through as-is, unlike the AWS and Azure implementations. One solution could be VM-based network appliances acting as port-based routers, forwarding requests further down the network to the MongoDB nodes based on the incoming port. This would mimic the way AWS and Azure load balancers work, where they route to different backend pools based on port.
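To illustrate the idea, the port-to-backend mapping such an appliance would maintain could look like the sketch below. This is a minimal Python illustration only; the base port, hostnames, and node count are hypothetical, not real Atlas values.

```python
# Hypothetical port-based routing table for the proposed VM appliance:
# each frontend port on the appliance maps to one MongoDB node behind it,
# mimicking how AWS/Azure load balancers pick a backend pool by port.

BASE_PORT = 27000  # hypothetical first frontend port exposed via PSC

# Hypothetical backend pool: one replica-set node per frontend port,
# all listening on the standard mongod port 27017.
BACKENDS = {
    BASE_PORT + i: (f"mongodb-node-{i}.internal", 27017)
    for i in range(3)
}

def route(incoming_port: int) -> tuple[str, int]:
    """Return the (host, port) backend for a given incoming frontend port."""
    try:
        return BACKENDS[incoming_port]
    except KeyError:
        raise ValueError(f"no backend registered for port {incoming_port}")

# Traffic arriving on 27001 would be forwarded to node 1's mongod port.
print(route(27001))  # ('mongodb-node-1.internal', 27017)
```

With a scheme like this, a single consumer endpoint IP could front the whole cluster, because the port alone identifies the target node.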
Thanks!
Said @ Fivetran
-
Diogo commented
Hey all, this is also very important to us: it consumes a lot of IPs and is generally hard to manage through Terraform, due to all the resources it needs to create.
There should be an easier way to do this, as in AWS or Azure, where a single IP within a subnet routes traffic to MongoDB's private link.