Many providers support the same API as AWS, so it should be straightforward to integrate them!
9 votes
Consider an archive scenario: when a user of a given app has not logged into the app in [x] weeks/months, all their data is moved to Online Archive. Once they log back into the app, their "cold" data should be considered "hot" again and be moved back into Atlas. While we can use $out to copy data back to Atlas, there is currently no way to remove the "rehydrated" data from S3 once it has been copied back to Atlas.
5 votes
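The copy-back half of this scenario can already be written as a federated aggregation ending in a $out stage that targets an Atlas cluster; the deletion from S3 is the missing piece. A minimal sketch of the rehydration pipeline (the cluster, project, database, collection, and field names are placeholders, not taken from the request):

```json
[
  { "$match": { "userId": "user-123" } },
  {
    "$out": {
      "atlas": {
        "projectId": "<project-id>",
        "clusterName": "Cluster0",
        "db": "appdata",
        "coll": "events"
      }
    }
  }
]
```

After this runs, the matched documents exist both in the Atlas cluster and in the S3 store; the request is for a supported way to drop the S3 copies.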
I would like my Data Lake in Project A to be able to query data in a cluster in Project B.
4 votes
It would be helpful to have options in the Storage Configuration that let you see which source a document retrieved through federation came from.
For instance, it would be helpful to have a simple way to know that a specific document came from the collection Foo in the database Bar.
2 votes
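One shape this could take is an opt-in provenance field declared at the top of the storage configuration, so every federated result carries its origin. A sketch along those lines (the provenanceFieldName option name and all store/cluster names here are assumptions for illustration):

```json
{
  "provenanceFieldName": "_provenance",
  "stores": [
    {
      "name": "store0",
      "provider": "atlas",
      "clusterName": "Cluster0",
      "projectId": "<project-id>"
    }
  ],
  "databases": [
    {
      "name": "Bar",
      "collections": [
        {
          "name": "Foo",
          "dataSources": [
            { "storeName": "store0", "database": "Bar", "collection": "Foo" }
          ]
        }
      ]
    }
  ]
}
```

Each returned document would then include a `_provenance` subdocument identifying the store, database, and collection it was read from.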
Our immediate need: our applications are multi-tenant, so it would be very useful if we could create tenant-specific data lakes by setting particular constraints in the ingestion configuration (e.g. only ingest the documents with tenantId = 'specificTenantId').
However, filtered data lake ingestion could be useful in many ways: the ingestion could be limited to archived=false documents, documents with status=ACTIVE, etc.
2 votes
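To make the request concrete, here is a hypothetical sketch of what a filtered data source might look like in a storage configuration. The `filter` key does not exist today; it, along with the store, database, and collection names, is invented purely to illustrate the idea:

```json
{
  "databases": [
    {
      "name": "tenantFoo",
      "collections": [
        {
          "name": "orders",
          "dataSources": [
            {
              "storeName": "clusterStore",
              "database": "app",
              "collection": "orders",
              "filter": { "tenantId": "specificTenantId" }
            }
          ]
        }
      ]
    }
  ]
}
```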
It would be helpful to be able to combine collections with the same name from different databases into a single "global" database in the Federated Database Storage Configuration.
Imagine a multi-tenant application with DatabaseCustomerFoo and DatabaseCustomerBar, each containing a collection called "events". It would be great if you could easily combine those (at scale) through the storage configuration, so in this instance you would have a database DatabaseCustomerGlobal with a collection events that federates over both DatabaseCustomerFoo.events and DatabaseCustomerBar.events.
1 vote
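Listing multiple data sources under one federated collection is the kind of storage configuration this request points at; the ask is for a way to do it "at scale" rather than enumerating every tenant by hand. A sketch for the two-tenant example above (store, cluster, and project identifiers are placeholders):

```json
{
  "stores": [
    {
      "name": "clusterStore",
      "provider": "atlas",
      "clusterName": "Cluster0",
      "projectId": "<project-id>"
    }
  ],
  "databases": [
    {
      "name": "DatabaseCustomerGlobal",
      "collections": [
        {
          "name": "events",
          "dataSources": [
            { "storeName": "clusterStore", "database": "DatabaseCustomerFoo", "collection": "events" },
            { "storeName": "clusterStore", "database": "DatabaseCustomerBar", "collection": "events" }
          ]
        }
      ]
    }
  ]
}
```

With hundreds of tenant databases, maintaining this list manually becomes the pain point the request describes.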