Data Federation and Data Lake
- Write "_SUCCESS" file when data export finishes
  We use MongoDB to store time-series data and export it incrementally to S3 as Parquet via Data Federation on a daily basis. The data is relatively large, and the export duration varies from day to day, so it is hard for downstream services to know when the export has completed. Sometimes a downstream service starts reading the Parquet files while MongoDB is still writing them, which results in a partial extraction. Normally, a big data job creates a flag file, such as _SUCCESS, to indicate that it has finished writing the dataset. This file serves as a marker, indicating that all tasks associated with the…
  1 vote
- Make mongodump work with Online Archive
  Update mongodump so that it can be used against an Online Archive.
  2 votes
- Import and export archiving rules
  The ability to import and export archiving rules, so that they can be restored if/when we need to restore the cluster. This would also be useful when replicating production clusters to our staging environment.
  3 votes
- Online Archive: combine time and query criteria
  Hi Team - with regard to Atlas Data Lake and Online Archive, customers request the ability to combine time and query criteria, i.e. archive anything that is older than 60 days and also matches query X.
  4 votes
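The requested semantics, time AND query, can be expressed as a simple predicate. This is only a sketch of the intended rule logic evaluated locally; the rule dict shown is hypothetical and is not the actual Online Archive rule schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical combined rule: both fields are illustrative.
RULE = {"older_than_days": 60, "query": {"status": "closed"}}

def should_archive(doc: dict, now: datetime, rule: dict = RULE) -> bool:
    """Archive a document only if it is old enough AND matches the query."""
    cutoff = now - timedelta(days=rule["older_than_days"])
    old_enough = doc["createdAt"] < cutoff
    matches = all(doc.get(k) == v for k, v in rule["query"].items())
    return old_enough and matches
```

A recent document, or an old one that does not match the query, would stay in the cluster; only documents satisfying both conditions would move to the archive.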