Data Federation and Data Lake
-
Specify when you'd like Online Archive to migrate data
I'd like the ability to specify when a migration from my Atlas cluster to my Online Archive takes place.
2 votes -
Connect Data Lake to Self-Managed MongoDB Clusters
Connect your Atlas Data Lake to self-managed MongoDB clusters in private data centers, self-managed in a public cloud, or locally hosted.
(This would also allow certain cloud services like Charts and a component of Realm Scheduled Triggers to work with these clusters.)
2 votes -
M0 Support for Evaluation
Please provide M0 support for evaluation purposes.
2 votes -
The "Date field to archive on" option under Archiving Rule tab should also accept date in timestamp format.
The "Date field to archive on" option under Archiving Rule tab in Online Archive should also accept date field having timestamp format instead of only having date format.
2 votes -
Add support for XML
I would like to be able to query XML files using my Atlas Data Lake.
2 votes -
Support scoped permissions to online archive
Currently, it is not possible to add a MongoDB user with "scoped" permissions to an Online Archive instance. This should be supported for tighter access control.
Details here: https://support.mongodb.com/case/01376416
1 vote -
Write "_SUCCESS" File when finish data exporting
We use MongoDB to store time-series data, and export the data via Data Federation incrementally on daily basis onto s3 as Parquet. The data is relative big, and duration to export data varies from day to day. It’s hard for downstream services to know when data exporting completes. Sometimes, the downstream service start reading the parquets while MongoDB is writing, which causes partial extraction. Normally, a big data job would create a flag file, such as _SUCCESS, to indicate that the job has finished writing the dataset. This file serves as a marker, indicating that all tasks associated with the…
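A minimal sketch of the consumer-side handshake this request implies, assuming the export lands in S3 and using boto3; the bucket name, prefix, and polling interval are illustrative, not part of the original request:

import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def wait_for_success_marker(bucket: str, prefix: str, timeout_s: int = 3600) -> bool:
    """Poll for <prefix>/_SUCCESS and return True once the marker appears."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            # head_object succeeds only once the marker object exists
            s3.head_object(Bucket=bucket, Key=f"{prefix}/_SUCCESS")
            return True
        except ClientError:
            time.sleep(30)  # marker not written yet; export still in progress
    return False

# Example: block until the daily export is complete before reading it
if wait_for_success_marker("my-export-bucket", "exports/2024-01-01"):
    pass  # safe to read exports/2024-01-01/*.parquet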
1 vote -
Allow a single timestamp field to be split into Year/Month/Day/Hour folder segments instead of just one segment (e.g. Year) in the file path for Azure
I checked internally, and it has been confirmed that an attribute can only appear once in a template. If Atlas Data Federation (ADF) has a template like the one you are using, it wouldn't know what value to assign to StatusDatetime, because it would be assigned multiple values. Unfortunately, ADF doesn't support defining a single field's value across multiple segments of the path; instead, each of those segments should be a different attribute (see the sketch after the template below).
{
"path": "/HistoryCollection/{StatusDatetime isodate:Year}/{StatusDatetime isodate:Month}/{StatusDatetime isodate:Day}/{StatusDatetime isodate:Hour}/{RecordSource string}/{Status string}/*",
"storeName": "sampledatabase"
}
We would like to have the store we are creating as an archive be queried by StatusDatetime…
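A sketch of the workaround described above, assuming the documents store the year, month, day, and hour as separate fields (all field names here are hypothetical), so that no attribute appears twice in the template:

{
"path": "/HistoryCollection/{StatusYear int}/{StatusMonth int}/{StatusDay int}/{StatusHour int}/{RecordSource string}/{Status string}/*",
"storeName": "sampledatabase"
}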
1 vote -
Support Azure Data Federation private endpoint
Now that Azure Blob Storage is supported for Data Federation, it would be great to have a private endpoint connection to the storage account.
1 vote -
AWS IAM AuthN for Atlas SQL
Support the AWS IAM authentication mechanism in the JDBC and ODBC drivers (Atlas SQL).
1 vote -
Create a read/write Data Federation connection string
Some customers need a single connection string that covers both the cluster and Online Archive, with the ability to write to the cluster only.
So far, the only option is to use more than one connection string in the application.
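For illustration, the current workaround in application code might look like this with PyMongo; both URIs are placeholders, and the federated endpoint is assumed to expose the cluster plus Online Archive as read-only:

from pymongo import MongoClient

# Write path: direct connection string to the Atlas cluster (placeholder URI)
cluster = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")

# Read path: federated connection string covering cluster + Online Archive (placeholder URI)
federated = MongoClient("mongodb://user:pass@federated-instance.example.query.mongodb.net/?tls=true")

cluster["shop"]["orders"].insert_one({"sku": "A1", "qty": 2})  # writes go to the cluster only
docs = list(federated["shop"]["orders"].find({"sku": "A1"}))   # reads span cluster and archive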
1 vote -
Implement a feature to track data download volume per DB user
In order to enhance data security and prevent unauthorized data exfiltration, our team proposes the implementation of a metric within MongoDB Atlas that allows administrators to monitor and measure the amount of data downloaded by each database user over a specified period. This feature would provide critical insights into user behavior, helping to identify unusual data access patterns or potential data breaches. By tracking network data usage at the user level, we can more effectively audit data access and transfer, ensuring that data is used appropriately and in compliance with organizational data governance policies. This granularity in monitoring would be…
1 vote -
Combine data lake snapshots into a single federated collection
A common use case in data analytics is to analyse how your data evolves over time.
For example, imagine you have an e-commerce database in which product prices change every day. You may only store the current price in your database, but you'd like to make a chart that shows the evolution of your product prices over time (price on the y axis, time on the x axis). It is possible today to make this happen with the combination of Data Lake and Data Federation, but the Storage Configuration JSON needs to be manually updated like this:
…{ "databases": [
1 vote -
Schema inference
Schemaless data is flexible, but it has a big impact on downstream consumers, especially for data exchange and DW/AI.
Deriving and inferring the schema from the actual documents is a must-have, so that we can understand, track, evolve, and translate the document schema.
https://www.mongodb.com/blog/post/engblog-implementing-online-parquet-shredder is a great article.
I'd like to propose an additional feature in ADL/ADF that makes schema inference a first-class citizen, with faster turnaround and lower operational cost.
After the $out operation of ADL/ADF, please collect the Parquet schema from each data file and union/unify them into a single schema. This schema will be stored in a .schema.json…
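A minimal sketch of the proposed unification step, assuming the $out output is a directory of Parquet files readable locally and using pyarrow; the paths and the JSON serialization are illustrative:

import glob
import json
import pyarrow as pa
import pyarrow.parquet as pq

# Collect the schema embedded in each exported Parquet file
schemas = [pq.read_schema(path) for path in glob.glob("export/*.parquet")]

# Union/unify them into a single schema (raises on conflicting types)
unified = pa.unify_schemas(schemas)

# Persist a simple field-name -> type map in the spirit of .schema.json
with open("export/.schema.json", "w") as f:
    json.dump({field.name: str(field.type) for field in unified}, f, indent=2)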
1 vote -
The data uploading process is a little difficult for new users. Please upload a demo video of the uploading process.
Overall, I found it an interesting and friendly-to-use piece of software.
1 vote