
Data Federation and Data Lake

54 results found

  1. Specify when you'd like Online Archive to migrate data

    I'd like the ability to specify when a migration from my Atlas cluster to my Online Archive should take place.

    2 votes

  2. Connect Data Lake to Self-Managed MongoDB Clusters

    Connect your Atlas Data Lake to self-managed MongoDB clusters in private data centers, self-managed in a public cloud, or locally hosted.

    (This would also allow certain cloud services like Charts and a component of Realm Scheduled Triggers to work with these clusters.)

    2 votes

  3. M0 Support for Evaluation

    Please provide M0 support for evaluation purposes.

    2 votes

  4. The "Date field to archive on" option under the Archiving Rule tab should also accept dates in timestamp format

    The "Date field to archive on" option under the Archiving Rule tab in Online Archive should also accept a date field in timestamp format instead of only date format.

    2 votes

  5. Add support for XML

    I would like to be able to query XML files using my Atlas Data Lake.

    2 votes

  6. 1 vote

  7. Support Azure Data Federation private endpoint

    Now that Azure Blob Storage is supported for Data Federation, it would be great to have a private endpoint connection to the storage account.

    1 vote

    0 comments  ·  Connectors
  8. 1 vote

    0 comments
  9. AWS IAM AuthN for Atlas SQL

    Support AWS IAM Authentication mechanism in JDBC and ODBC drivers (Atlas SQL)

    1 vote

    0 comments  ·  Connectors
  10. Create a read/write Data Federation connection string

    Some customers need a connection string to both the cluster and Online Archive, with the ability to write to the cluster only.

    So far, the only option is to use more than one connection string in the application.

    1 vote

  11. Implement a feature to track data download volume per DB user

    In order to enhance data security and prevent unauthorized data exfiltration, our team proposes the implementation of a metric within MongoDB Atlas that allows administrators to monitor and measure the amount of data downloaded by each database user over a specified period. This feature would provide critical insights into user behavior, helping to identify unusual data access patterns or potential data breaches. By tracking network data usage at the user level, we can more effectively audit data access and transfer, ensuring that data is used appropriately and in compliance with organizational data governance policies. This granularity in monitoring would be…

    1 vote

    0 comments  ·  Reporting
  12. Combine data lake snapshots into a single federated collection

    A common use case for data analytics is to analyse how your data evolves over time.
    For example, imagine you have an e-commerce database and your product prices change every day. You may only store the current price in your database, but you'd like to make a chart that shows the evolution of your product prices over time (price on the y axis, time on the x axis).

    It is possible today to make this happen with a combination of Data Lake and Data Federation, but the Storage Configuration JSON needs to be manually updated like this:

    {
      "databases": [
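    The truncated snippet above could be fleshed out into a sketch like the following. The database, collection, store, and bucket names are illustrative assumptions, and the `{snapshotDate isodate}` path segment follows Data Federation's partition-attribute template syntax; an S3 store is used here purely for illustration.

    ```json
    {
      "databases": [
        {
          "name": "ecommerce",
          "collections": [
            {
              "name": "productPricesOverTime",
              "dataSources": [
                {
                  "storeName": "dataLakeSnapshots",
                  "path": "/{snapshotDate isodate}/products.json"
                }
              ]
            }
          ]
        }
      ],
      "stores": [
        {
          "name": "dataLakeSnapshots",
          "provider": "s3",
          "bucket": "ecommerce-snapshots",
          "region": "us-east-1"
        }
      ]
    }
    ```

    Each new snapshot lands under a new date-named prefix, so every prefix must today be kept in sync with the storage configuration by hand; the feature request is for this unioning to happen automatically.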
    1 vote

  13. Schema inference

    Schemaless is flexible, but it has a big impact on downstream systems, especially for data exchange and DW/AI.

    It is a must-have to derive and infer the schema from the actual documents, so that we can understand, track, evolve, and translate the document schema.

    https://www.mongodb.com/blog/post/engblog-implementing-online-parquet-shredder is a great article.

    I'd like to propose an additional feature in ADL/ADF to make schema inference a first-class citizen with faster turnaround and lower operational cost.

    After the $out operation of ADL/ADF, please collect the Parquet schema from each data file and union/unify them into a single schema. This schema will be stored in a .schema.json…
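    The proposed post-$out step could be sketched locally with pyarrow's `unify_schemas`. The field names and the `.schema.json` sidecar below are illustrative assumptions, not an existing ADL/ADF feature:

    ```python
    import json

    import pyarrow as pa

    # Illustrative schemas standing in for the per-file Parquet schemas
    # an ADL/ADF $out run would produce (field names are made up).
    schema_a = pa.schema([("sku", pa.string()), ("price", pa.float64())])
    schema_b = pa.schema([("sku", pa.string()), ("discount", pa.float64())])

    # Union/unify the per-file schemas into a single schema, as the idea proposes.
    unified = pa.unify_schemas([schema_a, schema_b])

    # Serialize it in the spirit of the proposed .schema.json sidecar.
    sidecar = {field.name: str(field.type) for field in unified}
    print(json.dumps(sidecar, indent=2))
    ```

    In a real pipeline the per-file schemas would come from `pyarrow.parquet.read_schema` on each output file rather than being built in memory.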

    1 vote

  14. The data uploading process is a little difficult for new users; please upload a demo video of uploading

    Overall, I found it to be an interesting and user-friendly piece of software.

    1 vote

    0 comments