
Database

To report bugs, please use our SERVER JIRA project.

11 results found

  1. Add support for all types of joins, as Postgres has, and improve performance

    $lookup is a performance killer. Joins are a crucial part of every OLTP system. $lookup is the equivalent of a SQL join; however, it is slow and doesn't support hash joins or the other efficient join algorithms implemented in, for example, Postgres.

    It seems that if MongoDB doesn't add support, its database will fall behind Postgres.
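
    For reference, a minimal sketch of the stage in question (collection and field names are hypothetical) — today this is the only aggregation-level equivalent of a SQL join:

    // SQL equivalent: SELECT ... FROM orders o JOIN customers c ON o.customer_id = c._id
    db.orders.aggregate([
      {
        $lookup: {
          from: "customers",         // joined collection
          localField: "customer_id", // field in orders
          foreignField: "_id",       // field in customers
          as: "customer"             // output array field
        }
      }
    ])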

    1 vote

    0 comments  ·  Performance

  2. Parallelize $unionWith

    Today the $unionWith aggregation stage is executed sequentially: first we query collection A, then collection B, and then the union occurs.
    The process should be parallelized so that the query parts run in parallel, while the union is done as a best-effort tree merge, to speed up the overall elapsed time of the query.
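
    For reference, a minimal sketch of such a query (collection names are hypothetical); today the second collection is read only after the first:

    db.orders_2023.aggregate([
      { $unionWith: { coll: "orders_2024" } }  // executed after the scan of orders_2023 completes
    ])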

    1 vote

    0 comments  ·  Performance

  3. An aggregation stage that loads into DRAM only the fields that are requested

    When we use a $project stage, the whole document is loaded from disk into memory (if it is not already in the working set). Because of this, when we create a data model, we have to create a separate collection if a field is not required in frequent access of the data. Creating a view is an option, but what if $project itself, $project with some argument in it, or a new pipeline stage or operator were introduced that fetches from disk only the fields specified instead of loading the whole document?

    With a memory-mapped file, retrieving only the specified fields would not simply be possible…
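
    A covered query is the closest existing workaround: when the filter and the projected fields are all contained in one index, the documents themselves are never fetched. A minimal sketch with hypothetical collection and field names:

    db.events.createIndex({ type: 1, ts: 1 })
    // _id excluded and only indexed fields projected, so this is served
    // entirely from the index without loading the documents:
    db.events.find({ type: "click" }, { _id: 0, type: 1, ts: 1 })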

    1 vote

    0 comments  ·  Performance

  4. It should be possible to configure the profiling output destination

    When database profiling is enabled, the output is sent to both the system.profile collection and the system logs. Logging to a capped collection is fine, but spamming the logs on disk is not.

    We need to be able to react to changing query patterns quickly, so we have profiling enabled in production on a busy system and do real-time analysis on the system.profile collection. This works fine and the performance hit is acceptable, but our system logs on disk grow a lot.

    Please make it possible to configure whether profile logging should go to disk, collection or…
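
    For context, a minimal sketch of how profiling is enabled today (mongosh): level 2 writes every operation to the capped system.profile collection, while slow operations are additionally written to the on-disk log — the duplication this idea wants to make configurable:

    db.setProfilingLevel(2, { slowms: 100 })
    db.system.profile.find().sort({ ts: -1 }).limit(5)  // real-time analysis on the capped collection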

    8 votes

    0 comments  ·  Performance

  5. Avoid truncating the query on the Atlas profiler or system.profile collection

    Slow-running queries captured in the system.profile collection or on the Atlas profiler page are truncated if the query is too long. As an application DBA, it is difficult to analyse a query without being able to see the actual query. The current limit on the command document is 50 KB. Please reconsider this limit to avoid truncation of queries.

    1 vote

    0 comments  ·  Performance

  6. Throttle sessions that use too many resources

    We have different types of applications:
    1. Writer - loads data into MongoDB from different data sources.
    2. Reader - reads data and displays it to end users.

    Normally there is a strict SLA for readers, but no SLA (or a less restrictive one) for writers. We want to make sure that writers do not impact readers when, for some reason, a lot of data arrives from external sources. So we would like to slow down writers for the sake of readers.

    Writers can saturate CPUs and I/O; that's why we want an option to leave some room…

    1 vote

    0 comments  ·  Performance

  7. Improve sorting performance

    Sorting always ends up doing a collection scan when the index selected for the find/match does not meet the sort requirement. The sort effectively makes performance 15-25 times worse for a "matched" dataset that runs into the tens of thousands (not millions) of documents.
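
    A compound index that serves both the match and the sort avoids this today (a sketch with hypothetical collection and field names): documents come back already in index order, so no in-memory sort or collection scan is needed:

    db.orders.createIndex({ status: 1, created_at: -1 })
    db.orders.find({ status: "ACTIVE" }).sort({ created_at: -1 })  // sort satisfied by the index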

    1 vote

    0 comments  ·  Performance

  8. Implement $bucket and $group on indexed values with sub-linear runtime

    We noticed that some $bucket and $group aggregations, such as $min, $max and $count, are unexpectedly slow even when fully covered by an index, (partially) because the DB scans through the entire index rather than employing optimization approaches such as binary search.

    An example pipeline that should return instantaneously but scans through the entire index (confirmed on v4.4 and v5):
    [
      {
        $match: {
          status: "DELIVERED"
        }
      },
      {
        $group: {
          _id: { status: "$status" },
          min: { $min: "$modify_time" }
        }
      }
    ]
    with an index { status: 1, modify_time: 1 }

    Another example is $bucket (same index):
    [
    {…
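
    For comparison, an equivalent find can already use the index bounds and touch a single index key — the behaviour the pipeline above should match (a sketch against the same hypothetical collection):

    db.orders.find({ status: "DELIVERED" }, { _id: 0, modify_time: 1 })
      .sort({ modify_time: 1 })
      .limit(1)  // jumps to the first matching index key instead of scanning all of them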

    6 votes

    0 comments  ·  Performance

  9. Build MongoDB with PGO

    I would like to see support for PGO (and even LLVM BOLT) in the upstream. It would be awesome if MongoDB distributed PGO-optimized binaries, so users would see an additional performance boost "for free". At the very least, describe somewhere in the documentation how users can achieve a boost for their own scenarios with PGO.

    1 vote

    1 comment  ·  Performance

  10. Boost the performance of bioinformatic annotation queries

    The documents to be selected look something like this:

    {
      "_id": { "$oid": "6272c580d4400d8cb10d5406" },
      "#CHROM": 1,
      "POS": 286747,
      "ID": "rs369556846",
      "REF": "A",
      "ALT": "G",
      "QUAL": ".",
      "FILTER": ".",
      "INFO": [
        {
          "RS": 369556846,
          "RSPOS": 286747,
          "dbSNPBuildID": 138,
          "SSR": 0,
          "SAO": 0,
          "VP": "0x050100000005150026000100",
          "WGT": 1,
          "VC": "SNV",
          "CAF": [{ "$numberDecimal": "0.9381" }, { "$numberDecimal": "0.0619" }],
          "COMMON": 1,
          "TOPMED": [
            { "$numberDecimal": "0.88411856523955147" },
            { "$numberDecimal": "0.11588143476044852" }
          ]
        },
        ["SLO", "ASP", "VLD", "G5", "KGPhase3"]
      ]
    }

    For a basic annotation (https://en.wikipedia.org/wiki/SNP_annotation) scenario, we need a query such as:

    {'ID': {'$in': ['rs369556846', 'rs2185539', 'rs2519062', 'rs149363311', 'rs55745762', <...>]}}
    where <...> stands for hundreds/thousands…
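
    A minimal sketch of the obvious index for this access pattern (the collection name is hypothetical); with it, the $in resolves through index seeks on ID rather than a collection scan:

    db.variants.createIndex({ ID: 1 })
    db.variants.find({ ID: { $in: ["rs369556846", "rs2185539", "rs2519062"] } })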

    1 vote

    0 comments  ·  Performance

  11. There is a specific collection for which I need more performance than the others. Is there a way to assign more RAM/memory to a specific collection?


    1 vote

    0 comments  ·  Performance