Database

  1. Additional checks for storage consistency

    The following opt-in features would add extra checks to detect storage-layer corruption of collections.


    1. On write, read back what was committed to disk to verify it.

    2. Periodic or scheduled scanning of a collection, similar to db.collection.validate() but non-blocking.
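    For reference, the closest existing tool is the blocking validate command; a minimal mongosh sketch (the collection name is a placeholder):

        // Foreground, blocking validation available today; the request is for an
        // opt-in, non-blocking background equivalent.
        db.orders.validate({ full: true })  // full: true requests a more thorough scan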

    6 votes  ·  0 comments
  2. Supporting hashed fields in compound indexes and shard keys

    Hello,

    We have been using hashed shard keys in regular clusters; they are useful for pre-splitting and for keeping a cluster balanced without having to rely on the balancer.

    In global clusters and zone-sharded clusters, we must use a compound shard key containing both a zone-identifying field and the actual key field.

    MongoDB does not currently support compound shard keys containing hashed fields, making it cumbersome for us to use zone sharding.

    MongoDB could be improved by supporting hashed fields in compound indexes and shard keys.
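    A sketch of the requested capability in mongosh (the namespace and field names are hypothetical); today shardCollection rejects a compound key that contains a hashed field:

        // A zone-identifying field plus a hashed field for even distribution:
        // the combination this idea asks for.
        sh.shardCollection("mydb.users", { region: 1, userId: "hashed" })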

    3 votes  ·  0 comments
  3. Kafka source connector exactly-once semantics

    Added as a support case here: https://support.mongodb.com/case/00634630

    This concerns using the connector as a source, i.e. capturing change streams from the source MongoDB deployment and streaming them to a Kafka endpoint.

    Imagine these are updates to financial transactions in MongoDB, and they are NOT tolerant of
    1) missed data and
    2) duplicated data,
    in that order of severity.

    So we need to make sure that the change stream events we are observing (matching on) are delivered exactly once to the Kafka pipeline. (Blog post on the topic: https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/). With exactly-once semantics enabled, commits become transactional by default.
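    For context, a minimal mongosh sketch of the change stream a source connector consumes (database and collection names are assumptions); the crux is committing the resume token atomically with the Kafka produce:

        // Watch updates on the collection the connector would source from.
        const cursor = db.transactions.watch([{ $match: { operationType: "update" } }]);
        while (cursor.hasNext()) {
          const event = cursor.next();
          // event._id is the resume token; for exactly-once delivery it must be
          // persisted in the same transaction as the Kafka write, so a crash can
          // neither skip nor replay events.
        }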

    2 votes  ·  0 comments
  4. Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process

    What is the problem that needs to be solved? Store serverStatus.uptime counter info for the MongoDB Server process historically, so that it becomes possible to track serverStatus.uptime changes over time.

    Why is it a problem? (the pain) As of now (2020-02-25) there is no way to see historical info about MongoDB Server process restarts, since the serverStatus.uptime counter resets every time the process is restarted. Other than digging through the MongoDB Server process logs, there is no way to know whether and when the process was restarted. If you'd like to calculate MongoDB Server process availability, you'll…
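    For reference, what exists today, plus a hypothetical do-it-yourself workaround (the "ops" database and "uptimeSamples" collection are invented names):

        // Current, non-historical counter: seconds since the process started.
        const uptime = db.adminCommand({ serverStatus: 1 }).uptime;

        // Workaround sketch: sample and persist the counter on a schedule yourself.
        db.getSiblingDB("ops").uptimeSamples.insertOne({ ts: new Date(), uptime: uptime });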

    2 votes  ·  0 comments
  5. MongoDB 4.2 Distributed Transaction with Arbiter

    Hello.
    We are preparing to introduce MongoDB 4.2 and are looking forward to the distributed transaction feature.

    I read in the documentation that a shard's replica set cannot include an arbiter when using distributed transactions.
    >> https://docs.mongodb.com/manual/core/transactions/index.html#arbiters

    I can understand the restriction for PSA, but it is strange that it does not work even for PSSA.

    From an operational point of view, the set usually runs as PSS, but it can temporarily become PSA in the event of hardware problems.

    Why must a shard contain no arbiter in order to use distributed transactions?
    I cannot understand this restriction.

    Can you tell me the technical reason why distributed transactions cannot be supported with an arbiter? …
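    For reference, the restricted topology as configured in mongosh (host names are placeholders). A likely technical factor, offered as an assumption rather than an official answer: transactions rely on majority write concern, and an arbiter votes but stores no data, so a PSA set cannot acknowledge majority writes while a data-bearing member is down.

        rs.initiate({
          _id: "shardA",
          members: [
            { _id: 0, host: "mongo1:27017" },                    // primary (data-bearing)
            { _id: 1, host: "mongo2:27017" },                    // secondary (data-bearing)
            { _id: 2, host: "mongo3:27017", arbiterOnly: true }  // arbiter (votes only)
          ]
        })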

    2 votes  ·  0 comments
  6. Add a "Limit" to Delete and Bulk Delete operations

    Deleting tens of millions of documents can have a big impact on cluster performance, even when using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us cap the number of operations, making sure we do not kill the clusters' performance.


    • For Delete, this would ensure we only delete n documents.

    • For Bulk Delete, this would likewise ensure we only delete n documents, or it could instead limit the number of batches/groups of documents to be deleted.

    Right now, the only solution is a hack,…
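    One common form of that workaround, as a hedged mongosh sketch (the collection name, filter, and batch size are placeholders):

        const BATCH = 1000;
        // Fetch up to BATCH matching _ids, then delete only those documents.
        const ids = db.events.find({ expired: true }, { _id: 1 })
                             .limit(BATCH)
                             .toArray()
                             .map(doc => doc._id);
        db.events.deleteMany({ _id: { $in: ids } });

        // The requested feature, in hypothetical syntax:
        // db.events.deleteMany({ expired: true }, { limit: BATCH })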

    1 vote  ·  0 comments