
Database

To report bugs, please use our SERVER JIRA project.


5 results found

  1. More complex balancer windows for sharded clusters

    Currently we can define a single balancer window which is applied to every day of the week. It would be useful to extend this with, for example:

    • multiple windows per day (e.g. 2-4am and 9-11pm)
    • custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
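
    For context, the single window that exists today is set through the balancer settings document in the config database, roughly as below (a sketch of the documented mechanism; the 2-4am window is only an example):

    db.getSiblingDB("config").settings.updateOne(
       { _id: "balancer" },
       // Only one start/stop pair fits in activeWindow today; lifting that
       // restriction (multiple windows per day, per-weekday windows) is
       // what this idea asks for.
       { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },
       { upsert: true }
    )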
    11 votes


    0 comments  ·  Sharding
  2. Unique index in sharded cluster

    For enforcing uniqueness in a sharded cluster, the officially recommended approach at https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:

    1. Ephemeral issues might cause inconsistencies between the two collections (for example, the unique index collection update succeeds but the main collection update does not) and make some unique keys unusable.
    2. Many changes are needed to enforce this universally (we use the Mongoose ORM, which has many hooks to update).

    What we ended up doing is using distributed ephemeral locks (a TTLed MongoDB collection) to lock the unique keys before adding…
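
    A minimal sketch of that lock pattern, assuming a lock collection named uniqueLocks and a 60-second expiry (both names and numbers are illustrative, not from the original post):

    // TTL index: documents are deleted once expiresAt has passed (the TTL
    // monitor runs periodically, so deletion is not instantaneous).
    db.uniqueLocks.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 });

    function acquireLock(key) {
       try {
          // _id is unique in every collection, so concurrent writers race
          // on this insert and exactly one of them wins the lock.
          db.uniqueLocks.insertOne({ _id: key, expiresAt: new Date(Date.now() + 60 * 1000) });
          return true;
       } catch (e) {
          if (e.code === 11000) return false; // duplicate key: already locked
          throw e;
       }
    }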

    5 votes


    0 comments  ·  Sharding
  3. Combine reshardCollection + mongosync to support a remote collection on a separate new cluster

    It would be great for production productivity and a 99.999% SLA if MongoDB
    could support this. For example:

    Given "mydb.mycoll" in the current cluster, sharded with the {zip:1} shard key:

    1/ New cluster: sh.shardCollection( "mydb.mycoll", {name:1, phone:1} )
    2/ "Mongocopy" until mydb.mycoll in the new and current clusters are synced.
    Very much like how reshardCollection works now, but targeting a remote
    mydb.mycoll on the new cluster instead of the local collection
    system.resharding.554c8995-2ec9-4bda-9401-a3ad475b9c8c

    This combines mongosync and reshardCollection in one. A production cluster
    is often very big and busy, and requires no downtime under a
    99.999% SLA (service level agreement). Being able to reshardCollection
    to…
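
    For contrast, these are the two building blocks that exist today; neither alone covers the scenario above, since resharding stays inside one cluster and mongosync copies without changing the shard key:

    // Today's reshardCollection: rewrites the collection under a new shard
    // key, but only within the same cluster.
    db.adminCommand({
       reshardCollection: "mydb.mycoll",
       key: { name: 1, phone: 1 }
    })

    // Today's mongosync (a separate CLI, shown here as a comment):
    //   mongosync --cluster0 "<source URI>" --cluster1 "<destination URI>"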

    1 vote


    0 comments  ·  Sharding
  4. Support $documents on shards

    The new aggregation stage $documents cannot be used together with $merge in a sharded cluster; you get an error:

    db.aggregate([
       {
          $documents: [
             { _id: ObjectId("6616b08a610fab3e84d2d4ee"), a: 'foo', shardKey: 1 },
          ]
       },
       { $merge: { into: { db: 'myDB', coll: 'sharded_Collection' } } }
    ])

    raises

    $documents must run on mongoS, but cannot :: caused by :: $merge must run on a shard

    Having this new stage available in every environment would be great.
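
    Until then, one possible workaround is to mimic $merge's default behavior (whenMatched: "replace", whenNotMatched: "insert") with an upsert bulk write; note that the filter also carries the shard key, which upserts into a sharded collection require:

    db.getSiblingDB('myDB').sharded_Collection.bulkWrite([
       {
          replaceOne: {
             filter: { _id: ObjectId("6616b08a610fab3e84d2d4ee"), shardKey: 1 },
             replacement: { a: 'foo', shardKey: 1 },
             upsert: true
          }
       }
    ])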

    1 vote


    0 comments  ·  Sharding
  5. Shard drain/removal issue

    If we have many shards and want to remove more than one of them, we use the command below.

    db.adminCommand( { removeShard : "Shardname" } )

    For example, if I have Shards 1, 2, 3, 4, and 5 and want to remove Shards 2 and 5:

    I want to remove one shard at a time to minimize the impact on users, so I first remove Shard 2 and then remove Shard 5.

    If we do this, some chunks from Shard 2 also get moved to Shard 5, which is supposed to be removed later. This increases the amount of data on Shard 5, so its drain then takes more time.
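
    A sketch of that sequential workflow, polling removeShard until each drain finishes (the shard names and polling interval are illustrative):

    ["Shard2", "Shard5"].forEach(function (name) {
       // Issuing removeShard once starts draining; re-issuing it reports
       // progress until state becomes "completed".
       var res = db.adminCommand({ removeShard: name });
       while (res.state !== "completed") {
          sleep(60 * 1000);
          res = db.adminCommand({ removeShard: name });
          if (res.remaining) print(name + ": " + res.remaining.chunks + " chunks left");
       }
       // While Shard2 drains, the balancer may move some of its chunks onto
       // Shard5 (still a valid target), which is exactly the issue reported.
    });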

    1 vote


    0 comments  ·  Sharding