Database
295 results found
-
More complex balancer windows for sharded clusters
Currently we can define a single balancer window which is applied to every day of the week. It would be useful to extend this with, for example (see the sketch after this list):
- multiple windows per day (e.g. 2-4am and 9-11pm)
- custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
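For context, a minimal sketch: the first command is how the single daily window is configured today, while the commented-out shape below it is purely hypothetical (the activeWindows array and days field are invented here) and only illustrates what a multi-window configuration could look like.
use config
db.settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },
  { upsert: true }
)
// Hypothetical, invented shape illustrating the request (not a real option today):
// db.settings.updateOne(
//   { _id: "balancer" },
//   { $set: { activeWindows: [
//       { days: ["Mon", "Tue", "Wed", "Thu", "Fri"], start: "02:00", stop: "04:00" },
//       { days: ["Mon", "Tue", "Wed", "Thu", "Fri"], start: "21:00", stop: "23:00" },
//       { days: ["Sat"], start: "17:00", stop: "24:00" },
//       { days: ["Sun"], start: "00:00", stop: "24:00" } ] } },
//   { upsert: true }
// )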
11 votes -
Notification Alert
Whenever a document is created within a collection, send an email to the account holder.
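A minimal sketch of how this could be approximated today with a change stream; the "orders" collection and the sendEmail() call are placeholders for illustration, not an existing feature:
var cursor = db.orders.watch([ { $match: { operationType: "insert" } } ]);
while (cursor.hasNext()) {
  var event = cursor.next();
  // sendEmail(accountHolder, event.fullDocument);  // placeholder for the requested notification
  printjson(event.fullDocument);
}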
1 vote -
Allow kill connections
Kill-session commands only stop current activities on the DB; they do not close/drop connections (connections still remain open in $listSessions).
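For reference, a sketch of the commands that exist today: listing server sessions and killing a specific one. Neither closes the underlying client connection, which is the gap described here (the UUID is a placeholder).
use config
db.system.sessions.aggregate([ { $listSessions: {} } ])
db.adminCommand({ killSessions: [ { id: UUID("<session id>") } ] })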
It'd be useful to be able to close open connections in situations where too many sessions have been opened incorrectly or never closed.
16 votes -
Burstable IOPS for MongoDB Atlas on Azure
According to Azure documentation, bursting is enabled by default for all VMs that use Premium SSDs (https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-types#bursting). It would be great if MongoDB Atlas on Azure could benefit from it.
10 votes -
Validate Window
In 4.4, validate is able to run in the background.
https://docs.mongodb.com/master/reference/method/db.collection.validate/#behavior
It would be good to have a "validate window" that would cycle through each collection, similar to the way a balancing window works in sharding.
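A minimal sketch of what a hand-rolled validate window could do today, assuming a 4.4+ deployment where the background option referenced in the linked docs is available; database and collection names are simply whatever exists in the deployment:
db.getMongo().getDBNames().forEach(function (dbName) {
  var siblingDb = db.getSiblingDB(dbName);
  siblingDb.getCollectionNames().forEach(function (collName) {
    // background: true is the non-blocking mode referenced above
    printjson(siblingDb.getCollection(collName).validate({ background: true }));
  });
});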
1 vote -
Allow validate to use replica tags
The use case for this is to be able to target secondaries in different shards.
If validate accepted tag read preference it could be kicked off from a shell connected to a mongos.
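For illustration, how a tag read preference is expressed for ordinary reads today, plus a commented-out, purely hypothetical invocation showing how validate might accept the same tags (the "dc": "east" tag set is invented here):
db.getMongo().setReadPref("secondary", [ { "dc": "east" } ]);
// Hypothetical extension suggested by this request (not a real option today):
// db.runCommand({ validate: "mycollection", readPreference: { mode: "secondary", tags: [ { "dc": "east" } ] } })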
1 vote -
It would be helpful, with the online interface, to be able to bulk import collections
For those whose corporate firewalls do not allow for mongo client access to the free mongo cloud db, it would be extremely helpful to be able to upload a pre-formatted json collection of documents, rather than entering those documents one at a time.
1 vote -
mongo binary utility for slaptest
Please provide a binary for slap testing the MongoDB server, just like mysqlslap for the MySQL server. This would let a DBA test for a better index by running a slap test on the server with multiple combinations of indexes, instead of depending on application-level load tests (e.g. JMeter or SOASTA), which need the involvement of multiple resources, developers, and testers, and a lot of time.
1 vote -
Database users should be able to change their own passwords
Currently, there is no way for Database Users to manage their own passwords (even if they are atlasAdmin@admin). Moreover, as a Project Owner, I cannot create a role that allows them to do so, e.g.:
use admin
db.createRole({
  role: "changeOwnPasswordRole",
  privileges: [
    { resource: { db: "", collection: "" }, actions: [ "changeOwnPassword" ] }
  ],
  roles: []
})
As such, changing passwords always requires a Project Owner to set the new password and share it with the Database User. This is a problem, because user-password combinations known by more than one person do not serve as proof of identity.
A…
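For completeness, a sketch of what a user would run for themselves if such a role could be granted; "appUser" and the password are placeholders:
use admin
db.changeUserPassword("appUser", "aNewStrongPassword")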
32 votes -
XA Support
Is there any plan to implement distributed transactions that involve more than one data store (e.g. an RDBMS and MongoDB)? We have one such requirement and tried a simple POC by creating a simple class extending XAResource (MongoXAResource implements XAResource) and overriding the methods below.
@Override
public void commit(Xid xid, boolean b) throws XAException {
    clientSession.commitTransaction();
}

@Override
public void rollback(Xid xid) throws XAException {
    clientSession.abortTransaction();
}

It appears to work, but I think there is a lot more to do. Is there any plan by the MongoDB team to implement this?
49 votes -
Allow configuration of 100mb memory limit per aggregation pipeline stage
In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100mb in-memory limit of each stage of an aggregation pipeline. The responses centered around two points:
- If too much memory is used per aggregation pipeline stage then it will reduce performance for the overall MongoDB database, impacting other queries negatively.
- You can set allowDiskUse: true and fall back to performing these pipeline stages on disk when they exceed 100mb (see the sketch at the end of this post).
I believe this subject needs to be revisited for the following reasons:
- “Too much memory” is very subjective, and the 100mb…
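For reference, the workaround mentioned above in a minimal sketch; the "orders" collection and pipeline fields are placeholders, but allowDiskUse itself is the real option:
db.orders.aggregate(
  [
    { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
    { $sort: { total: -1 } }
  ],
  { allowDiskUse: true }
)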
27 votes -
Add a "Limit" to Delete and Bulk Delete operations
Deleting tens of millions of documents can have a big impact on the performance of the Clusters, even using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us limit the number of operations, making sure we do not kill the Clusters' performance.
- For the delete, this would make sure we only delete n number of documents.
- For the Bulk Delete, this would also make sure we only delete n number of documents, or it could instead limit the number of batches/groups of documents to be deleted.
Right now, the only solution is a hack,…
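A minimal sketch of the kind of batched-delete workaround commonly used for this (assumed, since the original description is truncated); the "events" collection, filter, and batch size are placeholders:
var batch;
do {
  batch = db.events.find({ expired: true }, { _id: 1 }).limit(1000).toArray();
  if (batch.length > 0) {
    db.events.deleteMany({ _id: { $in: batch.map(function (doc) { return doc._id; }) } });
  }
} while (batch.length === 1000);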
9 votes -
Additional checks for storage consistency
The following opt-in features would add additional checks for storage-layer corruption of collections.
- Upon write, read back what data was committed to disk.
- Periodic or scheduled scanning of a collection, similar to collection.validate but non-blocking.
10 votes -
Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process
What is the problem that needs to be solved? Store (historically) the `serverStatus.uptime` counter info for the MongoDB Server process, so that it will be possible to track `serverStatus.uptime` changes through time.
Why is it a problem? (the pain) As of now (2020-02-25) there's no way to see historical info about MongoDB Server process restarts, since the `serverStatus.uptime` counter is reset every time the MongoDB Server process is restarted. There's no other way (other than going into the MongoDB Server process logs) to know whether the process was restarted and when. If you'd like to calculate MongoDB Server process availability, you'll…
5 votes -
MongoDB 4.2 Distributed Transaction with Arbiter
Hello.
We are preparing to introduce MongoDB 4.2 and are looking forward to the distributed transaction feature. I read in the documentation that an arbiter cannot be a member when using distributed transactions.
https://docs.mongodb.com/manual/core/transactions/index.html#arbiters
I can see that for PSA, but it's strange that it doesn't even work for PSSA.
It usually operates as a PSS from an operating point of view, but can temporarily become a PSA in the event of equipment problems.
Why must there be no Arbiter in a shard in order to use Distributed Transactions?
I cannot understand this restriction. Can you tell me the technical reason for not being able to support Distributed Transactions with an Arbiter?
Do…
4 votes