Database
-
XA Support
Is there any plan to implement distributed transactions that involve more than one data store (e.g. an RDBMS and MongoDB)? We have one such requirement and tried a simple POC by creating a class implementing XAResource (MongoXAResource implements XAResource) and overriding the methods below.
public class MongoXAResource implements XAResource {
    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        clientSession.commitTransaction();
    }

    @Override
    public void rollback(Xid xid) throws XAException {
        clientSession.abortTransaction();
    }
    // remaining XAResource methods (prepare, start, end, recover, ...) still need real implementations
}
It appears to work, but I think there is a lot more to do. Is there any plan for the MongoDB team to implement this?
45 votes -
partial text search
We've already seen full text search; it would be awesome if you manage to implement a partial-match version :)
16 votes -
Database users should be able to change their own passwords
Currently, there is no way for Database Users to manage their own passwords (even if they are atlasAdmin@admin). Moreover, as a Project Owner, I cannot create a role that allows them to do so, e.g.:
use admin
db.createRole({
  role: "changeOwnPasswordRole",
  privileges: [
    { resource: { db: "", collection: "" }, actions: [ "changeOwnPassword" ] }
  ],
  roles: []
})
As such, changing passwords always requires a Project Owner setting the new password and sharing it with the Database User. This is a problem, because user-password combinations known by more than one person do not serve as proof of identity.
A…
14 votes -
Allow views with programmatic role based access control rather than just declarative
Views, defined by an aggregation pipeline, are often used to filter out certain fields and records, and to obfuscate parts of certain values, so that users with a specific restricted role only see a subset of 'less sensitive' data from a collection. Views can be assigned to a role declaratively, but in some use cases it would also be useful for the aggregation pipeline logic to access the context of the current session's roles (e.g. $$ROLES) or user id (e.g. $$USER), to make programmatic decisions about what to show in the view based…
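A minimal sketch of the idea, assuming a hypothetical $$ROLES system variable that resolves to the calling session's role names (the variable, view, collection, and field names below are all illustrative, not existing API):
```
// Hypothetical: $$ROLES is assumed to resolve to an array of the caller's role names
db.createView("orders_restricted", "orders", [
  { $match: { $expr: { $in: [ "regionalViewer", "$$ROLES" ] } } },
  { $unset: [ "creditCardNumber" ] }  // hide the sensitive field entirely
]);
```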
10 votes -
Additional checks for storage consistency
The following opt-in features would add additional checks for storage-layer corruption of collections:
- Upon write, read back what was committed to disk.
- Periodic or scheduled scanning of a collection, similar to collection.validate but non-blocking.
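For reference, the closest existing tool is validate, which checks a collection's structural integrity but blocks access to the collection while it runs; the request is essentially a non-blocking, schedulable variant (collection name illustrative):
```
// Existing (blocking) integrity check
db.orders.validate({ full: true })
```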
10 votes -
Password enforcement without LDAP
Enforce complex password policy
Enforce password expiration
Enforce password history9 votes -
Allow configuration of the 100MB memory limit per aggregation pipeline stage
In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100MB in-memory limit of each stage of an aggregation pipeline. The responses centered around two points:
- If too much memory is used per aggregation pipeline stage then it will reduce performance for the overall MongoDB database, impacting other queries negatively.
- You can set allowDiskUse: true and fall back to performing these pipeline stages on disk when they exceed 100MB.
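The second point refers to the existing escape hatch, which spills oversized stages to disk rather than raising the in-memory ceiling; a minimal sketch (collection and pipeline are illustrative):
```
// Existing workaround: allow stages that exceed the 100MB limit to spill to disk
db.events.aggregate(
  [ { $sort: { createdAt: 1 } } ],
  { allowDiskUse: true }
);
```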
I believe this subject needs to be revisited for the following reasons:
- “Too much memory” is very subjective, and the 100MB…
8 votes -
Auditing requirement for the changes done through Ops Manager portal
Changes done through the Ops Manager portal are visible only in Alerts (Activity Feeds). Though these Activity Feeds can be retrieved through the API, our auditors may not accept API calls, and we need Ops Manager to log these changes in the respective deployment's mongod audit.log.
Thank you
7 votes -
More complex balancer windows for sharded clusters
Currently we can define a single balancer window which is applied for every day of the week. It would be useful to extend this with, for example:
- multiple windows per day (e.g. 2-4am and 9-11pm)
- custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
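For reference, a sketch of the current single-window mechanism this would extend (times illustrative):
```
// Current mechanism: one activeWindow, applied every day of the week
const config = db.getSiblingDB("config");
config.settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },
  { upsert: true }
);
```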
6 votes -
Kafka audit event streaming
Provide Kafka Topic as a write target for database auditing and database message logging.
https://docs.mongodb.com/manual/core/auditing/
Auditing is currently limited to a local and editable JSON/BSON file or the system console log.
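For reference, a sketch of today's configuration surface; the kafka destination shown in the comment is hypothetical, not a real option:
```
# mongod.conf -- existing audit options
auditLog:
  destination: file        # currently: file | syslog | console
  format: JSON             # file destination supports JSON or BSON
  path: /var/log/mongodb/audit.json
# hypothetical future form this request asks for:
#   destination: kafka
```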
The SYSLOG is not recommended by MongoDB: "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."
5 votes -
Allow kill connections
Kill session commands only stop current activities on the DB; they do not close/drop connections (the sessions still appear in $listSessions).
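For context, a hedged sketch of what exists today (the session id below is a placeholder):
```
// killSessions stops the session's in-progress operations, but the underlying
// connection is not dropped
db.runCommand({
  killSessions: [ { id: UUID("00000000-0000-0000-0000-000000000000") } ]
});
```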
It'd be useful to be able to close open connections in situations where too many sessions have been opened incorrectly or never closed.
5 votes -
Add a "Limit" to Delete and Bulk Delete operations
Deleting tens of millions of documents can have a big impact on the performance of the Clusters, even using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us limit the number of operations, making sure we do not kill the Clusters' performance.
- For the delete, this would make sure we only delete n number of documents.
- For the Bulk Delete, this would also make sure we only delete n number of documents, or it could instead limit the number of batches/groups of documents to be deleted.
Right now, the only solution is a hack,…
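The workaround in use today looks roughly like the sketch below (collection name, filter, and batch size are all illustrative):
```
// Hack in use today: repeatedly delete small batches until nothing matches
let ids;
do {
  ids = db.events.find({ expired: true }, { _id: 1 })
          .limit(1000).toArray().map(d => d._id);
  if (ids.length) db.events.deleteMany({ _id: { $in: ids } });
} while (ids.length > 0);
```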
5 votes -
Reduce the minimum value for watchdogPeriodSeconds
The storage watchdog attempts to create, write, and read a test file in critical directories every 10 seconds.
The watchdogPeriodSeconds parameter controls how often a monitoring thread verifies that at least one of these checks has succeeded since its last pass.
The minimum value for watchdogPeriodSeconds is 60 seconds. This means that in the worst case, the mongod could be unable to write for up to 2 minutes before the watchdog asserts and kills the stalled node. That is a very long time for a primary node to be stalled in a busy cluster.
It does make sense that watchdogPeriodSeconds must…
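For reference, the parameter in question can be set as below; runtime changes are only accepted when the watchdog was enabled at startup, and 60 is currently the smallest allowed value:
```
// Lowering this below 60 is the request; today the server rejects smaller values
db.adminCommand({ setParameter: 1, watchdogPeriodSeconds: 60 });
```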
4 votes -
Improve the election process to consider node reachability
During an election, consider both new and existing sockets when assessing node reachability, in order to make more realistic observations about cluster health and to avoid, for example, DNS-related issues that make a node unreachable for new connections only.
4 votes -
Include the _ids of existing documents in BulkWriteResult when performing upserts
When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:
db.getCollection("test").find({})
db.test.drop()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
bulk.execute();
The BulkWriteResult contains the upserted _id:
BulkWriteResult({
  "writeErrors" : [ ],
  "writeConcernErrors" : [ ],
  "nInserted" : 0,
  "nUpserted" : 1,
  "nMatched" : 0,
  "nModified" : 0,
  "nRemoved" : 0,
  "upserted" : [
    {
      "index" : 0,
      "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
    }
  ]
})
However, when a document already exists, the _id is not returned:
db.test.find()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…4 votes -
Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process
What is the problem that needs to be solved? Store serverStatus.uptime counter info historically for a MongoDB Server process, so that it is possible to track serverStatus.uptime changes through time.
Why is it a problem? (the pain) As of now (2020-02-25) there is no way to see historical info about MongoDB Server process restarts, since the serverStatus.uptime counter is reset every time the MongoDB Server process is restarted. There is no other way (other than going into the MongoDB Server process logs) to know if the process was restarted and when. If you'd like to calculate MongoDB Server process availability, you'll…
4 votes -
MongoDB 4.2 Distributed Transaction with Arbiter
Hello.
We are preparing to introduce MongoDB 4.2 and are looking forward to the distributed transaction feature. I read in the documentation that an arbiter cannot be a member when using distributed transactions.
>> https://docs.mongodb.com/manual/core/transactions/index.html#arbiters
PSA can do that, but it's weird that it doesn't even work for PSSA.
It usually operates as a PSS from an operating point of view, but can temporarily become a PSA in the event of equipment problems.
Why must there be no arbiter in a shard in order to use distributed transactions?
I cannot understand this restriction. Can you tell me the technical reason why distributed transactions cannot be supported with an arbiter? …
4 votes -
Data masking policy
Implement data masking, similar to Schema Validation in MongoDB, so that a customer can define a server-side data masking policy that masks the results of a query, plus a new role that gives users explicit permission to read unmasked data.
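Until such a policy exists, masking has to be baked into each view or query; a minimal sketch of that per-view workaround (collection and field names are illustrative):
```
// Workaround today: a view that masks all but the last 4 characters of a field;
// a server-side policy would apply this centrally instead of per-view
db.createView("customers_masked", "customers", [
  { $set: { ssn: { $concat: [ "***-**-", { $substrCP: [ "$ssn", 7, 4 ] } ] } } }
]);
```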
3 votes -
Extend db.collection.distinct() to work with multiple fields in a compound key
Currently the distinct() command finds the unique set of values for a SINGLE specified field across a collection or view. For example:
db.staff.distinct("last_name")
If there is an index on the last_name field, the DISTINCT_SCAN plan can use that index and the operation is very fast.
To find the unique values for a set of more than one fields, the $group aggregation stage has to be used like this:
db.staff.aggregate([ { $group: { _id: { FName: "$first_name", LName: "$last_name" } } } ]);
This operation does not really need the $group functionality, as it is not calculating a sum/min/max/average/etc value using the accumulator operators.…
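A sketch of the syntax this request implies; this multi-field form is hypothetical and not valid API today:
```
// Hypothetical multi-field distinct -- NOT real MongoDB API; with a compound
// index on { first_name: 1, last_name: 1 } it could use a DISTINCT_SCAN
db.staff.distinct([ "first_name", "last_name" ]);
```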
3 votes