Database
302 results found
-
Add SHA2, SHA3 and ECDSA functions to agg framework
It is very useful (and valuable security-wise) to be able to reverify hashes and signatures "on-engine" instead of dragging material out to a client app and running the algo there. The implementations are straightforward and everywhere now so it's not a huge lift for the backend. Example use:
aggregate([
  { $match: whatever },
  { $addFields: {
      hashok: { $cond: [ { $eq: [ { $sha3: "$path.to.struct" }, "$path.to.stored.sha3" ] }, 1, 0 ] },
      sigok:  { $verify: { sig: "$path.to.sig", pubkey: "$path.to.pubkey", algo: "name of curve to use, e.g. SECP256k1" } }
  } }
])
The digest function would operate on the raw BSON behind the scenes.
1 vote -
Document scoped RBAC - Permission for collection document fields
Roles and access rights for users can currently be defined on a per-collection basis.
It would be nice if these access permissions could be scoped to individual fields within a collection, with query results returned accordingly.
Current:
privileges: [
  { resource: { db: "users", collection: "user" }, actions: [ "find" ] }
]
Expected:
  { resource: { db: "users", collection: "user", field: "email" }, actions: [ "find" ] }
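Until something like this exists, a common approximation is to expose only the permitted fields through a read-only view and grant the role on the view rather than on the collection. A minimal mongosh sketch; the view and role names are illustrative only:

// Illustrative names: "user_email_view" and "emailReader" do not exist by default.
const usersDb = db.getSiblingDB("users");

// Read-only view that projects only the fields this role is allowed to see
usersDb.createView("user_email_view", "user", [ { $project: { email: 1, _id: 0 } } ]);

// Grant find on the view instead of on the underlying collection
usersDb.createRole({
  role: "emailReader",
  privileges: [
    { resource: { db: "users", collection: "user_email_view" }, actions: [ "find" ] }
  ],
  roles: []
});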
3 votes -
Display Recovery time during restore process.
Team,
Currently the MongoDB restore process does not give any recovery time estimate when it starts. Because of that, we cannot plan a time window for other critical processes that depend on the restore, and we cannot communicate exactly when the system will be available.
Please include this feature in upcoming release.
4 votes -
Lock the document field (not the entire document)
Hi
according to this reference: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions
When I lock a document by setting a field to a new ObjectID, the whole document is locked!
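A minimal mongosh sketch of the pattern from that post; the database "mydb", collection "orders", and the dedicated "lock" field are illustrative only:

const session = db.getMongo().startSession();
session.startTransaction();
try {
  const orders = session.getDatabase("mydb").getCollection("orders");

  // The "SELECT ... FOR UPDATE" trick: touching the document inside the
  // transaction makes any concurrent write to the same document fail with a
  // WriteConflict, no matter which field that write touches.
  orders.updateOne({ _id: 1 }, { $set: { lock: new ObjectId() } });

  // ... do the real work on the document here ...

  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}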
Idea:
Operations:
I have three fields: A, B, C.
I locked fieldA with a new ObjectID in transactionT1.
I locked fieldB with a new ObjectID in transactionT2.
Behaviors (high performance):
- In transactionT2: if fieldA is updated, a writeConflict error occurs.
- In transactionT1: if fieldB is updated, a writeConflict error occurs.
- Outside of transactions: if fieldA is updated, it waits for …
1 vote -
Improve the mongo query language
Sometimes I find the Mongo query language not very well put together; it can feel like a patch job. It would be nice if you could make the query language easier to reason about. It would be awesome if you could introduce a fluent-style API builder instead of building BSON documents.
1 vote -
Compound clustered index
Right now it is possible to create a clustered index on only one field. Since documents can be arranged in ascending order of multiple fields, I see no reason to disallow a clustered index from being compound.
Expected syntax:
create_collection('testVCFcoll', clusteredIndex={'key': {'_id': 1}, 'unique': True, 'name': ['#CHROM', 'POS']})
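For comparison, a sketch of roughly what is supported today in mongosh (the collection name reuses the example above): the clustered index key can only be { _id: 1 }.

// Currently supported: a clustered collection keyed only on _id
db.createCollection("testVCFcoll", {
  clusteredIndex: { key: { _id: 1 }, unique: true, name: "clustered index on _id" }
});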
3 votes -
Add functionality to specify the readConcern level at db.collection.findOne()
Add functionality to specify the readConcern level for db.collection.findOne(). As of version 5.0.14 it is not supported.
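A possible workaround today, since the cursor returned by find() does accept a read concern; the collection name and filter below are illustrative:

// findOne() takes no readConcern option, but the equivalent single-document
// read can go through find(), whose cursor does accept one.
const docs = db.collection.find({ _id: 1 })
  .limit(1)
  .readConcern("majority")
  .toArray();
const doc = docs[0];   // undefined if nothing matched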
1 vote -
Build MongoDB with PGO
I would like to see support for PGO (and even LLVM BOLT) upstream. It would be awesome if MongoDB distributed PGO-optimized binaries, so users could get an additional performance boost "for free". At the very least, describe somewhere in the documentation how users can achieve such a boost for their own scenarios with PGO.
1 vote -
Release notes with urgency and risk
Provide MongoDB customers/users with understandable release notes, especially for bugfixes: what risks does a bugfix release cover, and how urgent is it?
Right now, release notes are made of MongoDB Jira tickets, which are very detailed, refer to MongoDB internals, and thus cannot be easily understood by end users.
As a suggestion, release notes could sum up the following data in a simple table:
- Nature of impact
-> data corruption: yes/no
-> downtime: of a single node / of the whole cluster / on a subset of requests / etc
- Context of impact…
11 votes -
Combine the reshardCollection + mongosync ideas to support a remote collection on a separate new cluster
Great for prod productivity and a 99.999 SLA if mongo could support this. For example:
Given "mydb.mycoll" in the current cluster, sharded with the {zip:1} shard key:
1/ New cluster: sh.shardCollection( "mydb.mycoll", {name:1, phone:1} )
2/ "Mongocopy" until mydb.mycoll in the new and current clusters are synced.
Very much like how reshardCollection works now, but targeting a remote
mydb.mycoll on the new cluster instead of the local collection
system.resharding.554c8995-2ec9-4bda-9401-a3ad475b9c8c.
This is a combination of mongosync and reshardCollection in one.
Prod cluster is often very big and busy and requires no downtime with the
99.999 SLA (Service level agreement). Being able to reshardCollection
to…
1 vote -
-
Progress bar
When upgrading a cluster/instance from one instance type to another on a shared instance, for example from M2 to M5, there should be some sort of progress bar tracking the progress of the upgrade.
3 votes -
throttle sessions which use too many resources
We have different types of applications :
1. Writer - to load data into mongodb from different data sources
2. Reader - to read data and display it to the end user.
Normally there is a strict SLA for readers, but no SLA (or a less restrictive one) for writers. We want to make sure that writers will not impact readers when, for some reason, a lot of data arrives from external sources. So we would like to slow down writers for the sake of readers.
Writers can saturate CPUs and IO, that's why we want an option to leave some room…
1 vote -
Clustered Collection TTL on _id should support ObjectId.
Clustered collections have the ability to expire documents based on the _id field; it would be really helpful if this could use the timestamp portion of an ObjectId.
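For context, a minimal sketch of what works today (the "events" collection is illustrative): expiration only takes effect when _id holds a Date, which is exactly the gap this request is about.

// Clustered collection whose documents expire based on _id
db.createCollection("events", {
  clusteredIndex: { key: { _id: 1 }, unique: true },
  expireAfterSeconds: 3600
});

// Expires today: _id is a Date
db.events.insertOne({ _id: new Date(), payload: "gone in about an hour" });

// Does not expire today: _id is an ObjectId, even though it embeds a timestamp
db.events.insertOne({ _id: new ObjectId(), payload: "this is what the request is about" });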
2 votes -
Add hash function (eg. md5) to aggregation pipeline
I would like to implement hash-based sharding in my own application on top of MongoDB. For that purpose, I would like to pull a stable pseudorandom subset of documents into each of my servers, and I would like to do so without enlarging the documents by adding additional fields, and without using JavaScript in the aggregation pipeline (for performance reasons).
The idea: add a hash function, such as md5, to aggregation pipelines. The function would accept an object/array containing the data to be hashed, and would return the hash, ideally as a number.
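A sketch of how the requested operator might be used to pull a stable pseudorandom subset; $md5 below is the hypothetical operator being proposed, it does not exist today:

// Hypothetical: keep a stable ~1/16th of the collection by hashing _id into 16 buckets
db.coll.aggregate([
  { $addFields: { bucket: { $mod: [ { $md5: "$_id" }, 16 ] } } },   // $md5 returning a number is the proposal
  { $match: { bucket: 0 } }
]);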
1 vote -
$dateDiff operator should be useful to calculate age
In the documentation it says: "For example, two dates that are 18 months apart would return 1 year difference instead of 1.5 years." But if startDate is 2021-08-01 and endDate is 2023-02-01, the result is a 2 year difference. I think it would be good if this operator could be used to calculate age.
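A small reproduction of the behaviour described above, run against any collection with at least one document (the "people" collection is illustrative); $dateDiff counts calendar boundaries crossed, so this 18-month span reports 2 years:

db.people.aggregate([
  { $limit: 1 },
  { $project: {
      _id: 0,
      years: { $dateDiff: {
          startDate: ISODate("2021-08-01"),
          endDate:   ISODate("2023-02-01"),
          unit: "year"
      } }
  } }
]);
// Returns { years: 2 }: two calendar-year boundaries are crossed, although only 18 months elapsed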
1 vote -
Support Two Array Fields in Compound Indexes
Hi,
I've come across a lot of use cases where the business logic has demanded unique constraints on 1-2 fields that are modeled as arrays on documents. The cases with only a single array field are already taken care of by unique indexes in MongoDB; however, the cases with 2 array fields have required application-level constraints, since the database only supports a single array field in compound indexes. While multiple arrays in a single index would massively increase the size of the indexes, it would be very helpful if it were still possible. If there are…
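For reference, a minimal sketch of the current restriction (collection and field names are illustrative):

db.products.insertOne({ tags: ["a", "b"], categories: ["x", "y"], sku: "p-1" });

// Fine: a compound index may contain at most one array-valued field per document
db.products.createIndex({ tags: 1, sku: 1 }, { unique: true });

// Fails for the document above with "cannot index parallel arrays",
// because tags and categories are both arrays in the same document
db.products.createIndex({ tags: 1, categories: 1 }, { unique: true });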
2 votes -
bloom filter index
https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/
I think having an option to use bloom filter indexes could provide for better performance when compared to compound indexes and eliminate the need for having multiple indexes. It would likely require tuning still, but with very large data sets this could be much less expensive.
3 votes -
$currentDate option to only update if the document was modified
A common pattern in a data model is to have a field that denotes when the data was last modified. For this example, let this field be called "updated". I want to toggle a field on a document called "enabled", and if the value is modified I also want to update the "updated" field.
This behavior is possible right now via a comment shown in SERVER-42084, but ONLY if we include the entire document, which is not acceptable when you only want to modify a single field. This document could have other fields that are numeric and are updated atomically,…
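A hedged sketch of one way to approximate this today with an update pipeline; the collection name, _id, and incoming value are placeholders, and the field names follow the example above. Both $set expressions read the pre-update document, so "updated" only changes when "enabled" actually changes:

const newEnabled = true;   // placeholder for the incoming value

db.things.updateOne(
  { _id: 1 },              // illustrative _id of the document being toggled
  [
    { $set: {
        // Both expressions are evaluated against the pre-update document,
        // so the comparison still sees the old value of "enabled".
        updated: { $cond: [ { $ne: [ "$enabled", newEnabled ] }, "$$NOW", "$updated" ] },
        enabled: newEnabled
    } }
  ]
);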
1 vote -
Improve sorting performance
Sorting always ends up doing a collection scan when the index selected for the find/match does not meet the sort requirement. The sort effectively makes performance 15-25 times worse for the "matched" dataset, which runs into the tens of thousands (not millions) of documents.
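A hedged sketch of the usual mitigation (collection and field names are illustrative): give the query an index whose key order covers the equality match first and the sort field second, so the server can return documents in index order instead of sorting in memory.

// Index key order: equality field first, then the sort field
db.orders.createIndex({ status: 1, createdAt: -1 });

// The query can now stream results in index order; explain() should show an
// IXSCAN without a separate in-memory SORT stage
db.orders.find({ status: "shipped" }).sort({ createdAt: -1 }).explain("queryPlanner");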
1 vote