Database
314 results found
-
Make consistent use of system-wide CA certificate store
Please make the use of the system-wide CA certificate store consistent across all tools/commands.
If tls.CAFile is not specified in the mongod/mongos configuration, then the system-wide CA certificate store will be used.
If --sslCAFile is not specified for the mongoimport/mongoexport tools, then the system-wide CA certificate store will be used as well - but this behavior is not documented.
For mongosh you have to specify the option --tlsUseSystemCA if you want to use the system-wide CA certificate store. I did not check how it is/was working in the legacy mongo shell.
For the Mongo() command (https://www.mongodb.com/docs/v6.0/reference/method/Mongo/), I was not able to find out how to…
2 votes -
How to limit the number of document updates?
Hi
I want to limit the number of document updates in one command. For example:
db.users.updateMany(
  <filter>,
  <update>,
  { limit: 100 }
);
https://www.mongodb.com/community/forums/t/how-to-limit-the-number-of-document-updates/102204/3
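Until something like this exists, a rough workaround is to select the _ids of at most N matching documents first and then restrict the update to them. A minimal mongosh sketch with hypothetical collection, filter and update values (not atomic: documents that start matching between the two steps are not covered):
// Collect up to 100 matching _ids, then update only those documents.
const ids = db.users.find({ status: "pending" }, { _id: 1 })
  .limit(100)
  .toArray()
  .map(doc => doc._id);
db.users.updateMany(
  { _id: { $in: ids } },
  { $set: { status: "processed" } }
);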
27 votes -
Prometheus metrics availability through PrivateLink
Until now it has only been possible to get metrics from a public or privately peered VPC, but not through PrivateLink. Since PrivateLink is used most of the time for security reasons, this limitation forces customers to find compromises or workarounds.
20 votes -
Clustered Collection TTL on _id should support ObjectId.
Clustered collections have the ability to expire documents based on the _id field; it would be really helpful if this could use the timestamp portion of an ObjectId.
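For context, a minimal mongosh sketch of how clustered-collection TTL is set up today (collection name and TTL value are arbitrary); as far as I understand, expiration currently relies on _id holding date values, and the request is to also honor the timestamp embedded in an ObjectId _id:
db.createCollection("events", {
  clusteredIndex: { key: { _id: 1 }, unique: true },
  expireAfterSeconds: 86400  // expire documents one day after the time stored in _id
});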
2 votes -
Compound clustered index
Currently it is only possible to create a clustered index on a single field. Since documents can be arranged in ascending order of multiple fields, I see no reason to disallow a clustered index from being compound.
Expected syntax:
create_collection('testVCFcoll', clusteredIndex={'key': {'_id': 1}, 'unique': True, 'name': ['#CHROM', 'POS']})
3 votes -
Need an array query operator like $elemMatch that matches all array elements instead of at least one
As per the documentation, an $elemMatch query matches a document if at least one element in an array of nested objects matches the criteria.
We need a similar array query operator that returns the document only if the $elemMatch-like criteria match all elements in the array of nested objects.
I have gone through the use case of $all with $elemMatch, but its behaviour is different: it is like $and with $elemMatch, rather than the behaviour I am asking for above.
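Until such an operator exists, one common workaround is to require that no element violates the criteria. A sketch with hypothetical collection/field names and an arbitrary criterion:
// Match documents where EVERY element of "results" has score >= 5
// (i.e. no element exists with score < 5), and the array is not empty.
db.coll.find({
  results: { $not: { $elemMatch: { score: { $lt: 5 } } } },
  "results.0": { $exists: true }
});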
https://www.mongodb.com/docs/manual/reference/operator/query/all/#use--all-with--elemmatch
https://www.mongodb.com/docs/manual/reference/operator/query/elemMatch/
1 vote -
Please upgrade third_party/mozjs to esr 102
For new CPU ISA (loongarch64/loong64) support, thanks.
1 vote -
Handle Daylight Saving Time when $densify is used on a date field
When using "day" as "unit" for a $densify pipeline stage on a date field, the date is always advanced of 24 hours. This is however not always the expected result in timezones in which the year has one 23-hour and one 25-hour long day, because of Daylight Saving Time.
It would be useful to have the possibility to pass an optional timezone parameter in the $densify stage and, when present, have the stage account for these exceptions when appropriate.
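A rough sketch of what this could look like; the current syntax is shown first, and the timezone option is hypothetical (it does not exist today):
// Current behavior: "day" always means a fixed 24-hour step.
db.densifyDateExample.aggregate([
  { $densify: { field: "d", range: { step: 1, unit: "day", bounds: "full" } } }
]);
// Requested (hypothetical) extension: a timezone so that "day" follows local
// calendar days, i.e. 23- or 25-hour steps across DST changes:
// { $densify: { field: "d", range: { step: 1, unit: "day", bounds: "full", timezone: "Europe/Rome" } } }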
Here follows an example.
Assume we have a collection containing the following documents:
…db.densifyDateExample.insertMany([ {_id: "a", d: ISODate("2022-10-28T22:00:00Z")}, {_id: "b", d:
7 votes -
Support $documents on shards
The new aggregation stage $documents cannot be used together with $merge in a sharded cluster; you get an error:
db.aggregate([
  { $documents: [ { _id: ObjectId("6616b08a610fab3e84d2d4ee"), a: 'foo', shardKey: 1 } ] },
  { $merge: { into: { db: 'myDB', coll: 'sharded_Collection' } } }
])
raises
$documents must run on mongoS, but cannot :: caused by :: $merge must run on a shard
Having this new functionality available in every environment would be great.
1 vote -
Document scoped RBAC - Permission for collection document fields
Currently, roles granted to users define access at the level of whole collections.
It would be nice if these access permissions could be scoped to individual fields within a collection, with query results returned accordingly.
Current:
privileges: [
  { resource: { db: "users", collection: "user" }, actions: [ "find" ] }
]
Expected:
{ resource: { db: "users", collection: "user", field: "email" }, actions: [ "find"] },
3 votes -
Support Two Array Fields in Compound Indexes
Hi,
I've come across a lot of use cases where the business logic has demanded unique constraints on 1-2 fields that have been modeled as arrays on documents. The cases that only contain a single array field are already taken care of using unique indexes in MongoDB; however, the cases with 2 array fields have required application-level constraints, since the database only supports a single array field in compound indexes. While multiple arrays in a single index would massively increase the size of the indexes, it would be very helpful if it were still possible. If there are…
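For reference, a minimal sketch of the current limitation (hypothetical collection and field names):
// Both "tags" and "aliases" are arrays in the same document.
db.items.insertOne({ tags: ["a", "b"], aliases: ["x", "y"] });
db.items.createIndex({ tags: 1, aliases: 1 }, { unique: true });
// The createIndex call fails with an error along the lines of
// "cannot index parallel arrays" - the restriction this request asks to lift.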
2 votes -
Display Recovery time during restore process.
Team,
Currently the MongoDB restore process does not give any recovery time estimate when the restore starts. Because of that, we are unable to plan a time window for other critical processes that depend on the restore, and we cannot communicate the exact time when the system will be available.
Please include this feature in upcoming release.
4 votes -
Raise an error when "majority" writes not possible
This topic is related to https://www.mongodb.com/docs/v6.0/tutorial/mitigate-psa-performance-issues/ in a PSA ReplicaSet configuration.
When you try to execute a command with writeConcern {w: "majority"} in a three-member Primary-Secondary-Arbiter configuration where one data-bearing node is not available, the command hangs forever - unless the missing member becomes available again or you reconfigure the ReplicaSet with {votes: 0, priority: 0} on the unavailable member.
Instead of waiting forever, MongoDB should roll back the change and raise an error. This behavior should apply mainly to operations where {w: "majority"} is set implicitly and cannot be changed by the user, for example a "renameCollection" issued…
1 vote -
BSON::Proxy: proxy content from/to another collection
I would like a MongoDB data type called
BSON::Proxy # not indexable or searchable
which would hold a reference to a document in another collection by database, collection and _id.
IE:
BSON::Proxy({database: "this_one", collection: "blobs", _id: "123"})
This would allow my code to request a field that would be used for reading ONLY IF REQUESTED in the projection.
Example record:
{
  _id: "abc",
  username: "test user",
  blob: BSON::Proxy({database: "this_one", collection: "blobs", _id: "123"}),
  age: 50
}
db.collection.find(query, projection, options)
db.users.find({_id: "abc"}) # returns all fields except blob.
db.users.find({_id: "abc"}, { _id: true, username: true, blob: true, age: true }) # returns all fields including blob.
…
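Until a proxy type exists, a rough equivalent (a sketch only, with hypothetical names) is a manual reference that is resolved with $lookup only when the blob is actually requested:
// Store only a manual reference instead of the blob itself.
db.users.insertOne({ _id: "abc", username: "test user", blobId: "123", age: 50 });
// Default read: no blob, no extra work.
db.users.find({ _id: "abc" });
// Only when the blob is requested, resolve the reference explicitly.
db.users.aggregate([
  { $match: { _id: "abc" } },
  { $lookup: { from: "blobs", localField: "blobId", foreignField: "_id", as: "blob" } }
]);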
1 vote -
Query Planner needs a timeOut set as a Database parameter.
We see app queries timing out because the query planner alone takes >1 sec. While this can be avoided by setting maxTimeMS on the client side, that is a setting for the overall query and not just the query planner. It also comes with the risk of closing/timing out the actual query (cursor), which is not what we need.
We only want the query planner itself to have a specific, customisable timeout, with the query continuing to run using one/any of the plans evaluated thus far, without timing out…
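For reference, the existing per-operation timeout mentioned above; a sketch with a hypothetical collection and an arbitrary threshold:
// maxTimeMS bounds the whole operation (planning + execution), so a slow
// planner aborts the query itself rather than just the plan-selection phase.
db.orders.find({ status: "open" }).maxTimeMS(1000);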
2 votes -
Progress bar
When upgrading a cluster/instance from one instance type to another on a shared tier, for example from M2 to M5, there should be some sort of progress bar tracking the progress of the upgrade.
3 votes -
Make redundant createView() a no-op
If I call createView() with params that match an existing view (in name and all other attributes), it returns an error. It'd be more convenient if the call simply succeeded without doing any work. The behavior I propose is analogous to the way that createIndex() behaves. With the current behavior my only choices are to (1) unconditionally drop and recreate the view, or (2) read the current view definition and see whether it matches the definition I want. The first choice is unacceptable because for a period of time (albeit a short one) the view won't exist and queries that…
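Until redundant calls are a no-op, a sketch of option (2) as a small helper (hypothetical names; the pipeline comparison is deliberately simplistic):
// Create the view only if it does not already exist with the same definition.
function ensureView(dbHandle, name, source, pipeline) {
  const existing = dbHandle.getCollectionInfos({ name: name, type: "view" })[0];
  if (existing &&
      existing.options.viewOn === source &&
      JSON.stringify(existing.options.pipeline) === JSON.stringify(pipeline)) {
    return; // definition already matches: behave like the requested no-op
  }
  if (existing) {
    dbHandle[name].drop(); // definition differs: recreate (brief gap, as noted above)
  }
  dbHandle.createView(name, source, pipeline);
}
ensureView(db, "activeUsers", "users", [ { $match: { active: true } } ]);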
1 vote -
bloom filter index
https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/
I think having an option to use bloom filter indexes could provide better performance compared to compound indexes and eliminate the need for multiple indexes. It would likely still require tuning, but with very large data sets this could be much less expensive.
3 votes -
We need to be able to use $[<identifier>] and "$setOnInsert" in the same command
I want to be able to maintain an array of counters for a user through a single update statement. If the document containing the array of counters does not exist, I want to add it. If it does exist, I want to increment the counter.
For example, this command
…
db.inboxItemCounts.updateOne(
  // filter
  { "userId": userDoc.userId },
  // update
  {
    "$setOnInsert": {
      "userId": userDoc.userId,
      "fromUserSummary": [{
        "userName": fromUserDoc.userName,
        "count": 1
      }]
    },
    // "$inc": incBody,
    "$inc": {
      "fromUserSummary.$[userElement].count": 1
    }
  },
  // options
  {
    "upsert": true,
    "writeConcern": { "w": "majority" },
    "arrayFilters": [
      { "userElement.userName": { $eq: fromUserDoc.userName } }
    ]
  }
);
2 votes -
$addToSetIfNotExists or javascript code as array operator
This would allow appending a value to an array only if it does not equal the array's current last entry.
Preventing:
['one', 'one', 'two', 'three']
But allowing:
['one', 'two', 'three', 'one']
Or perhaps (to run js code on the array at the db):
.update({}, {$js: {'array_field': 'var last = ""; for (var key in array) { if (array[key] === last) { array.splice(key); } }'}});
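In the meantime, a hedged sketch of one way to get "append unless it equals the last entry" today, using an update with an aggregation pipeline (hypothetical collection and _id; assumes array_field already exists):
// Append "one" only if the current last element of array_field is not "one".
db.coll.updateOne(
  { _id: someId },
  [
    { $set: {
        array_field: {
          $cond: [
            { $eq: [ { $last: "$array_field" }, "one" ] },
            "$array_field",                                  // unchanged: last element already matches
            { $concatArrays: [ "$array_field", [ "one" ] ] } // otherwise append
          ]
        }
    } }
  ]
);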
1 vote