Database
Multi-version-intermediate upgrade package
When upgrading MongoDB Community Server from a very old version to the newest one, intermediate versions cannot be skipped; e.g. 3.6 -> 4.4 does not work, because the database files cannot be auto-migrated and the featureCompatibilityVersion on 3.6 is too low (or does not exist).
Instead we have to install 3.6 -> 4.0 -> 4.2 -> 4.4 and run db.adminCommand( { setFeatureCompatibilityVersion: "..." } ) appropriately after installing each intermediate version.
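A sketch of the manual sequence such a package would automate (assuming the deployment is healthy between steps):
```javascript
// After installing each intermediate server version, raise the FCV
// before moving on to the next one:
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" }) // while running 4.0
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" }) // while running 4.2
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" }) // while running 4.4
```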
I propose creating an additional installer package for Linux which takes care of all that to migrate the internal database structure from the installed…
1 vote -
Add SSO authentication support to MongoDB databases
Existing issue: a user has accounts in multiple MongoDB databases on Atlas that live in different Projects, and possibly different Organizations as well. When they want to switch from one database to another from a third-party app, they have to provide their credentials every time.
Adding SSO authentication support to MongoDB databases would give such a user the flexibility to switch between databases without being asked for credentials on every connection from a third-party application.
1 vote -
Have an API for the history of primary nodes
Provide an API that returns the history of primary nodes from the time the replica set was initiated. I know we can set alerts on a primary switchover, but an API would help if we want to analyse election/primary-node data for, say, the last year.
1 vote -
CSFLE - Enable aggregation stages for non-encrypted collections
When any collection is encrypted with CSFLE, aggregation is not allowed on the non-encrypted collections.
The official recommendation is to maintain two clients: one for CSFLE and one for when aggregation is needed. How is this an acceptable solution?
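For reference, a sketch of that two-client workaround in the Node.js driver (uri and autoEncryptionOpts are assumed to be defined elsewhere):
```javascript
const { MongoClient } = require("mongodb");

// One client with automatic encryption for the CSFLE collections...
const csfleClient = new MongoClient(uri, { autoEncryption: autoEncryptionOpts });
// ...and a second, plain client just to run aggregations on non-encrypted collections.
const plainClient = new MongoClient(uri);
```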
2 votes -
Preserve field order in $merge
Filed on behalf of https://jira.mongodb.org/browse/SERVER-63853:
A variety of formats, for example in bioinformatics, require strict adherence to the sequence of fields.
Files in such formats are often very large and contain nested structures, so it is convenient to work with them as collections. But to keep the data conformant with those specs, the arrangement of the fields must be preserved. Unfortunately, aggregations that save their results to another database via $merge lose the original arrangement.
2 votes -
Raise the 16 MB BSON limit between aggregation stages
When doing analytics, the 16 MB BSON document limit between aggregation stages restricts the ability to process large amounts of data. allowDiskUse does not help with all the various $operators and stages that we use. See ticket 00774514 for details.
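For reference, the existing escape hatch and where it falls short (collection and pipeline are illustrative):
```javascript
// allowDiskUse lets blocking stages spill to disk, but any single document
// passed between stages is still capped at 16 MB:
db.events.aggregate(
  [ { $group: { _id: "$sessionId", events: { $push: "$$ROOT" } } } ],
  { allowDiskUse: true }
)
```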
2 votes -
Please create a built-in role that grants developers all permissions needed to add/edit schema validations, short of dbAdminAnyDatabase.
1 vote -
Add option to $dayOfWeek to choose between Monday and Sunday
Hi!
I was wondering if you could add an optional parameter to $dayOfWeek that allows you to choose which day of the week it starts counting from.
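A hypothetical sketch of such a parameter (startOfWeek is an assumed name, borrowed from $dateTrunc; "$created" is an illustrative field):
```javascript
// Hypothetical syntax: Monday would become day 1 instead of Sunday.
{ $dayOfWeek: { date: "$created", startOfWeek: "monday" } }
```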
Thanks!
1 vote -
Add regex support in pipeline operator `replaceAll`
It would be very nice to have something like this possible:
```javascript
{ $replaceAll: { input: "$text", find: "/[;,.]/g", replacement: "." } }
```
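Today the closest equivalent is nesting one $replaceAll per literal character; a sketch of that workaround:
```javascript
// Replace ";" and "," with "." by chaining literal replacements:
{
  $replaceAll: {
    input: { $replaceAll: { input: "$text", find: ";", replacement: "." } },
    find: ",",
    replacement: "."
  }
}
```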
Many thanks!
1 vote -
URL Decode `readPreferenceTags`
If a URL-encoded value is set for readPreferenceTags, it is ignored, making it impossible, for instance, to connect to an analytics node from tooling that correctly URL-encodes connection strings.
For instance:
readPreferenceTags=nodeType:ANALYTICS
becomes
readPreferenceTags=nodeType%3AANALYTICS
1 vote -
The profiler output for the $comment query operator needs to be consistent across operations.
I have observed different profiler results for the $comment query operator with find and update operations respectively, as follows.
A find operation records the comment in both command.filter.$comment and command.comment in the system.profile collection:
```javascript
op: 'query',
ns: 'db101.Bets',
command: {
  find: 'Bets',
  filter: {
    _id: ObjectId("61a9db4b3bd34e4f68fb9abc"),
    '$comment': 'test-dba'
  },
  comment: 'test-dba',
  lsid: { id: UUID("43ebee67-3184-4ede-9cee-ecca7457861a") },
  '$db': 'db101'
},
```
An update operation records the comment only in command.q.$comment in the system.profile collection:
```javascript
op: 'update',
ns: 'db101.Bets',
command: {
  q: {
    _id: ObjectId("61a9db4b3bd34e4f68fb9abc"),
    '$comment': 'test-dba'
  },
  u: { '$set': { odds: 0.5 } },
  multi:…
```
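For context, a sketch of the two operations behind these profile entries (reconstructed from the output above):
```javascript
db.setProfilingLevel(2) // profile all operations
db.Bets.find({ _id: ObjectId("61a9db4b3bd34e4f68fb9abc"), $comment: "test-dba" })
db.Bets.update(
  { _id: ObjectId("61a9db4b3bd34e4f68fb9abc"), $comment: "test-dba" },
  { $set: { odds: 0.5 } }
)
```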
1 vote -
Support for CentOS Stream 8 in MongoDB Ops Manager version 5.x
Per the Server Support Matrix (https://www.mongodb.com/try/download/ops-manager), support for CentOS Stream 8 is not available.
We would like to see the currently supported MongoDB Ops Manager version 5.x available on the CentOS Stream 8 distribution.
1 vote -
Change streams and Triggers for Time Series Collections
Add Change streams and Trigger capabilities to Time Series Collections.
Current limitations don't allow this:
https://docs.mongodb.com/manual/core/timeseries/timeseries-limitations/#change-streams
1 vote -
Please report mixed-type numeric _id fields in $merge stage error
Posting this idea at the request of one of the Jira users. You can find more technical details about this in the Jira issue:
https://jira.mongodb.org/browse/SERVER-61613
The gist of it is that I may have two collections, b2 and b3, that are not distinguishable in the Mongo Shell, like this:
```javascript
db.b2.find()
[
  { _id: 1, created: ISODate("2021-11-18T23:16:33.149Z") },
  { _id: 2, created: ISODate("2021-11-18T23:16:33.149Z") }
]
db.b3.find()
[
  { _id: 1, created: ISODate("2021-11-18T22:53:02.113Z") },
  { _id: 2, created: ISODate("2021-11-18T22:53:02.113Z") }
]
```
When I merge each into a collection pg with this syntax:
```javascript
db.pg.aggregate([ { $merge: { into: "b3", whenMatched: "merge", whenNotMatched: "fail" } } ]);…
```
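A way to surface the difference the shell hides (a sketch; the actual numeric types stored in b2 and b3 are assumed):
```javascript
// $type reveals whether each _id is stored as 'int', 'long', or 'double':
db.b2.aggregate([ { $project: { idType: { $type: "$_id" } } } ])
db.b3.aggregate([ { $project: { idType: { $type: "$_id" } } } ])
```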
1 vote -
Multiple $centerSphere geometries for $geoWithin
I'm looking at one of my queries that a system runs regularly; sometimes we look for records that fall within up to 250 different center spheres. Just as $geoWithin supports multiple polygons, I wonder if we could enable multiple center spheres. Instead of repeating the clause for each sphere (e.g. via $or):
```javascript
$match: { $or: [
  { 'location': { "$geoWithin": { "$centerSphere": [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ] } } },
  { 'location': { "$geoWithin": { "$centerSphere": [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ] } } },
  ...
] }
```
we could do:
```javascript
$match: { 'location': {
  "$geoWithin": {
    "$centerSpheres": [
      [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ],
      [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ],
      ...
    ]
  }
} }
```
1 vote -
There is a specific collection that I need more performance than others. Is there a way to assign more ram/memory to a specific collection?
1 vote -
Unique index in sharded cluster
For enforcing uniqueness in a sharded cluster, the officially recommended approach provided here https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness (sketched after this post) is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:
- Ephemeral issues can cause inconsistencies between the two collections (for example, the unique-index collection update succeeds but the main collection update does not) and leave some unique keys unusable.
- Many changes are needed to enforce this universally (we're using the Mongoose ORM, which has many hooks to change).
What we ended up doing is to use distributed ephemeral locks (a TTLed MongoDB collection) to lock on the unique keys before adding…
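For reference, a sketch of the documented two-collection pattern referred to above (collection and field names are illustrative):
```javascript
// The proxy collection's _id carries the value that must be unique;
// its default unique index on _id enforces the constraint cluster-wide.
db.emails.insertOne({ _id: "user@example.com" }) // throws E11000 on a duplicate
db.users.insertOne({ email: "user@example.com" }) // main write runs only if the proxy insert succeeded
// As the first bullet notes, if the second insert fails the collections diverge.
```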
2 votes -
$populate stage
Please provide a $populate stage that allows resolving single referenced documents.
Internally it could use a combination of $lookup and $unwind, as sketched below.
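A sketch of that internal rewrite (the $populate syntax and field names are hypothetical):
```javascript
// Hypothetical: { $populate: { from: "users", localField: "userId" } }
// could be expanded internally into:
[
  { $lookup: { from: "users", localField: "userId", foreignField: "_id", as: "user" } },
  { $unwind: { path: "$user", preserveNullAndEmptyArrays: true } }
]
```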
Related: https://stackoverflow.com/questions/37793844/mongodb-how-to-resolve-dbref-on-client-side
1 vote -
Reserve connections for admin users
When the maximum number of connections is reached, no one can log in to the database until some connections are closed or a failover is triggered (killing all the connections).
Admin users should have a few reserved connections so that they can log in and take action, such as killing some connections, as sketched below.
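A sketch of the kind of action a reserved admin connection would enable (the 5-minute threshold is an example policy):
```javascript
// Kill operations that have been running for more than 5 minutes:
db.currentOp({ active: true }).inprog.forEach(function (op) {
  if (op.secs_running > 300) db.killOp(op.opid);
});
```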
1 vote