Database
295 results found
-
Raise the limit of 16 MB JSON between aggregation stages
When doing analytics, the 16 MB JSON limit between aggregation stages restricts the ability to process large amounts of data. allowDiskUse does not help with all the various $operators and stages that we use. See ticket 00774514 for details.
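For context, this is how allowDiskUse is passed today; a minimal sketch (the collection and pipeline are hypothetical) of why it does not address the per-document limit:
```
// allowDiskUse lets blocking stages such as $group and $sort spill to disk,
// but any single document passed between stages is still capped at 16 MB;
// e.g. this $push accumulator can exceed that cap for a large group.
db.events.aggregate(
  [ { $group: { _id: "$userId", docs: { $push: "$$ROOT" } } } ],
  { allowDiskUse: true }
);
```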
9 votes -
Please create a built-in role that grants developers all permissions needed to add/edit schema validations short of adminAll.
Please create a built-in role that grants developers all permissions needed to add/edit schema validations short of dbAdminAnyDatabase.
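Until then, a hedged sketch of a narrow custom role granting just the collMod action (schema validators are applied via collMod); the role name is hypothetical:
```
// Created in the admin database so the resource can span all databases.
db.getSiblingDB("admin").createRole({
  role: "schemaValidationEditor",              // hypothetical name
  privileges: [
    { resource: { db: "", collection: "" },    // all non-system collections, all DBs
      actions: [ "collMod" ] }
  ],
  roles: []
});
```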
1 vote -
Add option to $dayOfWeek to choose between Monday and Sunday
Hi!
I was wondering if you could add an optional parameter to $dayOfWeek that lets you choose which day of the week it starts counting from.
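For illustration, a sketch of the proposed option (a startOfWeek parameter on $dayOfWeek is hypothetical, borrowed from $dateTrunc) next to today's arithmetic workaround; the collection and "$ts" field are placeholders:
```
// Hypothetical proposed form: { $dayOfWeek: { date: "$ts", startOfWeek: "monday" } }
// Workaround today: shift the Sunday-based result (1 = Sunday ... 7 = Saturday)
// so that Monday = 1 ... Sunday = 7.
db.events.aggregate([
  { $project: {
      mondayBasedDay: {
        $add: [ { $mod: [ { $add: [ { $dayOfWeek: "$ts" }, 5 ] }, 7 ] }, 1 ]
      }
  } }
]);
```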
Thanks!
1 vote -
Add regex support in pipeline operator `replaceAll`
It would be very nice to have something like this possible:
{ $replaceAll: { input: "$text", find: "/[;,.]/g", replacement: "." } }
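In the meantime, the closest equivalent today seems to be chaining one literal $replaceAll per character, since find only accepts a plain string:
```
// Nested literal replacements standing in for the regex /[;,]/g
// (replacing "." with "." would be a no-op, so it is omitted):
{ $project: {
    text: {
      $replaceAll: {
        input: { $replaceAll: { input: "$text", find: ";", replacement: "." } },
        find: ",",
        replacement: "."
      }
    }
} }
```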
Many thanks!
1 vote -
URL Decode `readPreferenceTags`
If a URL-encoded value is set for readPreferenceTags, it is ignored, making it impossible, for instance, to connect to an analytics node from tooling that correctly encodes URLs.
It can look like this, for instance:
readPreferenceTags=nodeType:ANALYTICS
becomes readPreferenceTags=nodeType%3AANALYTICS
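To illustrate, both of these connection strings should end up selecting the analytics node (the host name is hypothetical):
```
# unencoded form, works today
mongosh "mongodb+srv://cluster0.example.net/?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS"
# correctly encoded form, currently ignored
mongosh "mongodb+srv://cluster0.example.net/?readPreference=secondary&readPreferenceTags=nodeType%3AANALYTICS"
```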
1 vote -
The profile output for the $comment query operator needs to be consistent across operations.
I have observed different profile results for the $comment query operator with find and update operations respectively, as follows.
A find operation shows the comment on both the command.filter.$comment and command.comment fields in the system.profile collection:
```
op: 'query',
ns: 'db101.Bets',
command: {
find: 'Bets',
filter: {
_id: ObjectId("61a9db4b3bd34e4f68fb9abc"),
'$comment': 'test-dba'
},
comment: 'test-dba',
lsid: { id: UUID("43ebee67-3184-4ede-9cee-ecca7457861a") },
'$db': 'db101'
},
```
An update operation shows the comment only on the command.q.$comment field in the system.profile collection:
```
op: 'update',
ns: 'db101.Bets',
command: {
q: {
_id: ObjectId("61a9db4b3bd34e4f68fb9abc"),
'$comment': 'test-dba'
},
u: { '$set': { odds: 0.5 } },
multi: …
```
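For reproduction, the two operations were presumably issued along these lines (reconstructed from the profile output above):
```
db.Bets.find(
  { _id: ObjectId("61a9db4b3bd34e4f68fb9abc"), $comment: "test-dba" }
);
db.Bets.updateOne(
  { _id: ObjectId("61a9db4b3bd34e4f68fb9abc"), $comment: "test-dba" },
  { $set: { odds: 0.5 } }
);
```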
1 vote -
Support for CentOS Stream 8 in MongoDB OPS Manager version 5.x
Per the Server Support Matrix (https://www.mongodb.com/try/download/ops-manager), support for CentOS Stream 8 is not available.
We would like to see the currently supported MongoDB Ops Manager version 5.x made available on the CentOS Stream 8 distribution.
1 vote -
Change streams and Triggers for Time Series Collections
Add Change streams and Trigger capabilities to Time Series Collections.
Current Limitations don't allow this.
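For illustration, the kind of call this would enable; watch() is the existing change stream API, and sensorData is a hypothetical time series collection:
```
// Currently rejected on time series collections; this request would allow it.
const cursor = db.sensorData.watch([
  { $match: { operationType: "insert" } }   // e.g. react to new measurements
]);
```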
https://docs.mongodb.com/manual/core/timeseries/timeseries-limitations/#change-streams
20 votes -
Please report mixed-type numeric _id fields in $merge stage error
Posting this idea at the request of one of the Jira users. You can find more technical details about this in the Jira issue:
https://jira.mongodb.org/browse/SERVER-61613
The gist of it is that I may have two collections, `b2` and `b3`, that are not distinguishable in the Mongo Shell, like this:
```
db.b2.find()
[
  { _id: 1, created: ISODate("2021-11-18T23:16:33.149Z") },
  { _id: 2, created: ISODate("2021-11-18T23:16:33.149Z") }
]
db.b3.find()
[
  { _id: 1, created: ISODate("2021-11-18T22:53:02.113Z") },
  { _id: 2, created: ISODate("2021-11-18T22:53:02.113Z") }
]
```
When I merge each into a collection `pg` with this syntax:
```
db.pg.aggregate([ { $merge: { into: "b3", whenMatched: "merge", whenNotMatched: "fail" } } ]);
```
…
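Until $merge reports this, a hedged way to surface the mixed numeric types by hand is the $type aggregation operator:
```
// The shell prints int, long, and double _id values identically;
// $type reveals the underlying BSON type that trips up $merge.
db.b2.aggregate([ { $project: { idType: { $type: "$_id" } } } ]);
db.b3.aggregate([ { $project: { idType: { $type: "$_id" } } } ]);
```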
1 vote -
Multiple centerSpheres as a geometry for $geoWithin
I'm looking at one of my queries that a system runs regularly, and sometimes we look for records that are within up to 250 different center spheres. I wonder if, just as $geoWithin supports multiple polygons, we could enable multiple center spheres. Instead of one $geoWithin clause per sphere:
```
$match: { $or: [
  { 'location': { "$geoWithin": {
      "$centerSphere": [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ]
  } } },
  { 'location': { "$geoWithin": {
      "$centerSphere": [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ]
  } } },
  ...
] }
```
we could do:
```
$match: { 'location': {
  "$geoWithin": {
    "$centerSpheres": [
      [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ],
      [ [ 14.4321, -9.4321 ], 2.5232135647961246e-05 ],
      ...
    ]
  }
} }
```
1 vote -
There is a specific collection for which I need more performance than others. Is there a way to assign more RAM/memory to a specific collection?
1 vote -
Unique index in sharded cluster
For enforcing uniqueness in a sharded cluster, the officially recommended approach provided here https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:
- Ephemeral issues might cause inconsistencies between the two collections (for example, the unique-index collection update succeeded but the main collection's did not) and make some unique keys unusable.
- Many changes are needed to enforce this universally (we're using the Mongoose ORM, and there are many hooks to change).
What we ended up doing is use distributed ephemeral locks (a TTLed MongoDB collection) to lock on the unique keys before adding…
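For context, a minimal sketch of the documented two-collection pattern this idea pushes back on (collection and field names are hypothetical); the window between the two writes is exactly where the ephemeral inconsistency above creeps in:
```
// Proxy collection: the unique value is the _id, so uniqueness is enforced
// there even though the main collection is sharded on a different key.
try {
  db.user_emails.insertOne({ _id: "a@example.com" });            // reserve the key
  db.users.insertOne({ userId: 123, email: "a@example.com" });   // main write
} catch (e) {
  // If the first insert succeeded but the second failed, the key stays
  // reserved but unused: the inconsistency described above.
}
```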
5 votes -
$populate stage
Please provide a $populate stage that allows resolving single referenced documents.
Internally it could use a combination of $lookup and $unwind.
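A hedged sketch of what such a stage might expand to internally; the $populate spec, collection, and field names here are hypothetical:
```
// Hypothetical: { $populate: { path: "author", from: "users" } }
// expanding to the $lookup + $unwind pair:
db.posts.aggregate([
  { $lookup: {
      from: "users",            // referenced collection
      localField: "authorId",   // reference field on posts
      foreignField: "_id",
      as: "author"
  } },
  { $unwind: { path: "$author", preserveNullAndEmptyArrays: true } }
]);
```
Related: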
https://stackoverflow.com/questions/37793844/mongodb-how-to-resolve-dbref-on-client-side
1 vote -
Reserve connections to admin users
When the maximum number of connections is reached, no one can log in to the database until some connections are closed or a failover is triggered (killing all the connections).
Admin users should have a few reserved connections so that they can log in to the database and take action, such as killing some connections.
1 vote -
Mongo replica set initial sync issue
When we have a large MongoDB replica set, we may have to take each node out for maintenance for a few hours or days.
In that case we have to increase the size of the oplog to keep several days of transactions, so that nodes can resync after maintenance.
If the replica set headroom falls below the oplog window, then we have to blow away the data and do an initial sync to add the node back after maintenance.
The problem is that a very large oplog slows down any change stream process. It also occupies unnecessary space within collections.
Can mongo offer an alternative way to constantly dump the oplog…
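For reference, the headroom-versus-window comparison described above is driven by the numbers mongosh reports via:
```
rs.printReplicationInfo()   // configured oplog size and log length (the oplog window)
```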
1 vote -
Shard drain/removal issue
If we have many shards and want to remove a few of them (more than one), we use the command below.
db.adminCommand( { removeShard : "Shardname" } )
For example, if I have shards 1, 2, 3, 4, and 5 and want to remove shards 2 and 5:
I want to remove one shard at a time to minimize impact to users, so I would remove shard 2 first, then shard 5.
If we do this, some chunks from shard 2 also get moved to shard 5, which is supposed to be removed later. This increases shard 5's chunk count, and draining it then takes more time.
…
1 vote -
Rename an existing index
Allow for the possibility of renaming an existing index, without having to drop and recreate it.
Say a unique index exists in production: it might not be possible to drop it safely, yet the index name might not be ideal.
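For context, a rename cannot be staged by building a duplicate index first, because the server rejects an index with the same key pattern under a different name; that is what forces the unsafe drop-and-recreate (names here are hypothetical):
```
db.orders.createIndex({ email: 1 }, { unique: true, name: "uniq_email_v2" });
// Fails with IndexOptionsConflict while the old "email_1" index exists,
// so the only path is dropIndex() first, then createIndex(): a risky
// window with no unique enforcement.
```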
7 votes -
Implement read-only fields and documents in database
I haven't tracked down this functionality yet, so excuse me if it is already implemented.
As a developer and system administrator, I came to a dilemma: how do I prevent myself from making changes to documents?
I can limit myself as a programmer, but as an administrator I can always log on to the console and make changes "by hand".
I was thinking of a read-only field type which, once set, could not be updated nor removed from a document (although backup and restore is the first problem that comes to mind). This limit should be set at the database level. A read-only field could typically…
1 vote -
The ability to perform a quick rollback or rewind of the database
This is along the lines of a flashback to a previous point in time, say after a very impactful change to data: a large delete or data modification event. This avoids the need to take a complete outage for hours and hours restoring TBs of data and re-applying change logs. It would be great if this could be done at the granularity of a single collection or multiple collections too. This could use the oplogs present locally or in the oplog store.
1 vote
- Don't see your idea?