Database
-
Query Planner needs a timeOut set as a Database parameter.
We see app queries timing out because query planning alone takes >1 sec. Whilst this can be avoided by setting maxTimeMS at the client end, that is a setting for the overall query and not just the query planner, and it carries the risk of closing/timing out the actual query (cursor), which is not what we need.
We only want the query planner itself to have a specific, customisable timeout, with the query continuing to run using one/any of the plans tried so far rather than timing out…
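For reference, a minimal sketch of the maxTimeMS workaround mentioned above, assuming mongosh (collection and filter are illustrative); note that it bounds the whole operation, planning included, rather than planning alone:
// Caps the entire query at 2 seconds, not just plan selection.
db.orders.find({ status: "shipped" }).maxTimeMS(2000)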
2 votes -
Progress bar
When upgrading a cluster/instance from one instance type to another on a shared tier, for example from M2 to M5, there should be some sort of progress bar displayed that tracks the progress of the upgrade.
3 votes -
Make redundant createView() a no-op
If I call createView() with params that match an existing view (in name and all other attributes), it returns an error. It'd be more convenient if the call simply succeeded without doing any work. The behavior I propose is analogous to the way that createIndex() behaves. With the current behavior my only choices are to (1) unconditionally drop and recreate the view, or (2) read the current view definition and see whether it matches the definition I want. The first choice is unacceptable because for a period of time (albeit a short one) the view won't exist and queries that…
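A minimal sketch of workaround (2) above, assuming mongosh (view name, source collection, and pipeline are illustrative):
const name = "activeUsers";
const pipeline = [{ $match: { active: true } }];
// Look up the existing view definition, if any.
const existing = db.getCollectionInfos({ name: name })[0];
// Only (re)create when the view is missing or its definition differs.
if (!existing || JSON.stringify(existing.options.pipeline) !== JSON.stringify(pipeline)) {
  if (existing) db[name].drop();
  db.createView(name, "users", pipeline);
}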
1 vote -
bloom filter index
https://www.percona.com/blog/2019/06/14/bloom-indexes-in-postgresql/
I think having an option to use bloom filter indexes could provide better performance than compound indexes and eliminate the need for multiple indexes. It would likely still require tuning, but with very large data sets it could be much less expensive.
3 votes -
We need to be able to use $[<identifier>] and "$setOnInsert" in the same command
I want to be able to maintain an array of counters for a user through a single update statement. If the document containing the array of counters does not exist, I want to add it. If it does exist, I want to increment the counter.
For example, this command
…
db.inboxItemCounts.updateOne(
  // filter
  {
    "userId": userDoc.userId
  },
  // update
  {
    "$setOnInsert": {
      "userId": userDoc.userId,
      "fromUserSummary": [{
        "userName": fromUserDoc.userName,
        "count": 1
      }]
    },
    "$inc": {
      "fromUserSummary.$[userElement].count": 1
    }
  },
  // options
  {
    "upsert": true,
    "writeConcern": { "w": "majority" },
    "arrayFilters": [
      { "userElement.userName": { "$eq": fromUserDoc.userName } }
    ]
  }
)
2 votes -
$addToSetIfNotExists or javascript code as array operator
This would allow an entry to be added to an array only when it differs from the last entry.
Preventing:
['one', 'one', 'two', 'three']
But allowing:
['one', 'two', 'three', 'one']
Or perhaps (to run js code on the array at the db):
.update({}, {$js: {'array_field': 'var last = ""; for (var key in array) { if (array[key] === last) { array.splice(key, 1); } last = array[key]; }'}});
1 vote -
Allow Dynamic Object In $project and $addFields
Assume I have a field mapping defined in some configuration collection of my application. And this field mapping varies for different clients on my application.
I would like to pass the dynamic object in my $project or $addFields stage
Like:
$project: {
  {$arrayToObject: "$field_mapping"}
}

{$arrayToObject: "$field_mapping"} would return something like:
"email": "$data.Email",
"phone": "$data.Phone",
"firstname": "$data.FirstName",
"lastname": "$data.LastName",
"preferredchannel": "$data.Channel",
"preferredlanguage": "$data.LanguageCode",
"emailchannel": "$data.Email",
"smschannel": "$data.SMS",
"province": "$data.CustomerState"
1 vote -
hint support for $graphLookup
Currently you can supply a hint to the aggregation call in order to tell MongoDB to use a specific index for the initial $match. But there is currently no way to specify which index to use for a $graphLookup later in the pipeline.
I would like an optional hint property on the $graphLookup stage (see the sketch below).
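For context, a minimal sketch assuming mongosh (collection, fields, and index names are illustrative); the hint option on aggregate exists today, while the stage-level hint is the hypothetical part:
db.employees.aggregate([
  { $match: { department: "Engineering" } },
  { $graphLookup: {
      from: "employees",
      startWith: "$reportsTo",
      connectFromField: "reportsTo",
      connectToField: "name",
      as: "reportingChain"
      // Hypothetical per-stage option proposed here:
      // , hint: "name_1"
  } }
], { hint: { department: 1 } })  // today's hint applies to the initial $match only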
2 votes -
-
Implement restrict Network access list to improve Security risk
Currently the Network Access List allows adding IP addresses from which connections are permitted. However, we are using Private Endpoints to connect to our clusters, and on Google Cloud we cannot add firewall rules to Private Service Connect, so every device on our VPN has internal access to our Mongo databases.
How can we prevent access to our databases from anywhere in our company? This is a huge security risk.
1 vote -
Add numerically-ordered index feature which forces a field to maintain ordering
I would like MongoDB to add better support for ordered lists across documents in a collection. This feature would allow the user to designate a field such as "position", and the DB would ensure the values of that field across documents remain in a monotonically-increasing integer sequence 0, 1, 2, 3, ... N.
I am the maintainer of a Ruby library called "Mongoid Orderable". This library maintains a numerically ordered field across documents. I'd like to ask MongoDB to investigate moving the functionality of this library into the server.
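For context, a minimal sketch of what such a library has to do client-side today, assuming mongosh (collection and field names are illustrative): shift later siblings before inserting at a position, with no server-side guarantee that the sequence stays consistent under concurrency.
// Make room at position 3, then insert there.
db.items.updateMany({ position: { $gte: 3 } }, { $inc: { position: 1 } });
db.items.insertOne({ name: "new item", position: 3 });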
What use case would this feature/improvement enable?
This feature would be useful…
2 votes -
Support compound TTL index
Right now you can only create a single-field TTL index. I would like to put a TTL on a compound index.
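For reference, a sketch of the current state, assuming mongosh (names are illustrative):
// Works today: TTL on a single-field index.
db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });
// What this idea asks for — TTL declared on a compound index (not supported today):
// db.sessions.createIndex({ tenantId: 1, createdAt: 1 }, { expireAfterSeconds: 3600 });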
3 votes -
Add pipeline stage for "downsampling" data
Downsampling is an extremely common operation when plotting time-series data on graphs where there is too much data to get a good-looking/meaningful graph. This stage would pick and choose "important" data points based on an algorithm such as "Largest-Triangle-Three-Buckets" (https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf) instead of returning the entire data set.
Not only would this make for prettier graphs, it would also reduce the overall payload returned, thus reducing network-related latency.
This would be an awesome addition to timeseries!
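A sketch of what such a stage might look like (hypothetical stage and parameter names; nothing like this exists in MongoDB today):
db.metrics.aggregate([
  { $match: { sensorId: "s-1" } },
  // Hypothetical: reduce the series to ~500 plotted points using LTTB.
  { $downsample: { x: "$ts", y: "$value", buckets: 500, algorithm: "lttb" } }
]);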
6 votes -
Raise maximum BSON document size bigger than 16 MB
Per https://www.mongodb.com/docs/manual/reference/limits/, the maximum BSON document size is 16 MB. I would like to request support for bigger sizes like 32 MB, 64 MB, or even larger.
117 votes -
TTL Support within a document
The current TTL implementation, where documents can expire after a certain amount of time, is extremely useful, especially because of its robustness in the event of a database crash.
I would love for this to be extended with the ability to let data within a document expire after a set time. For example, when you add data to a document, you could set that data, and that data only, to expire with its own time-to-live value.
3 votes -
Enhancement on Native Auditing
When we enable native auditing, the following three pieces of information are missing. They would be valuable from a security perspective. Can capturing them be considered for a current or future release?
Session ID
OS user
Service name
Kannan
3 votes -
Enable BigInt Support for Blockchain Use
Numeric values in smart contracts on Ethereum and Ethereum-compatible chains are represented as 256-bit integers. There are several widely used libraries for dealing with these data types in JavaScript, such as bn.js and BigNumber.js.
A potential workaround could be to split the data, store using Decimal128, and recombine using Aggregation Framework. However, this would add performance and programming overhead that encourages customers to select alternatives to MongoDB.
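A minimal sketch of the split-and-recombine workaround described above, assuming the Node.js mongodb driver (the post suggests Decimal128; this variant uses four unsigned 64-bit Long limbs, which recombine exactly):
const { Long } = require("mongodb");

// Split a uint256 (JS BigInt) into four unsigned 64-bit limbs, least-significant first.
function toLimbs(value) {
  const mask = (1n << 64n) - 1n;
  return [0, 1, 2, 3].map(i =>
    Long.fromString(((value >> BigInt(64 * i)) & mask).toString(), true));
}

// Recombine the limbs back into a BigInt after reading.
function fromLimbs(limbs) {
  return limbs.reduce(
    (acc, limb, i) => acc + (BigInt(limb.toString()) << BigInt(64 * i)), 0n);
}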
3 votes -
Allow for redacted and non-redacted log files at the same time
This would be applicable for on-prem and Atlas.
1 vote -
Add operator that would calculate distance between 2 geolocation points
It would be great to have an operator that calculates the distance between 2 geolocation points, rather than doing it manually with big aggregate queries.
I suggest adding 2 new operators that calculate distance in two different ways, as discussed in this Community Post: https://www.mongodb.com/community/forums/t/how-to-calculate-distance-between-two-geolocation-points/173045
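A sketch of how such an operator might be used (hypothetical operator name and parameters; today the closest built-in, $geoNear, only computes distance from a single fixed query point):
db.trips.aggregate([
  { $project: {
      // Hypothetical: great-circle distance in metres between two GeoJSON points.
      distanceMeters: { $distanceBetween: { from: "$start", to: "$end", spherical: true } }
  } }
]);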
20 votes -
Allow changing compression of an existing collection
I'd like to switch my database from Snappy to Zstd compression.
Currently, it doesn't seem possible to change the compression of an existing collection.
It would be nice if there were a way to do this, even if (for example) it required making a new replica set member which "re-compresses" the data to the new algorithm while cloning it.
As per SERVER-67726, the only way to do this today is to create a new collection and manually copy with mongodump/mongorestore. This doesn't seem to be a viable option, for uptime / data consistency reasons.
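For context, a minimal sketch of what is possible today, assuming mongosh: the block compressor can only be chosen when a collection is created, which is why migrating requires a copy.
// New collections can opt into zstd at creation time.
db.createCollection("events_zstd", {
  storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
});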
3 votes