Database
255 results found
-
$group all fields
$group should allow specifying all fields in a document without explicitly defining them all, which can lead to duplicating dozens of lines just to do "key: $first".
This will help users that use $unwind and then want to $group the results without having to do a subsequent $lookup and $mergeObjects (or similar) to get the final document structure they're looking for.
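For illustration, the repetition this idea wants to avoid looks roughly like this today (a sketch against a hypothetical orders collection; every retained field needs its own $first line):
// Hypothetical "orders" collection: unwind an array, filter, then rebuild the
// original document shape by listing every field by hand.
db.orders.aggregate([
  { $unwind: "$items" },
  { $match: { "items.qty": { $gt: 0 } } },
  { $group: {
      _id: "$_id",
      customer:  { $first: "$customer" },   // one line per field...
      status:    { $first: "$status" },
      createdAt: { $first: "$createdAt" },
      items:     { $push: "$items" }
  } }
])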
1 vote -
Amazon Linux 2023 Support for MongoDB 6.0 and OpsManager 6.0
Amazon Linux 2023 is supported in rapid releases from 6.2.0 and will be supported in the upcoming 7.0 release of MongoDB; however, it appears there is currently no plan to support AL2023 for the 6.0 release of MongoDB. Can support for AL2023 be added to the 6.0 branch for both MongoDB and OpsManager?
1 vote -
Support indexes on single array elements (e.g. "MyArray.0")
Currently, if I create an index on an array field and specify an element number ("MyArray.0"), the system still indexes the whole array as normal.
Normally, this might increase the index size but at least it's still useful when querying MyArray.0 later.
However, it would be better to be able to index individual numbered elements. For example, in a compound index I might need to check whether two arrays are non-empty, but I can't use a compound index for a query on MyArray.0 and MyOtherArray.0 at the same time due to the multi-array limitation.
For MongoDB folks with access to…
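A sketch of the limitation described above, using a hypothetical collection (exact behaviour and error text may vary by version):
// An index on a single element position is accepted, but it is still treated
// as an index over the array path:
db.things.createIndex({ "MyArray.0": 1 })
// A compound index over two array paths then hits the parallel-arrays
// restriction once a document has both fields populated as arrays:
db.things.createIndex({ "MyArray.0": 1, "MyOtherArray.0": 1 })
db.things.insertOne({ MyArray: [1, 2], MyOtherArray: [3, 4] })  // fails: "cannot index parallel arrays"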
1 vote -
Collection which stores last login date_time for the users
Could you please store the last login date_time for users that exist in either the admin database or the $external database, in a collection in the admin database of that cluster, or in the opsmanager database which manages the clusters?
My requirement is to find the users who haven't logged in for 60 days and revoke their roles, and ultimately to delete the users who don't have any roles attached after a fixed period of time.
I do understand you store the login details in audit logs. But that would be a tedious process at…
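If such a collection existed, the clean-up described here could look roughly like this (the admin.lastLogin collection and its fields are hypothetical; the role-revocation helpers already exist):
// Hypothetical admin.lastLogin collection: { user, db, lastLoginAt }
const cutoff = new Date(Date.now() - 60 * 24 * 60 * 60 * 1000);  // 60 days ago
db.getSiblingDB("admin").lastLogin.find({ lastLoginAt: { $lt: cutoff } }).forEach(u => {
  const info = db.getSiblingDB(u.db).getUser(u.user);
  // revoke every role the stale user still holds (existing shell helper)
  db.getSiblingDB(u.db).revokeRolesFromUser(u.user, info.roles);
});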
1 vote -
Add timestamps to user documents
Most database technologies store this metadata by default.
Because the expected data volume and change rate of this attribute will most probably be low, there should be no reason not to store this information.
Of course this information might already be available in audit files, but first: auditing isn't enabled by default.
Second: most database users won't have access to this file/info, and third: most users won't expect this info in a separate file (reminder: MongoDB recommends storing the data where it belongs when it comes to "data/schema modelling", so the metadata of a user document should also be…
1 vote -
Metadata for collections
I would like to be able to store metadata about a collection such as a description or link to sources, along with the name of the collection.
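Today this typically has to live in a separate application-managed collection; a rough sketch of that workaround (all names below are illustrative):
// Application-managed side collection holding per-collection metadata
db.collection_metadata.insertOne({
  collection: "sensor_readings",
  description: "Raw IoT readings, one document per device per minute",
  source: "https://example.com/ingest-pipeline"
})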
1 vote -
Flatten arrays in group stage
Add $group operators that flatten document arrays into a single array, with or without repeated elements.
So ->
doc1 = {arr: [1,2,3,4], gr: "group"}, doc2 = {arr: [5, 6, 7, 8], gr: "group"}
{$group: {_id: "$gr", arrays: {$***: "$arr"} } }
=>
{id: "gr", arrays: [1, 2, 3, 4, 5, 6, 7, 8]}2 votes -
tail queries functionality
We'd like to log the underlying MongoDB queries issued against the database, similar to https://github.com/mrsarm/mongotail, but as something official.
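A partial workaround today is the database profiler, which is per-database rather than an official query-tailing facility (a sketch):
// Capture every operation for the current database in system.profile,
// then inspect the most recent entries.
db.setProfilingLevel(2)                              // level 2 = profile all operations
db.system.profile.find().sort({ ts: -1 }).limit(5)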
1 vote -
Change stream Monitoring/Alerting in management
We would like to be able to monitor change stream activity per namespace:
1. Number of Change streams per namespace
2. Set + Manage the allowed number of change streams
3. A metrics UI tab for OpsManager.
4. Alert if the number of change streams exceeds the set limit.
This will assist in managing the cluster and also avoid issues that may arise from highly demanding change streams that take up RAM and compute.
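For reference, a rough way to approximate point 1 today (an assumed workaround using $currentOp; it only counts cursors that are currently idle, and it requires privileges to run $currentOp for all users):
// Count open change stream cursors per namespace by inspecting idle cursors'
// originating commands client-side.
const counts = {};
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleCursors: true } },
  { $match: { type: "idleCursor" } }
]).forEach(op => {
  const pipeline = ((op.cursor && op.cursor.originatingCommand) || {}).pipeline || [];
  if (pipeline.length && pipeline[0].$changeStream) {
    counts[op.ns] = (counts[op.ns] || 0) + 1;
  }
});
printjson(counts);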
1 vote -
Cascading delete for DBRefs
Since transactions were added in 2018, and they work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client:
1. Start a transaction
2. Fetch the document
3. Look for DBRef fields
4. Fetch those documents
5. Continue at step 2 until all documents have been found, stopping at branches when a document has already been fetched
6. Go back in reverse and delete all of them
7. Commit the transaction
If this is possible to do…
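A rough client-side sketch of those steps in the shell (collection names, the starting _id, and the DBRef handling are assumptions; real code would also need error handling and nested-field traversal):
// Recursively collect a document and everything it references via DBRefs,
// then delete them all inside one transaction, in reverse discovery order.
const session = db.getMongo().startSession();
session.startTransaction();
try {
  const sdb = session.getDatabase(db.getName());
  const startingId = null;                 // replace with the _id of the root document
  const seen = [];                         // documents already visited: { coll, id }
  const stack = [{ coll: "orders", id: startingId }];   // hypothetical starting point
  while (stack.length) {
    const { coll, id } = stack.pop();
    if (seen.some(s => s.coll === coll && String(s.id) === String(id))) continue;
    seen.push({ coll, id });
    const doc = sdb.getCollection(coll).findOne({ _id: id });
    if (!doc) continue;
    // look for DBRef-shaped top-level fields and queue the referenced documents
    for (const v of Object.values(doc)) {
      const refColl = v && (v.$ref || v.collection);   // DBRef shape differs by shell/driver
      const refId   = v && (v.$id !== undefined ? v.$id : v.oid);
      if (refColl && refId !== undefined) stack.push({ coll: refColl, id: refId });
    }
  }
  for (const { coll, id } of seen.reverse()) {
    sdb.getCollection(coll).deleteOne({ _id: id });
  }
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}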
1 vote -
Support change streams without service discovery
Currently, change streams are not supported in standalone instances, so testing change stream functionality requires a one-node replica set. However, promoting a standalone node to a one-node replica set requires a call to rs.initiate(config), which requires host and port information so that clients can connect; something that is not required for standalone nodes. This means change stream support is conflated with service discovery. It becomes impossible, for example, to create a docker image that boots as a single-node replica set, while it's trivial to make a docker image that boots as a standalone server. Various ideas that would make…
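For context, the single-node replica set setup this conflates looks roughly like this today (the host value is an assumption, and is exactly the coupling being objected to):
// mongod must be started with --replSet and then initiated with an address
// that clients can also resolve, which ties change streams to service discovery.
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })
// only after this does a change stream work:
db.test.watch()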
1 vote -
add IO throughput related fields to 'serverStatus' output
There are no IO throughput-related fields in the result of serverStatus; instead, in FTDC this is available in the disk metrics.
We need it in the serverStatus output so that we can monitor it.
3 votes -
Avoid truncating the query on the Atlas profiler or system.profile collection
Slow-running queries that are captured in the system.profile collection or on the profiler page of Atlas are truncated if the query is too long. As an application DBA, it is difficult to analyse such a query without being able to see the actual command. The current limit on the captured command document is 50 KB. Please reconsider this limitation to avoid truncation of queries.
1 vote -
Expose individual command execution time
Many MongoDB drivers currently expose events (e.g. CommandSucceededEvent) which provide an elapsed time. However, that elapsed time is the round-trip time, which is not very useful, as it can be measured by a programmer manually. It would be neat if there were a way to get the actual time spent by the server on a per-command basis. This data is computed somewhere, as it is exposed in Atlas metrics as Execution Time.
There's the explain facility, but that only gives an estimate of a query's cost. I would be interested in knowing how much time the server spent…
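For example, with the Node.js driver the only duration available today is the client-measured round trip (a sketch using the driver's command monitoring events):
// Durations from command monitoring are round-trip times observed on the
// client, not server-side execution time.
const { MongoClient } = require("mongodb");
const client = new MongoClient("mongodb://localhost:27017", { monitorCommands: true });
client.on("commandSucceeded", ev => {
  console.log(ev.commandName, ev.duration, "ms (client-observed round trip)");
});
client.connect()
  .then(() => client.db("admin").command({ ping: 1 }))   // any command emits the event
  .then(() => client.close());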
1 vote -
Improve fortification coverage with _FORTIFY_SOURCE=3
The MongoDB Server codebase uses the _FORTIFY_SOURCE=2 fortification level (e.g. see v7.0, latest at the moment: https://github.com/mongodb/mongo/blob/v7.0/SConstruct#L4698).
Consider changing it to the new fortification level (_FORTIFY_SOURCE=3) provided by GCC 12 to improve the DB's security.
See also:
https://fedoraproject.org/wiki/Changes/Add_FORTIFY_SOURCE%3D3_to_distribution_build_flags
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
1 vote -
$merge
Report the number of documents matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged document results as a pipeline result to pass to additional stages.
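For illustration, a typical $merge today completes without reporting anything back to the client, which is the gap this idea describes (collection and field names below are assumptions):
// This pipeline finishes silently; there is no per-stage report of how many
// documents were matched, inserted, or replaced by $merge.
db.dailyTotals.aggregate([
  { $group: { _id: "$storeId", total: { $sum: "$amount" } } },
  { $merge: { into: "storeTotals", whenMatched: "replace", whenNotMatched: "insert" } }
])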
2 votes -
I believe the future is for AI to assist the user in simple but sometimes frustrating tasks like connecting or finding the correct build
An artificial intelligence assistant would be very useful for finding the correct configuration and helping set up connections. There are many deprecated components, especially if you are trying to integrate an IoT platform like Raspberry Pi. It would be great for the system to recognize what you are trying to do and guide you along the right path.
1 vote -
amazon linux 2023 (AL2023) support for ARM
Amazon Linux 2023 (AL2023) support for ARM (MongoDB and CloudManager)
1 vote -
Add SHA2, SHA3 and ECDSA functions to agg framework
It is very useful (and valuable security-wise) to be able to re-verify hashes and signatures "on-engine" instead of dragging material out to a client app and running the algorithm there. The implementations are straightforward and widely available now, so it's not a huge lift for the backend. Example use:
aggregate([
  {$match: whatever},
  {$addFields: {
    hashok: {$cond: [ {$eq: [ {$sha3: "$path.to.struct"}, "$path.to.stored.sha3" ]}, 1, 0 ]},
    sigok: {$verify: { sig: "$path.to.sig", pubkey: "$path.to.pubkey", algo: "name of curve to use e.g. SECP256k1" }}
  }}
])
The digest function would operate on the raw BSON behind the scenes.
1 vote