Database
Collection which stores last login date_time for the users
Could you please store the last login date_time for users that exist in either the admin database or the $external database, in a collection of the admin database of that cluster, or in the opsmanager database which manages the clusters?
My requirement is to find the users who haven't logged in for 60 days and revoke their roles, and ultimately to delete the users who have no roles attached after a fixed period of time.
I do understand that you store the login details in audit logs. But that would be a tedious process at…
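A hedged mongosh sketch of that workflow, assuming the requested data existed as an admin.userLastLogin collection holding { user, db, lastLogin } documents (the collection and field names here are purely hypothetical):
const cutoff = new Date(Date.now() - 60 * 24 * 60 * 60 * 1000);   // 60 days ago
db.getSiblingDB("admin").getCollection("userLastLogin")
  .find({ lastLogin: { $lt: cutoff } })
  .forEach(u => {
    const target = db.getSiblingDB(u.db);             // "admin" or "$external"
    const info = target.getUser(u.user);
    if (info && info.roles.length > 0) {
      target.revokeRolesFromUser(u.user, info.roles); // revoke roles after 60 idle days
    }
  });
// A later pass could drop users that have had no roles for the agreed grace period.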
4 votes -
Add timestamps to user documents
Most database technologies store this metadata by default.
Because the expected data volume and change rate of this attribute will most probably be low, there should be no reason not to store this information.
Of course this information might already be available in audit files, but first: auditing isn't enabled by default.
Second: most database users won't have access to this file/info, and third: most users won't expect this info in a separate file (reminder: when it comes to data/schema modelling, MongoDB recommends storing the data where it belongs, so the metadata of a user document should also be…
12 votes -
Metadata for collections
I would like to be able to store metadata about a collection such as a description or link to sources, along with the name of the collection.
1 vote -
Flatten arrays in group stage
Have group operators to flatten document arrays into a single one with or without repeated elements.
So ->
doc1 = {arr: [1,2,3,4], gr: "group"}, doc2 = {arr: [5, 6, 7, 8], gr: "group"}
{$group: {_id: "$gr", arrays: {$***: "$arr"} } }
=>
{_id: "group", arrays: [1, 2, 3, 4, 5, 6, 7, 8]}
2 votes -
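A workaround that is possible today with existing operators (a sketch, assuming a collection named coll; using $setUnion instead of $concatArrays in the $reduce would drop repeated elements):
db.coll.aggregate([
  { $group: { _id: "$gr", arrays: { $push: "$arr" } } },    // collect the per-document arrays
  { $project: { arrays: { $reduce: {
      input: "$arrays",
      initialValue: [],
      in: { $concatArrays: ["$$value", "$$this"] }          // flatten into a single array
  } } } }
])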
tail queries functionality
We'd like to log the underlying MongoDB queries issued against the database, similar to https://github.com/mrsarm/mongotail, but as something official.
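A partial workaround available today (a sketch; it is per-database and adds profiling overhead, unlike an official mongotail-style feature): enable the profiler and read the captured commands back from system.profile.
db.setProfilingLevel(2)                      // capture all operations on this database
db.getCollection("system.profile")
  .find({ op: "query" })
  .sort({ ts: -1 })
  .limit(5)
  .forEach(p => printjson(p.command))        // the underlying command documents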
1 vote -
Change stream Monitoring/Alerting in management
How to monitor change stream activity per namespace:
1. Number of Change streams per namespace
2. Set + Manage the allowed number of change streams
3. A metrics UI tab for OpsManager.
4. Alert if the number of change streams exceeds the set limit.
This will assist in managing the cluster and also avoid any issues that may arise from highly demanding change streams that take up RAM and compute.
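A rough way to inspect open cursors today (a sketch only, not a substitute for the requested per-namespace metrics and alerting): list idle cursors with $currentOp and group them by namespace; change streams can be recognised by a $changeStream stage in cursor.originatingCommand.pipeline.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { idleCursors: true, allUsers: true } },
  { $match: { type: "idleCursor" } },
  { $group: { _id: "$ns", openCursors: { $sum: 1 } } }   // count per namespace
])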
1 vote -
Cascading delete for DBRefs
Since transactions were added in 2018, and they work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client:
1. start a transaction
2. fetch document
3. look for dbref fields
4. fetch those docs
5. continue at 2 until all docs have been found, stopping at branches when a doc has already been fetched
6. go back in reverse and delete all of them
7. commit transaction
If this is possible to do…
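A rough mongosh sketch of the steps above; the database name, root collection, and root _id are placeholders, only top-level DBRef fields are followed, and DBRef property names vary between shells/drivers ($ref/$id vs collection/oid):
const rootId = ObjectId();                 // placeholder _id of the document to delete
const session = db.getMongo().startSession();
session.startTransaction();
try {
  const sdb = session.getDatabase("mydb");
  const seen = new Set();                  // "collection:id" keys, stop at visited branches
  const toDelete = [];                     // documents in discovery order
  function collect(collName, id) {
    const key = collName + ":" + id;
    if (seen.has(key)) return;
    seen.add(key);
    const doc = sdb.getCollection(collName).findOne({ _id: id });
    if (!doc) return;
    toDelete.push({ coll: collName, id: id });
    for (const v of Object.values(doc)) {
      const refColl = v && (v.$ref || v.collection);
      const refId = v && (v.$id || v.oid);
      if (refColl && refId) collect(refColl, refId);   // recurse into referenced docs
    }
  }
  collect("orders", rootId);
  for (const t of toDelete.reverse()) {    // delete in reverse discovery order
    sdb.getCollection(t.coll).deleteOne({ _id: t.id });
  }
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}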
2 votes -
Support change streams without service discovery
Currently, change streams are not supported in standalone instances, so testing change stream functionality requires a one-node replica set. However, promoting a standalone node to a one-node replica set requires a call to rs.initiate(config), which requires host and port information so that clients can connect; something that is not required for standalone nodes. This means change stream support is conflated with service discovery. It becomes impossible, for example, to create a docker image that boots as a single-node replica set, while it's trivial to make a docker image that boots as a standalone server.
Various ideas that would make…
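For reference, this is what the promotion requires today; the replica set name and host:port below are placeholders, and the host is exactly the piece of service-discovery information the poster would like to avoid:
rs.initiate({
  _id: "rs0",
  members: [ { _id: 0, host: "mongo1.example.net:27017" } ]   // must be resolvable by clients
})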
1 vote -
add IO throughput related fields to 'serverStatus' output
There are no IO throughput related fields in the result of serverStatus; in FTDC this is instead available in the disk metrics.
We need it in the serverStatus output so that we can monitor it.
3 votes -
Avoid truncating the query on the Atlas profiler or system.profile collection
Slow running queries that are captured in the system.profile collection or on the profiler page of Atlas are truncated if the query is too long. As an application DBA, it is difficult to analyse a slow query without being able to see the actual, full command. The current limit on the command document is 50Kb. Please consider revisiting this limitation to avoid truncating queries.
1 vote -
Expose individual command execution time
Many MongoDB drivers currently expose events (CommandSucceededEvent, say) which provide an elapsed time. However, that elapsed time is the round-trip time, which is not super useful as it can be measured by a programmer manually. It would be neat if there were a way to get the actual time spent by the server on a per-command basis. This data is computed somewhere, as it is exposed in Atlas metrics as Execution Time.
There's the explain facility, but that only gives an estimate of a query's cost. I would be interested in knowing how much time the server spent…
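For comparison, a Node.js driver sketch of what is available today (the connection string and namespace are placeholders): command monitoring reports a client-measured round-trip duration, not the server-side execution time being requested.
const { MongoClient } = require("mongodb");
const client = new MongoClient("mongodb://localhost:27017", { monitorCommands: true });
client.on("commandSucceeded", ev => {
  // ev.duration is the round trip measured by the client, not time spent on the server
  console.log(ev.commandName, ev.duration, "ms");
});
client.connect().then(() => client.db("test").collection("c").findOne({}));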
1 vote -
Improve fortification coverage with _FORTIFY_SOURCE=3
The MongoDB Server codebase uses the _FORTIFY_SOURCE=2 fortification level (e.g. see v7.0, the latest at the moment: https://github.com/mongodb/mongo/blob/v7.0/SConstruct#L4698).
Consider changing it to the new fortification level (_FORTIFY_SOURCE=3) provided by GCC 12 to improve the DB's security.
See also:
https://fedoraproject.org/wiki/Changes/Add_FORTIFY_SOURCE%3D3_to_distribution_build_flags
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
1 vote -
$merge
Report number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged doc results as a pipeline result to pass to additional stages.
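For context, a typical $merge stage today (collection and field names are illustrative); it writes the results but reports nothing back about how many documents were matched, merged, or skipped:
db.sales.aggregate([
  { $group: { _id: "$region", total: { $sum: "$amount" } } },
  { $merge: { into: "regionTotals", whenMatched: "replace", whenNotMatched: "insert" } }
])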
3 votes -
I believe the future is for AI to assist the user in simple but sometimes frustrating tasks like connecting or finding the correct build
An Artificial Intelligence assistant would be very useful for helping the user find the correct configuration and set up connections. There are many deprecated components, especially if you are trying to integrate an IoT platform like a Raspberry Pi. It would be great for the system to recognize what you are trying to do and guide you along the right path.
1 vote -
amazon linux 2023 (AL2023) support for ARM
Amazon Linux 2023 (AL2023) support for ARM (MongoDB and CloudManager)
1 vote -
Add SHA2, SHA3 and ECDSA functions to agg framework
It is very useful (and valuable security-wise) to be able to reverify hashes and signatures "on-engine" instead of dragging material out to a client app and running the algo there. The implementations are straightforward and everywhere now so it's not a huge lift for the backend. Example use:
aggregate([
  {$match: { /* whatever */ }},
  {$addFields: {
    hashok: {$cond: [ {$eq: [ {$sha3: "$path.to.struct"}, "$path.to.stored.sha3" ]}, 1, 0 ]},
    sigok: {$verify: { sig: "$path.to.sig", pubkey: "$path.to.pubkey", algo: "name of curve to use, e.g. SECP256k1" }}
  }}
])
The digest function would operate on the raw BSON behind the scenes.
1 vote -
Lock the document field (not the entire document)
Hi,
according to this reference: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions
When I lock a document by setting a field to a new ObjectID, the whole document is locked!
Idea:
Operations:
- I have three fields: A, B, C.
- I lock field A with a new ObjectID in transaction T1.
- I lock field B with a new ObjectID in transaction T2.
Behaviors (high performance):
- In transaction T2: if field A is updated, a writeConflict error occurs.
- In transaction T1: if field B is updated, a writeConflict error occurs.
- Outside of transactions: if field A is updated, it waits for …
1 vote -
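For context, a sketch of the document-level "select for update" pattern the post above refers to (names are illustrative): bumping any field to a new ObjectID inside a transaction locks the whole document, which is what the poster would like to narrow to a single field.
const docId = ObjectId();                             // placeholder _id
const session = db.getMongo().startSession();
session.startTransaction();
const docs = session.getDatabase("mydb").getCollection("docs");
docs.updateOne({ _id: docId }, { $set: { lockA: ObjectId() } });  // locks the entire document
// a concurrent transaction writing to this document now gets a writeConflict error
session.commitTransaction();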
Improve the mongo query language
Sometimes I find the Mongo query language is not put together very well; it feels like a patch job. It would be nice if you could make the query language easier to reason about. It would be awesome if you could introduce a fluent-style API builder instead of building BSON documents.
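Purely illustrative, the contrast the poster seems to have in mind; the fluent API shown second does not exist:
db.users.find({ age: { $gte: 18 }, status: "active" })        // today: build a BSON document
// db.users.where("age").gte(18).and("status").eq("active")   // hypothetical fluent builder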
1 vote -
ARM support
Can we get ARM packages for Debian 11? They are required for Bitnami to add ARM support to their Mongo charts.
30 votes