Database
-
Support indexes on single array elements (e.g. "MyArray.0")
Currently, if I create an index on an array field and specify an element number ("MyArray.0"), the system still indexes the whole array as normal.
Granted, this might increase the index size, but at least the index is still usable when querying MyArray.0 later.
However, it would be better to be able to index individual numbered elements. For example, in a compound index I might need to check whether two arrays are non-empty, but I can't use a compound index for a query on MyArray.0 and MyOtherArray.0 at the same time due to the multi-array limitation.
For MongoDB folks with access to…
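A minimal mongosh sketch of the limitation described here; the collection and field names are illustrative. Because "MyArray.0" is still treated as a path into an array, both keys in the compound index are multikey, so documents containing both arrays cannot be indexed together:
db.MyCollection.createIndex({ "MyArray.0": 1, "MyOtherArray.0": 1 })
// inserting a document with both arrays then fails, typically with
// "cannot index parallel arrays" on current versions
db.MyCollection.insertOne({ MyArray: [1, 2], MyOtherArray: [3] })
// the request: index only element 0 of each array so that a query such as
db.MyCollection.find({ "MyArray.0": { $exists: true }, "MyOtherArray.0": { $exists: true } })
// could be served by one compound index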
1 vote -
Metadata for collections
I would like to be able to store metadata about a collection such as a description or link to sources, along with the name of the collection.
1 vote -
tail queries functionality
We'd like to log the underlying MongoDB queries issued against the database. Similar to https://github.com/mrsarm/mongotail but something official
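A rough sketch of the closest built-in approach today, assuming the database profiler is acceptable (this is essentially what mongotail builds on):
// profiling level 2 records every operation against the current database
db.setProfilingLevel(2)
// show the most recent operations, newest first, similar to tailing a query log
db.system.profile.find().sort({ ts: -1 }).limit(10)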
1 vote -
Change stream Monitoring/Alerting in management
How to monitor change stream activity per namespace:
1. Number of Change streams per namespace
2. Set + Manage the allowed number of change streams
3. A metrics UI tab for Ops Manager.
4. Alert if the number of change streams exceeds the set limit.
This will assist in managing the cluster and help avoid issues that may arise from highly demanding change streams that take up RAM and compute.
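A rough mongosh sketch of what can be approximated today with $currentOp; the originatingCommand field layout is assumed and may differ between server versions:
// count open change stream cursors per namespace
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleCursors: true } },
  { $match: { "cursor.originatingCommand.pipeline.0.$changeStream": { $exists: true } } },
  { $group: { _id: "$ns", changeStreams: { $sum: 1 } } }
])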
1 vote -
Cascading delete for DBRefs
Since transactions were added in 2018, which work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client:
1. start a transaction
2. fetch document
3. look for dbref fields
4. fetch those docs
5. continue at step 2 until all docs have been found, stopping at branches when a doc has already been fetched
6. go back in reverse and delete all of them
7. commit transaction
If this is possible to do…
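A rough mongosh sketch of the client-side procedure outlined above. The "orders" collection, the rootId variable, and the assumption that every DBRef points into the same database are illustrative; there is no retry logic, and only top-level DBRef fields are followed:
const session = db.getMongo().startSession();
session.startTransaction();
try {
  const sdb = session.getDatabase(db.getName());
  const seen = new Set();   // avoid revisiting branches
  const found = [];         // discovery order
  const queue = [{ coll: "orders", id: rootId }];
  while (queue.length > 0) {
    const next = queue.shift();
    const key = `${next.coll}:${next.id}`;
    if (seen.has(key)) continue;
    seen.add(key);
    const doc = sdb.getCollection(next.coll).findOne({ _id: next.id });
    if (!doc) continue;
    found.push(next);
    for (const value of Object.values(doc)) {
      if (value instanceof DBRef) {              // DBRef exposes collection and oid here
        queue.push({ coll: value.collection, id: value.oid });
      }
    }
  }
  for (const ref of found.reverse()) {           // delete in reverse discovery order
    sdb.getCollection(ref.coll).deleteOne({ _id: ref.id });
  }
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}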
1 vote -
Support change streams without service discovery
Currently, change streams are not supported on standalone instances, so testing change stream functionality requires a one-node replica set. However, promoting a standalone node to a one-node replica set requires a call to rs.initiate(config), which requires host and port information so that clients can connect; something that is not required for standalone nodes.
This means change stream support is conflated with service discovery. It becomes impossible, for example, to create a Docker image that boots as a single-node replica set, while it's trivial to make a Docker image that boots as a standalone server. Various ideas that would make…
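A minimal sketch of the single-node replica set setup this idea wants to avoid: the server must be started with a replica set name and then explicitly initiated with a member host:port before change streams work.
// mongod --replSet rs0 --port 27017 --dbpath /data/db
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })
// change streams only become usable once the member reports PRIMARY
rs.status().members[0].stateStr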
1 vote -
Avoid truncating the query on the Atlas profiler or system.profile collection
Slow-running queries captured in the system.profile collection or on the Atlas Profiler page are truncated if the query is too long. As an application DBA, it is difficult to analyse a slow operation without seeing the actual query. The current limit on the command document is 50Kb. Please consider revisiting this limit so that queries are not truncated.
1 vote -
Expose individual command execution time
Many MongoDB drivers currently expose events (CommandSucceededEvent, say) which provide an elapsed time. However, that elapsed time is the round-trip time, which is not super useful as it can be measured by the programmer manually. It would be neat if there were a way to get the actual time spent by the server on a per-command basis. This data is computed somewhere, as it is exposed in Atlas metrics as Execution Time.
There's the explain facility, but this is just to get an estimate of a query's cost. I would be interested in knowing how much time the server spent…
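A sketch of what drivers expose today, using the Node.js driver's command monitoring; the connection string and namespace are placeholders. The duration reported here is measured on the client, which is exactly the round-trip number this idea finds insufficient:
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017", { monitorCommands: true });
  client.on("commandSucceeded", (event) => {
    // client-side elapsed time, including network latency
    console.log(`${event.commandName} took ${event.duration} ms (round trip)`);
  });
  await client.db("test").collection("items").findOne({});
  await client.close();
}
main();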
1 vote -
Improve fortification coverage with _FORTIFY_SOURCE=3
The MongoDB Server codebase uses the _FORTIFY_SOURCE=2 fortification level (e.g. see v7.0, the latest at the moment: https://github.com/mongodb/mongo/blob/v7.0/SConstruct#L4698).
Consider changing it to the new fortification level (_FORTIFY_SOURCE=3) provided by GCC 12 to improve the DB's security. See also:
https://fedoraproject.org/wiki/Changes/Add_FORTIFY_SOURCE%3D3_to_distribution_build_flags
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
1 vote -
I believe the future is for AI to assist the user in simple but sometimes frustrating tasks like connecting or finding the correct build
An Artificial Intelligence assistant would be very useful for finding the correct configuration and helping set up connections. There are many deprecated components, especially if you are trying to integrate an IoT platform like a Raspberry Pi. It would be great for the system to recognize what you are trying to do and guide you along the right path.
1 vote -
Amazon Linux 2023 (AL2023) support for ARM
Amazon Linux 2023 (AL2023) support for ARM (MongoDB and CloudManager)
1 vote -
Add SHA2, SHA3 and ECDSA functions to agg framework
It is very useful (and valuable security-wise) to be able to reverify hashes and signatures "on-engine" instead of dragging material out to a client app and running the algo there. The implementations are straightforward and everywhere now so it's not a huge lift for the backend. Example use:
aggregate([
  { $match: { /* whatever */ } },
  { $addFields: {
      hashok: { $cond: [ { $eq: [ { $sha3: "$path.to.struct" }, "$path.to.stored.sha3" ] }, 1, 0 ] },
      sigok: { $verify: { sig: "$path.to.sig", pubkey: "$path.to.pubkey", algo: "name of curve to use, e.g. SECP256k1" } }
  } }
])
The digest function would operate on the raw BSON behind the scenes.
1 vote -
Lock the document field (not the entire document)
Hi
According to this reference: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions
When I lock a document by setting a field to a new ObjectId, the whole document is locked!
Idea:
Operations:
I have three fields: A, B, C.
I locked field A with a new ObjectId in transaction T1.
I locked field B with a new ObjectId in transaction T2.
Behaviors (high performance):
- In transaction T2: if field A is updated, a writeConflict error occurs.
- In transaction T1: if field B is updated, a writeConflict error occurs.
- Outside of transactions: if field A is updated, it waits for …
1 vote -
Improve the mongo query language
Sometimes I find the Mongo query language not very well put together; sometimes it feels like a patch job. It would be nice if you could make your query language easier to reason about. It would be awesome if you could introduce a fluent-style API builder instead of building BSON documents.
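Purely as an illustration of the request, here is a query in today's document-based syntax next to a hypothetical fluent builder; none of the builder methods below exist:
// today: the query is assembled as a BSON document
db.users.find({ age: { $gte: 18 }, status: "active" }).sort({ age: -1 }).limit(10)
// hypothetical fluent builder (illustrative only):
// db.users.query().where("age").gte(18).where("status").eq("active").sortBy("age", "desc").limit(10)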
1 vote -
Add functionality to specify the readConcern level at db.collection.findOne()
Add functionality to specify the readConcern at db.collection.findOne(). As of version 5.0.14, it's not supported.
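A possible mongosh workaround today, assuming cursor.readConcern() is available in the shell version in use: express the findOne as a find with a cursor-level read concern.
// someId is a placeholder for the lookup key
db.collection.find({ _id: someId }).readConcern("majority").limit(1).next()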
1 vote -
Combine the reshardCollection+mongosync idea to support a remote collection on a separate new cluster
Great for prod productivity and a 99.999 SLA if Mongo could support this.
For example, given "mydb.mycoll" in the current cluster being sharded with the {zip:1} shard key:
1/ New cluster: sh.shardCollection( "mydb.mycoll", {name:1, phone:1} )
2/ "Mongocopy" until mydb.mycoll in the new and current clusters are synced.
Very much like how reshardCollection works now, but targeting a remote mydb.mycoll on the new cluster instead of the local collection system.resharding.554c8995-2ec9-4bda-9401-a3ad475b9c8c.
This is a combination of mongosync and reshardCollection in one.
A prod cluster is often very big and busy and requires no downtime under the 99.999 SLA (service level agreement). Being able to reshardCollection to…
1 vote -
Throttle sessions which use too many resources
We have different types of applications:
1. Writer - to load data into MongoDB from different data sources
2. Reader - to read data and display it to the end user.
Normally, there is a strict SLA for readers, but no SLA (or a less strict one) for writers. We want to make sure that writers will not impact readers when, for some reason, a lot of data arrives from external sources. So we would like to slow down writers for the sake of readers.
Writers can saturate CPUs and IO; that's why we want an option to leave some room…
1 vote -
Add hash function (eg. md5) to aggregation pipeline
I would like to implement hash-based sharding in my own application on top of MongoDB. For that purpose, I would like to pull a stable pseudorandom subset of documents into each of my servers, and I would like to do so without enlarging the documents by adding additional fields, and without using JavaScript in the aggregation pipeline (for performance reasons).
The idea: add a hash function, such as md5, to aggregation pipelines. The function would accept an object/array containing the data to be hashed, and would return the hash, ideally as a number.
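An illustration of the proposal; the $md5 operator below is hypothetical and does not exist today. Each application server would pull only the documents whose hash falls into its bucket, giving a stable pseudorandom partition without extra fields or JavaScript:
db.events.aggregate([
  { $addFields: { bucket: { $mod: [ { $md5: "$_id" }, 4 ] } } },  // hypothetical $md5 returning a number
  { $match: { bucket: 2 } },   // this server owns bucket 2 of 4
  { $project: { bucket: 0 } }
])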
1 vote -
$dateDiff operator should be useful to calculate age
The documentation says: "For example, two dates that are 18 months apart would return 1 year difference instead of 1.5 years." But if startDate is 2021-08-01 and endDate is 2023-02-01, the result is a 2-year difference. It would be good if this operator could be used to calculate age.
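A small sketch reproducing the behaviour described above; it assumes MongoDB 5.1+ so that $documents can supply a single input document. $dateDiff with unit "year" counts calendar-year boundary crossings rather than elapsed whole years:
db.aggregate([
  { $documents: [ {} ] },
  { $project: { years: { $dateDiff: {
      startDate: ISODate("2021-08-01"),
      endDate: ISODate("2023-02-01"),
      unit: "year"
  } } } }
])
// returns { years: 2 }, whereas an age calculation would expect 1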
1 vote