Database
230 results found
Cascading delete for DBRefs
Since transactions were added in 2018, and they work across collections (https://www.mongodb.com/docs/manual/core/transactions/) and across shards (https://www.mongodb.com/docs/manual/core/transactions-sharded-clusters/), shouldn't cascading deletes be possible now? I have only worked with SQL transactions in the past, but my intuition is that it should be fairly easy to do this in a client (a code sketch follows the steps below):
1. start a transaction
2. fetch the document
3. look for DBRef fields
4. fetch those docs
5. repeat from step 2 until all docs have been found, stopping at branches when a doc has already been fetched
6. go back in reverse and delete all of them
7. commit the transaction
If this is possible to do…
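A minimal mongosh sketch of those steps; the database name, starting collection, rootId, and the DBRef-detection helper are all illustrative assumptions, and the transaction requires a replica set:

// Sketch only: walk DBRefs depth-first inside a transaction, then delete
// in reverse discovery order. Names below are invented for illustration.
function dbrefOf(v) {
  // Drivers may surface DBRefs as {$ref, $id} subdocuments or as DBRef
  // objects with {collection, oid}; handle both shapes defensively.
  if (v && typeof v === "object") {
    if (v.$ref !== undefined && v.$id !== undefined) return { coll: v.$ref, id: v.$id };
    if (v.collection !== undefined && v.oid !== undefined) return { coll: v.collection, id: v.oid };
  }
  return null;
}

const session = db.getMongo().startSession();
session.startTransaction();
try {
  const sdb = session.getDatabase("mydb");          // assumed database
  const seen = new Set();                           // "coll:id" keys already visited
  const stack = [{ coll: "orders", id: rootId }];   // assumed starting document
  const toDelete = [];
  while (stack.length > 0) {
    const { coll, id } = stack.pop();
    const key = `${coll}:${id}`;
    if (seen.has(key)) continue;                    // stop at already-fetched branches
    seen.add(key);
    const doc = sdb.getCollection(coll).findOne({ _id: id });
    if (doc === null) continue;
    toDelete.push({ coll, id });
    for (const v of Object.values(doc)) {
      const ref = dbrefOf(v);
      if (ref !== null) stack.push(ref);            // repeat from step 2
    }
  }
  for (const { coll, id } of toDelete.reverse()) {  // go back in reverse
    sdb.getCollection(coll).deleteOne({ _id: id });
  }
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
}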
1 vote
Support change streams without service discovery
Currently, change streams are not supported on standalone instances, so testing change stream functionality requires a one-node replica set. However, promoting a standalone node to a one-node replica set requires a call to rs.initiate(config), which requires host and port information so that clients can connect; something that is not required for standalone nodes. This means change stream support is conflated with service discovery. It becomes impossible, for example, to create a Docker image that boots as a single-node replica set, while it's trivial to make a Docker image that boots as a standalone server. Various ideas that would make…
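For reference, the single-node initiation the post refers to looks like this in mongosh; the hostname "mongo1" stands in for the service-discovery detail that has to be baked in:

rs.initiate({
  _id: "rs0",
  // This host:port pair is exactly the information a standalone node
  // never needs, but a one-node replica set cannot start without.
  members: [{ _id: 0, host: "mongo1:27017" }]
})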
1 vote
add IO throughput related fields to 'serverStatus' output
There are no IO throughput related fields in the result of serverStatus; instead, in FTDC this is available in the disk metrics. We need it in the serverStatus output so that we can monitor it.
3 votes
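A sketch of how the fields requested in the idea above might be consumed; the "diskIO" section and its field names are invented here for illustration, since today these metrics only appear in FTDC:

const status = db.serverStatus();
// Prints "undefined" today; the request is for the server to populate an
// IO-throughput section like this hypothetical one so agents can poll it.
printjson(status.diskIO);   // e.g. { readBytesPerSec: ..., writeBytesPerSec: ... }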
Avoid truncating the query on the Atlas profiler or system.profile collection
Slow-running queries that are captured in the system.profile collection or on the profiler page of Atlas are truncated if the query is too long. As an Application DBA, it is difficult to analyse a query without being able to see the actual query text. The current limit on the command document is 50 KB. Please consider revisiting this limitation to avoid truncation of queries.
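For reference, a minimal way to capture and inspect slow queries in mongosh (the 100 ms threshold is illustrative); the command field in the output is where the ~50 KB truncation bites:

// Profile operations slower than 100 ms, then look at the latest captures.
db.setProfilingLevel(1, { slowms: 100 })
db.system.profile.find({}, { command: 1, millis: 1 }).sort({ ts: -1 }).limit(5)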
1 vote
Expose individual command execution time
Many MongoDB drivers currently expose events (CommandSucceededEvent, say) which provide an elapsed time. However, that elapsed time is the round-trip time, which is not super useful, as it can be measured by a programmer manually. It would be neat if there were a way to get the actual time spent by the server on a per-command basis. This data is computed somewhere, as it is exposed in the Atlas metrics as Execution Time.
There's the explain facility, but that is just for getting an estimate of a query's cost. I would be interested in knowing how much time the server spent…
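For comparison, this is the round-trip timing drivers already expose, shown with the Node.js driver's command monitoring (the connection string is illustrative); event.duration includes network time, which is exactly what the idea wants separated from server execution time:

const { MongoClient } = require("mongodb");
const client = new MongoClient("mongodb://localhost:27017", { monitorCommands: true });
client.on("commandSucceeded", (event) => {
  // duration is the full round trip in milliseconds, not server-side time
  console.log(`${event.commandName}: ${event.duration} ms`);
});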
1 vote
Improve fortification coverage with _FORTIFY_SOURCE=3
The MongoDB Server codebase uses the _FORTIFY_SOURCE=2 fortification level (e.g. see v7.0, the latest at the moment: https://github.com/mongodb/mongo/blob/v7.0/SConstruct#L4698). Consider changing it to the new fortification level (_FORTIFY_SOURCE=3) provided by GCC 12 to improve the server's security.
See also:
https://fedoraproject.org/wiki/Changes/Add_FORTIFY_SOURCE%3D3_to_distribution_build_flags
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
1 vote
$merge
Report number of docs matched, merged, skipped, etc. from a $merge stage. Alternatively, return the merged doc results as a pipeline result to pass to additional stages.
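A sketch of the gap, using an assumed sales rollup; $merge must be the final stage and currently produces no output documents, so there is nothing to report or pass on:

db.sales.aggregate([
  { $group: { _id: "$sku", total: { $sum: "$qty" } } },
  // Writes to sales_totals and returns nothing to the pipeline.
  { $merge: { into: "sales_totals", whenMatched: "replace", whenNotMatched: "insert" } }
])
// The idea: report counts like { matched, merged, skipped } afterwards,
// or emit the merged documents so additional stages could consume them.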
2 votes
I believe the future is for AI to assist the user in simple but sometimes frustrating tasks like connecting or finding the correct build
An Artificial Intelligence assistant would be very useful for finding the correct configuration and helping the user set up connections. There are many deprecated components, especially if you are trying to integrate an IoT platform like the Raspberry Pi. It would be great for the system to recognize what you are trying to do and guide you along the right path.
1 vote
amazon linux 2023 (AL2023) support for ARM
Amazon Linux 2023 (AL2023) support for ARM (MongoDB and CloudManager)
1 vote
Add SHA2, SHA3 and ECDSA functions to agg framework
It is very useful (and valuable security-wise) to be able to reverify hashes and signatures "on-engine" instead of dragging material out to a client app and running the algorithm there. The implementations are straightforward and available everywhere now, so it's not a huge lift for the backend. Example use:
aggregate([
  {$match: whatever},
  {$addFields: {
    hashok: {$cond: [{$eq: [{$sha3: "$path.to.struct"}, "$path.to.stored.sha3"]}, 1, 0]},
    sigok: {$verify: {sig: "$path.to.sig", pubkey: "$path.to.pubkey", algo: "name of curve to use, e.g. SECP256k1"}}
  }}
])
The digest function would operate on the raw BSON behind the scenes.
1 vote
Lock the document field (not the entire document)
Hi,
according to this reference: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions
When I lock a document by setting a field to a new ObjectID, the whole document is locked!
Idea:
Operations:
- I have three fields: A, B, C.
- I lock field A with a new ObjectID in transaction T1.
- I lock field B with a new ObjectID in transaction T2.
Behaviors (high performance):
- In transaction T2: if field A is updated, a writeConflict error occurs.
- In transaction T1: if field B is updated, a writeConflict error occurs.
- Outside of transactions: if field A is updated, it waits for …
1 vote
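For context, a minimal mongosh sketch of the locking pattern from the blog post cited in the idea above (database, collection, and field names are illustrative); today the write conflict is tracked per document, which is the behavior this idea wants narrowed to a single field:

const session = db.getMongo().startSession();
session.startTransaction();
const coll = session.getDatabase("mydb").getCollection("accounts");
// "Lock" the document by overwriting a lock field with a fresh ObjectId;
// any concurrent transaction writing this document now gets writeConflict,
// even if it only touches an unrelated field.
coll.updateOne({ _id: someId }, { $set: { lockA: new ObjectId() } });
// ... read and modify other fields under the lock ...
session.commitTransaction();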
Improve the mongo query language
Sometimes I find the Mongo query language not very well put together; it can feel like a patch job. It would be nice if you could make the query language easier to reason about. It would be awesome if you could introduce a fluent-style API builder instead of building BSON documents.
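A purely hypothetical sketch of what such a fluent builder could look like next to today's document style; the builder methods below do not exist and are invented here for illustration:

// Today: queries are built as BSON-like documents.
db.users.find({ age: { $gte: 18 }, status: "active" })

// Hypothetical fluent equivalent (invented API, shown commented out):
// db.users.query().where("age").gte(18).where("status").eq("active").find()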
1 vote
ARM support
Can we support ARM packages for Debian 11? They are required for Bitnami to add ARM support to their mongo charts.
10 votes
Extend schema validation to be able to enforce referential integrity between collections
Where a relational database uses two tables to store a 1:many "parent-child" relationship between entities, MongoDB mostly stores the child documents in an array field as part of the parent document. This automatically ensures referential integrity, in that:
- a child document cannot be inserted or updated to refer to a non-existent parent, and
- a parent document cannot be deleted such that it leaves "orphaned" child documents
However, there are situations where the number and/or size of the child documents makes embedding them all in their parent unworkable, due to the 16 megabyte document size limit if…
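For contrast, a sketch of what schema validation can express today (collection and field names are assumed); note there is no way to require that parentId actually exists in a parents collection, which is the gap this idea targets:

db.createCollection("children", {
  validator: { $jsonSchema: {
    bsonType: "object",
    required: ["parentId"],
    // Constrains the field's type only; cross-collection existence
    // checks are not expressible in a validator today.
    properties: { parentId: { bsonType: "objectId" } }
  }}
})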
2 votes
Add functionality to specify the readConcern level at db.collection.findOne()
Add functionality to specify the readConcern level at db.collection.findOne(). As of version 5.0.14 it is not supported.
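A possible workaround today, assuming the shell's cursor-level readConcern helper; the same lookup expressed through find() can set the level that findOne() cannot:

// someId is a placeholder for the lookup key
db.collection.find({ _id: someId }).readConcern("majority").limit(1).next()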
1 vote
geo
It would be nice to get the length of a LineString in a GeoJSON object, or the possibility to write an aggregation to calculate it.
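For reference, a client-side sketch of what such an operator would replace: summing haversine distances over a GeoJSON LineString's coordinate pairs (Earth radius in metres; sketch only):

function lineStringLength(ls) {
  const R = 6371000, rad = (d) => (d * Math.PI) / 180;
  let total = 0;
  for (let i = 1; i < ls.coordinates.length; i++) {
    const [lon1, lat1] = ls.coordinates[i - 1];   // GeoJSON order: [lon, lat]
    const [lon2, lat2] = ls.coordinates[i];
    const a = Math.sin(rad(lat2 - lat1) / 2) ** 2 +
              Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(rad(lon2 - lon1) / 2) ** 2;
    total += 2 * R * Math.asin(Math.sqrt(a));     // haversine segment length
  }
  return total;
}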
2 votes
Unique Indexes and Bulk Upserts for Time Series Collections
We would like to insert data in bulk into time series collections and identify the new data that has been inserted without the possibility of duplicates being inserted.
For regular collections this is achievable by adding a unique index and performing a bulk upsert (as any duplicates will be rejected due to the unique index).
For time series collections, however, unique indexes are not currently supported.
In addition, performing an upsert with the $setOnInsert option (which should only action insert operations) is also not currently supported for time series collections.
At the moment the only options appear to be:
(1) to…
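For reference, the regular-collection pattern described above (collection name, field names, and the batch variable are assumed); neither piece currently works on a time series collection:

// A unique index rejects duplicates outright.
db.readings.createIndex({ sensorId: 1, ts: 1 }, { unique: true })
// Bulk upsert: existing readings are left untouched, new ones inserted.
db.readings.bulkWrite(
  batch.map(r => ({
    updateOne: {
      filter: { sensorId: r.sensorId, ts: r.ts },
      update: { $setOnInsert: r },   // only acts when the upsert inserts
      upsert: true
    }
  })),
  { ordered: false }
)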
3 votes
Budget limit for serverless pay as you go mode
I was looking at the serverless pay-as-you-go option for my DB so I could have continuous backup and snapshots, but I found it too risky. Currently, the only protection a user has is alerts when RPUs go over a certain budget threshold. I would like to be able to set a budget limit that would prevent me from going over a pre-set daily budget. If you got hit with a DoS or some other brute-force attack, you could rack up lots of traffic and get an unexpected bill without such a limit.
2 votes
Combine reshardCollection+mongosync idea to support a remote collection on a separate new cluster
It would be great for prod productivity and the 99.999 SLA if mongo could support this. For example:
Given "mydb.mycoll" in the current cluster, sharded with the {zip:1} shard key:
1/ New cluster: sh.shardCollection( "mydb.mycoll", {name:1, phone:1} )
2/ "Mongocopy" until mydb.mycoll in the new and current clusters are synced.
Very much like how reshardCollection works now, but targeting a remote mydb.mycoll on the new cluster instead of the local collection system.resharding.554c8995-2ec9-4bda-9401-a3ad475b9c8c.
This is a combination of mongosync and reshardCollection in one.
A prod cluster is often very big and busy and requires no downtime under the 99.999 SLA (service level agreement). Being able to reshardCollection to…
1 vote