Database
log redaction customization
If we could customize the log redaction feature, we could choose where log redaction is applied. As of now it is applied to all log fields, which may create issues while troubleshooting.
2 votes
Easier way to troubleshoot storage use size discrepancy across nodes in the same replica set
While an initial sync may help here, it would be great if the product offered an easier way to identify the cause of a significantly different storage size across nodes in the same replica set (and so give better confidence that an initial sync is going to help).
1 vote
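A sketch of the manual diagnosis available today, assuming a hypothetical collection name and a direct connection to each replica set member; freeStorageSize (reusable space inside the data files not yet returned to the OS) is a common cause of such discrepancies:
// Run via a direct connection to each replica set member.
const s = db.runCommand({ collStats: "myCollection" }); // hypothetical collection
printjson({
  storageSize: s.storageSize,        // bytes on disk for the collection
  freeStorageSize: s.freeStorageSize // reusable space inside the files
});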
Feature to perform Machine Learning predictive analysis and classification in MongoDB
I want to bring machine learning compute and predictive analysis into MongoDB Atlas. Instead of ETLing my data out of Atlas to achieve this, I would reduce my architectural complexity by having an aggregation operator that does this on my documents stored in Atlas.
1 vote
Parallelize unionWith
Today the $unionWith aggregation stage is executed sequentially: first we query collection A, then collection B, and then the union occurs.
The process should be parallelized so that the queries run in parallel, with the union done as a best-effort tree merge, to speed up the overall elapsed time of the query.
1 vote
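A minimal mongosh sketch of the pattern in question, assuming hypothetical collections ordersA and ordersB; today the two reads happen one after the other:
// Hypothetical collections; MongoDB currently reads ordersA first,
// then ordersB, before merging the two result sets.
db.ordersA.aggregate([
  { $match: { status: "shipped" } },
  { $unionWith: {
      coll: "ordersB",
      pipeline: [ { $match: { status: "shipped" } } ]
  } }
])
The proposal is that both branches be queried concurrently before the merge.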
Scheduled stepdown for smoother primary election
Stepdown is a great tool that allows us to keep clusters operating smoothly. We use it for example when we want to perform some maintenance work on the host where the primary is currently running, to perform a rolling upgrade, and in many other cases we need to switch the primary to another node.
While electing a new primary is usually fast enough, for clusters with very high write traffic it sometimes leads to write errors on the application side. The reason is that drivers need to disconnect from the previous primary and connect to the new one, and…
1 vote
Named MongoDB Connections
When a service is acting erroneously and generating hundreds or thousands of connections, it's currently difficult to determine which service is doing so when you have 30+ services connecting to MongoDB.
My proposal is that we should optionally be able to specify a non-unique name for the connection in the MongoDB URL (possibly after a #), which would allow DB administrators to see how many connections of each name were connected at any given time, along with other metrics (operations/s per name, average/max query execution time per name, etc.).
Example URL:
mongodb://user:****@clustername.abcd.mongodb.net:27017/dbName?authSource=admin#myServiceName
2 votes
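For reference, a sketch of something adjacent that exists today: the appName connection string option (e.g. ?authSource=admin&appName=myServiceName), whose value surfaces in the server logs and in currentOp output. The request goes further by asking for per-name aggregate metrics:
// Operations from clients that set appName can already be grouped by name.
db.currentOp(true).inprog.filter((op) => op.appName === "myServiceName").length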
Add Relaxed mode support for the $out operator
Add Relaxed mode support for the $out operator, and include it as an option in the existing drivers.
1 vote
Write parts of database engine in ZIG
Zig is a zero-dependency, drop-in C/C++ compiler that supports cross-compilation out of the box. Implementing some parts of the MongoDB database engine core in Zig might bring performance improvements.
1 vote
Aggregations should allow an empty sort stage instead of returning an error
When you run an aggregation pipeline that contains an empty sort stage (like {"$sort": {}}), MongoDB returns the error message "$sort stage must have at least one sort key". It would be really helpful if such a stage would work and simply not apply any sorting at all.
For one, this would be more consistent with a find operation (e.g. db.runCommand({"find": "test", "sort": {}}) or db.test.find({}, {}, {"sort": {}})), which does not return an error but simply does not sort the results. More importantly, it would also make it easier for developers and frameworks to dynamically generate the…
1 vote
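A minimal mongosh sketch of the workaround such frameworks need today, assuming a hypothetical helper that builds the sort specification at runtime:
// sortSpec is built dynamically (hypothetical helper) and may be {}.
const sortSpec = buildSortFromUserInput();
const pipeline = [{ $match: { status: "active" } }];
// The stage must be omitted when the spec is empty, because { $sort: {} }
// is rejected with "$sort stage must have at least one sort key".
if (Object.keys(sortSpec).length > 0) {
  pipeline.push({ $sort: sortSpec });
}
db.test.aggregate(pipeline)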
Providing connection details with username for real-time connection monitoring
Currently, mongod.log provides connection details such as the remote IP and source port along with authentication information, but doesn't indicate whether the connection is in an active state or not.
serverStatus() only provides the number of current and active connections.
Example:
10.0.0.100:12345 - username active
10.0.0.101:12346 - username idle
2 votes
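For context, a sketch of what serverStatus() exposes today in mongosh: aggregate counts only, with no per-connection active/idle breakdown by user:
// Aggregate connection counters only; no per-connection state.
db.serverStatus().connections
// e.g. { current: 42, available: 51158, totalCreated: 1337, active: 7, ... }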
Query comment or metadata in change stream event
Our application uses change stream events to publish changes as Kafka events for our customers. Sometimes we need to decide whether an event should be sent via Kafka or not. At the moment the only way is to decide based on additional fields on our document.
It would be nice if there were a way to include query comments in change stream events, or some kind of metadata about the origin of the operation. This would be useful for deciding without any need to add additional fields to our data…
1 vote
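A minimal sketch of the current workaround described above, assuming a hypothetical orders collection carrying a shouldPublish flag and a hypothetical sendToKafka producer:
// Today the publish/skip decision has to live in the document itself.
const cursor = db.orders.watch([
  { $match: { "fullDocument.shouldPublish": true } }
]);
while (cursor.hasNext()) {
  sendToKafka(cursor.next()); // hypothetical Kafka producer call
}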
allow mongosync to migrate few field from source to destination for collection in database
right now mongosync will migrate all the fields from source to destination based on filter or non filter setting but there is no way to move few field out of all field for all document
to migrate few field from source to destination for collection in database with default primary key field (_ID )
requesting the feature in new mongosync version
1 vote
Export Backup Snapshots to GCP Bucket
As an Atlas user, I want to export Backup Snapshots to a GCP bucket because I don't have an AWS subscription.
2 votes
TTL index activity statistics
Dear all,
Presently, there is no visibility into or tracking of TTL index activity: no data is available for the user to see how much data has been deleted, and how often. I suggest having a separate "dictionary" collection with statistics for all TTL indexes, with a pre-defined data retention (e.g. the last month).
Thank you for looking at this.
Regards, Marina.
1 vote
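For context, a sketch of what is visible today in mongosh: serverStatus() exposes server-wide TTL counters, but nothing per index:
// Server-wide TTL activity only; not broken down per TTL index.
db.serverStatus().metrics.ttl
// e.g. { deletedDocuments: Long(12345), passes: Long(678) }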
Make $merge support DELETE operation
Currently, $merge only supports insert/update/upsert/merge behaviour. It would be great to support delete behaviour. A common use case would be in-place document deduplication/clean-up in a collection.
2 votes
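For reference, a minimal $merge call as it works today, assuming hypothetical staging and main collections; the whenMatched options (replace, keepExisting, merge, fail, or a pipeline) offer no way to delete matched documents:
db.staging.aggregate([
  { $merge: {
      into: "main",
      on: "_id",
      whenMatched: "merge",    // no "delete" equivalent exists today
      whenNotMatched: "insert"
  } }
])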
Sorting support on Array fields.
Presently, I'm attempting to arrange (sort) the documents within my collection based on a key nested within an array object. However, sorting isn't functioning as expected in this scenario.
A proper explanation with an example of what I mean is here:
https://www.mongodb.com/community/forums/t/how-to-sort-documents-in-collection-on-basis-of-array-fields/270867?u=samrat_n_a
1 vote
user authentication
Hi,
It would be extremely useful to be able to create users who can only connect to the database from specific networks or even specific IP addresses, similar to what is possible with MySQL.
For example, using the following commands:
CREATE USER 'user_name'@'10.214.3.0' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON shorturl.* TO 'user_name'@'10.214.3.0';
You can create a user who can access the database only from the network with the IP address 10.214.3.0.
I would like to know if it is possible to achieve similar functionality in MongoDB as well. This would be very useful for my purposes, as I want…
3 votes
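For reference, a sketch of a related capability that exists today: createUser accepts authenticationRestrictions with a clientSource list, which limits where a user may authenticate from (shown here with the network from the example above):
// Close to the MySQL example: this user can only authenticate
// from clients in the 10.214.3.0/24 network.
db.getSiblingDB("shorturl").createUser({
  user: "user_name",
  pwd: "password",
  roles: [ { role: "readWrite", db: "shorturl" } ],
  authenticationRestrictions: [
    { clientSource: ["10.214.3.0/24"] }
  ]
})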
Introduce a new field BucketLifeSpan Optional along with Granularity
An enhancement to MongoDB's management of time series collections could involve the introduction of a BucketLifeSpan attribute, in addition to the existing Granularity setting. This new, optional attribute would automate the duration a bucket can remain open, with the condition that Granularity should be less than or equal to BucketLifeSpan.
Consider a use case involving a time series collection for tracking data from 70,000 socket devices daily, with DeviceId as the metafield. Assuming data is organized into daily collections and granularity is set to minutes to optimally fill the buckets unless they reach their size limit.
For a collection named…
3 votes
Change Timeseries Bucket Memory Limit
Currently, there's no way to set a memory threshold for bucket allocation in MongoDB's time series collections. As bucket sizes increase and more collections are opened day after day, a limiting mechanism triggers for open buckets, leading to cache pressure and the premature closing of buckets under high load. It would be beneficial for users to be able to set a memory threshold for the time series bucket memory limit (I believe the limit is around 3 GB), enabling us to prevent early bucket closure in production environments. Alternatively, providing the option to manually close buckets could help manage…
2 votes