Database
x509 Authentication Supports Multiple CAs
I work for a large entity with a complicated security certificate structure: we are unable to create certs with specific CAs; rather, they are issued to us on request, so we don't always have control over the root or intermediate CA in the chain.
We have a situation where server certificates are signed by one CA but client certificates are signed by a different CA. All the CAs are installed and trusted in our Windows certificate store, but the current limitation of Mongo is that it only supports a single CA.
Would it be possible for Mongo to…
1 vote -
Don't Deprecate LDAP in future Major Version releases.
We have everything configured with LDAP, including Ops Manager agents, bind IDs, DB IDs, etc. We depend heavily on LDAP; it helps us in many ways, including rotation of passwords in CyberArk and no dependency on single passwords.
Kindly help us by not deprecating the LDAP features.
2 votes -
mongodump - add support for nsInclude/nsExclude
Currently mongodump supports excluding or including specific collections (--excludeCollection / --collection), but this requires the use of the --db option. It would be useful if mongodump could dump all databases, as it does by default, while also letting you specify a collection or collections not to dump.
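For reference, a sketch of the gap (database and collection names are hypothetical; the mongorestore flags shown do exist, the point is that mongodump lacks an equivalent):
# Works today, but only when scoped to a single database:
mongodump --db=sales --excludeCollection=audit_log
# mongorestore already accepts namespace patterns across all databases;
# the request is for mongodump to support the same kind of filter:
mongorestore --nsExclude="*.audit_log" dump/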
1 vote -
Maintain database ID/password profiles inside the database
The requirement is to maintain database IDs/passwords inside the database with standard details (last login date, password change date) and password controls such as failed login attempts, password lifetime, password complexity, password lock time, password reuse maximum, etc.
3 votes -
Allow NVMe clusters to autoscale up based on CPU and Memory
Provide autoscaling capability for MongoDB NVMe clusters based on CPU utilization and memory metrics. This feature would automatically provision additional resources when predefined thresholds are reached, ensuring optimal performance during usage spikes without manual intervention.
Currently, MongoDB Atlas NVMe-based clusters require manual vertical scaling, which leads to:
- Performance degradation during unexpected high-load periods
- Inefficient resource allocation requiring constant monitoring
- Potential application downtime during scaling operations
- DevOps overhead for continuous capacity planning
2 votes -
Leverage Group By/Distinct queries to use a regular index
When using Group By or Distinct queries on a database, provide the ability to leverage an existing index.
i.e., these queries today will not use an existing index:
db.collection.distinct("field")
db.collection.aggregate([
{ $group: { _id: "$category", count: { $sum: 1 } } }
])
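As a rough illustration of the current behaviour (collection and field names are hypothetical), distinct() on an indexed field can already be covered by a DISTINCT_SCAN, while the plain $group above generally cannot; explain() shows which plan was chosen:
// Hypothetical collection with an index on the grouped field
db.products.createIndex({ category: 1 });
// distinct() on an indexed field can use a DISTINCT_SCAN plan
db.products.distinct("category");
// The equivalent $group typically falls back to a collection scan today;
// explain() shows whether the index was used
db.products.explain("executionStats").aggregate([
  { $group: { _id: "$category", count: { $sum: 1 } } }
]);
2 votes -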
Disallow new connections that don't specify an appName field
Currently, as a DBA, it is sometimes hard to identify which service is causing load on a shared cluster/database. Some people connect for ad hoc workloads but don't specify an appName, making it hard to trace the load back to its owner. It would be useful to be able to require an appName for connections to a cluster to enforce this behavior.
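A minimal sketch of what enforcement would build on today (the appName value is hypothetical): clients can already tag themselves via the connection string, and $currentOp exposes that tag to the DBA; the request is to make the tag mandatory.
// Client side: set the tag in the connection string
// mongodb://host:27017/?appName=orders-service
// Server side: attribute operations by appName
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true } },
  { $match: { appName: "orders-service" } }
]);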
1 vote -
Certificate "friendlyName" (windows system certificate store)
mongod.exe:
currently there is only support for "subject" and "thumbprint" to select the certificate from the windows certificate store.
Is there a plan to implement the "friendly name" of the certificate as well?example mongod.cfg:
..
tls:
mode: requireTLS
certificateSelector: friendlyname=FRIENDLY-NAME1 vote -
deleteMany() execution should automatically disable the balancer on sharded clusters
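For context, a sketch of the manual steps this idea would automate (collection name and filter are hypothetical):
// What operators do by hand today on a sharded cluster
sh.stopBalancer();                       // pause chunk migrations
db.events.deleteMany({ expired: true }); // the large delete
sh.startBalancer();                      // resume migrations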
1 vote -
Execute $group on shardsPart instead of mergerPart
Basically what is described here: https://www.mongodb.com/community/forums/t/how-to-enforce-mongodb-to-execute-group-on-shardpart-of-the-execution-plan/267560
When running covered count queries that could be aggregated independently on the shards, there is still a lot of overhead because the shards have to report documents with _id to mongos.
It looks like this happens for count queries that use a $limit stage, but not for count queries without it.
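A minimal sketch of how to see where the $group actually runs (collection, index, and filter are hypothetical): on a sharded cluster, the explain output contains a splitPipeline section.
// Covered count-style aggregation on a hypothetical sharded collection
db.events.explain().aggregate([
  { $match: { tenantId: "t-1" } },
  { $group: { _id: null, n: { $sum: 1 } } }
]);
// The "splitPipeline" section of the output shows which stages end up in
// "shardsPart" (executed on the shards) versus "mergerPart" (on mongos).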
1 vote -
Support for proxy protocol on mongod
Hi,
I have a MongoDB database server running as a container on Kubernetes, with Istio as the ingress, but I need the proxy protocol in order to preserve the source IP: without it, the client IP that arrives at the MongoDB server is the IP of the Istio ingress gateway.
I know mongos already supports it for sharded clusters, but it would be very helpful if mongod could support the proxy protocol as well.
1 vote -
Option for Renaming Time Series Collections
Ability to rename a TS collection. We need it to migrate historical data to TS without downtime during the conversion.
1 vote -
Support retrieving info on all connections from clients to MongoDB
Support retrieving info on all connections from clients to MongoDB, not just those running commands returned by currentOp(). The OS command netstat can return all the IP info, but it does not contain the relationship between an IP and the database, such as the database name and user. This is a common function in relational databases, e.g. the 'show processlist' command in MySQL. It is also useful when migrating to a new MongoDB cluster, where we need to find all the clients connected to the original MongoDB.
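A partial workaround that exists today, for comparison with what is being requested: $currentOp can also list idle connections, together with client address and authenticated users (the field selection below is illustrative).
// Must be run against the admin database with sufficient privileges
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $project: { client: 1, appName: 1, effectiveUsers: 1, "clientMetadata.driver": 1 } }
]);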
1 vote -
Add support for Atlas Search via Stable API
In the aggregation pipeline, the $search stage is very helpful for developing my organization's application functionality. But when using the Stable API (or Versioned API, as it was once called), I am unable to use the $search stage, as it is part of Atlas Search and not a core MongoDB Server function. I would like to request that support for Atlas Search be added to the Stable API. This would allow my organization to seamlessly upgrade the server without breaking functionality. Other workarounds are not possible for us. Please do consider this as a feature for current or future versions…
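For illustration, a sketch of the failure mode being described (collection and index names are hypothetical): with apiStrict enabled, the server rejects stages that are not part of Stable API version 1, which is what currently blocks $search.
db.runCommand({
  aggregate: "articles",
  pipeline: [
    { $search: { index: "default", text: { query: "mongodb", path: "title" } } }
  ],
  cursor: {},
  apiVersion: "1",
  apiStrict: true   // $search is rejected in strict mode
});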
2 votes -
Add support for all types of joins like Postgres has and improve performance
$lookup is a performance killer. Joins are a crucial part of every OLTP system. $lookup is the equivalent of a JOIN in SQL; however, $lookup is slow and doesn't support hash joins or the other efficient join algorithms implemented in Postgres, for example.
It seems that if MongoDB doesn't add support, its database will fall behind Postgres.
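For context, the construct being compared to a SQL join, a minimal equality $lookup (collection and field names are hypothetical):
db.orders.aggregate([
  { $lookup: {
      from: "customers",         // joined collection
      localField: "customerId",  // field in orders
      foreignField: "_id",       // field in customers
      as: "customer"             // output array field
  } }
]);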
1 vote -
MongoDB should have a commit-based database backup system for large self-managed databases, like git commits.
There is already a MongoDB backup tool, mongodump, but it is not a good solution for backing up large databases. We are planning to build a CMS platform with MongoDB, where billions of websites could be created. For such a large platform, the mongodump backup tool does not seem sufficient.
We want a solution where MongoDB manages a separate backup directory whose data is not deleted even if a delete command is run, like a blockchain. If we run a delete command, it will delete the data from the main database, but the secondary backup database will declare it as a delete…
3 votes -
Multiple single-field unique indexes in sharded collections
A MongoDB sharded collection can only have one unique index, which is the shard key. However, in real applications, one might need more than one unique index. The "proxy" collections suggested in https://www.mongodb.com/docs/manual/tutorial/unique-constraints-on-arbitrary-fields/ have 3 issues:
The most important one is that they do not enforce a one-to-one relationship between the main document and the proxy document. So if I follow the example in the link, two different emails can have the same parentid. You can add a unique single-field index on parentid, but that will not work if the proxy collection is sharded.
If I want to add…
1 vote -
Allow decreasing time series granularity and custom bucketing values
In our IoT use case, we are leveraging MongoDB’s time series functionality. Due to high write volume, we need to adjust the timeseries.granularity and bucketMaxSpanSeconds parameters to manage the write load. However, after increasing the bucketMaxSpanSeconds, we need to run the system for several days to observe stability. If the value is set too high, MongoDB does not support decreasing the bucketing value, and we are forced to create a new collection instead.
It would greatly simplify our testing process and increase flexibility in adapting to business changes if MongoDB allowed the decrease of bucketMaxSpanSeconds after it has been increased.
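A sketch of the asymmetry being described, assuming MongoDB 6.3+ where custom bucketing parameters can be changed with collMod (collection name and values are hypothetical): raising the values is allowed, lowering them afterwards is not.
// Increasing the bucket span works (both parameters must be set together)
db.runCommand({
  collMod: "sensorData",
  timeseries: { bucketMaxSpanSeconds: 86400, bucketRoundingSeconds: 86400 }
});
// Attempting to decrease it afterwards is rejected, forcing a new collection
db.runCommand({
  collMod: "sensorData",
  timeseries: { bucketMaxSpanSeconds: 3600, bucketRoundingSeconds: 3600 }
});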
2 votes -
Metadata (Created Date) for database accounts
We frequently have audit questions about when a given user was created in our Mongo databases. Other DBMSs (Oracle, for instance) have a 'CREATED' field in a metadata table (i.e. DBA_USERS) that shows when a user was created, and this would be very helpful when responding to these audits.
2 votes -
Is there any limitation on the size of a query in a pipeline (characters in the query, not data)?
I have an aggregate query; when the pipeline exceeds 21066 characters, the result comes back in a different format than the pipeline should produce. So I wonder, is there any limitation on the size of a query in a pipeline?
My case is:
Query:
[
{"$match":{"profileID": {"$in": ['', .....]}}},
{"$group":{"_id":"$profileID","materialLevel2":{"$sum":{"$cond":[{"$and":[{"$eq":["$expenseL2","true"]},{"$eq":["$expenseL3","false"]},{"$eq":["$expenseL4","false"]}]},1,0]}},"materialLevel3":{"$sum":{"$cond":[{"$and":[{"$eq":["$expenseL3","true"]},{"$eq":["$expenseL4","false"]}]},1,0]}},"materialLevel4":{"$sum":{"$cond":[{"$eq":["$expenseL4","true"]},1,0]}}}}
]
The correct result is:
[{
"_id" : "000.03.50.H29",
"materialLevel2" : 1.0,
"materialLevel3" : 1.0,
"materialLevel4" : 2.0
},...]
But when the length of the query is greater than 21066 characters, the result of the query is:
[{
"profileID" : "00.00.00.PTT",
"expenseL2" : "true",
"expenseL3" : "true",
"expenseL4" : "false"
},....]
Thanks!
1 vote