Database
314 results found
-
restricted mode for database
A database in restricted mode could be very helpful: we could lock users out and carry out admin tasks such as rebuilding indexes, compacting collections, and so on.
1 vote -
Support for converting between UUID and String
It would be nice to have UUID support for the $convert and $toString functions - and maybe also a new $toUuid function.
We have documents with UUIDs stored as UUIDs, and others where they are stored as Strings - and need to $lookup a document with a UUID-type _id from a document where that UUID is stored as a String. As far as I can tell that is currently not possible.
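For illustration, here is a sketch of what the requested conversion could look like inside a $lookup pipeline; $toUuid is the hypothetical operator proposed above, and the orders/customers collections and field names are assumptions:
// hypothetical: convert the string-typed UUID on orders before matching against the UUID-typed _id on customers
db.orders.aggregate([
  { $lookup: {
      from: "customers",
      let: { customerUuid: { $toUuid: "$customerUuidString" } },   // $toUuid does not exist today
      pipeline: [
        { $match: { $expr: { $eq: [ "$_id", "$$customerUuid" ] } } }
      ],
      as: "customer"
  } }
])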
2 votes -
Allow changing config values without restart
It would be great if configuration changes could be applied without needing to restart nodes.
For example, the audit filter, or enabling/disabling different security mechanisms.
This would be especially useful for Atlas and for clients with large clusters, where restarts cause a performance deficit due to cold caches.
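For comparison, some parameters can already be changed at runtime with setParameter; a minimal sketch in mongosh, using logLevel purely as an example of a runtime-settable parameter:
// raise log verbosity on a running node, no restart required
db.adminCommand({ setParameter: 1, logLevel: 2 })
// the request is to extend this to settings that currently require a restart,
// such as the audit filter or the enabled security mechanisms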
1 vote -
Tiered TTL for time series collection based on granularity
Currently, time series collections have a single TTL across all granularities. It would be great to be able to specify a TTL per granularity. For example:
For seconds: 1 week
For hours: 1 month
Others: never
Coarse-grained information should in some cases be held longer than finer-grained information - currently it all falls under the single TTL specified.
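For context, a time series collection currently takes a single expireAfterSeconds; below is a sketch of today's syntax plus, in comments, a hypothetical tiered variant (the tieredExpireAfterSeconds option does not exist and is only illustrative):
// today: one TTL for the whole collection
db.createCollection("metrics", {
  timeseries: { timeField: "ts", metaField: "sensor", granularity: "seconds" },
  expireAfterSeconds: 604800                      // 1 week
})
// hypothetical tiered form as requested (illustrative only):
// db.createCollection("metrics", {
//   timeseries: { timeField: "ts", metaField: "sensor" },
//   tieredExpireAfterSeconds: { seconds: 604800, hours: 2592000 }   // unlisted granularities: never expire
// })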
2 votes -
Provide straightforward syntax for 1-to-1 joins in aggregation
The syntax for joins that bring back multiple documents from foreign collections is very straightforward and yields exactly what one would expect, but the simple joins that are bread and butter in SQL require syntax that is very convoluted and expensive to run.
Consider a product database that has products, categories and reviews collections. Each product has a unique category and may have multiple reviews. Getting all reviews in an aggregation is very straightforward (top stage), but getting categories, similar to SQL, is as convoluted as it gets (bottom stage).
…db.products.aggregate( [ // // Document aggregates naturally aggregate foreign documents // into
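For illustration, the contrast might look like this (the categoryId and productId field names are assumptions):
db.products.aggregate([
  // one-to-many: reviews naturally come back as an array
  { $lookup: { from: "reviews", localField: "_id", foreignField: "productId", as: "reviews" } },
  // one-to-one: the category join still returns an array and needs an extra stage to unwrap it
  { $lookup: { from: "categories", localField: "categoryId", foreignField: "_id", as: "category" } },
  { $addFields: { category: { $first: "$category" } } }
])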
1 vote -
MultiTenant Abstraction
Just as a time series collection in MongoDB 5.0 abstracts the underlying implementation of the bucketing pattern, customers who implement a multi-tenant model through separate databases per tenant run into the issue of too many dhandles. They would benefit from an abstraction over a collection-with-discriminator-field implementation: they could still meet their internal compliance requirement of separate databases per customer, with minimal changes to their code.
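A minimal sketch of the collection-with-discriminator-field pattern that such an abstraction would hide from the application; the orders collection and tenantId field are assumptions:
// one physical collection for all tenants, discriminated by a tenantId field
db.orders.createIndex({ tenantId: 1, createdAt: -1 })
// every read and write is scoped to a single tenant
db.orders.insertOne({ tenantId: "tenant-a", status: "open", createdAt: new Date() })
db.orders.find({ tenantId: "tenant-a", status: "open" })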
1 vote -
Retryable write error fix in MongoDB 4.4
We have tested retryable writes in Atlas using the MongoDB Java driver on a MongoDB 4.4 cluster. The error is similar to SERVER-53624 (https://jira.mongodb.org/browse/SERVER-53624). Support responded that the error will be fixed in MongoDB 5.0. We hope this error can also be fixed in MongoDB 4.4.
1 vote -
Bidirectional Data Replication between two or more clusters
The goal is to provide active-active deployments across multiple datacenters without needing to shard data and route traffic based on a shard key. Develop bidirectional replication with a conflict resolution method - for example, timestamp-based.
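A minimal sketch of what timestamp-based (last-write-wins) conflict resolution could look like when applying a replicated change on the receiving cluster; the items collection, lastModified field and incoming document are assumptions for illustration:
// only overwrite the local copy if the incoming change is newer
const incoming = { _id: 42, name: "example", lastModified: ISODate("2021-09-01T10:00:00Z") };
const res = db.items.updateOne(
  { _id: incoming._id, lastModified: { $lt: incoming.lastModified } },
  { $set: incoming }
);
// res.matchedCount === 0 means either the document does not exist locally yet (insert it)
// or the local copy is newer (discard the incoming change)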
19 votes -
Make targeted queries to a specific shard without using the shard key as part of the query.
As of right now you need to use the shard key as part of the query to make a targeted query to a specific shard. We would like the ability to make targeted queries to a specific shard without using the shard key as part of the query.
Maybe one way of doing this is to use index metadata to avoid scatter-gather queries, and instead use that index metadata to make targeted queries on sharded clusters.
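For context, a quick way to see whether a query is targeted or scatter-gather today is the explain output on mongos; the orders collection and its { customerId: 1 } shard key are assumptions:
// shard key in the filter: mongos can target a single shard
db.orders.find({ customerId: 12345 }).explain("queryPlanner")
// no shard key in the filter: mongos broadcasts to all shards (scatter-gather)
db.orders.find({ status: "open" }).explain("queryPlanner")
// the shards listed under winningPlan in each explain output show which shards were contacted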
1 vote -
Easy Paging with offset
Paging is a common functionality for REST APIs. When implementing paging queries for large datasets, skip and limit are often not an option. When using query filters on non-unique fields, such as a creation date, there can be problems such as duplicate entities on subsequent pages. It would be great to be able to additionally pass in an offset document reference that would be used as the starting point when the filter does not lead to a unique starting point.
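A similar effect can already be approximated with keyset pagination by adding _id as a tie-breaker; a minimal sketch, where the lastDate and lastId values of the previous page's final document are assumed to be known:
// fetch the next page starting strictly after the last document of the previous page
const lastDate = ISODate("2021-08-27T07:25:00Z");      // creationDate of the last item on the previous page
const lastId = ObjectId("6128f0ab12cd34ef56ab78cd");   // _id of that item
db.test.find({
  $or: [
    { creationDate: { $gt: lastDate } },
    { creationDate: lastDate, _id: { $gt: lastId } }
  ]
}).sort({ creationDate: 1, _id: 1 }).limit(20)
A built-in offset option as proposed would make this boilerplate unnecessary.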
The proposed call could look like, for example:
db.test.find({"creationDate": {$gte: ISODate('2021-08-27T07:25:00Z')}}, {"offset": <ObjectId>}).sort({"creationDate": 1}).limit(20)
1 vote -
support $lookup for update aggregation
We frequently denormalise either full documents or subsets of them into different documents in order to speed up reads, create indexes, or paginate/sort on fields.
Consider a user collection and a task collection: if a task can be assigned to a user, it makes sense to embed the user document on the task it is assigned to. But an update to a user now requires you to update the user both in the user collection and on all tasks in the tasks collection that embed that user.
This can be achieved, but it does introduce some complexity; however, the introduction of updates using aggregation pipelines…
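For illustration, the current two-step propagation might look like this; the users/tasks collections, the assignee field and the userId value are assumptions:
// when a user changes, re-embed the updated user on every task that references them
const userId = ObjectId("507f1f77bcf86cd799439011");
const updatedUser = db.users.findOne({ _id: userId });
db.tasks.updateMany(
  { "assignee._id": updatedUser._id },
  { $set: { assignee: updatedUser } }
);
A $lookup available inside update pipelines would let the tasks update pull the fresh user document itself.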
2 votes -
Add a $median accumulator
There is the $avg operator that returns the average of a set of values. Why not a $median?
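A rough workaround that is possible today, assuming a scores collection with a numeric value field: sort the values, collect them per group, and pick the middle element.
db.scores.aggregate([
  { $sort: { value: 1 } },                                    // $push below collects values in this order
  { $group: { _id: null, values: { $push: "$value" } } },
  { $project: {
      median: { $arrayElemAt: [
        "$values",
        { $toInt: { $floor: { $divide: [ { $size: "$values" }, 2 ] } } }
      ] }
  } }
])
This picks the upper median for even-sized sets and materialises the whole array in memory, which is exactly the kind of cost a built-in $median accumulator would avoid.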
1 vote -
Support for MongoDB Server on Ubuntu 21.04.
Per the MongoDB Server Supported Platforms Matrix, support for Ubuntu 21.04 is not yet available.
We would like to see the currently supported MongoDB Server versions available on the Ubuntu 21.04 distribution, which was released on 22 April 2021.
2 votes -
Provide a KRB5_KTNAME setParameter or other config setting
The Kerberos keytab file is specified in the KRB5_KTNAME environment variable.
Could a setParameter or other config file setting "krb5KtName" be provided to allow this to be set?
2 votes -
Support two binding accounts in the config file for LDAP.
Please refer to Case 00803199.
We need to be able to configure two binding accounts in the config file for LDAP authentication so that we can avoid downtime while resetting the binding account password.
3 votes -
x509 authentication with certificate components other than (O, OU, DC)
In some entities (e.g. ours), the O, OU, DC triplet is not detailed enough or not appropriate, which makes it impossible to authenticate through x509.
For example, in our entity, the O and OU are the same for all certificates (because all servers are in the same Organisational Unit), and the DC field is not used. We do use other fields, though.
Because of that, we can't use the x509 authentication feature, although it is strongly requested by the security team. Would it be possible to enhance the x509 authentication mechanism to allow more flexibility in the fields used for authentication?
6 votes -
Higher IOPS for small disk sizes (MongoDB Atlas on Azure)
AWS and GCP start with 2300-3000 IOPS for M10+ instances from the smallest disk size (8 GB), whereas on Azure we get 120 IOPS with 8 GB, 240 with 64 GB, 500 IOPS with 128 GB... nowhere near what AWS/GCP offer (120-500 IOPS for a database server is nothing!).
So if I take bigger storage (512 GB for 2300 IOPS) to be on par with AWS/GCP, then there is a dramatic price difference in the MongoDB Atlas offering for Azure compared to AWS/GCP - it gets two times more expensive!
I understand that there is a dependency on the…
15 votes -
Provide global collection-aggregated latency stats
The global latency stats Server Status section currently aggregates all the latency information known to the server.
We have a need for the same information except aggregated by collection.
The actual command: db.serverStatus().opLatencies.
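For reference, latency broken down for a single collection can already be pulled with the $collStats aggregation stage, though it has to be run collection by collection; a minimal sketch against an assumed orders collection:
// read/write/command latency totals for one collection
db.orders.aggregate([
  { $collStats: { latencyStats: { histograms: false } } }
])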
1 vote -
$lookup with the option to return only the first element
Often I have a $lookup whose next stage is an $addFields with { $first: "$<lookup field>" }, because I know the lookup will return only one entry. It'd be nice to have this option directly in the lookup, so it would return the first object instead of an array. Thanks!
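A minimal sketch of the current two-stage pattern, assuming an orders collection joined to customers on customerId; a built-in option on $lookup would remove the second stage:
db.orders.aggregate([
  { $lookup: { from: "customers", localField: "customerId", foreignField: "_id", as: "customer" } },
  // extra stage needed today to unwrap the single-element array
  { $addFields: { customer: { $first: "$customer" } } }
])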
1 vote -
Ability to replicate the data within a cluster to another cluster
Two different scenarios. One scenario is to have a DR cluster: two independent clusters that are one-way synced. As of right now, a custom CDC solution (custom code + Kafka) is needed to achieve this.
The other is driven by latency and the requirement for data to be stored in two different data centers: two independent MongoDB clusters that are two-way synced. As of right now, a custom CDC solution (custom code + Kafka) is needed to achieve this.
2 votes