Database
298 results found
Support compound TTL index
Right now you can only create a TTL index on a single field. I would like to put a TTL on a compound index.
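For illustration, a minimal sketch of the difference (collection and field names are made up): the single-field form works today, while the compound form is what is being requested and is not supported.

    // Works today: a single-field TTL index that expires documents one hour after createdAt.
    db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });

    // The request: let a compound index carry a TTL too (rejected by the server today).
    // db.sessions.createIndex({ tenantId: 1, createdAt: 1 }, { expireAfterSeconds: 3600 });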
3 votes
TTL Support within a document
The current TTL implementation, where documents can expire after a certain amount of time, is extremely useful, especially because it remains robust if the database crashes.
I would love for this to be extended with the ability to let data within a document expire after a set time. For example, if you add data to a document, you could set that data, and that data only, to expire with its own time-to-live value.
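For context, a minimal sketch of what exists today (names are illustrative): a TTL index expires whole documents, never individual fields.

    // Today: the whole document expires once createdAt is older than expireAfterSeconds.
    db.cache.createIndex({ createdAt: 1 }, { expireAfterSeconds: 600 });
    db.cache.insertOne({ _id: "user:42", profile: { theme: "dark" }, createdAt: new Date() });
    // The request: let an individual field or sub-document carry its own expiry
    // instead of the whole document sharing one TTL.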
3 votes
Enhancement on Native Auditing
When we enable native auditing, the following three pieces of information are missing. They would be useful from a security perspective. Can capturing them be considered in a current or future release?
Session ID
OS user
Service name
3 votes
Enable BigInt Support for Blockchain Use
Smart contracts on Ethereum and on Ethereum-compatible chains represent values as 256-bit integers. There are several widely used libraries for dealing with these data types in JavaScript, such as bn.js and BigNumber.js.
A potential workaround could be to split the data, store using Decimal128, and recombine using Aggregation Framework. However, this would add performance and programming overhead that encourages customers to select alternatives to MongoDB.
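To make the workaround concrete, a hedged sketch (names and chunk size are illustrative): the 256-bit value is split into base-10^30 limbs so each limb fits within Decimal128's 34 significant digits.

    // Split a 256-bit BigInt into base-10^30 limbs (least significant first),
    // each stored as Decimal128 so it stays exact.
    const LIMB = 10n ** 30n;
    function toLimbs(value) {
      const limbs = [];
      let v = value;
      do { limbs.push(NumberDecimal((v % LIMB).toString())); v /= LIMB; } while (v > 0n);
      return limbs;
    }
    db.balances.insertOne({ account: "0xabc123", weiLimbs: toLimbs(123456789n * 10n ** 18n) });

Recombining the limbs inside the aggregation framework then runs into Decimal128's precision limit for full 256-bit values, which is exactly the overhead the request describes.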
3 votes
Support newer versions of JSON Schema in validation to be able to use the "if", "then", "else" and "const" keywords
Currently there is no way to define conditional schema validators, because you can't use the "const" keyword inside a "oneOf", nor the "if", "then", and "else" keywords.
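A sketch of the kind of validator this would enable (collection and field names are made up); $jsonSchema does not accept these keywords today, so this is the requested behaviour, not something that currently works.

    // Requested behaviour, not accepted by $jsonSchema today: conditional validation.
    db.runCommand({
      collMod: "payments",
      validator: {
        $jsonSchema: {
          if:   { properties: { method: { const: "card" } } },
          then: { required: ["cardNumber"] },
          else: { required: ["iban"] }
        }
      }
    });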
3 votes
Support expressions in $densify range bounds
The $densify aggregation pipeline stage seems unable to evaluate range bounds expressions, requiring the range bounds to be constant.
See the following example (the collection testcoll contains a single document with only the _id field):
    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [0, 5], step: 1}}}])
    [
      { a: 0 },
      { _id: ObjectId("6284a16d64553eaf74b1e189"), a: 1 },
      { a: 2 },
      { a: 3 },
      { a: 4 }
    ]
    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [{$toInt: "0"}, 5], step: 1}}}])
    MongoServerError: A bounding array must be an ascending array of either two dates or…
3 votes
collection-level users should be able to list their collections
Currently users with collection-specific read or read/write permissions are not authorized to perform the following commands:
db.listCollections()
show collections
db.getCollectionNames()
This impacts the shell (and also third-party tools that won't let users access their permitted collections, because the list of collections is blocked in the first place).
Suggestion:
Users with collection-specific read or read/write permissions should be able to run the above commands, and the result would present only the collections for which the user has some read or write privilege (instead of blocking everything).
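A hedged sketch of the setup in question (role, user, database, and collection names are made up): a role scoped to a single collection, whose holder currently cannot list collections at all.

    // A role granting find on a single collection only.
    use admin
    db.createRole({
      role: "ordersReader",
      privileges: [ { resource: { db: "shop", collection: "orders" }, actions: [ "find" ] } ],
      roles: []
    });
    db.createUser({ user: "alice", pwd: "changeMe", roles: [ "ordersReader" ] });

    // Connected as alice, this is currently rejected as unauthorized,
    // even though she can query shop.orders directly:
    // db.getSiblingDB("shop").runCommand({ listCollections: 1 })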
3 votes
Add SSO authentication support to MongoDB databases
Existing issue: a user has accounts in multiple MongoDB databases on Atlas that live in different Projects, and possibly different Organizations as well. When they want to switch from one database to another from a third-party app, they have to provide their credentials every time.
Adding SSO authentication support to MongoDB databases would give such a user the flexibility to switch from one database to another without being asked for credentials every time they connect from a third-party application.
3 votes
Update to two binding accounts in the config file for LDAP.
Please refer to Case 00803199.
We need to be able to configure two binding accounts in the config file for LDAP authentication, so that we can avoid downtime while resetting the binding account password.
3 votes
delete logs number of days old
In the options for MongoDB Log Settings, there are only:
Max Percent of Disk
Total Number of Files
A new option, Number of Days to Keep, would be useful.
3 votes
Deny Network Access to MongoDB Cluster
To improve network security, please create an option to deny specific network addresses access to a MongoDB cluster.
3 votes
view on nested array
- It is quite common to have nested arrays in documents in MongoDB.
- It is also quite common to have to flatten those arrays in queries.
Here is an example where we have a collection with consultants and their professional experiences.
    consultant
    [{
      name: "Toto",
      age: 25,
      experiences: [
        { role: "Software Engineer", from: "2010-01-01", to: "2012-01-01", company: "CompanyA" },
        { role: "Data Analyst", from: "2018-01-01", to: "2020-01-01", company: "CompanyB" }
      ]
    }]
If we would like to list all experiences at a company during a certain period of time, we would have to write an aggregation query with 3 stages that are sometimes…
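As an illustration of the boilerplate involved, a minimal sketch of the kind of view one ends up writing today (view name is made up, dates kept as strings to match the example above):

    // A view flattening each consultant's experiences into one document per experience.
    db.createView("consultantExperiences", "consultant", [
      { $unwind: "$experiences" },
      { $replaceWith: { $mergeObjects: [ { name: "$name", age: "$age" }, "$experiences" ] } }
    ]);

    // Listing experiences at CompanyA that overlap a given date:
    db.consultantExperiences.find({ company: "CompanyA", from: { $lte: "2011-06-01" }, to: { $gte: "2011-06-01" } });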
3 votes
log connection string used by application to connect
There are multiple options to connect to MongoDB: you can connect to a specific node, or you can connect to the whole replica set, etc.
If the DBA does not have access to the source code, it is not possible to validate whether the application is properly configured and connects to the replica set. It would be nice to have MongoDB write the connection string that was used, and/or details of how exactly the client session is connected, to mongod.log.
3 votes
Low latency Change Stream for Global Cluster in Atlas
Our event-driven applications need to publish events to Kafka, triggered by the Change Stream feature. This works perfectly in a replica set MongoDB cluster.
However, after migrating to a Global Cluster in Atlas, the Change Stream cannot keep latency low because of ordering across shards.
The latency may go up to 20 seconds for a single change event. It would be nice if an application could receive the Change Stream from a single shard only (not caring about ordering among shards) to avoid this latency.
The idea is to pass "location" options when starting the Change Stream cursor.
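For context, a sketch of how the stream is opened today and of what the proposed option might look like; the "location" parameter below is the idea being requested, not an existing API, and the collection name is made up.

    // How the change stream is opened today (whole sharded collection, globally ordered):
    const cursor = db.orders.watch([], { fullDocument: "updateLookup" });

    // The proposal, roughly: scope the stream to one location/shard and give up
    // cross-shard ordering. No such option exists today; this is purely illustrative.
    // const cursor = db.orders.watch([], { location: "EU" });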
3 votes
updateMany limit
When porting an application backend from an RDBMS to MongoDB, we've spoken to two people who are looking for a way to specify a limit on the number of documents updated by .updateMany(). I understand the behavior cannot be defined on a sharded cluster, but if we had a way to do this on an unsharded collection, that would help when dealing with these teams.
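A common workaround sketch on an unsharded collection (collection and field names are illustrative): pre-select a bounded batch of _ids, then update only those.

    // Cap the update at 100 documents by pre-selecting their _ids.
    const ids = db.tasks.find({ status: "queued" }, { _id: 1 })
                        .limit(100)
                        .toArray()
                        .map(d => d._id);
    db.tasks.updateMany({ _id: { $in: ids } }, { $set: { status: "processing" } });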
3 votes
Option to prohibit a non-voting member from becoming a sync source of a voting member
Hi,
Our proposition in a few words: add a replica set option to allow chained replication but with the following exception: a non-voting member cannot become a sync source of a voting member under any circumstance.
This proposition would allow chained replication for clusters having both w=majority writes and analytics nodes.
Right now, those clusters cannot safely enable chained replication, because under some circumstances, the non-voting analytics member may become the first secondary in a serial chain of replication. In that case, this node[*] slows down the replication process for all downstream secondaries. Higher replication lag translates to extremely slow…
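For context, a sketch of the topology being described (the member index is illustrative): chaining enabled at the replica-set level, with an analytics node configured as non-voting.

    // Replica set with chained replication allowed and a non-voting analytics member.
    cfg = rs.conf();
    cfg.settings.chainingAllowed = true;   // chaining enabled for the whole set
    cfg.members[3].votes = 0;              // analytics node: cannot vote...
    cfg.members[3].priority = 0;           // ...and cannot be elected
    rs.reconfig(cfg);
    // The request: an option guaranteeing such a member never becomes the sync
    // source of a voting member, so chaining stays safe alongside w:"majority" writes.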
3 votes
Validation for referential integrity
Currently, with JSON Schema validation, we are able to limit the values for a field using enumerations. However, we need a way to limit the values to those entered in another collection.
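What is possible today, per the description above (collection and values are made up); the request is for the allowed set to come from another collection instead of being hard-coded.

    // Possible today: restrict a field to a hard-coded list via an enum.
    db.runCommand({
      collMod: "orders",
      validator: {
        $jsonSchema: {
          properties: { status: { enum: [ "new", "paid", "shipped" ] } }
        }
      }
    });
    // The request: let the allowed values be drawn from another collection instead.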
3 votes
geoIntersects feature for 2D index
At Airbus, we would like to use the geoIntersects feature with the 2d index.
We don't need a 2dsphere index, but we have to use one if we want geoIntersects to be available.
As a consequence, we have to manage spherical margins, which introduces a useless and hard-to-maintain workaround in our product.
Could you please plan to implement geoIntersects, and not only geoWithin, with the 2d index?
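For illustration (collection and field names are made up): today $geoIntersects is tied to GeoJSON/2dsphere semantics, while a flat 2d index supports only $geoWithin and proximity operators.

    // Today: $geoIntersects works with GeoJSON geometries and a 2dsphere index.
    db.zones.createIndex({ area: "2dsphere" });
    db.zones.find({ area: { $geoIntersects: { $geometry: { type: "Point", coordinates: [ 1.44, 43.6 ] } } } });

    // The request: support $geoIntersects on a flat 2d index as well.
    // db.zones.createIndex({ area: "2d" });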
Thank you.
3 votes
Automatic Indexes
MongoDB can already suggest useful indexes. Why not take the next step and allow MongoDB to autonomously create and manage indexes? Ideally it would automatically maintain the indexes over time as the structure and usage of the database change.
3 votes
Allow to decrease time series granularity and custom bucketing values
In our IoT use case, we are leveraging MongoDB’s time series functionality. Due to high write volume, we need to adjust the timeseries.granularity and bucketMaxSpanSeconds parameters to manage the write load. However, after increasing the bucketMaxSpanSeconds, we need to run the system for several days to observe stability. If the value is set too high, MongoDB does not support decreasing the bucketing value, and we are forced to create a new collection instead.
It would greatly simplify our testing process and increase flexibility in adapting to business changes if MongoDB allowed the decrease of bucketMaxSpanSeconds after it has been increased.
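For context, a hedged sketch of the parameters involved (collection name and values are illustrative, and the collMod form assumes a server version that accepts custom bucketing parameters): the span can be raised after creation but not lowered, which is what this request asks for.

    // Time series collection with explicit bucketing parameters.
    db.createCollection("sensorReadings", {
      timeseries: {
        timeField: "ts",
        metaField: "sensorId",
        bucketMaxSpanSeconds: 3600,
        bucketRoundingSeconds: 3600
      }
    });

    // Raising the span later is accepted...
    db.runCommand({
      collMod: "sensorReadings",
      timeseries: { bucketMaxSpanSeconds: 86400, bucketRoundingSeconds: 86400 }
    });
    // ...but lowering it back is rejected, forcing a new collection (the subject of this request).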
2 votes