Database
-
Raise the maximum BSON document size beyond 16 MB
Per https://www.mongodb.com/docs/manual/reference/limits/#:~:text=The%20maximum%20BSON%20document%20size,MongoDB%20provides%20the%20GridFS%20API, the maximum BSON document size is 16 MB. I would like to request support for larger sizes such as 32 MB, 64 MB, or more.
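For illustration, a minimal mongosh sketch (hypothetical collection and field names) of where the current limit bites; the exact error text varies by shell and driver version:
// a single document over 16 MB is rejected (e.g. "object to insert too large" / BSONObjectTooLarge)
db.largeDocs.insertOne({ payload: "x".repeat(17 * 1024 * 1024) })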
95 votes -
XA Support
Is there any plan to implement distributed transactions that involve more than one data store (e.g. an RDBMS and MongoDB)? We have one such requirement and tried a simple POC by creating a class extending XAResource (MongoXAResource implements XAResource) and overriding the methods below.
@Override
public void commit(Xid xid, boolean b) throws XAException {
    clientSession.commitTransaction();
}

@Override
public void rollback(Xid xid) throws XAException {
    clientSession.abortTransaction();
}

It appears to work, but I think there is a lot more to do. Is there any plan from the MongoDB team to implement this?
49 votes -
Database users should be able to change their own passwords
Currently, there is no way for Database Users to manage their own passwords (even if they are atlasAdmin@admin). Moreover, as a Project Owner, I cannot create a role that allows them to do so, e.g.:
use admin
db.createRole(
  {
    role: "changeOwnPasswordRole",
    privileges: [
      { resource: { db: "", collection: "" }, actions: [ "changeOwnPassword" ] }
    ],
    roles: []
  }
)
As such, changing passwords always requires a Project Owner to set the new password and share it with the Database User. This is a problem, because user-password combinations known by more than one person do not serve as proof of identity.
A…
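For context, a sketch of how the documented changeOwnPassword pattern can work on a self-managed deployment (hypothetical user name); the request is to make the equivalent possible for Atlas Database Users:
// run by an administrator after creating changeOwnPasswordRole as above
use admin
db.grantRolesToUser("appUser", [ { role: "changeOwnPasswordRole", db: "admin" } ])

// run later by the user to change their own password
use admin
db.changeUserPassword("appUser", passwordPrompt())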
33 votes -
partial text search
We've already seen full text search, and it would be awesome if you managed to implement a partial-match version :)
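Until then, one common workaround is a regular-expression query, sketched here with hypothetical collection and field names; it matches partial strings, but only case-sensitive anchored prefixes (e.g. /^mong/) can use an ordinary index:
// case-insensitive substring match on a single field
db.articles.find({ title: { $regex: "mong", $options: "i" } })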
32 votes -
Allow configuration of the 100 MB memory limit per aggregation pipeline stage
In this old thread from 2016 (https://groups.google.com/forum/#!topic/mongodb-user/LCeFZZRz5EY) it was asked whether there was a way to increase the 100 MB in-memory limit of each stage of an aggregation pipeline. The responses centered around two points:
- If too much memory is used per aggregation pipeline stage, it will reduce performance for the overall MongoDB database, impacting other queries negatively.
- You can set allowDiskUse: true so that these pipeline stages fall back to disk when they exceed 100 MB (see the sketch after this list).
I believe this subject needs to be revisited for the following reasons:
- “Too much memory” is very subjective, and the 100 MB…
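For reference, a sketch of the allowDiskUse escape hatch mentioned above, with hypothetical collection and field names:
// stages that exceed the per-stage memory limit spill to disk instead of erroring,
// at a significant performance cost
db.events.aggregate(
  [
    { $sort: { timestamp: -1 } },
    { $group: { _id: "$userId", total: { $sum: "$amount" } } }
  ],
  { allowDiskUse: true }
)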
29 votes -
ARM support
Can we get ARM packages for Debian 11? They are required for Bitnami to add ARM support to their MongoDB charts.
28 votes -
How to limit the number of document updates?
Hi,
I want to limit the number of documents updated by a single command, for example:
db.users.updateMany(
  <filter>,
  <update>,
  { limit: 100 }
);

https://www.mongodb.com/community/forums/t/how-to-limit-the-number-of-document-updates/102204/3
25 votes -
Password enforcement without LDAP
Enforce complex password policy
Enforce password expiration
Enforce password history
24 votes -
Change streams and Triggers for Time Series Collections
Add Change streams and Trigger capabilities to Time Series Collections.
Current Limitations don't allow this.
https://docs.mongodb.com/manual/core/timeseries/timeseries-limitations/#change-streams
20 votes -
Prometheus metrics availability through PrivateLink
Until now it has only been possible to get metrics from a public or privately peered VPC, but not through PrivateLink. Since PrivateLink is used most of the time for security reasons, this limitation forces customers to find compromises or workarounds.
19 votes -
Add operator that would calculate distance between 2 geolocation points
It would be great to have an operator that calculates the distance between two geolocation points, rather than doing it manually with big aggregation queries.
I suggest adding two new operators that calculate distance in two different ways, as discussed in this Community Post: https://www.mongodb.com/community/forums/t/how-to-calculate-distance-between-two-geolocation-points/173045
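Today the closest built-in is the $geoNear aggregation stage, which computes the distance from one fixed point to each document's indexed location (sketch with hypothetical collection, field, and coordinates; it requires a 2dsphere index and must be the first stage):
db.places.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [ -73.99279, 40.719296 ] },
      distanceField: "distanceMeters",   // computed distance is written to this field
      spherical: true
    }
  }
])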
19 votes -
Bidirectional Data Replication between two or more clusters
In order to provide active-active deployments across multiple datacenters without the need to shard data and route traffic based on a shard key, develop bidirectional replication with a conflict resolution method, for example timestamp-based.
18 votes -
Allow kill connections
Kill session commands only stop current activity on the DB; they do not close/drop connections (connections still remain open in $listSessions).
It'd be useful to be able to close open connections in situations where too many sessions have been opened incorrectly or not closed.
16 votes -
Higher IOPS for small disk sizes (MongoDB Atlas on Azure)
AWS and GCP start with 2300-3000 IOPS for M10+ instances from the smallest disk size (8 GB), whereas on Azure we get 120 IOPS with 8 GB, 240 with 64 GB, and 500 IOPS with 128 GB ... nowhere near what AWS/GCP offer (120-500 IOPS for a database server is nothing!).
So if I take bigger storage (512 GB for 2300 IOPS) to be on par with AWS/GCP, there is a dramatic price difference in the MongoDB Atlas offering for Azure compared to AWS/GCP - it gets two times more expensive!
I understand that there is a dependency on the…
14 votes -
Add timestamps to user documents
Most database technologies store this metadata by default.
Because the expected data volume and change rate of this attribute will most probably be low, there should be no reason not to store this information.
Of course this information might already be available in audit files, but first: auditing isn't enabled by default.
Second: most database users won't have access to this file/info, and third: most users won't expect this info in a separate file (reminder: MongoDB recommends storing the data where it belongs when it comes to "data/schema modelling", so the metadata of a user document should also be…
11 votes -
Release notes with urgency and risk
Provide MongoDB customers/users with understandable release notes, especially for bugfixes.
What risks does this bugfix release cover, and what is its urgency? Right now, release notes are made of MongoDB Jira tickets, which are very detailed and refer to the implementation of MongoDB, and thus cannot be easily understood by end users.
As a suggestion, release notes could sum up the following data in a simple table:
- Nature of impact
-> data corruption: yes/no
-> downtime: of a single node / of the whole cluster / on a subset of requests / etc
- Context of impact…
11 votes -
More complex balancer windows for sharded clusters
Currently we can define a single balancer window which is applied to every day of the week. It would be useful to extend this with, for example (the current single-window configuration is sketched after this list):
- multiple windows per day (e.g. 2-4am and 9-11pm)
- custom windows for days of the week (e.g. Sat 5pm-midnight, Sunday 0-24)
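For reference, a sketch of the current single-window configuration that this request would extend (run against the config database):
use config
db.settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "02:00", stop: "04:00" } } },  // one window, applied every day
  { upsert: true }
)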
11 votes -
Burstable IOPS for MongoDB Atlas on Azure
According to Azure documentation, bursting is enabled by default for all VMs using Premium SSDs (https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-types#bursting). It would be great if MongoDB Atlas on Azure could benefit from it.
11 votes -
Unique Indexes and Bulk Upserts for Time Series Collections
We would like to insert data in bulk into time series collections and identify the new data that has been inserted without the possibility of duplicates being inserted.
For regular collections this is achievable by adding a unique index and performing a bulk upsert (as any duplicates will be rejected due to the unique index).
For time series collections, however, unique indexes are not currently supported.
In addition, performing an upsert with the $setOnInsert option, which should only perform insert operations, is also not currently supported for time series collections.
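For comparison, a sketch of the regular-collection pattern described above, with hypothetical collection and field names, that the request would like supported for time series collections:
// a unique index prevents duplicate (sensorId, ts) pairs
db.readings.createIndex({ sensorId: 1, ts: 1 }, { unique: true })

// bulk upsert: $setOnInsert only writes when a new document is inserted, so re-running
// the batch leaves existing documents untouched; upsertedIds identifies the new data
db.readings.bulkWrite([
  {
    updateOne: {
      filter: { sensorId: "s1", ts: ISODate("2024-01-01T00:00:00Z") },
      update: { $setOnInsert: { value: 21.5 } },
      upsert: true
    }
  }
], { ordered: false })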
At the moment the only options appear to be:
(1) to…
10 votes -
geoContain
Dear all,
According to the attached image, I have some documents (in blue, with ids from 1 to 5) that have a geographic extent, and a search area (in yellow).
I need to find all documents where the search area is completely inside the document's geometry.
In other words, I need to find all documents whose geometry completely covers the given search area.
In my sample, the geo query should return the document with id 1.
This kind of query has the opposite logic to $geoWithin. Could you provide $geoContain functionality in the near future?
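To illustrate, the existing $geoWithin syntax next to the kind of operator being requested; the $geoContains name below is hypothetical and does not exist in MongoDB today:
const searchArea = {
  type: "Polygon",
  coordinates: [ [ [ 9.0, 45.0 ], [ 9.1, 45.0 ], [ 9.1, 45.1 ], [ 9.0, 45.1 ], [ 9.0, 45.0 ] ] ]
};

// existing: documents whose geometry lies entirely within the search area
db.documents.find({ geometry: { $geoWithin: { $geometry: searchArea } } })

// requested (hypothetical operator): documents whose geometry completely covers the search area
db.documents.find({ geometry: { $geoContains: { $geometry: searchArea } } })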
10 votes