Database
-
Arrow function support
Is there any plan to implement arrow functions support?
The current way to use functions (BSON type 13) is with traditional JavaScript functions:
function() {
...
emit(key, value);
}
It would be great to also support arrow functions:
() => {
...
emit(key, value);
}
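Since BSON type 13 stores a function's source text, supporting arrows largely means accepting the second toString() form below. A plain-JavaScript illustration (names are made up for the example):

```javascript
// Both forms compute the same result; what would be stored for BSON
// type 13 is the source text each one returns from toString().
const classic = function (doc) { return doc.value * 2; };
const arrow = (doc) => doc.value * 2;

console.log(classic({ value: 21 })); // 42
console.log(arrow({ value: 21 }));   // 42
console.log(arrow.toString());       // "(doc) => doc.value * 2"
```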
1 vote -
Assign MongoDB roles to LDAP users
I would like to create a softlab and therefore give the possibility to a user to create collections in a database which is dedicated to him.
Currently, to do that I need to:
1. create a single-member LDAP group for each user
2. map this group to a MongoDB role authorizing the user's database.
Creating a group with a single member is conceptually useless; I want to avoid step 1.
My feature request: add the ability to assign MongoDB roles to LDAP users as well as to groups.
Regards,
Jerome
2 votes -
Allow multiple text indexes per collection
MongoDB allows only one text index per collection, in contrast to other index types.
This limitation makes it difficult to develop projects with search functionality. For example, if you want a text search over all fields for advanced users and a public search over a subset of fields, there is no simple and performant way to do it.
Many developers end up using other products like Elasticsearch, creating additional collections and using lookup, falling back to regular expressions, or building smart indexes, when the creation of multiple text indexes per collection would allow a quick,…
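A partial workaround today is to make the single allowed text index span every searchable field and bias it with weights (collection and field names below are hypothetical); the index spec is just a plain object:

```javascript
// One combined text index over all searchable fields, biased by weights.
// A true "public subset" search still isn't expressible this way — that
// gap is exactly what this request is about.
const keys = { title: "text", body: "text", internalNotes: "text" };
const options = {
  name: "all_fields_text",
  weights: { title: 10, body: 5, internalNotes: 1 },
};
// In mongosh (requires a running server):
// db.articles.createIndex(keys, options);
console.log(options.weights.title); // 10
```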
4 votes -
options for different compute configurations per shard
Can we have options for a different compute configuration per shard?
1 vote -
Kafka audit event streaming
Provide Kafka Topic as a write target for database auditing and database message logging.
https://docs.mongodb.com/manual/core/auditing/
Auditing is currently limited to a local and editable JSON/BSON file or the system console log.
The SYSLOG is not recommended by MongoDB: "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."
5 votes -
wiredTiger open files usage
Currently WT uses a file per collection and index, leading in some scenarios to an extremely high number of open files/dhandles.
Is there any plan to support one file/dhandle per database?
4 votes -
4.4 in the EU on regular instances for development!
I work with MongoDB in the EU. I want to use Atlas Search, but I also need some features of v4.4.
In development, and for general everyday projects where our clients don't have large user bases, we generally go for M2 instances: affordable while offering full-text search.
The release of 4.4 is almost half a year old. When will we see 4.4 released in the EU on regular M2 instances?
1 vote -
Add a "Limit" to Delete and Bulk Delete operations
Deleting tens of millions of documents can have a big impact on the performance of the Clusters, even using Bulk Delete. A "Limit" must be added to Delete and Bulk Delete to let us limit the number of operations, making sure we do not kill the Clusters' performance.
- For the delete, this would make sure we only delete n number of documents.
- For the Bulk Delete, this would also make sure we only delete n number of documents, or it could instead limit the number of batches/groups of documents to be deleted.
Right now, the only solution is a hack,…
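The usual hack is deleting in bounded batches of _ids. A minimal shell-style sketch, run here against an in-memory stub collection so it executes without a server (batch size and names are illustrative):

```javascript
// Delete matching documents in batches of `limit` until none remain.
// `coll` only needs find().limit() returning docs and deleteMany().
function deleteInBatches(coll, filter, limit) {
  let total = 0;
  for (;;) {
    const ids = coll.find(filter, { _id: 1 }).limit(limit).map(d => d._id);
    if (ids.length === 0) return total;
    total += coll.deleteMany({ _id: { $in: ids } }).deletedCount;
  }
}

// In-memory stub (the filter is hardcoded to `old: true` for the sketch):
function stubColl(docs) {
  return {
    find: () => ({ limit: (n) => docs.filter(d => d.old).slice(0, n) }),
    deleteMany: (q) => {
      const before = docs.length;
      for (const id of q._id.$in) {
        const i = docs.findIndex(d => d._id === id);
        if (i >= 0) docs.splice(i, 1);
      }
      return { deletedCount: before - docs.length };
    },
  };
}

const docs = [{ _id: 1, old: true }, { _id: 2, old: true }, { _id: 3, old: false }];
console.log(deleteInBatches(stubColl(docs), { old: true }, 1)); // 2
```

Against a real cluster, a sleep between batches also helps keep the impact bounded.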
10 votes -
Ability to enable collection name enforcement
Currently, if you perform a find or other operation and you have a typo in your collection name, the operation executes successfully with no indication that the collection you are operating on doesn't exist. It would be handy if there were some sort of session variable telling the engine to return an error if the collection being operated on does not exist. For example, let's say we have a collection named "myCollection", and I issue a find with a typo in the collection name from the shell:
db.myCollectio.find();
This will successfully…
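Until something like this exists, a small shell helper can enforce existence at lookup time. A sketch assuming mongosh's getCollectionNames()/getCollection(), shown against a stub db so it runs standalone (the helper name is made up):

```javascript
// Throw instead of silently operating on a non-existent collection.
function strictCollection(db, name) {
  if (!db.getCollectionNames().includes(name)) {
    throw new Error("no such collection: " + name);
  }
  return db.getCollection(name);
}

// Stub db standing in for a real connection:
const db = {
  getCollectionNames: () => ["myCollection"],
  getCollection: (name) => ({ name, find: () => [] }),
};

console.log(strictCollection(db, "myCollection").name); // "myCollection"
try {
  strictCollection(db, "myCollectio"); // the typo from above
} catch (e) {
  console.log(e.message); // "no such collection: myCollectio"
}
```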
4 votes -
geoIntersects feature for 2D index
At Airbus, we would like to use geoIntersects feature with 2D Index.
We don't need a 2dsphere index, but we have to use one if we want geoIntersects to be available.
As a consequence, we have to manage spherical margins, which introduces a useless and hard-to-maintain workaround in our product.
Could you please plan to implement geoIntersects, and not only geoWithin, with the 2D index?
Thank you
3 votes -
String Json Parser
Lately I imported a big CSV file containing fields whose values were JSON objects in string format. The simplest way to parse those strings into objects was find().forEach() with JSON.parse and then an update, but it took too much time to update the collection. The only thing I could do to reduce the latency was set the write concern to 0, which isn't very advisable with a high throughput of updates. I don't know for sure, I'm not the expert here, but working with big data demands a lot of cleaning, and not being able to do the cleaning on…
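One way to cut the round trips for this kind of cleanup is to batch the rewrites with bulkWrite instead of issuing one update per document. A sketch of the parse step (field names are illustrative); the JSON.parse part runs as-is:

```javascript
// Turn a stringified-JSON field into a real subdocument.
// In mongosh you would collect these ops and call coll.bulkWrite(ops)
// every few thousand documents instead of updating one at a time.
function toUpdateOp(doc) {
  return {
    updateOne: {
      filter: { _id: doc._id },
      update: { $set: { payload: JSON.parse(doc.payloadRaw) } },
    },
  };
}

const doc = { _id: 1, payloadRaw: '{"city":"Paris","pop":2148000}' };
const op = toUpdateOp(doc);
console.log(op.updateOne.update.$set.payload.city); // "Paris"
```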
1 vote -
Bundle rest service together DB
As per current industry practice, REST/JSON has become the standard way to exchange data between a front-end application and a backend DB. So if a REST server/service were bundled with the DB, it would eliminate the need for a middleware tier in most cases.
It means we could fetch data from MongoDB directly from a front-end application using REST/JSON, without requiring a third-party service.
1 vote -
Collection Comments
I would like the ability to attach comments to a collection so that other people using the data can get some understanding of its context, or important README/FAQ information that I need to share.
5 votes -
Include the _ids of existing documents in BulkWriteResult when performing upserts
When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:
db.getCollection("test").find({})
db.test.drop()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
bulk.execute();
The BulkWriteResult contains the upserted _id:
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 0,
"nUpserted" : 1,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [
{
"index" : 0,
"_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
}
]
})
However, when a document already exists, the _id is not returned:
db.test.find()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…
5 votes -
Validation for referential integrity
Currently, with the JSON Schema validation, we are able to limit the values for a field using enumerations. However, we need to have a way to limit the values to those entered in another collection.
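Today the closest approximation is an enum baked into the $jsonSchema validator and regenerated whenever the reference collection changes (collection and field names below are hypothetical); the validator itself is a plain object:

```javascript
// Static enum in a $jsonSchema validator — a manual stand-in for a real
// foreign-key check against another collection.
const allowedCountries = ["FR", "DE", "ES"]; // would be read from the reference collection
const validator = {
  $jsonSchema: {
    bsonType: "object",
    required: ["country"],
    properties: {
      country: { enum: allowedCountries },
    },
  },
};
// In mongosh: db.runCommand({ collMod: "orders", validator });
console.log(validator.$jsonSchema.properties.country.enum.length); // 3
```

The drawback, and the reason for this request, is that the enum goes stale as soon as the reference collection changes.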
3 votes -
Additional checks for storage consistency
The following opt-in features would add additional checks for storage-layer corruption of collections:
- Upon write, read back what data was committed to disk.
- Periodic or scheduled scanning of a collection, similar to collection.validate but non-blocking.
10 votes -
Automatic Indexes
MongoDB can already suggest useful indexes. Why not take the next step and allow MongoDB to autonomously create and manage indexes? Ideally it would automatically maintain the indexes over time as the structure and usage of the database change.
3 votes -
Allow collection collation to be editable
Collation of a collection can be set at creation time only. It would be useful to be able to edit it later, to avoid creating an entirely new collection and copying the data over.
2 votes -
Conditional TTL index
With auto-purging, users sometimes want to ensure data has been archived before it is purged. If a conditional TTL were allowed, the user could set the value of a field to indicate that a document has been archived, and the db could purge accordingly.
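Something close may already be possible by making the TTL index partial, so that only documents matching a filter are eligible to expire. A sketch of the index spec (field names hypothetical; verify partial-TTL behaviour on your MongoDB version):

```javascript
// Only documents with archived: true enter the index, so only they
// are candidates for TTL deletion.
const keys = { archivedAt: 1 };
const options = {
  expireAfterSeconds: 0, // expire once archivedAt has passed
  partialFilterExpression: { archived: true },
};
// In mongosh: db.events.createIndex(keys, options);
console.log(options.partialFilterExpression.archived); // true
```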
1 vote -
collections under a document as in firebase firestore
Support subcollections nested under a document, as in Firebase Firestore.
1 vote