Database
317 results found
-
sharding error shardsvr
Make it clear which node is causing the "shardsvr" error.
Spawned from support case 01042995
Our error occurred when the user tried to connect using Compass. The failure was in listing the collection names on one database.
The error presented back to the user was merely:
Cannot accept sharding commands if not started with --shardsvr
We eventually found that the primary had changed on one of the shards, and that the new primary did not have the appropriate `clusterRole` in its mongod.conf file. My concern is that this took too long to track down and would be impossible to diagnose in a 100-shard environment.
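Auditing this by hand is scriptable today; a minimal mongosh sketch, run while connected to each member of each shard:
```
// getCmdLineOpts returns the parsed mongod.conf, so a member started
// without clusterRole: shardsvr shows up immediately.
const opts = db.adminCommand({ getCmdLineOpts: 1 });
const role = opts.parsed.sharding ? opts.parsed.sharding.clusterRole : undefined;
if (role !== "shardsvr") {
  print("missing clusterRole: shardsvr (found: " + role + ")");
}
```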
6 votes -
Add pipeline stage for "downsampling" data
Downsampling is an extremely common operation when plotting time-series data on graphs where there is too much data to produce a meaningful, good-looking graph. This stage would pick "important" data points based on an algorithm such as Largest-Triangle-Three-Buckets (https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf) instead of returning the entire data set.
Not only would this make for prettier graphs, it would also reduce the overall payload returned, thus reducing network-related latency.
This would be an awesome addition to timeseries!
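A sketch of what such a stage might look like; the `$downsample` stage name, its options, and the collection/field names are all hypothetical:
```
// Hypothetical syntax: no $downsample stage exists today.
db.metrics.aggregate([
  { $match: { sensor: "s1" } },
  { $sort: { ts: 1 } },
  // Imagined stage: reduce the series to ~500 visually representative
  // points using Largest-Triangle-Three-Buckets over (ts, value) pairs.
  { $downsample: { x: "$ts", y: "$value", buckets: 500, algorithm: "lttb" } }
]);
```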
6 votes -
`$getField` to work with a dynamic `field`
Currently `$getField` works only when `field` resolves at query-compile time to a string. It would be nice if it worked also when `field` resolves to a string at runtime.
See this Jira ticket: https://jira.mongodb.org/browse/SERVER-67030
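Until then, one workaround for the runtime case is `$objectToArray` plus `$filter`; a sketch, with hypothetical collection and field names:
```
// Goal (fails today, since field must be a compile-time constant):
//   { $getField: { field: "$fieldName", input: "$doc" } }
// Workaround: turn the document into {k, v} pairs and match the key
// at runtime.
db.items.aggregate([
  { $set: {
      value: {
        $first: {
          $map: {
            input: {
              $filter: {
                input: { $objectToArray: "$doc" },
                cond: { $eq: ["$$this.k", "$fieldName"] }
              }
            },
            in: "$$this.v"
          }
        }
      }
  } }
]);
```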
6 votes -
Unique index in sharded cluster
For enforcing uniqueness in a sharded cluster, the officially recommended approach provided here (https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/#std-label-shard-key-arbitrary-uniqueness) is simplistic, and in a production environment it brings a non-trivial amount of work. Some considerations:
- Ephemeral issues might cause inconsistencies between the two collections (for example, the unique-index collection update succeeds but the main collection update does not) and make some unique keys unusable.
- Many changes are needed to enforce this universally (we're using the Mongoose ORM, and there are many hooks to change).
What we ended up doing is using distributed ephemeral locks (a TTL'd MongoDB collection) to lock on the unique keys before adding…
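For reference, a minimal sketch of the documented two-collection pattern discussed above (collection and field names are hypothetical); the first consideration is exactly the window between the two inserts:
```
// The guard collection's _id carries the uniqueness constraint,
// since _id is always unique even in a sharded cluster.
try {
  db.email_guard.insertOne({ _id: "user@example.com" });
  db.users.insertOne({ email: "user@example.com", name: "A. User" });
} catch (e) {
  // A duplicate-key error here means the email is taken; a crash
  // between the two inserts strands the guard entry (the ephemeral
  // inconsistency described above).
  printjson(e);
}
```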
6 votes -
x509 authentication with certificate components other than (O, OU, DC)
In some organizations (such as ours), the O, OU, DC triplet is not detailed enough or not appropriate, which makes it impossible to authenticate through x.509.
For example, in our organization the O and OU are the same for all certificates (because all servers are in the same Organizational Unit), and the DC field is not used. We do use other fields, though.
Because of that, we can't use the x.509 authentication feature, although it is strongly requested by the security team.
Would it be possible to enhance the x.509 authentication mechanism to allow more flexibility in the fields used for authentication?
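For context, a sketch of how an x.509 user is registered today: the entire certificate subject becomes the username in `$external` (the DN below is hypothetical), and client certificates must differ from cluster-member certificates in at least one of O, OU, or DC, which is the constraint described above:
```
// The full subject DN is the username; only O/OU/DC distinguish
// client certificates from member certificates today.
db.getSiblingDB("$external").runCommand({
  createUser: "CN=app01,OU=Servers,O=Acme,DC=example,DC=com",
  roles: [{ role: "readWrite", db: "app" }]
});
```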
6 votes -
NoTableScan at the collection level
Allow the notablescan option to be set at the collection level instead of at the mongod (process) level.
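For reference, the process-wide knob that exists today, plus a sketch of the per-collection form this implies; the `collMod` option shown is invented:
```
// Today: rejects unindexed (collection-scan) queries for every
// collection on this mongod.
db.adminCommand({ setParameter: 1, notablescan: 1 });

// Hypothetical per-collection variant (does not exist):
// db.runCommand({ collMod: "orders", noTableScan: true });
```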
6 votes -
Ability to see historical `serverStatus.uptime` counter info on MongoDB Server process
What is the problem that needs to be solved? Store `serverStatus.uptime` counter info on the MongoDB Server process historically, so that it is possible to track `serverStatus.uptime` changes over time.
Why is it a problem? (the pain) As of now (2020-02-25) there is no way to see historical info about MongoDB Server process restarts, since the `serverStatus.uptime` counter is reset every time the MongoDB Server process is restarted. There is no other way (apart from reading the MongoDB Server process logs) to know whether and when the process was restarted. If you'd like to calculate MongoDB Server process availability, you'll…
6 votes -
Get metadata about source client connection that submitted a given change
Currently with change streams it is impossible to know who or what connection initiated the changes.
It would be a good feature to have to be able to receive some data about the source client connection that initiated a change.
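For illustration, a minimal mongosh sketch (collection name hypothetical) of what a change event exposes today; no field identifies the originating client:
```
const cursor = db.orders.watch();
while (!cursor.isClosed()) {
  const event = cursor.tryNext();
  if (event === null) continue;
  // event carries operationType, ns, documentKey, fullDocument, etc.,
  // but nothing about the connection that performed the write.
  printjson(event);
}
```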
My particular use case is the following:
I have an app that connects to Atlas (the source client connection).
I can subscribe to change streams and then execute some logic when it applies. That app can scale to multiple instances.
Each instance subscribes to the change streams.
But I only want each instance to execute the logic that applies to only…
5 votes -
Add expression indexes
An expression index is one where the value being indexed is the result of an expression, such as lower-casing a string.
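A sketch of hypothetical syntax, plus the closest existing tool for the lower-casing example (collection and field names are hypothetical):
```
// Hypothetical expression index (not valid syntax today):
// db.users.createIndex({ email: { $toLower: "$email" } });

// Closest existing equivalent for the case-insensitivity use case:
// an index with a case-insensitive collation (strength 2).
db.users.createIndex(
  { email: 1 },
  { collation: { locale: "en", strength: 2 } }
);
```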
http://en.wikipedia.org/wiki/Expression_index
http://www.postgresql.org/docs/8.1/static/indexes-expressional.html
5 votes -
Support for Ubuntu 20.04 in MongoDB Server version 4.2
Per the Server Support Matrix (https://docs.mongodb.com/manual/installation/), support for Ubuntu 20.04 arrives in MongoDB Server version 4.4+ but not 4.2.
We would like to see the currently supported MongoDB Server version 4.2 made available for the Ubuntu 20.04 LTS distribution.
5 votes -
log connection string used by application to connect
There are multiple options for connecting to MongoDB: you can connect to a specific node, or to the whole replica set, etc.
If the DBA does not have access to the source code, it's not possible to validate whether the application is properly configured and connects to the replica set. It would be nice to have MongoDB record, in mongod.log, the connection string used and/or details of how exactly the client session is connected to MongoDB.
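The distinction the DBA cannot currently verify from the server side, with hypothetical hosts and replica set name:
```
// Direct connection: the driver talks to one node only.
const direct = "mongodb://db1.example.com:27017/app";

// Replica-set-aware connection: the driver discovers and follows the
// topology (failover-safe); usually what should be validated.
const replSet =
  "mongodb://db1.example.com:27017,db2.example.com:27017/app?replicaSet=rs0";
```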
5 votes -
Reduce the minimum value for watchdogPeriodSeconds
The storage watchdog attempts to create, write, and read a test file in critical directories every 10 seconds.
The watchdogPeriodSeconds parameter controls how often a monitoring thread verifies that at least one of those file checks has succeeded since its last pass.
The minimum value for watchdogPeriodSeconds is 60 seconds. This means that in the worst case the mongod could be unable to write for up to 2 minutes before the watchdog asserts and kills the stalled node: a stall that begins just after a successful file check still shows that success in the first 60-second window, so detection has to wait for the second window to elapse. That is a very long time for a primary node to be stalled in a busy cluster.
It does make sense that watchdogPeriodSeconds must…
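For reference, how the watchdog is enabled and tuned today (a sketch; the startup flag is shown as a comment, and runtime adjustment assumes the watchdog was enabled at startup):
```
// Startup (shell): mongod --setParameter watchdogPeriodSeconds=60
// 60 is the current floor this idea proposes to lower.
db.adminCommand({ setParameter: 1, watchdogPeriodSeconds: 120 });
```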
5 votes -
Kafka audit event streaming
Provide Kafka Topic as a write target for database auditing and database message logging.
https://docs.mongodb.com/manual/core/auditing/
Auditing is currently limited to a local and editable JSON/BSON file or the system console log.
Syslog is not recommended by MongoDB: "The syslog message limit can result in the truncation of the audit messages. The auditing system will neither detect the truncation nor error upon its occurrence."
5 votes -
Collection Comments
I would like the ability to attach comments to a collection so that other people using the data can get some understanding of its context, or important Readme/FAQ information that I need to share.
5 votes -
Include the _ids of existing documents in BulkWriteResult when performing upserts
When performing a bulk operation, it is possible to obtain the _ids of upserted documents via BulkWriteResult. For example:
```
db.getCollection("test").find({})
db.test.drop()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey"});
bulk.execute();
```
The BulkWriteResult contains the upserted _id:
```
BulkWriteResult({
  "writeErrors" : [ ],
  "writeConcernErrors" : [ ],
  "nInserted" : 0,
  "nUpserted" : 1,
  "nMatched" : 0,
  "nModified" : 0,
  "nRemoved" : 0,
  "upserted" : [
    {
      "index" : 0,
      "_id" : ObjectId("5ec77b5cc4a955ce03a4cd2e")
    }
  ]
})
```
However, when a document already exists, the _id is not returned:
```
db.test.find()
var bulk = db.test.initializeUnorderedBulkOp();
bulk.find({name: "huey"}).upsert().updateOne({name: "huey", outfit: "red"});
bulk.find({name: "luey"}).upsert().updateOne({name: "luey", outfit:…
```
5 votes -
Maintain database ID/Password profile inside database
The requirement is to maintain database IDs/passwords inside the database with standard details (last login date, password change date) and password controls: failed login attempts, password lifetime, password complexity, password lock time, maximum password reuse, etc.
4 votes -
Collection which stores last login date_time for the users
Could you please store the last login date_time for users (existing in either the admin database or the $external database) in a collection in the admin database of that cluster, or in the opsmanager database which manages the clusters?
My requirement is to find users who haven't logged in for 60 days so that their roles can be revoked, and ultimately to delete users who have had no roles attached for a fixed period of time.
I do understand you store the login details in audit logs. But that would be a tedious process at…
4 votes -
Extend schema validation to be able to enforce referential integrity between collections
Where a relational database uses two tables to store a 1:many "parent-child" relationship between entities, MongoDB mostly stores the child documents in an array field within the parent document. This automatically ensures referential integrity in that:
- a child document cannot be inserted or updated to refer to a non-existent parent, and
- a parent document cannot be deleted such that it leaves "orphaned" child documents.
However, there are situations where the number and/or size of the child documents makes embedding them all in their parent unworkable, due to the 16-megabyte document size limit if…
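A sketch of what such an extension might look like; the `$refIntegrity` keyword and all names here are invented for illustration:
```
// Hypothetical: nothing like $refIntegrity exists in schema validation.
db.createCollection("orders", {
  validator: {
    $jsonSchema: { bsonType: "object", required: ["customerId"] },
    // Imagined cross-collection check: reject a write whose customerId
    // has no matching _id in the customers collection.
    $refIntegrity: {
      field: "customerId",
      references: { collection: "customers", field: "_id" }
    }
  }
});
```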
4 votes -
Budget limit for serverless pay as you go mode
I was looking at the serverless pay-as-you-go option for my DB so I could have continuous backup and snapshots, but I found it too risky. Currently, the only protection a user has is alerts when RPUs go over a certain budget threshold. I would like to be able to set a budget limit that would prevent me from going over a pre-set daily budget. If you were hit with a DoS or some other brute-force attack, you could rack up lots of traffic and get an unexpected bill without such a limit.
4 votes