Performance problem with compound index using doubles, with range filtering
It looks like there may be a problem when using doubles in a compound index and applying range filters against all of the indexed fields in a query.
The problem we see from the explain plans is that far more keys are being examined than there should be, and this leads to poor performance.
If we switch to using a similar setup but with integers instead, we don’t see the problem.
See the second part of this ticket for full details:
https://support.mongodb.com/case/00659614
We have a document that looks like this –
{
    "id" : ObjectId("5e2ab24eca314f10b486d827"),
    "attributes" : [
        {
            "attributeCode" : "attributesitemsGeometry",
            "boundingBox" : {
                "lonMin" : -1.93181854783035,
                "latMin" : 52.1718902305398,
                "lonMax" : -1.9244498979136,
                "latMax" : 52.1749938832085
            }
        }
    ]
}
With a compound index like this –
{
    "v" : 2,
    "key" : {
        "attributes.boundingBox.lonMin" : 1,
        "attributes.boundingBox.lonMax" : 1,
        "attributes.boundingBox.latMin" : 1,
        "attributes.boundingBox.latMax" : 1
    },
    "name" : "itemVersions_designCode_collection_attributeCode_boundingBox_date",
    "ns" : "customerBlah.itemVersions"
}
And we make queries with range filters like this –
"attributes" : {
    "$elemMatch" : {
        "$and" : [
            { "boundingBox.latMin" : { "$lte" : 52.0751028974155 } },
            { "boundingBox.lonMin" : { "$lte" : -1.92557202436271 } },
            { "boundingBox.latMax" : { "$gte" : 52.0729000493818 } },
            { "boundingBox.lonMax" : { "$gte" : -1.92931078813729 } }
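For intuition about why keysExamined can far exceed nReturned with this shape of query: in a compound index only a contiguous prefix of the key pattern can produce tight scan bounds, and once the leading field is range-filtered, the range predicates on the trailing fields are applied as filters on each visited key rather than narrowing the scan. The sketch below (plain Python with fabricated data, an illustration and not MongoDB internals) simulates that behaviour –

```python
# Hypothetical simulation of a compound-index scan where every field is
# range-filtered. Only the leading field's bound can seek the B-tree;
# the trailing predicates are per-key filters, so keysExamined >> nReturned.
import bisect
import random

random.seed(42)

# Fabricated index keys (lonMin, lonMax, latMin, latMax), stored sorted
# as a compound index would store them.
keys = sorted(
    (random.uniform(-3, 0), random.uniform(-3, 0),
     random.uniform(51, 53), random.uniform(51, 53))
    for _ in range(10_000)
)

def matches(k):
    """Full predicate, modelled on the range filters in the ticket."""
    lon_min, lon_max, lat_min, lat_max = k
    return (lon_min <= -1.92557202436271 and lon_max >= -1.92931078813729
            and lat_min <= 52.0751028974155 and lat_max >= 52.0729000493818)

# The scan can only be bounded by the leading field (lonMin <= bound);
# every key from the start of the index up to that bound is examined.
end = bisect.bisect_right(
    keys, (-1.92557202436271, float('inf'), float('inf'), float('inf')))
examined = keys[:end]

# The remaining range predicates only filter the examined keys.
returned = [k for k in examined if matches(k)]

print(f"keysExamined={len(examined)} nReturned={len(returned)}")
```

In this toy model the gap between examined and returned keys grows with how selective the trailing predicates are, which mirrors the symptom in the explain plans above.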