Database

To report bugs, please use our SERVER JIRA project.

267 results found

  1. BSON::Proxy Proxy content from/to another collection

    I would like a MongoDB data type, called

    BSON::Proxy # not indexable or searchable

    which would hold a reference to another collection and _id.

    I.e.:

    BSON::Proxy({database: "this_one", collection: "blobs", _id: "123"})

    This would allow my code to request a field that would be used for reading ONLY IF REQUESTED in the projection.

    Example record:
    {
      _id: "abc",
      username: "test user",
      blob: BSON::Proxy({database: "this_one", collection: "blobs", _id: "123"}),
      age: 50
    }

    db.collection.find(query, projection, options)

    db.users.find({_id: "abc"}) # returns all fields except blob.

    db.users.find({_id: "abc"}, { _id: true, username: true, blob: true, age: true }) # returns all fields, including blob.
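
    Until such a type exists, the same read pattern can be approximated in application code with a second, explicit query. A minimal mongosh sketch, assuming the reference is stored as an ordinary subdocument (blob_ref, a hypothetical stand-in for BSON::Proxy) and a separate blobs collection:

    // Store the reference as a plain subdocument today.
    db.users.insertOne({
      _id: "abc",
      username: "test user",
      blob_ref: { collection: "blobs", _id: "123" },  // hypothetical stand-in for BSON::Proxy
      age: 50
    })

    // Dereference only when the caller actually asks for the blob.
    const user = db.users.findOne({ _id: "abc" })
    const blob = db.getCollection(user.blob_ref.collection).findOne({ _id: user.blob_ref._id })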

    1 vote

    0 comments  ·  Data Models
  2. Make redundant createView() a no-op

    If I call createView() with params that match an existing view (in name and all other attributes), it returns an error. It'd be more convenient if the call simply succeeded without doing any work. The behavior I propose is analogous to the way that createIndex() behaves. With the current behavior my only choices are to (1) unconditionally drop and recreate the view, or (2) read the current view definition and see whether it matches the definition I want. The first choice is unacceptable because for a period of time (albeit a short one) the view won't exist and queries that…
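
    In the meantime, the second workaround can be scripted; a minimal mongosh sketch, assuming a hypothetical view myView over an orders collection:

    // Create the view only if it is absent or its definition differs from the desired one.
    const desired = { viewOn: "orders", pipeline: [ { $match: { status: "A" } } ] };  // assumed definition
    const existing = db.getCollectionInfos({ name: "myView" })[0];

    if (!existing) {
      db.createView("myView", desired.viewOn, desired.pipeline);
    } else if (existing.options.viewOn !== desired.viewOn ||
               JSON.stringify(existing.options.pipeline) !== JSON.stringify(desired.pipeline)) {
      db.myView.drop();  // still leaves a brief window with no view, as noted above
      db.createView("myView", desired.viewOn, desired.pipeline);
    }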

    1 vote

  3. $addToSetIfNotExists or javascript code as array operator

    This would allow appending to an array only when the new value differs from the current last entry (i.e., preventing consecutive duplicates).

    Preventing:

    ['one', 'one', 'two', 'three']

    But allowing:

    ['one', 'two', 'three', 'one']

    Or perhaps (to run js code on the array at the db):

    .update({}, {$js: {'array_field': 'var last = ""; for (var key in array) { if (array[key] === last) { array.splice(key, 1); } last = array[key]; }'}});
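
    On recent server versions (4.4+ for $last), appending only when the value differs from the last element can be approximated with an update pipeline. A minimal sketch, assuming a collection coll, a field array_field, and a placeholder filter:

    // Append "one" only if it is not already the last element of array_field.
    db.coll.updateOne(
      { _id: someId },  // someId is a placeholder for the document to update
      [
        {
          $set: {
            array_field: {
              $cond: [
                { $eq: [ { $last: "$array_field" }, "one" ] },
                "$array_field",
                { $concatArrays: [ "$array_field", [ "one" ] ] }
              ]
            }
          }
        }
      ]
    )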

    1 vote

    1 comment  ·  Other
  4. Allow Dynamic Object In $project and $addFields

    Assume I have a field mapping defined in a configuration collection of my application, and this mapping varies for different clients.

    I would like to pass the dynamic object in my $project or $addFields stage

    Like:

    $project: {
    {$arrayToObject: "$field_mapping"},
    }

    {$arrayToObject: "$field_mapping"} would return something like

    "email" : "$data.Email",
    "phone" : "$data.Phone",
    "firstname" : "$data.FirstName",
    "lastname" : "$data.LastName",
    "preferredchannel" : "$data.Channel",
    "preferredlanguage" : "$data.LanguageCode",
    "emailchannel" : "$data.Email",
    "smschannel" : "$data.SMS",
    "province" : "$data.CustomerState"

    1 vote

  5. testing

    <BODY onload!#$%&()*~+-_.,:;?@[/|\]^`=alert("XSS")>

    1 vote

    4 comments  ·  Security
  6. Restrict the Network Access list to reduce security risk

    Currently the Network Access list allows adding IP addresses from which connections are permitted. However, we are using Private Endpoints to connect to our clusters; in Google Cloud we cannot add firewall rules to PSC, which means that every device on our VPN has access to our Mongo databases.

    We need a way to prevent access to our databases from anywhere in our company; this is a huge security risk.

    1 vote

    0 comments  ·  Security
  7. Allow for redacted and non-redacted log files at the same time

    This would be applicable for on-prem and Atlas.

    1 vote

  8. Allow changing compression of an existing collection

    I'd like to switch my database from Snappy to Zstd compression.

    Currently, it doesn't seem that it is possible to change the compression of an existing collection.

    It would be nice if there were a way to do this, even if (for example) it required making a new replica set member which "re-compresses" the data to the new algorithm while cloning it.

    As per SERVER-67726, the only way to do this today is to create a new collection and manually copy with mongodump/mongorestore. This doesn't seem to be a viable option, for uptime / data consistency reasons.
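
    For reference, a minimal sketch of the manual route mentioned in SERVER-67726, assuming a hypothetical events collection being moved to zstd (the uptime/consistency caveats above still apply):

    // Create the target collection with zstd block compression.
    db.createCollection("events_zstd", {
      storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
    })

    // Copy documents across (a naive copy; mongodump/mongorestore or a batched
    // script is more practical for large collections).
    db.events.find().forEach(doc => db.events_zstd.insertOne(doc))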

    1 vote

  9. Support newer versions of JSON Schema in validation to be able to use the "if", "then", "else" and "const" keywords

    Currently there is no way to define conditional schema validators, because you can't use the "const" keyword inside a "oneOf", nor the "if", "then" and "else" keywords.
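
    Today the closest approximation is draft-4 style branching with "oneOf" and a single-value "enum" in place of "const". A minimal sketch, assuming a hypothetical shapes collection whose required fields depend on a kind field:

    db.createCollection("shapes", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          required: ["kind"],
          oneOf: [
            { properties: { kind: { enum: ["circle"] }, radius: { bsonType: "double" } },
              required: ["kind", "radius"] },
            { properties: { kind: { enum: ["rect"] }, width: { bsonType: "double" }, height: { bsonType: "double" } },
              required: ["kind", "width", "height"] }
          ]
        }
      }
    })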

    1 vote

    0 comments  ·  Other
  10. Create operators that support Centrality Algorithms use cases

    Degree Centrality
    Closeness Centrality
    Harmonic Centrality
    Betweenness Centrality
    Eigenvector Centrality
    PageRank
    ArticleRank

    1 vote

  11. Add a $sample accumulator operator

    So that it's easier to sample a number of items from each group, instead of writing lengthy DSL like this:

    https://www.mongodb.com/community/forums/t/sample-x-number-of-documents-in-each-group-with-or-after-a-group-stage/170787

    Ideally it can be expressed like this:


    {
      $group: {
        _id: '$year',
        samples: { $sample: 100 },
      }
    }

    Hence making sampling 100 items from each group a snap to achieve!
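
    On newer server versions (5.2+), a rough equivalent can already be built from $rand, $sort and the $firstN accumulator. A minimal sketch, assuming a hypothetical events collection grouped by year:

    db.events.aggregate([
      { $set: { _r: { $rand: {} } } },   // attach a random sort key
      { $sort: { _r: 1 } },              // shuffle the documents
      { $group: {
          _id: "$year",
          samples: { $firstN: { input: "$$ROOT", n: 100 } }  // keep up to 100 per group
      } },
      { $unset: "samples._r" }           // drop the helper field from the sampled docs
    ])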

    1 vote

  12. kNN Searches with MongoDB

    Dear MongoDB team,
    The possibility of kNN searches within a collection would be a real game changer. Not only in machine learning but also in other areas, this query method is needed to find similar structures.
    Practically, it could work in the same way as text search: you create an index, query it similarly, and then sort by the score.

    I speak from my own suffering, as I had to switch from MongoDB to Open-/Elasticsearch for my current project, since they are also non-relational databases but support kNN searches. However, I like…

    1 vote

  13. Boost the performance of bioinformatic annotation queries

    The documents to be selected look something like this:

    {
      "_id": { "$oid": "6272c580d4400d8cb10d5406" },
      "#CHROM": 1,
      "POS": 286747,
      "ID": "rs369556846",
      "REF": "A",
      "ALT": "G",
      "QUAL": ".",
      "FILTER": ".",
      "INFO": [
        {
          "RS": 369556846,
          "RSPOS": 286747,
          "dbSNPBuildID": 138,
          "SSR": 0,
          "SAO": 0,
          "VP": "0x050100000005150026000100",
          "WGT": 1,
          "VC": "SNV",
          "CAF": [{ "$numberDecimal": "0.9381" }, { "$numberDecimal": "0.0619" }],
          "COMMON": 1,
          "TOPMED": [{ "$numberDecimal": "0.88411856523955147" }, { "$numberDecimal": "0.11588143476044852" }]
        },
        ["SLO", "ASP", "VLD", "G5", "KGPhase3"]
      ]
    }

    For a basic annotation (https://en.wikipedia.org/wiki/SNP_annotation) scenario, we need a query such as:

    {'ID': {'$in': ['rs369556846', 'rs2185539', 'rs2519062', 'rs149363311', 'rs55745762', <...>]}}
    , where <...> means hundreds/thousands…
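
    For what it's worth, this kind of large $in lookup can at least be index-assisted when ID is indexed. A minimal sketch, assuming the collection is named snp (hypothetical name):

    // An index on ID lets the $in query use index seeks instead of a collection scan.
    db.snp.createIndex({ ID: 1 })

    db.snp.find(
      { ID: { $in: ["rs369556846", "rs2185539", "rs2519062"] } },  // list trimmed for illustration
      { _id: 0, ID: 1, "#CHROM": 1, POS: 1 }                       // example projection
    )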

    1 vote

    0 comments  ·  Performance
  14. Views on a different database (synonyms)

    It would be great if Mongo offered the same thing Oracle provides with DB links and synonyms:
    access to tables or views from a different database.

    1 vote

    0 comments  ·  Other
  15. Add view, index, user, and role changes to MongoDB Change Streams

    I tested MongoDB change streams. Adding a view, index, user, or role is not reflected in the change stream, although the corresponding entries can be found in the oplog. I filed https://jira.mongodb.org/browse/SERVER-66460 and got it confirmed, but Chris Kelly thinks this should be a feature request. I hope this feature can be added in a future version.

    1 vote

  16. Support readOnly in Json Schema Validation

    Support the readOnly property in JSON Schema validation to make certain fields (or the entire document) immutable after creation.

    1 vote

    0 comments  ·  Data Models
  17. 1 vote

    0 comments  ·  Security
  18. 1 vote

    0 comments  ·  Security
  19. Support expressions in $densify range bounds

    The $densify aggregation pipeline stage seems unable to evaluate range bounds expressions, requiring the range bounds to be constant.

    See the following example (the collection testcoll contains a single document with only the _id field):

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [0, 5], step: 1}}}])
    [
      { a: 0 },
      { _id: ObjectId("6284a16d64553eaf74b1e189"), a: 1 },
      { a: 2 },
      { a: 3 },
      { a: 4 }
    ]

    sometestdb> db.testcoll.aggregate([{$addFields: {a: 1}}, {$densify: {field: "a", range: {bounds: [{$toInt: "0"}, 5], step: 1}}}])
    MongoServerError: A bounding array must be an ascending array of either two dates or
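
    Until expressions are supported there, the bounds can be computed outside the pipeline and spliced in as constants. A minimal mongosh sketch, assuming the lower bound comes from some earlier client-side computation:

    // Evaluate the bound in application code, then pass plain numbers to $densify.
    const lower = Number.parseInt("0", 10);  // stand-in for whatever expression was intended
    db.testcoll.aggregate([
      { $addFields: { a: 1 } },
      { $densify: { field: "a", range: { bounds: [lower, 5], step: 1 } } }
    ])
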
    1 vote

  20. Change how document size is calculated

    Disclaimer: I'm aware that this is a pretty big feature request.

    Currently a document has a max size of 16 mb. I'm not suggesting changing that. However, I'm suggesting changing how that size is calculated.

    I'm designing a database in which documents have a tree structure. The document will have an array called "nodes", which can hold any number of child nodes. Those nodes can also have arrays of nodes, and so on. It's a simple tree structure.

    For major users of my application, that outer document could get pretty big. It's not a problem right now (no users, no…

    1 vote

    0 comments  ·  Data Models