Connectors (BI, Kafka, Spark)

41 results found

  1. Mongo Spark Connector Option to refresh the Schema

    This relates to the support case we raised: https://support.mongodb.com/case/01352011
    In the current Spark connector, setting the option "stream.publish.full.document.only" to "true" enables automatic schema inference: no explicit schema needs to be passed; the connector infers the schema from the first document it streams and applies that schema to all future documents from the collection.
    The issue is that when new fields are added to the source collection, the stream does not pick up the change and keeps using the old schema.
    We should either design…

    4 votes

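    For context, a minimal PySpark sketch of the behaviour described above, assuming the v10 Spark connector; the connection details are placeholders, and the exact option keys (including the full name of the inference option, taken here as change.stream.publish.full.document.only) should be checked against the connector docs:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mongo-change-stream").getOrCreate()

    # With full-document-only publishing enabled, the connector infers the schema
    # from the first streamed document and keeps applying it to every later
    # document -- the behaviour this idea wants to be refreshable.
    stream_df = (
        spark.readStream
        .format("mongodb")
        .option("spark.mongodb.connection.uri", "mongodb+srv://<cluster-uri>")  # placeholder
        .option("spark.mongodb.database", "mydb")        # assumed names
        .option("spark.mongodb.collection", "mycoll")
        .option("change.stream.publish.full.document.only", "true")
        .load()
    )

    # Today, new fields only show up after restarting the query, or by skipping
    # inference entirely and passing an explicit schema via .schema(...).
    query = stream_df.writeStream.format("console").start()
    ```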
  2. Entity Framework (Preview 2) release date

    Hello,

    Is there any ETA for Preview 2 (NuGet)?

    Thank you

    1 vote

    2 comments  ·  Other
  3. Kafka Sink Connector ObjectId Support

    The sink connector should support ObjectIds for the document.id.strategy. In other words, if an ObjectId hex string is provided for _id, the sink connector should be able to convert it to an ObjectId. I've scoured the documentation and the rest of the internet and have not been able to determine how to do this. I see an open PR on the GitHub repo, but there has been no movement in almost 3 years. It's astonishing that I cannot use an ObjectId...

    1 vote

    0 comments  ·  Kafka Connector
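
    For context, a hedged sketch of how the sink connector is registered with document.id.strategy today via the Kafka Connect REST API; the URLs, names, and topic are placeholders, and ProvidedInValueStrategy only marks where an ObjectId-aware strategy would slot in:

    ```python
    import json
    import requests  # assumes a reachable Kafka Connect REST endpoint

    connector = {
        "name": "mongo-sink-example",  # hypothetical connector name
        "config": {
            "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
            "connection.uri": "mongodb://localhost:27017",  # placeholder
            "database": "mydb",
            "collection": "mycoll",
            "topics": "my-topic",
            # Takes _id from the message value; an ObjectId hex string currently
            # stays a plain string, which is what this idea wants changed.
            "document.id.strategy":
                "com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInValueStrategy",
        },
    }

    resp = requests.post(
        "http://localhost:8083/connectors",  # assumed Connect REST URL
        headers={"Content-Type": "application/json"},
        data=json.dumps(connector),
    )
    resp.raise_for_status()
    ```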
  4. Retry/reconnect mechanism for MongoDB source connectors on MongoTimeoutException

    The MongoDB Kafka source and sink connectors stream data seamlessly until an error occurs in the source connector.

    When an error does occur, the connectors do not recover from timeout exceptions and have to be restarted or re-created.

    Exception:
    com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches

    3 votes

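    Until the connector retries on its own, a small watchdog against the Kafka Connect REST API can restart failed tasks; a minimal sketch, assuming Connect listens on localhost:8083 and using a placeholder connector name:

    ```python
    import time
    import requests

    CONNECT_URL = "http://localhost:8083"   # assumed Connect REST endpoint
    CONNECTOR = "mongo-source-example"      # hypothetical connector name

    while True:
        status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status").json()
        for task in status.get("tasks", []):
            if task.get("state") == "FAILED":
                # Restart just the failed task, e.g. after a MongoTimeoutException.
                requests.post(
                    f"{CONNECT_URL}/connectors/{CONNECTOR}/tasks/{task['id']}/restart"
                )
        time.sleep(60)
    ```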
  5. Extend the support for the MongoDB ODBC driver to RHEL8

    At present there is only support for RHEL7, which is effectively out of date. This would support more migrations to RHEL8.

    1 vote

    0 comments  ·  BI Connector
  6. Power BI

    The new Certified MongoDB Atlas Connector for Power BI seems to work great for Power BI Desktop, but does not work in Power BI Service as a dataflow and gives the error "An exception occurred: The given data source kind is not supported. Data source kind: MongoDBAtlasODBC. "

    2 votes

  7. We have a MongoDB sharded cluster hosting thousands of collections. The schema file for the BI Connector is quite large. Every time we run a new qu

    The current BI Connector behaviour is to run a listCollections command after an existing connection to the BI Connector has been idle for 1-2 minutes. For databases with thousands of collections, this takes several minutes to complete. After the connection to the BI Connector has been idle for a few minutes, the connection from the BI Connector to MongoDB is dropped, and when a new connection is created, listCollections is called again. This overhead is unnecessary.
    Request: provide an option to keep the connection alive, or some form of connection pooling mechanism, so that listCollections is not called over and…

    1 vote

    0 comments  ·  BI Connector
  8. CSFLE support for Kafka connector

    With MongoDB CSFLE, data needs to be passed downstream via the Kafka connector and consumed by a downstream data lake for further processing. Because encryption and decryption happen at the driver level, data pushed into Kafka remains encrypted, and the same encrypted data lands in the data lake.

    If the Kafka connector supported CSFLE encryption/decryption, this would work seamlessly.

    4 votes

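    To illustrate why the data stays encrypted end-to-end today: decryption only happens inside a driver configured for CSFLE. A minimal pymongo sketch of such a decrypting client (requires pymongo[encryption]; the local key, key vault namespace, and URI are placeholder assumptions) -- the Kafka connector has no equivalent hook, which is the gap this idea describes:

    ```python
    import os
    from pymongo import MongoClient
    from pymongo.encryption_options import AutoEncryptionOpts

    # Placeholder 96-byte local master key; in practice this comes from a KMS.
    kms_providers = {"local": {"key": os.urandom(96)}}

    opts = AutoEncryptionOpts(
        kms_providers=kms_providers,
        key_vault_namespace="encryption.__keyVault",  # assumed key vault namespace
        bypass_auto_encryption=True,                  # decrypt-only client
    )

    # Reads through this client return decrypted fields; anything reading the
    # collection without such a client (the connector today) sees ciphertext.
    client = MongoClient("mongodb://localhost:27017", auto_encryption_opts=opts)
    doc = client["mydb"]["mycoll"].find_one()
    ```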
  9. Support regex in topic.namespace.map (Kafka source connector)

    Currently, topic.namespace.map supports a wildcard (*).
    It would be helpful if it also supported regular expressions.
    Or, since that could be a breaking change, a new mapping option that supports a regex format could be introduced instead.

    A workaround is to use a custom mapper, but with a built-in version the wheel could be invented once and reused by many people.

    1 vote

    0 comments  ·  Kafka Connector
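
    For reference, a sketch of how topic.namespace.map is configured today with the wildcard, alongside the kind of regex-based option this idea imagines; the topic.namespace.map.regex key is purely hypothetical, as are the database and topic names:

    ```python
    import json

    source_config = {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://localhost:27017",  # placeholder
        # Current behaviour: exact namespaces plus a "*" wildcard fallback.
        "topic.namespace.map": json.dumps({
            "mydb.orders": "orders-topic",
            "*": "default-topic",
        }),
        # Hypothetical addition suggested by this idea (does not exist today):
        # "topic.namespace.map.regex": json.dumps({
        #     r"mydb\.orders_.*": "orders-topic",
        # }),
    }
    ```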
  10. Enable CSFLE decryption in BI Connector

    Enable CSFLE decryption in the BI Connector, so that fields stored encrypted in MongoDB are exposed decrypted over ODBC.
    CSFLE is described here: https://www.mongodb.com/docs/manual/core/security-client-side-encryption/.
    For example, CSFLE could be an option enabled at BI Connector launch that takes a KMS link or keyring when enabled.

    11 votes

  11. Dynamic Topic Mapping on the basis of message content

    Currently the MongoDB source connector only supports dynamic topic mapping on the basis of collections/databases. Can this be extended to support routing based on message content?

    Why is it important?
    We currently have a connector set up for a very large collection, but since we wanted to route data for different sections to different topics, we had to set up a separate connector for each one and define the filter in the pipeline option (it runs as an aggregation on the change stream to filter out the relevant data). This obviously created performance concerns, as a large number of collection-scan queries started to run on these…

    1 vote

    0 comments  ·  Kafka Connector
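
    For context, this is roughly what the per-connector pipeline workaround described above looks like; the field name fullDocument.section and the other names are placeholder assumptions:

    ```python
    import json

    # One connector per routed subset: each instance filters the change stream
    # with its own $match, which is the duplication this idea wants to avoid.
    source_config = {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://localhost:27017",          # placeholder
        "database": "mydb",
        "collection": "big_collection",
        "pipeline": json.dumps([
            {"$match": {"fullDocument.section": "section-a"}},  # assumed field
        ]),
        "topic.prefix": "section-a",
    }
    ```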
  12. Make database clusters available to Grafana as datasource

    We recently tried to use a time series collection to report some metrics and wanted to use Atlas database clusters as a data source in Grafana. There is a paid plugin from Grafana itself, but unfortunately it is only usable in conjunction with Grafana Cloud or Enterprise, neither of which we use, as we host our Grafana instances ourselves.

    It would be really nice to have the possibility of using Atlas DB clusters as a data source in Grafana.

    2 votes

    1 comment  ·  Other
  13. Build a version of the BI Connector that is compatible with Alpine

    We deploy the BI Connector within a container that runs Alpine (so it's small and clean).
    However, Alpine is bundled with musl rather than glibc, which means the BI Connector cannot run in that container.

    We have managed to work around this by using gcompat to layer in glibc, but that is not ideal for us, as it adds complexity and rather defeats the purpose of using Alpine.

    You build several versions of the BI Connector binary; are you able to build an Alpine-compatible version as well?

    1 vote

    0 comments  ·  BI Connector
  14. Change stream total ordering within a transaction

    Right now, a change stream flattens a transaction into individual operations, i.e., if we do two operations within a transaction, the change stream generates two events.

    However, there is no total ordering of the events that happen within the transaction. This matters if we do two updates on the same object within a transaction: since both operations share the same optime (clusterTime), there is no way to determine the order of these two events.

    It would be helpful if an ordering number could be provided on every event from the same transaction for this purpose.

    1 vote

    0 comments  ·  Kafka Connector
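
    A small pymongo sketch of the ambiguity: two updates to the same document inside one transaction produce two change events carrying the same clusterTime, so there is nothing to order them by (the URI and namespace are placeholders; txnNumber appears only on transactional events):

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    coll = client["mydb"]["mycoll"]

    with coll.watch() as stream:
        for event in stream:
            # Events from the same transaction share clusterTime (and lsid/txnNumber),
            # so their relative order inside the transaction cannot be recovered.
            print(event["operationType"],
                  event.get("clusterTime"),
                  event.get("txnNumber"))
    ```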
  15. Two-tiered model for authentication

    The BI Connector facilitates large numbers (1000s) of "personal data marts" by acting as a controlled go-between for end-user tools like Tableau and a "main" data collection. It is not practical, or even desirable, to have pass-through authentication of all these users to the backend MongoDB database. Instead, the BI Connector could use a special collection in a MongoDB instance (not necessarily the target!) to hold SHA(password), name, and a YAML equivalent. When started, mongosqld would verify command-line inputs of SHA(password), name, context, etc., and if they check out, would start an appropriately password-protected endpoint at 3307 with the config…

    1 vote

  16. Kafka connector to support Kafka Schema Registry

    One of the issues our team has been discussing is that when getting data from MongoDB via a Kafka connector and sending it on to Kafka, we try to enforce schemas in Kafka, but that schema is not enforced on the MongoDB data. This means developers need to make sure they let the Data Engineering team know when their schema evolves so we can accommodate the change in the Avro schema. Our thought is to potentially have the developers use the Confluent Schema Registry to serialize their data to Avro prior to writing it to MongoDB. This would…

    2 votes

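    For context, a sketch of the Schema Registry wiring that exists on the Kafka side today: the source connector can emit values with a declared schema (output.format.value / output.schema.value) serialized by the Confluent Avro converter, but nothing enforces that schema back on the data written to MongoDB, which is the gap described above. The URLs, names, and the example schema are placeholder assumptions:

    ```python
    import json

    source_config = {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://localhost:27017",  # placeholder
        "database": "mydb",
        "collection": "mycoll",
        # Emit values with a declared schema instead of raw JSON strings.
        "output.format.value": "schema",
        "output.schema.value": json.dumps({             # assumed example schema
            "type": "record",
            "name": "ChangeStreamValue",
            "fields": [{"name": "fullDocument", "type": "string"}],
        }),
        # Serialize through the Confluent Avro converter backed by Schema Registry.
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://localhost:8081",
    }
    ```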
  17. Ignore heartbeats-mongodb topic by default

    As per KAFKA-208, SMTs can't be applied to the heartbeats-mongodb topic. Users should not have to configure each connector to ignore this topic. Please either ignore this topic by default or provide a command-line switch so it can be ignored.

    4 votes

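    Until the heartbeat topic is skipped by default, Kafka Connect's predicate support (TopicNameMatches plus negate) can keep an SMT off that topic; a minimal sketch, with the transform shown purely as an example and the topic pattern taken from the idea above:

    ```python
    connector_config = {
        # ... connector.class, connection.uri, and the rest of the connector config ...
        "transforms": "extractDoc",
        "transforms.extractDoc.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
        "transforms.extractDoc.field": "fullDocument",  # example SMT only
        # Skip the SMT for the heartbeat topic instead of hand-configuring
        # every connector to ignore it.
        "transforms.extractDoc.predicate": "isHeartbeat",
        "transforms.extractDoc.negate": "true",
        "predicates": "isHeartbeat",
        "predicates.isHeartbeat.type":
            "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
        "predicates.isHeartbeat.pattern": "heartbeats-mongodb",  # topic named above
    }
    ```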
  18. Allow per-field length declaration for varchar and char types in BI Connector

    Usually in defining schemas for SQL databases, you can specify a max length for the size of a char or varchar column. It'd be nice to have the ability to do that in a schema that's passed to the mongosqld BI Connector process.

    The only option now is to specify a max varchar size that applies to all varchar fields. It'd be nice to be able to define this on a per-field basis.

    This is an issue for a customer I'm working with because of the way their BI tool allocates memory for temporary objects created when bridging the…

    1 vote

  19. Add X.509 as an Authentication Method

    The current authentication methods either lack the ability to control password complexity requirements (SCRAM-SHA-*) or require exposing on-premise LDAP servers to the public internet. Adding X.509 is worthwhile given that MongoDB Atlas already supports X.509 as a user authentication mechanism.

    6 votes

  20. built-in CDC to Kafka

    Hi,

    it is still hard to set up a MongoDB oplog CDC connection to Kafka to publish changes from e.g. a microservice-local MongoDB. You typically have to use Kafka Connect and either the official MongoDB Atlas Connector, or the Debezium Open Source Connector.

    One of the databases competing with MongoDB, CockroachDB, has a built-in feature to publish "change feeds" to Kafka (see https://www.cockroachlabs.com/docs/stable/stream-data-out-of-cockroachdb-using-changefeeds.html).

    I'd love to see a similar feature for MongoDB, since it would allow us to keep MongoDB and Kafka in sync much more easily and conveniently - without having to care about yet another (probably centralized)…

    1 vote
