The MongoDB Kafka source and sink connectors stream data seamlessly until an error occurs in the Kafka source connector.
When an error occurs, the connector does not recover from timeout exceptions and has to be reposted/restarted:
com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches
3 votes
At present there is only support for RHEL7, which is effectively out of date. This would support more migrations to RHEL8.
1 vote
The new Certified MongoDB Atlas Connector for Power BI seems to work great for Power BI Desktop, but does not work in Power BI Service as a dataflow and gives the error "An exception occurred: The given data source kind is not supported. Data source kind: MongoDBAtlasODBC."
2 votes
We have a MongoDB sharded cluster hosting thousands of collections. The schema file for the BI Connector is quite large. Every time we run a new qu
Current BI Connector behaviour is that it will issue a listCollections command after an existing connection to BIC has been idle for 1-2 minutes. For databases with thousands of collections, this takes several minutes to complete. After the connection to BIC has been idle for a few minutes, the connection from BIC to MongoDB is dropped, and when a new connection is created, the listCollections command is called again. This overhead is unnecessary.
Request to have an option to keep the connection alive, or some form of connection pooling mechanism, so that listCollections will not be called over and…
1 vote
Using MongoDB CSFLE, the data needs to be passed downstream using the Kafka connector, where it is consumed by a downstream data lake for further processing. When data is pushed into Kafka it remains encrypted, and the same encrypted data lands in the data lake, since encryption and decryption happen at the driver level.
If the Kafka connector supported CSFLE encryption/decryption, this would work seamlessly.
4 votes
topic.namespace.map supports the wildcard (*).
It would be helpful if it also supported regex.
Or, since that could be a breaking change, it would be nice if a new mapping were introduced that supports regex format.
The workaround can be to use a custom mapper.
However, if there were a built-in version, the wheel could be invented once and used by many people.
1 vote
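For context, a sketch of the existing wildcard mapping alongside the proposed regex variant; the topic.namespace.map.regex key is purely hypothetical syntax for this request, not an existing connector setting (JSON comments are simulated with a _comment key):

```json
{
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",

  "_comment1": "Existing behaviour: exact namespaces plus a * catch-all",
  "topic.namespace.map": "{\"db.orders\": \"orders-topic\", \"*\": \"catch-all\"}",

  "_comment2": "Hypothetical new setting requested here: regex-based mapping",
  "topic.namespace.map.regex": "{\"db\\\\.orders_.*\": \"orders-topic\"}"
}
```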
Enable CSFLE decryption in the BI Connector, in order to expose encrypted MongoDB fields decrypted over ODBC.
CSFLE is described here: https://www.mongodb.com/docs/manual/core/security-client-side-encryption/.
For example, CSFLE would then be an option enabled at BI Connector launch, which should come with a KMS link or keyring if enabled.
11 votes
Currently the MongoDB source connector only supports dynamic topic mapping on the basis of collections/databases. Can it be extended to support routing on the basis of message content?
Why is this important?
We currently set up a connector for a very large collection, but since we wanted to route the data for different sections to different topics, we had to set up a separate connector for each and define the filter in the pipeline section (it runs as an aggregate query on change streams to filter out the relevant data). Obviously this created performance concerns, as a large number of collscan queries started to run on these…
1 vote
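The per-connector workaround described above can be sketched as a source connector config (one such connector per section); the appdb.events namespace and the fullDocument.section field are illustrative only:

```json
{
  "name": "mongo-source-section-a",
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "connection.uri": "mongodb://mongo:27017",
  "database": "appdb",
  "collection": "events",

  "_comment": "Change stream pipeline filtering one section; runs server-side per connector",
  "pipeline": "[{\"$match\": {\"fullDocument.section\": \"A\"}}]",
  "topic.prefix": "sectionA"
}
```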
We deploy the BI Connector within a container that runs Alpine Linux (so it's small and clean).
However, Alpine is bundled with musl rather than the glibc libraries, which means the BI Connector cannot run in that container.
We have managed to get around this by using gcompat to layer in glibc, but that is not ideal for us, as it adds complexity and rather defeats the purpose of using Alpine.
You build several versions of the BI Connector binary; are you able to build an Alpine-compatible version as well?
1 vote
Right now, a change stream will flatten a transaction into individual operations; i.e., if we do two operations within a transaction, the change stream will generate two events.
However, there is no total ordering of the events that happen within the transaction. This matters if we do two updates on the same object within a transaction: since both operations share the same optime (clusterTime), there is no way to determine the ordering between these two events.
It would be helpful to provide an ordering number on every event from the same transaction for this purpose.
1 vote
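To make the ordering gap concrete, a minimal Python sketch: the event dicts below are hand-written stand-ins for change stream documents (the field names clusterTime, lsid, and txnNumber follow the change event schema, but the values are illustrative):

```python
from collections import defaultdict

def group_by_commit(events):
    """Group change events that share (clusterTime, lsid, txnNumber).

    Events in the same group came from the same transaction commit;
    the change stream currently provides no ordinal to break ties
    inside such a group.
    """
    groups = defaultdict(list)
    for ev in events:
        key = (ev["clusterTime"], ev.get("lsid"), ev.get("txnNumber"))
        groups[key].append(ev)
    return groups

# Two updates to the same document inside one transaction, plus one
# ordinary insert outside any transaction.
events = [
    {"clusterTime": 100, "lsid": "s1", "txnNumber": 7, "op": "update x=1"},
    {"clusterTime": 100, "lsid": "s1", "txnNumber": 7, "op": "update x=2"},
    {"clusterTime": 101, "lsid": None, "txnNumber": None, "op": "insert y"},
]

groups = group_by_commit(events)
ambiguous = [g for g in groups.values() if len(g) > 1]
print(len(ambiguous))  # -> 1: the transactional pair cannot be ordered by time alone
```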
We recently tried to use a time series collection to report some metrics and wanted to use Atlas database clusters as a data source in Grafana. There is a paid plugin from Grafana itself, which is unfortunately only usable in conjunction with Grafana Cloud or Enterprise, neither of which we use, as we host our Grafana instances ourselves.
It would be really nice to have the possibility to use Atlas DB clusters as a data source in Grafana.
1 vote
The BI Connector facilitates large numbers (1000s) of "personal data marts" by acting as a controlled go-between for end-user tools like Tableau and a "main" data collection. It is not practical, or even desirable, to have pass-through authentication of all these users to the backend MongoDB database. Instead, the BI Connector could use a special collection in a MongoDB instance (not necessarily the target!) to hold SHA(password), name, and a YAML equivalent. When started, mongosqld would verify command-line inputs of SHA(password), name, context, etc., and if OK, would exec an appropriately password-protected endpoint at 3307 with the config…
1 vote
One of the issues our team has discussed is that when getting data from MongoDB via a Kafka connector and sending it through to Kafka, we try to enforce schemas in Kafka, but that schema is not enforced on the MongoDB data. This means developers need to let the Data Engineering team know when their schema evolves so we can accommodate that change in the Avro schema. Our thought is to potentially have developers use the Confluent Schema Registry to serialize their data to Avro prior to writing it to MongoDB. This would…
2 votes
As per KAFKA-208, SMTs can't be applied to the heartbeats-mongodb topic. Users should not have to configure each connector to ignore this topic. Please either ignore this topic by default or provide a command-line switch so it can be ignored.
4 votes
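One possible per-connector workaround sketch, assuming Kafka Connect 2.6+ predicates (KIP-585): attach a TopicNameMatches predicate with negate=true so the SMT skips heartbeat records. The "tag" transform here is just an illustrative SMT, not anything the connector requires:

```properties
# Illustrative SMT (InsertField) applied to all records EXCEPT heartbeats
transforms=tag
transforms.tag.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.tag.static.field=source
transforms.tag.static.value=mongo
transforms.tag.predicate=isHeartbeat
transforms.tag.negate=true

# Predicate matching the heartbeat topic, so the SMT never touches it
predicates=isHeartbeat
predicates.isHeartbeat.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.isHeartbeat.pattern=heartbeats-mongodb
```

This still has to be repeated in every connector config, which is exactly the burden the request asks to remove.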
Usually when defining schemas for SQL databases, you can specify a max length for the size of a char or varchar column. It'd be nice to have the ability to do that in a schema that's passed to the mongosqld BI Connector process.
The only option now is to specify a max varchar size that applies to all varchar fields. It'd be nice to be able to define this on a per-field basis.
This is an issue for a customer I'm working with because of the way their BI tool allocates memory for temporary objects created when bridging the…
1 vote
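The request might look like this in a DRDL schema sketch; the varchar(64) per-column length is the proposed (hypothetical) syntax, since today only a global max length can be configured, and the database/table/field names are illustrative:

```yaml
schema:
- db: appdb
  tables:
  - table: users
    collection: users
    columns:
    - Name: email
      MongoType: string
      SqlName: email
      SqlType: varchar(64)   # hypothetical per-field length (this request)
    - Name: bio
      MongoType: string
      SqlName: bio
      SqlType: varchar       # falls back to the global max varchar size
```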
The current authentication methods lack the ability to control password complexity requirements (SCRAM-SHA-*) or require exposing on-premise LDAP servers to the public internet. The addition of X.509 is worthwhile given that MongoDB Atlas already supports X.509 as a user authentication mechanism.
5 votes
It is still hard to set up a MongoDB oplog CDC connection to Kafka to publish changes from, e.g., a microservice-local MongoDB. You typically have to use Kafka Connect and either the official MongoDB Atlas Connector or the Debezium open source connector.
One of the databases competing with MongoDB, CockroachDB, has a built-in feature to publish "changefeeds" to Kafka (see https://www.cockroachlabs.com/docs/stable/stream-data-out-of-cockroachdb-using-changefeeds.html).
I'd love to see a similar feature for MongoDB, since this would allow us to keep MongoDB and Kafka in sync much more easily and conveniently, without having to care about yet another (probably centralized)…
1 vote
We have the BI Connector installed/configured via Ops Manager on one of our deployments.
When we have to restart the node, the BI Connector takes a long time to start because it has to rebuild the schema.
Hence we would like to file an enhancement request for the ability to store the schema information and retrieve it again after a restart.
10 votes
Currently the BI Connector has the username and password in clear text. This presents a security problem. The BI Connector should have a way to encrypt the password, so that we store the encrypted password, which the BI Connector can decrypt and use. It can be any standard encryption/decryption algorithm, but the secret key should be generated by the person using it.
6 votes
Microsoft's Power Query Editor leverages a prepared statement for querying/filtering.
This command is not yet supported in the prepared statement protocol.
This is a feature request to support prepared statements in the mongosqld BI Connector service.
7 votes