Connectors (BI, Kafka, Spark)
-
Support Prepared Statements in BI Connector
The Power Query Editor in Microsoft Power BI uses prepared statements for querying and filtering.
Error message:
This command is not supported in the prepared statement protocol yet
This is a feature request to support prepared statements in the mongosqld BI Connector service. (A sketch of the flow in question follows this item.)
7 votes -
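For context: the BI Connector speaks the MySQL wire protocol, so the failing flow looks roughly like the following. This is the textual equivalent of the binary prepared-statement round trip Power Query issues; the table and filter are hypothetical, and this is a sketch, not mongosqld's actual internals.

```sql
-- Hypothetical table/filter. This server-side prepared-statement flow
-- is what currently triggers the "not supported" error in mongosqld:
PREPARE stmt FROM 'SELECT name, qty FROM inventory WHERE qty > ?';
SET @min_qty = 10;
EXECUTE stmt USING @min_qty;
DEALLOCATE PREPARE stmt;
```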
BI Connector - store schema information
We have the BI Connector installed/configured via Ops Manager on one of our deployments.
When we have to restart the node, the BI connector takes a long time to start as it has to rebuild the schema.
Hence we would like to file an enhancement request to be able to store the schema information and retrieve it again after a restart. (A possible workaround is sketched below.)
10 votes -
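A workaround sketch for the restart cost described above, assuming on-prem mongosqld and the mongodrdl tool (host and database names are hypothetical): export the sampled schema to a DRDL file once, then point mongosqld at that file on startup so it does not resample.

```sh
# One-time: dump the sampled schema for database "sales" to a DRDL file
# (host/db names are hypothetical).
mongodrdl --host mongodb0.example.com:27017 --db sales --out sales.drdl

# On every restart: load the saved schema instead of resampling.
mongosqld --mongo-uri mongodb0.example.com:27017 --schema sales.drdl
```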
Can't fetch data on MongoDB ODBC via BI connector
Test connections are successful, but we can't fetch data from the database servers; only the default 'information_schema' and 'mysql' databases are visible.
2 votes -
Multiple clusters in a single mongosqld configuration.
Is the limitation on supporting multiple clusters in a single mongosqld configuration still in place, or are multiple clusters now supported?
3 votes -
Get schema validation "feedback" in Kafka Mongo Sink Connector
Objective:
We want to be able to validate that data matches some requirements. We would like to perform this data validation by adding a JSON schema in MongoDB (as described here: https://docs.mongodb.com/manual/core/schema-validation/). See the validator sketch after this item.
The problem is that the current MongoDB Kafka sink connector does not implement the elements required to benefit from the features brought by this KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
So if we define such validation in MongoDB and a message has a value that does not match the definition, it would not go to the dead letter queue, and the…
3 votes -
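For reference, this is the kind of server-side validation the item above refers to; a minimal mongosh sketch with a hypothetical "orders" collection and schema:

```javascript
// Attach a $jsonSchema validator to a (hypothetical) "orders" collection.
// Writes that fail validation are rejected by the server -- which is
// exactly the error the sink connector cannot currently route to a DLQ.
db.runCommand({
  collMod: "orders",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["orderId", "amount"],
      properties: {
        orderId: { bsonType: "string" },
        amount: { bsonType: "double", minimum: 0 }
      }
    }
  },
  validationAction: "error"
})
```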
MongoDB Sink Connector CDC default handler
I would like a default CDC handler that can process data produced by the MongoDB source connector without Debezium: https://docs.mongodb.com/kafka-connector/master/kafka-sink-cdc#cdc-handler-configuration (see the configuration sketch below).
3 votes -
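Recent connector releases do ship a change-stream CDC handler along these lines. A hedged sink-configuration sketch; the connector name, topic, URI, and namespace values are hypothetical:

```json
{
  "name": "mongo-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "sourcedb.sourcecoll",
    "connection.uri": "mongodb://localhost:27017",
    "database": "targetdb",
    "collection": "targetcoll",
    "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
  }
}
```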
Deploy MongoDB BI Connector product using the MongoDB Kubernetes Operator
We would like the ability to deploy and run the MongoDB BI Connector as a container under the MongoDB Kubernetes Operator. Currently there is no support for such deployments.
2 votes -
BI Connector Atlas - View current schema creation/update status
In Atlas there is no way to see the BI Connector logs, so the request is to be able to see the current schema creation/update status or, even better, to see the log itself.
6 votes -
Dropbox
Lead with the drop that will very allow. Permit to be inside and all the time useful !!
Stay tuned.
1 vote -
Enable Write Permission In BI
We hope the BI Connector will enable write permission, because many systems, especially Windows-based ones, use ODBC.
6 votes -
Notification and alerts when BI Connector fails
When a BI Connector fails, no alerts are sent to admins; the only indication is the notification on the page stating that it will "restart in 5 minutes".
6 votes -
Enable custom DRDL upload for Atlas BI Connector
We cannot upload custom DRDL to the Atlas BI Connector. This forces us to roll our own BI Connector. (An example DRDL file is sketched below.)
25 votes -
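For readers unfamiliar with DRDL, this is the kind of file the request is about; a minimal sketch with hypothetical database and collection names:

```yaml
# Minimal DRDL mapping a (hypothetical) inventory.items collection to a
# SQL table; on-prem mongosqld loads this via its --schema flag.
schema:
- db: inventory
  tables:
  - table: items
    collection: items
    pipeline: []
    columns:
    - Name: _id
      MongoType: bson.ObjectId
      SqlName: _id
      SqlType: objectid
    - Name: qty
      MongoType: float64
      SqlName: qty
      SqlType: float
```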
Allow separate whitelist for Atlas BI Connector
Currently, whitelisting is only possible at the Project level. We would like to allow whitelisting of the BI connector instance separately from the project.
The users/IPs that connect via the BI Connector are completely different from those that connect to the actual database.
8 votes -
Kafka source connector exactly-once semantics
Added as a support case here: https://support.mongodb.com/case/00634630
When using the connector as a source, we capture change streams from the source MongoDB and stream them to a Kafka endpoint.
Imagine these are updates on financial transactions in MongoDB and they are NOT tolerant to
1) missed data and
2) duplicated data
in that order. So we need to make sure that the change streams we are observing (matching) on are delivered exactly once to the Kafka pipeline. (Blog on the same: https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/.) If exactly-once semantics are enabled, commits become transactional by default. A configuration sketch follows this item.
…
4 votes
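A configuration sketch for the request above, assuming Kafka Connect 3.3+ with KIP-618 exactly-once support for source connectors; property names follow the Kafka documentation, and the connection and namespace values are hypothetical:

```properties
# Worker config: enable exactly-once support for source connectors (KIP-618).
exactly.once.source.support=enabled

# Connector config (hypothetical values):
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
connection.uri=mongodb://localhost:27017
database=payments
collection=transactions
# Commit each poll batch in its own producer transaction:
transaction.boundary=poll
# Note: downstream consumers must read with isolation.level=read_committed.
```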