
Inserts failing silently #458

@jonscyr

Description


Describe the bug

Missing events in the target table: 3 events at offsets 7407350 - 7407352 are not present in the ClickHouse table. The connector attempted to write them (visible in the DEBUG logs), but the insert actually failed. I found the insert query in query_log for id "eed49adf-feb9-46a7-abec-e44d9bdc03c2"; it failed with MEMORY_LIMIT_EXCEEDED. Shouldn't this have made the sink connector retry, or put the events in a DLQ as it was configured to? Instead, all I could see in the connector's logs was this:

|task-0] Response Summary - Written Bytes: [7854], Written Rows: [3] - (QueryId: [eed49adf-feb9-46a7-abec-e44d9bdc03c2]) 
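
For reference, this is roughly how the failed insert can be found in system.query_log (a sketch; the query_id is the one from the response summary above):

```sql
-- Look up the insert by the QueryId the connector logged.
-- Expect a QueryStart entry followed by an exception entry
-- whose exception mentions MEMORY_LIMIT_EXCEEDED.
SELECT event_time, type, exception_code, exception
FROM system.query_log
WHERE query_id = 'eed49adf-feb9-46a7-abec-e44d9bdc03c2'
ORDER BY event_time;
```

(On ClickHouse Cloud the query may have landed on a different replica, so you may need to search across replicas, e.g. with clusterAllReplicas, to find the entry.)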

Steps to reproduce

Not sure how to reproduce; the loss happens intermittently (every one or two months, see Environment).

Expected behaviour

Failed inserts should have raised an exception and caused the records to be retried or sent to the DLQ, as configured.
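
For context, these are the kinds of error-handling properties we expected to kick in (an illustrative excerpt using the standard Kafka Connect settings, not our exact values; the full connector config is in the gist under Configuration):

```properties
# Standard Kafka Connect sink error-handling settings (illustrative values)
errors.tolerance=all
errors.retry.timeout=60000
errors.retry.delay.max.ms=10000
errors.deadletterqueue.topic.name=clickhouse-sink-dlq
errors.deadletterqueue.context.headers.enable=true
```

Since the connector logged a successful-looking response summary for a query that actually failed, none of these paths were triggered.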

Available logs

Configuration

https://gist.github.com/jonscyr/ef2f400a30a6b63a019d77b8a77f23b4

Environment

We have a half-stack setup (ClickHouse Cloud + Strimzi Kafka). We've been facing this issue where some batches of events get lost every one or two months. We have a validator script that runs every day and flags this.
Our Kafka Connect workers run with LOG_LEVEL=DEBUG.

ClickHouse server
