
Kafka Engine throwing Consumer error: Local: Required feature not supported by broker #61

@Randy-Coding

Description

Kafka deployment:

Bitnami Kafka Helm chart version kafka-32.2.12, Kafka version 4.0.0

Clickhouse deployment:

HyperDX Helm chart version hdx-oss-v2-0.6.4, ClickHouse version 24.10.2.80, which to the best of my knowledge runs librdkafka 2.8.0

Problem:

When attempting to use the Kafka engine to push messages into ClickHouse, no messages are ingested and I get the following:

<Warning> StorageKafka (syslog_test_table): sasl.kerberos.kinit.cmd configuration parameter is ignored.
<Error> StorageKafka (syslog_test_table): Consumer error: Local: Required feature not supported by broker
<Error> StorageKafka (syslog_test_table): Consumer error: Local: Required feature not supported by broker
<Error> StorageKafka (syslog_test_table): Consumer error: Local: Required feature not supported by broker
<Error> StorageKafka (syslog_test_table): Consumer error: Local: Required feature not supported by broker
<Error> StorageKafka (syslog_test_table): There were 4 messages with an error
<Error> StorageKafka (syslog_test_table): Only errors left

The values-override.yaml:

# note: I added the clickhouse resource definitions myself
clickhouse:
  resources:
    limits:
      cpu: 8
      memory: 64Gi
    requests:
      cpu: 6
      memory: 32Gi
hyperdx:
  appUrl: "https://url-to-hyperdx"
  ingress:
    enabled: true
    ingressClassName: nginx
    host: "url-to-hyperdx"

The named_collections section I placed directly into config.xml:

<!-- Named collections for Kafka integration -->
<named_collections>
    <cluster_1>
        <!-- ClickHouse Kafka engine parameters -->
        <kafka_broker_list>test-kafka-svc.[my_namespace].svc.cluster.local:9092</kafka_broker_list>
        <kafka_topic_list>clickhouse-topic</kafka_topic_list>
        <kafka_group_name>cluster_1_clickhouse_consumer</kafka_group_name>
        <kafka_format>JSONEachRow</kafka_format>

        <!-- Kafka extended configuration -->
        <kafka>
            <security_protocol>PLAINTEXT</security_protocol>
            <debug>all</debug>
        </kafka>

    </cluster_1>
</named_collections>
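
For what it's worth, the Kafka engine can reference this named collection directly instead of repeating the broker/topic/group/format inline (syntax per ClickHouse's named-collections support; the table name and single column here are illustrative, not the actual table used below):

```sql
-- Sketch: a Kafka engine table that pulls its settings from the
-- cluster_1 named collection defined above (table name is hypothetical)
CREATE TABLE syslogs.syslog_kafka_table_nc
(
    MSG String
)
ENGINE = Kafka(cluster_1);
```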

The admin account I used to create the table:

        <admin>
            <password>admin</password>
            <profile>default</profile>
            <quota>default</quota>
            <networks>
                <ip>127.0.0.1</ip>
                <ip>::1</ip>
                <ip>10.0.0.0/8</ip>
                <host_regexp>.*\.svc\.cluster\.local$</host_regexp>
            </networks>

            <!-- Named-collection and access control privileges -->
            <access_management>1</access_management>
            <named_collection_control>1</named_collection_control>
            <show_named_collections>1</show_named_collections>
            <show_named_collections_secrets>1</show_named_collections_secrets>
        </admin>

The table definition used:

CREATE DATABASE IF NOT EXISTS syslogs;

CREATE OR REPLACE TABLE syslogs.syslog_kafka_table
(
    LEVEL String,
    SOURCEIP String,
    ISODATE DateTime,
    HOST String,
    FACILITY String,
    PID String,
    MSGID String,                               
    MSG String,
    SDATA String,
    RAWMSG String,
    PROGRAM String
)
ENGINE = Kafka(
   'test-kafka-svc.blacksmith.svc.cluster.local:9092',
   'clickhouse-topic',
   'unique-consumer-group',
   'JSONEachRow'
);

Here's a message consumed by kcat (running librdkafka 1.7.0), showing that the keys match the schema:

{"SOURCEIP":"data",
"SDATA":"{}",
"RAWMSG":"data",
"PROGRAM":"data",
"PID":"data",
"MSGID":"data",
"MSG":"data",
"LEVEL":"notice",
"ISODATE":"data",
"HOST":"data",
"FACILITY":"data"}
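
For context, the message above was pulled with a kcat invocation along these lines (the exact command isn't in the issue; flags are kcat's standard ones, broker and topic taken from the table definition):

```shell
# Consume a single message from the topic and exit
# (-C consume mode, -b broker list, -t topic, -c message count, -e exit at EOF)
kcat -C -b test-kafka-svc.blacksmith.svc.cluster.local:9092 \
     -t clickhouse-topic -c 1 -e
```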

The error message comes up whenever a materialized view is created to ingest from the table, and whenever someone selects from the table directly
(e.g. SELECT * FROM syslogs.syslog_test_table LIMIT 1;).

I haven't found this error reported anywhere else. I have consumed messages successfully from a Python pod running librdkafka 2.8.0 (the same version ClickHouse uses), and I can reach the Kafka brokers from the ClickHouse pod's shell. I also tried another Kafka deployment on a different version (the doughgle/kafka-kraft image), and ClickHouse throws the same error.
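
Since "Local: Required feature not supported by broker" is librdkafka's feature-negotiation error, one way to narrow this down would be to dump the API versions the broker actually advertises to clients. The kafka-broker-api-versions.sh tool ships with Apache Kafka; the bootstrap address below is taken from the table definition above:

```shell
# List the API keys/versions the broker advertises; librdkafka raises
# "Required feature not supported by broker" when an API version it
# needs is not offered during ApiVersions negotiation.
kafka-broker-api-versions.sh \
  --bootstrap-server test-kafka-svc.blacksmith.svc.cluster.local:9092
```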
