A good start would be to:

Also, are you actually using Strimzi? The config you shared is badly formatted and pretty unreadable, but it does not look like Strimzi.
Hi team, I am deploying Debezium (outbox and Iceberg connectors) on a Kafka Connect service running on Google Kubernetes Engine. Whenever a new connector is deployed, its tasks are not initialized until the webserver/worker pods are restarted, and no error is shown in the logs.

Below is my connect-distributed.properties template:
```properties
# Config Providers
config.providers=env
config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider

# ==============================================================================
# INFO: {{.CommentLine}} is used to comment lines during development
group.id=kafka-connect-cluster-{{.AppEnv}}
bootstrap.servers={{.KafkaBrokers}}
reconnect.backoff.ms=3000
task.shutdown.graceful.timeout.ms=10000

# 10min, RECREATE
scheduled.rebalance.max.delay.ms=600000

listeners=HTTP://localhost:8083
rest.advertised.host.name={{.PodIP}}
rest.advertised.port={{.Port}}

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

config.storage.topic={{.ConfigTopic}}
offset.storage.topic={{.OffsetTopic}}
status.storage.topic={{.StatusTopic}}

# plugins
plugin.path=/opt/kafka/plugins

# ==============================================================================
# Workers Config
{{.CommentLine}}sasl.mechanism=PLAIN
{{.CommentLine}}security.protocol=SASL_SSL
{{.CommentLine}}sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
{{.CommentLine}}  username="{{.KafkaKey}}"
{{.CommentLine}}  password="{{.KafkaSecret}}";

# ==============================================================================
# Connectors Config
connector.client.config.override.policy=All

# Source connectors
{{.CommentLine}}producer.sasl.mechanism=PLAIN
{{.CommentLine}}producer.security.protocol=SASL_SSL
{{.CommentLine}}producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
{{.CommentLine}}  username="{{.KafkaKey}}"
{{.CommentLine}}  password="{{.KafkaSecret}}";

# Sink connectors
{{.CommentLine}}consumer.sasl.mechanism=PLAIN
{{.CommentLine}}consumer.security.protocol=SASL_SSL
{{.CommentLine}}consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
{{.CommentLine}}  username="{{.KafkaKey}}"
{{.CommentLine}}  password="{{.KafkaSecret}}";
```
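Since nothing shows up in the logs, one way to confirm the symptom is to ask the Connect REST API which connectors are running with an empty task list. A minimal sketch, assuming the REST listener from the template above is reachable on port 8083; `CONNECT_URL` and the connector names are illustrative:

```python
import json
from urllib.request import urlopen

CONNECT_URL = "http://localhost:8083"  # assumption: Connect REST listener


def fetch_status(name, base_url=CONNECT_URL):
    """GET /connectors/<name>/status for a single connector."""
    with urlopen(f"{base_url}/connectors/{name}/status") as resp:
        return json.load(resp)


def connectors_without_tasks(status_by_name):
    """Given {connector-name: status-JSON as returned by /status},
    return the names whose 'tasks' list is empty -- the symptom
    described above (connector deployed, tasks never initialized)."""
    return [name for name, status in status_by_name.items()
            if not status.get("tasks")]
```

A connector whose status shows `"state": "RUNNING"` at the connector level but an empty `tasks` array matches the behaviour described; the same check can be done by hand with `curl $CONNECT_URL/connectors/<name>/status`.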
Our resource allocation, the number of pods running, and the Strimzi Kafka image are all fine.

Can anybody suggest what I should try to fix this issue?

Earlier we tried adding connect.protocol=eager to the template, but that left all the connectors in the UNASSIGNED state; the tasks did get initialized, but again it was of no use.
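As an alternative to restarting the worker pods, the task-level restart endpoint that Kafka Connect exposes since 3.0 (KIP-745), `POST /connectors/<name>/restart?includeTasks=true`, may be worth trying. A minimal sketch of building that request; the connector name is illustrative:

```python
from urllib.request import Request, urlopen


def restart_request(base_url, name, include_tasks=True, only_failed=False):
    """Build the POST /connectors/<name>/restart request (KIP-745,
    Kafka 3.0+), which restarts the connector and, when includeTasks
    is true, its tasks as well."""
    url = (f"{base_url}/connectors/{name}/restart"
           f"?includeTasks={str(include_tasks).lower()}"
           f"&onlyFailed={str(only_failed).lower()}")
    return Request(url, method="POST")


# Against a live cluster the request would be sent with:
# urlopen(restart_request("http://localhost:8083", "my-outbox-connector"))
```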