*Original: [https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#whats-new-part](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#whats-new-part)*
*****
<br >
This section covers the changes made from version 2.2 to version 2.3. Also see [What's New in Spring Integration for Apache Kafka (version 3.2)](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#new-in-sik).
<br >
### **2.1.1. Tips, Tricks and Examples**
*****
A new chapter on tips, tricks and examples has been added.
<br >
### **2.1.2. Kafka Client Version**
*****
kafka-clients version 2.3.0 or higher is required.
<br >
### **2.1.3. Class/Package Changes**
*****
TopicPartitionInitialOffset is deprecated in favor of TopicPartitionOffset.
<br >
### **2.1.4. Configuration Changes**
*****
Starting with version 2.3.4, the missingTopicsFatal container property default has been changed to false. When it was true, the application would fail to start if the broker(s) were down, and many users were affected by that behavior. Given that Kafka is a high-availability platform, we do not expect starting an application with no active brokers to be a common use case.
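If the previous fail-fast behavior is still desired, the property can be set back to true on the listener container factory. A minimal sketch, assuming an existing consumer factory bean (bean and type names here are illustrative):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Restore the pre-2.3.4 behavior: fail startup when configured topics are missing
    factory.getContainerProperties().setMissingTopicsFatal(true);
    return factory;
}
```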
<br >
### **2.1.5. Producer and Consumer Factory Changes**
*****
The DefaultKafkaProducerFactory can now be configured to create a producer per thread. You can also provide Supplier<Serializer> instances in the constructor as an alternative to either configured classes (which require no-arg constructors), or constructing with Serializer instances, which are then shared between all Producers. See Using DefaultKafkaProducerFactory for more information.
The same option is available with Supplier<Deserializer> instances in DefaultKafkaConsumerFactory. See Using KafkaMessageListenerContainer for more information.
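A minimal sketch of both options on the producer side, assuming a hypothetical MyEvent value type; the Supplier-based constructor gives each producer its own (non-shared) serializer instances, and setProducerPerThread(true) creates one producer per calling thread:

```java
@Bean
public ProducerFactory<String, MyEvent> producerFactory() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    DefaultKafkaProducerFactory<String, MyEvent> pf = new DefaultKafkaProducerFactory<>(
            configs,
            StringSerializer::new,          // Supplier<Serializer> for keys
            () -> new JsonSerializer<>());  // Supplier<Serializer> for values
    pf.setProducerPerThread(true);          // one producer per calling thread
    return pf;
}
```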
<br >
### **2.1.6. Listener Container Changes**
*****
Previously, error handlers received a ListenerExecutionFailedException (with the actual listener exception as the cause) when the listener was invoked using a listener adapter (such as when using @KafkaListener). Exceptions thrown by native GenericMessageListener implementations were passed to the error handler unchanged. Now a ListenerExecutionFailedException is always the argument (with the actual listener exception as the cause), which provides access to the container’s group.id property.
Because the listener container has its own mechanism for committing offsets, it prefers the Kafka ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to be false. It now sets it to false automatically unless specifically set in the consumer factory or the container’s consumer property overrides.
The ackOnError property is now false by default. See Seek To Current Container Error Handlers for more information.
It is now possible to obtain the consumer’s group.id property in the listener method. See Obtaining the Consumer group.id for more information.
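For example, the group id can be injected as a header parameter; a minimal sketch (the topic and listener names are illustrative):

```java
@KafkaListener(id = "groupIdExample", topics = "someTopic")
public void listen(String payload, @Header(KafkaHeaders.GROUP_ID) String groupId) {
    // groupId contains the group.id of the container that received the record
    System.out.println("Received '" + payload + "' in group " + groupId);
}
```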
The container has a new property recordInterceptor allowing records to be inspected or modified before invoking the listener. A CompositeRecordInterceptor is also provided in case you need to invoke multiple interceptors. See Message Listener Containers for more information.
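A minimal sketch of a logging interceptor, wrapped in a CompositeRecordInterceptor to show how several interceptors can be combined (the container argument is assumed to be an existing listener container):

```java
void configureInterceptor(AbstractMessageListenerContainer<String, String> container) {
    RecordInterceptor<String, String> logging = record -> {
        System.out.println("About to process " + record.topic() + "-"
                + record.partition() + "@" + record.offset());
        return record; // return a modified record here, or null to skip this record
    };
    container.setRecordInterceptor(new CompositeRecordInterceptor<>(logging));
}
```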
The ConsumerSeekAware has new methods allowing you to perform seeks relative to the beginning, end, or current position and to seek to the first offset greater than or equal to a time stamp. See Seeking to a Specific Offset for more information.
A convenience class AbstractConsumerSeekAware is now provided to simplify seeking. See Seeking to a Specific Offset for more information.
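A minimal sketch of the convenience class, assuming the seekToTimestamp callback method added in 2.3; here each newly assigned partition is rewound to the first offset with a timestamp no older than one hour:

```java
public class SeekingListener extends AbstractConsumerSeekAware {

    @KafkaListener(id = "seekExample", topics = "someTopic")
    public void listen(String payload) {
        // ...
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        super.onPartitionsAssigned(assignments, callback);
        long oneHourAgo = System.currentTimeMillis() - 3_600_000L;
        assignments.keySet().forEach(tp ->
                callback.seekToTimestamp(tp.topic(), tp.partition(), oneHourAgo));
    }

}
```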
The ContainerProperties provides an idleBetweenPolls option that lets the main loop in the listener container sleep between KafkaConsumer.poll() calls. See its JavaDocs and Using KafkaMessageListenerContainer for more information.
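For instance, a sketch that throttles consumption to at most one poll every five seconds (the topic name is illustrative):

```java
ContainerProperties containerProps = new ContainerProperties("someTopic");
containerProps.setIdleBetweenPolls(5_000L); // sleep 5 s between KafkaConsumer.poll() calls
```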
When using AckMode.MANUAL (or MANUAL_IMMEDIATE) you can now cause a redelivery by calling nack on the Acknowledgment. See Committing Offsets for more information.
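A minimal sketch with AckMode.MANUAL configured on the container; process() and RetryableException are hypothetical application code:

```java
@KafkaListener(id = "nackExample", topics = "someTopic")
public void listen(String payload, Acknowledgment ack) {
    try {
        process(payload);   // hypothetical business logic
        ack.acknowledge();
    }
    catch (RetryableException e) {
        ack.nack(1_000L);   // sleep 1 s, then redeliver this unacknowledged record
    }
}
```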
Listener performance can now be monitored using Micrometer Timers. See Monitoring Listener Performance for more information.
The containers now publish additional consumer lifecycle events relating to startup. See Application Events for more information.
Transactional batch listeners can now support zombie fencing. See Transactions for more information.
The listener container factory can now be configured with a ContainerCustomizer to further configure each container after it has been created and configured. See Container factory for more information.
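A minimal sketch of a customizer applied by the factory to every container it creates (the consumer factory bean is assumed to exist):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> containerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Invoked for each container after it has been created and configured
    factory.setContainerCustomizer(container ->
            container.getContainerProperties().setIdleBetweenPolls(500L));
    return factory;
}
```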
<br >
### **2.1.7. ErrorHandler Changes**
*****
The SeekToCurrentErrorHandler now treats certain exceptions as fatal and disables retry for those, invoking the recoverer on first failure.
The SeekToCurrentErrorHandler and SeekToCurrentBatchErrorHandler can now be configured to apply a BackOff (thread sleep) between delivery attempts.
Starting with version 2.3.2, recovered records' offsets will be committed when the error handler returns after recovering a failed record.
See Seek To Current Container Error Handlers for more information.
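A minimal sketch combining the two features: retry a failed delivery with a one-second back off, then hand the record to a DeadLetterPublishingRecoverer (the KafkaTemplate bean is assumed to exist; FixedBackOff is org.springframework.util.backoff.FixedBackOff):

```java
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
    return new SeekToCurrentErrorHandler(
            new DeadLetterPublishingRecoverer(template),
            new FixedBackOff(1000L, 2L)); // 1 s between attempts, 2 retries after the first failure
}
```

The handler can then be set on the listener container factory with factory.setErrorHandler(...).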
The DeadLetterPublishingRecoverer, when used in conjunction with an ErrorHandlingDeserializer2, now sets the payload of the message sent to the dead-letter topic, to the original value that could not be deserialized. Previously, it was null and user code needed to extract the DeserializationException from the message headers. See Publishing Dead-letter Records for more information.
<br >
### **2.1.8. TopicBuilder**
*****
A new class, `TopicBuilder`, is provided to make it more convenient to create `NewTopic` `@Bean`s. See Configuring Topics for more information.
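For example, a sketch of a compacted topic bean (the topic name and settings are illustrative):

```java
@Bean
public NewTopic myTopic() {
    return TopicBuilder.name("my-topic")
            .partitions(10)
            .replicas(3)
            .compact()
            .build();
}
```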
<br >
### **2.1.9. Kafka Streams Changes**
*****
You can now perform additional configuration of the `StreamsBuilderFactoryBean` created by `@EnableKafkaStreams`. See [Streams Configuration](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-config) for more information.
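As one example of such additional configuration, a minimal sketch using a StreamsBuilderFactoryBeanCustomizer bean to register a state listener:

```java
@Bean
public StreamsBuilderFactoryBeanCustomizer customizer() {
    return factoryBean -> factoryBean.setStateListener((newState, oldState) ->
            System.out.println("Streams state: " + oldState + " -> " + newState));
}
```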
A `RecoveringDeserializationExceptionHandler` is now provided, which allows records with deserialization errors to be recovered. It can be used in conjunction with a `DeadLetterPublishingRecoverer` to send these records to a dead-letter topic. See [Recovery from Deserialization Exceptions](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-deser-recovery) for more information.
A `HeaderEnricher` transformer is provided, using SpEL to generate the header values. See [Header Enricher](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-header-enricher) for more information.
A `MessagingTransformer` is provided. This allows a Kafka Streams topology to interact with a Spring Messaging component, such as a Spring Integration flow. See [MessagingTransformer and Calling a Spring Integration Flow from a KStream](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-integration) for more information.
<br >
### **2.1.10. JSON Component Changes**
*****
Now all the JSON-aware components are configured by default with a Jackson ObjectMapper produced by the JacksonUtils.enhancedObjectMapper(). The JsonDeserializer now provides TypeReference-based constructors for better handling of target generic container types. Also a JacksonMimeTypeModule has been introduced for serialization of org.springframework.util.MimeType to plain string. See its JavaDocs and Serialization, Deserialization, and Message Conversion for more information.
A ByteArrayJsonMessageConverter has been provided as well as a new super class for all Json converters, JsonMessageConverter. Also, a StringOrBytesSerializer is now available; it can serialize byte[], Bytes and String values in ProducerRecords. See Spring Messaging Message Conversion for more information.
The JsonSerializer, JsonDeserializer and JsonSerde now have fluent APIs to make programmatic configuration simpler. See the javadocs, Serialization, Deserialization, and Message Conversion, and Streams JSON Serialization and Deserialization for more information.
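A sketch of the TypeReference-based constructor together with the fluent API (MyEvent is a hypothetical payload type):

```java
// TypeReference is com.fasterxml.jackson.core.type.TypeReference
JsonDeserializer<List<MyEvent>> valueDeserializer =
        new JsonDeserializer<List<MyEvent>>(new TypeReference<List<MyEvent>>() { })
                .trustedPackages("com.example.events");
```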
<br >
### **2.1.11. ReplyingKafkaTemplate**
*****
When a reply times out, the future is completed exceptionally with a KafkaReplyTimeoutException instead of a KafkaException.
In addition, an overloaded `sendAndReceive` method is now provided that lets you specify the reply timeout on a per-message basis.
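For example, a sketch assuming an existing ReplyingKafkaTemplate<String, String, String> bean (the topic name is illustrative):

```java
ProducerRecord<String, String> record = new ProducerRecord<>("kRequests", "someRequest");
RequestReplyFuture<String, String, String> future =
        replyingTemplate.sendAndReceive(record, Duration.ofSeconds(10)); // per-message reply timeout
ConsumerRecord<String, String> reply = future.get(12, TimeUnit.SECONDS);
```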
<br >
### **2.1.12. AggregatingReplyingKafkaTemplate**
*****
Extends the `ReplyingKafkaTemplate` by aggregating replies from multiple receivers. See [Aggregating Multiple Replies](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#aggregating-request-reply) for more information.
<br >
### **2.1.13. Transaction Changes**
*****
You can now override the producer factory’s `transactionIdPrefix` on the `KafkaTemplate` and `KafkaTransactionManager`. See [transactionIdPrefix](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#transaction-id-prefix) for more information.
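A minimal sketch of overriding the prefix on the template (the producer factory is assumed to be configured for transactions):

```java
KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory);
template.setTransactionIdPrefix("tx-template-"); // overrides the producer factory's transactionIdPrefix
```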
<br >
### **2.1.14. New Delegating Serializer/Deserializer**
*****
A delegating serializer and deserializer are now provided, using headers to enable producing and consuming records with multiple key/value types. See [Delegating Serializer and Deserializer](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#delegating-serialization) for more information.
<br >
### **2.1.15. New Retrying Deserializer**
*****
A new `RetryingDeserializer` is provided to retry deserialization when transient errors, such as network problems, might occur.
See [Retrying Deserializer](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#retrying-deserialization) for more information.
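A minimal sketch wrapping a JsonDeserializer with up to three attempts, using a spring-retry RetryTemplate (MyEvent is a hypothetical payload type):

```java
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // up to 3 attempts on transient failures
RetryingDeserializer<MyEvent> valueDeserializer =
        new RetryingDeserializer<>(new JsonDeserializer<>(MyEvent.class), retryTemplate);
```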
<br >
### **2.1.16. New function for recovering from deserializing errors**
*****
ErrorHandlingDeserializer2 now uses a POJO (FailedDeserializationInfo) for passing all the contextual information around a deserialization error. This enables the code to access extra information that was missing in the old BiFunction<byte[], Headers, T> failedDeserializationFunction.
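A sketch of such a function, assuming a hypothetical MyEvent payload type and fallback value:

```java
ErrorHandlingDeserializer2<MyEvent> valueDeserializer =
        new ErrorHandlingDeserializer2<>(new JsonDeserializer<>(MyEvent.class));
valueDeserializer.setFailedDeserializationFunction(info -> {
    // FailedDeserializationInfo carries the raw data, headers, topic and the exception
    System.err.println("Deserialization failed on topic " + info.getTopic()
            + ": " + info.getException().getMessage());
    return MyEvent.unknown(); // hypothetical fallback value
});
```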
<br >
### **2.1.17. EmbeddedKafkaBroker Changes**
*****
You can now override the default broker list property name in the annotation. See [@EmbeddedKafka Annotation or the EmbeddedKafkaBroker Bean](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#kafka-testing-embeddedkafka-annotation) for more information.
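For example, a sketch that exposes the embedded broker addresses under Spring Boot's standard property (the test class is illustrative):

```java
@SpringBootTest
@EmbeddedKafka(topics = "someTopic", bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class MyApplicationTests {
    // the embedded broker list is exposed under the property name configured above
}
```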
<br >
### **2.1.18. ReplyingKafkaTemplate Changes**
*****
You can now customize the header names used for correlation, reply topics, and reply partitions. See [Using ReplyingKafkaTemplate](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#replying-template) for more information.
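A sketch of customizing the header names, assuming these setter names on ReplyingKafkaTemplate (see the linked section); the producer factory and reply container beans, as well as the header names, are illustrative:

```java
ReplyingKafkaTemplate<String, String, String> template =
        new ReplyingKafkaTemplate<>(producerFactory, repliesContainer);
template.setCorrelationHeaderName("myCorrelationId");
template.setReplyTopicHeaderName("myReplyTopic");
template.setReplyPartitionHeaderName("myReplyPartition");
```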
<br >
### **2.1.19. Header Mapper Changes**
*****
The `DefaultKafkaHeaderMapper` no longer encodes simple String-valued headers as JSON.
<br >