Log4j zero-day vulnerability - Pega Stream (Kafka) in Kubernetes
The mitigation steps for addressing the log4j vulnerability in Kafka instruct you to replace the existing log4j jar files in the kafka-<version>/libs directory with the new 2.16.0 jars and then restart the application server. While this works for an actual application server, it does not work for Pega Stream statefulsets, because a restart (i.e. kubectl delete pod <stream-pod>) creates a new pod from the original image, discarding the replaced jars.
As a workaround, I created a DSS to unpack the Kafka installation onto the persistent volume claim attached to the stream pod (i.e. /opt/pega/kafkadata/kafka) and then restarted the pod. Once it was installed in /opt/pega/kafkadata/kafka-<version>, I replaced the jars in the libs directory per the instructions, restarted the pod again, and verified that the 2.16.0 jars had not been overwritten; they had not.
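For anyone reproducing this, the jar swap itself can be sketched as below. This is only an illustration: it uses a scratch directory with empty placeholder files standing in for kafka-<version>/libs on the PVC, and the jar versions shown are assumptions, not the exact ones from my environment.

```shell
# Scratch directory standing in for /opt/pega/kafkadata/kafka-<version>/libs
# on the PVC. Placeholder files only; the real jars come from the
# log4j 2.16.0 distribution.
LIBS=$(mktemp -d)/libs
mkdir -p "$LIBS"
touch "$LIBS/log4j-core-2.14.1.jar" \
      "$LIBS/log4j-api-2.14.1.jar" \
      "$LIBS/kafka-clients.jar"

# Remove the vulnerable log4j jars (2.14.1 is an illustrative version)...
rm -f "$LIBS"/log4j-*-2.14*.jar

# ...and drop in the patched 2.16.0 jars.
touch "$LIBS/log4j-core-2.16.0.jar" "$LIBS/log4j-api-2.16.0.jar"

# After the pod restart, confirm only the 2.16.0 jars remain.
ls "$LIBS" | grep log4j
```

Since the libs directory lives on the PVC rather than in the container filesystem, the check after the second restart is what confirms the swap survives pod recreation.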
My question: is it an acceptable solution to have the Kafka installation reside on the persistent volume claim along with the Kafka data?