By default, Kubernetes does not automatically restart a Deployment when its ConfigMap changes. This can lead to situations where your pods keep running with outdated configuration until you trigger a rollout manually. Fortunately, there are common patterns to solve this.

Why It Happens

Kubernetes delivers ConfigMap data to pods as mounted files or environment variables. Environment variables are resolved once at container start, and although the kubelet eventually syncs mounted files, the Deployment controller does not track ConfigMap content at all. Since the pod template never changes, nothing triggers a restart.
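
For example, suppose a Deployment named my-app consumes a ConfigMap named app-config (both names are placeholders for this sketch). Editing the ConfigMap leaves the running pods untouched, and the manual workaround is an explicit rollout restart:

# Update the ConfigMap in place; the running pods are NOT restarted:
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug \
  --dry-run=client -o yaml | kubectl apply -f -

# Manual workaround: force a rolling restart of the Deployment:
kubectl rollout restart deployment/my-app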

Solutions

Checksum annotations: add a hash of the rendered ConfigMap to the Deployment’s pod template annotations, so any change to the ConfigMap’s content also changes the pod template.
Example in Helm:

annotations:
  configmap-hash: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Any ConfigMap change applied through helm upgrade now updates the annotation, which modifies the pod template and triggers a new rollout.
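
For context, here is a minimal sketch of where that annotation sits in a Helm Deployment template (my-app, app-config, and the image tag are hypothetical names); the key detail is that the annotation lives under spec.template.metadata, not the Deployment’s own metadata:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Re-rendered on every helm upgrade; a new hash changes the
        # pod template, so Kubernetes performs a rolling update.
        configmap-hash: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: my-app:1.0.0
          envFrom:
            - configMapRef:
                name: app-config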

Alternatively, you can run a dedicated controller that restarts workloads for you, such as Reloader. You still need to add annotations to your Deployment or StatefulSet for the magic to happen, and the downside, of course, is one more component running in your cluster.
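
As a sketch, opting a workload into Reloader is a single annotation on the Deployment’s own metadata, using the reloader.stakater.com/auto annotation from the project’s documentation (my-app is again a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader watches the ConfigMaps and Secrets this Deployment
    # references and performs a rolling restart when any of them change.
    reloader.stakater.com/auto: "true"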

Conclusion

The most popular approach is hash-based annotations in the Deployment’s pod template. Every ConfigMap change then updates the Deployment spec, which makes Kubernetes roll out new pods with the fresh configuration, with no manual steps required.