Use force unmount and explicitly unmount bad mount points
There have been cases where the logic to clean up a mount point
has left the driver in a bad state. This is most obvious when a
subdirectory of a volume is mounted and a parent directory of
that subdirectory is then deleted. The Lustre driver doesn't
handle that case the way Kubernetes expects and returns invalid
data. To keep this scenario from putting the driver into a bad
state, leaking mount points, etc., we must explicitly check that
we can read the necessary information about the mount point and,
if we cannot, unmount that mount point before allowing Kubernetes
to clean up the directory.
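
A minimal sketch of that check, not the driver's actual code: stat the target and, if the mount point can no longer be read, unmount it explicitly before the directory is handed back for cleanup. The helper name, the example path, and the use of golang.org/x/sys/unix are illustrative assumptions.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// ensureReadableOrUnmount is a hypothetical helper: it stats the target and,
// if the mount point can no longer be read (e.g. its parent directory on the
// Lustre filesystem was deleted), unmounts it so the kubelet can safely
// remove the directory instead of failing on invalid data.
func ensureReadableOrUnmount(target string) error {
	if _, err := os.Stat(target); err != nil {
		// The stat failed, so the mount is returning invalid data.
		// Unmount it explicitly rather than letting cleanup fail.
		if umountErr := unix.Unmount(target, 0); umountErr != nil {
			return fmt.Errorf("failed to unmount bad mount point %s: %w", target, umountErr)
		}
	}
	return nil
}

func main() {
	// Example path; real targets come from the NodeUnpublishVolume request.
	if err := ensureReadableOrUnmount("/var/lib/kubelet/pods/example/volume-mount"); err != nil {
		fmt.Println(err)
	}
}
```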
To further guard against ending up in a bad state, this change
also enables force unmounting. The force unmount only occurs
after a timeout has expired, since force unmounts can cause
issues with the Lustre driver. However, if we are already in a
bad enough situation, it is better to eventually return to a
good state than to require manual intervention.
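
A sketch of that fallback under the same assumptions (hypothetical helper name, timeout value, and paths): retry a normal unmount until the deadline passes, then fall back to a forced unmount (MNT_FORCE) as a last resort so the node can recover on its own.

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/sys/unix"
)

// unmountWithTimeout is a hypothetical helper: it retries a normal unmount
// until the timeout expires, then forces the unmount so the driver can
// eventually return to a good state without manual intervention.
func unmountWithTimeout(target string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := unix.Unmount(target, 0); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			// Deadline expired: accept the risk of a forced unmount.
			return unix.Unmount(target, unix.MNT_FORCE)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := unmountWithTimeout("/var/lib/kubelet/pods/example/volume-mount", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```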