CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node #78308
Comments
I'm guessing this is @kubernetes/sig-apps-bugs?
@sherbang: Reiterating the mentions to trigger a notification.
Also repeatable using the older minikube ISO that uses Docker 18.06.3-ce:
Evidently this could not be replicated using CRI-O as the container runtime. It's unclear to me at this time whether this is a Kubernetes/Docker integration issue or a minikube environment issue.
I'm encountering the same issue on a single-node Kubernetes cluster set up with kubeadm.
After downgrading to 1.14.1 (kubelet and control plane) it works again, so it looks like this is not tied to minikube.
Might be fixed by #78261.
/sig node |
/remove-sig apps |
…usng uid=0 for containers. See kubernetes/kubernetes#78308
Please be aware that this is not only happening for restarted containers, but also when deploying two containers from the same image. An example can be found at kubernetes/minikube#4369, where I had one container for the app and the same image used for a job, resulting in the job container running as uid=0. I haven't tested what happens when scaling via a controller or manually adding another pod.
hoisted comment up to description
v1.13.7 and v1.14.3 have been released.

/close
@liggitt: Closing this issue.
…he "ald" user (to workaround kubernetes/kubernetes#78308)
…he "ovd" user (to workaround kubernetes/kubernetes#78308)
The fix in #78261 only seems to modify getImageUser(), but the critical problem may be that when neither the uid nor the username is set in the imageStatus retrieved by getImageUser(), it returns root as the default user. This may raise a security concern!
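To make that concern concrete, here is a simplified Go sketch of the fallback behavior described above; the types, field names, and function signature are illustrative stand-ins, not the actual kubelet or CRI code:

```go
package main

import "fmt"

// imageStatus stands in for the subset of image metadata the comment refers
// to; the field names here are illustrative, not the real CRI types.
type imageStatus struct {
	uid      *int64 // nil when the image does not declare a numeric user
	username string // empty when the image does not declare a user name
}

// getImageUser mirrors the fallback described above: prefer the uid, then the
// username, and otherwise default to root (uid 0) -- the behavior the comment
// flags as a security concern.
func getImageUser(status imageStatus) (*int64, string) {
	if status.uid != nil {
		return status.uid, ""
	}
	if status.username != "" {
		return nil, status.username
	}
	root := int64(0)
	return &root, ""
}

func main() {
	uid, name := getImageUser(imageStatus{})
	fmt.Printf("uid=%d username=%q\n", *uid, name) // prints: uid=0 username=""
}
```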
/label official-cve-feed (Related to kubernetes/sig-security#1) |
…n causng uid=0 for containers. See kubernetes/kubernetes#78308
CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L, 4.9 (medium)
In kubelet v1.13.6 and v1.14.2, containers for pods that do not specify an explicit `runAsUser` attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node. If the pod specified `mustRunAsNonRoot: true`, the kubelet will refuse to start the container as root. If the pod did not specify `mustRunAsNonRoot: true`, the kubelet will run the container as uid 0.

CVE-2019-11245 will be fixed in the following Kubernetes releases:

- v1.13.7
- v1.14.3
Fixed by #78261 in master
Affected components:

- kubelet

Affected versions:

- kubelet v1.13.6
- kubelet v1.14.2

Affected configurations:

Clusters with:

- pods whose containers rely on the image's USER directive to run as a non-root user and that do not specify `runAsUser: <uid>` or `mustRunAsNonRoot: true`
Impact:

- If a pod is run without any user controls specified in the pod spec (like `runAsUser: <uid>` or `mustRunAsNonRoot: true`), a container in that pod that would normally run as the USER specified in the container image manifest can sometimes be run as root instead (on container restart, or if the image was previously pulled to the node)
- Pods that specify an explicit `runAsUser` are unaffected and continue to work properly
- PodSecurityPolicies that force a `runAsUser` setting are unaffected and continue to work properly
- Pods that specify `mustRunAsNonRoot: true` will refuse to start the container as uid 0, which can affect availability
- Containers that did not specify `runAsUser` or `mustRunAsNonRoot: true` will run as uid 0 on restart or if the image was previously pulled to the node
Mitigations:

This section lists possible mitigations to use prior to upgrading.

- Specify `runAsUser` directives in pods to control the uid a container runs as
- Specify `mustRunAsNonRoot: true` directives in pods to prevent starting as root (note this means the attempt to start the container will fail on affected kubelet versions)
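As a minimal sketch of those two controls, assuming the standard `k8s.io/api` Go types (the pod-level securityContext field behind `mustRunAsNonRoot: true` is `runAsNonRoot`), a pod spec with both set explicitly might look like the following; the pod name, container name, image, and uid 11211 are illustrative only:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(11211) // illustrative non-root uid; any non-zero uid works
	nonRoot := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test"}, // hypothetical pod name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,     // pins the uid regardless of image metadata
				RunAsNonRoot: &nonRoot, // kubelet refuses to start the container as uid 0
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "memcached"},
			},
		},
	}

	fmt.Printf("runAsUser=%d runAsNonRoot=%t\n",
		*pod.Spec.SecurityContext.RunAsUser,
		*pod.Spec.SecurityContext.RunAsNonRoot)
}
```

With `runAsUser` pinned, the kubelet does not need the image's USER to decide the uid on restart; with `runAsNonRoot: true`, it refuses to start the container if the effective uid would be 0.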
original issue description follows
What happened:
When I launch a pod from a Docker image that specifies a USER in the Dockerfile, the container runs as that user only on its first launch. After that, the container runs as UID=0.
What you expected to happen:
I expect the container to behave consistently on every launch, presumably running as the USER specified in the container image.
How to reproduce it (as minimally and precisely as possible):
Testing with minikube (same test specifying v1.14.1, `kubectl logs test` always returns 11211):

Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): I get the results I expect in v1.13.5 and v1.14.1. The problem exists in v1.13.6 and v1.14.2.
- OS (e.g. `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):