
CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node #78308

Closed
sherbang opened this issue May 24, 2019 · 18 comments
Labels
  • area/security
  • kind/bug: Categorizes issue or PR as related to a bug.
  • official-cve-feed: Issues or PRs related to CVEs officially announced by Security Response Committee (SRC).
  • priority/critical-urgent: Highest priority. Must be actively worked on as someone's top priority right now.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@sherbang

sherbang commented May 24, 2019

CVSS:3.0/AV:L/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L, 4.9 (medium)

In kubelet v1.13.6 and v1.14.2, containers for pods that do not specify an explicit runAsUser attempt to run as uid 0 (root) on container restart, or if the image was previously pulled to the node. If the pod specified mustRunAsNonRoot: true, the kubelet will refuse to start the container as root. If the pod did not specify mustRunAsNonRoot: true, the kubelet will run the container as uid 0.

CVE-2019-11245 will be fixed in the following Kubernetes releases:

Fixed by #78261 in master

Affected components:

  • Kubelet

Affected versions:

  • Kubelet v1.13.6
  • Kubelet v1.14.2

Affected configurations:

Clusters with:

  • Kubelet versions v1.13.6 or v1.14.2
  • Pods that do not specify an explicit runAsUser: <uid> or mustRunAsNonRoot:true

Impact:

If a pod is run without any user controls specified in the pod spec (like runAsUser: <uid> or mustRunAsNonRoot:true), a container in that pod that would normally run as the USER specified in the container image manifest can sometimes be run as root instead (on container restart, or if the image was previously pulled to the node).

  • pods that specify an explicit runAsUser are unaffected and continue to work properly
  • podSecurityPolicies that force a runAsUser setting are unaffected and continue to work properly
  • for pods that specify mustRunAsNonRoot:true, the kubelet will refuse to start the container as uid 0, which can affect availability
  • pods that do not specify runAsUser or mustRunAsNonRoot:true will run as uid 0 on restart or if the image was previously pulled to the node (a minimal sketch of such a spec follows)
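
For illustration, the affected configuration is simply a pod with no user controls anywhere in its spec. A minimal sketch (the pod name is illustrative; the image is the same one used in the reproduction below):

apiVersion: v1
kind: Pod
metadata:
  name: affected-example        # illustrative name
spec:
  containers:
  - name: app
    image: memcached:latest     # image whose Dockerfile sets a non-root USER
    # no securityContext, runAsUser, or runAsNonRoot anywhere in the spec,
    # so on affected kubelets this container can run as uid 0 after a restart
    # or if the image was already pulled to the node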

Mitigations:

This section lists possible mitigations to use prior to upgrading; an example pod spec applying the first two follows the list.

  • Specify runAsUser directives in pods to control the uid a container runs as
  • Specify mustRunAsNonRoot:true directives in pods to prevent starting as root (note this means the attempt to start the container will fail on affected kubelet versions)
  • Downgrade kubelets to v1.14.1 or v1.13.5 as instructed by your Kubernetes distribution.
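
As a sketch of the first two mitigations (in the pod securityContext API the non-root control is spelled runAsNonRoot; the uid below is illustrative and matches the memcached image user seen in the reproduction):

apiVersion: v1
kind: Pod
metadata:
  name: mitigated-example       # illustrative name
spec:
  securityContext:
    runAsUser: 11211            # explicit uid pins the user regardless of kubelet version
    runAsNonRoot: true          # kubelet refuses to start the container if it would run as uid 0
  containers:
  - name: app
    image: memcached:latest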

original issue description follows

What happened:

When I launch a pod from a Docker image that specifies a USER in the Dockerfile, the container only runs as that user on its first launch. After that, the container runs as uid 0.

What you expected to happen:
I expect the container to behave consistently on every launch, running as the USER specified in the container image.

How to reproduce it (as minimally and precisely as possible):
Testing with minikube (with the same test against v1.14.1, kubectl logs test always returns 11211):

$ minikube start --kubernetes-version v1.14.2
😄  minikube v1.1.0 on linux (amd64)
💿  Downloading Minikube ISO ...
 131.28 MB / 131.28 MB [============================================] 100.00% 0s
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
💾  Downloading kubeadm v1.14.2
💾  Downloading kubelet v1.14.2
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"
$ cat test.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: memcached:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args:
    - -c
    - 'id -u; sleep 30'
$ kubectl apply -f test.yaml 
pod/test created

# as soon as pod starts
$ kubectl logs test
11211
# Wait 30 seconds for container to restart
$ kubectl logs test
0
# Try deleting/recreating the pod
$ kubectl delete pod test
pod "test" deleted
$ kubectl apply -f test.yaml 
pod/test created
$ kubectl logs test
0

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): I get the results I expect in v1.13.5 and v1.14.1. The problem exists in v1.13.6 and v1.14.2
  • Cloud provider or hardware configuration: minikube v1.1.0 using VirtualBox
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
@sherbang added the kind/bug label May 24, 2019
@k8s-ci-robot added the needs-sig label May 24, 2019
@sherbang
Author

I'm guessing this is @kubernetes/sig-apps-bugs ?

@k8s-ci-robot added the sig/apps label and removed the needs-sig label May 24, 2019
@k8s-ci-robot
Contributor

@sherbang: Reiterating the mentions to trigger a notification:
@kubernetes/sig-apps-bugs

In response to this:

I'm guessing this is @kubernetes/sig-apps-bugs ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tstromberg

tstromberg commented May 24, 2019

Also repeatable using the older minikube ISO that uses Docker 18.06.3-ce:

minikube start --iso-url="https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso" --vm-driver=kvm2

Evidently this could not be replicated using CRI-O as the container runtime. It's unclear to me at this time whether this is a Kubernetes/Docker integration issue or a minikube environment issue.

@tarioch

tarioch commented May 24, 2019

I'm encountering the same issue on a 1-node Kubernetes cluster set up with kubeadm.

@tarioch

tarioch commented May 24, 2019

After downgrading to 1.14.1 (kubelet and control plane) it works again, so it looks like this is not tied to minikube.

@tallclair
Member

Might be fixed by #78261

@kow3ns
Member

kow3ns commented May 28, 2019

/sig node

@k8s-ci-robot added the sig/node label May 28, 2019
@kow3ns
Member

kow3ns commented May 28, 2019

/remove-sig apps

@k8s-ci-robot removed the sig/apps label May 28, 2019
@dchen1107 added the priority/critical-urgent label May 29, 2019
poikilotherm added a commit to gdcc/dataverse-kubernetes that referenced this issue May 29, 2019
poikilotherm added a commit to gdcc/dataverse-kubernetes that referenced this issue May 29, 2019
poikilotherm added a commit to gdcc/dataverse-kubernetes that referenced this issue May 29, 2019
@liggitt liggitt changed the title Container uid changes after first restart CVE-2019-11245: v1.14.2, container uid changes after first restart May 30, 2019
@liggitt liggitt changed the title CVE-2019-11245: v1.14.2, container uid changes after first restart CVE-2019-11245: container uid changes after first restart in v1.14.2, v1.13.6 May 30, 2019
@poikilotherm

poikilotherm commented May 30, 2019

Please be aware that this is not only happening for restarting containers, but also when deploying two containers from the same image. An example can be found at kubernetes/minikube#4369, where I had one container for the app and the same image used for a job, resulting in the job container running as uid 0.

I haven't tested what happens when scaling via a controller or manually adding another pod.
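
A minimal sketch of the scenario described above (names and image are illustrative; the concrete example is in kubernetes/minikube#4369):

apiVersion: v1
kind: Pod
metadata:
  name: app                     # illustrative: the long-running application
spec:
  containers:
  - name: app
    image: example/app:1.0      # illustrative image whose Dockerfile sets a non-root USER
---
apiVersion: batch/v1
kind: Job
metadata:
  name: app-job                 # illustrative: a job reusing the same image
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: example/app:1.0  # image already pulled to the node, so on
                                # affected kubelets this container can run as uid 0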

@liggitt liggitt changed the title CVE-2019-11245: container uid changes after first restart in v1.14.2, v1.13.6 CVE-2019-11245: container uid changes to root after first restart in v1.14.2, v1.13.6 May 30, 2019
@liggitt
Member

liggitt commented May 30, 2019

hoisted comment up to description

@liggitt liggitt changed the title CVE-2019-11245: container uid changes to root after first restart in v1.14.2, v1.13.6 CVE-2019-11245: v1.14.2, v1.13.6: container uid changes to root after first restart or if image is already pulled to the node May 30, 2019
jackfrancis added a commit to jackfrancis/aks-engine that referenced this issue May 30, 2019
@liggitt
Member

liggitt commented Jun 6, 2019

v1.13.7 and v1.14.3 have been released

/close

@k8s-ci-robot
Contributor

@liggitt: Closing this issue.

In response to this:

v1.13.7 and v1.14.3 have been released

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

TimeBye pushed a commit to TimeBye/kubeadm-ha that referenced this issue Jun 13, 2019
sglover added a commit to IBM/ansible-lifecycle-driver that referenced this issue Oct 3, 2019
sglover added a commit to IBM/openstack-vim-driver that referenced this issue Oct 3, 2019
@b0b0haha

The fix #78261 seems to only modify the getImageUser() path, but the critical problem may be that when neither the uid nor the username in the imageStatus obtained by getImageUser() is set, getImageUser() falls back to returning root as the default user, which may raise a security concern.

@PushkarJ
Member

/label official-cve-feed

(Related to kubernetes/sig-security#1)
