
httpproxy: crd validations don't enforce at least one service per route #2270

Closed
rbankston opened this issue Feb 21, 2020 · 7 comments · Fixed by #2274
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@rbankston
Contributor

What steps did you take and what happened:
Invalid routes are allowed on an HTTPProxy and cause "HTTP/2" 404 NR entries in the Envoy access logs, but the HTTPProxy output shows as valid.

NAMESPACE          NAME           FQDN                                                                     TLS SECRET                                         STATUS   STATUS DESCRIPTION
85028-sp-vfs-dev   vfshttpproxy    00000-dev.apps.dev.home.vmw.example.com   ingress-contour/ingress-contour-default-ssl-cert   valid    valid HTTPProxy

What did you expect to happen:
I would expect an invalid route to cause the HTTPProxy to be listed as invalid when running kubectl get httpproxy -A, or for manifest validation to reject the manifest.

Anything else you would like to add:
https://kubernetes.slack.com/archives/C8XRH2R4J/p1582314181482900 is the Slack thread where this bug was found. Adding valid and invalid YAML for testing purposes.

Invalid YAML:

apiVersion: v1
items:
- apiVersion: projectcontour.io/v1
  kind: HTTPProxy
  metadata:
    annotations:
      ingress.kubernetes.io/force-ssl-redirect: "true"
    name: vfshttpproxy
    namespace: 85028-sp-vfs-dev
  spec:
    routes:
    - conditions:
      - prefix: /
      services:
      - name: vfs-service
        port: 8080
    - timeoutPolicy:
        idle: 1800s
        response: 1800s
    virtualhost:
      fqdn: 00000-dev.apps.dev.home.vmw.example.com
      tls:
        secretName: ingress-contour/ingress-contour-default-ssl-cert
kind: List

Valid YAML:

apiVersion: v1
items:
- apiVersion: projectcontour.io/v1
  kind: HTTPProxy
  metadata:
    annotations:
      ingress.kubernetes.io/force-ssl-redirect: "true"
    name: vfshttpproxy
    namespace: 85028-sp-vfs-dev
  spec:
    routes:
    - conditions:
      - prefix: /
      services:
      - name: vfs-service
        port: 8080
    timeoutPolicy:
      idle: 1800s
      response: 1800s
    virtualhost:
      fqdn: 00000-dev.apps.dev.home.vmw.example.com
      tls:
        secretName: ingress-contour/ingress-contour-default-ssl-cert
kind: List

Environment:

  • Contour version: 1.2
  • Kubernetes version: (use kubectl version): v1.15.7
  • Kubernetes installer & version: kubeadm 1.15.7
  • Cloud provider or hardware configuration: VCP
  • OS (e.g. from /etc/os-release): Ubuntu 18.04.3 LTS
@stevesloka stevesloka added the kind/bug Categorizes issue or PR as related to a bug. label Feb 21, 2020
@davecheney
Contributor

davecheney commented Feb 21, 2020 via email

@davecheney
Contributor

davecheney commented Feb 21, 2020 via email

@rbankston
Contributor Author

> From reading the YAML on my phone, the invalid part is the misplaced timeoutPolicy stanza.

Correct. Steve helped me out in Slack and identified the timeoutPolicy bit.

> From Contour's point of view this information is invisible, as it is never deserialised from the API server. We rely on the CRD schema validations. Did the validations we supplied with 1.2.0 catch this?

I deployed the latest today (2020-02-21) to try to replicate this issue (1.2.0 is in use). The validations didn't catch it, and the HTTPProxy was still reported as valid.

@stevesloka
Member

What happened was Contour got a route that looked like this, and that got passed off to Envoy, but it's missing the cluster bits to match up:

routes:
- timeoutPolicy:
    idle: 1800s
    response: 1800s

I bet we could tweak the CRD validations to help catch this, but ideally Contour should also detect (and I'm surprised it didn't) that there are no services referenced, and set the route and the proxy to an error state.
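
For illustration, here is one way the "at least one service per route" constraint could be expressed at the CRD schema level, assuming the CRD is generated from Go types with controller-gen / kubebuilder markers. The types below are a simplified sketch with stand-in fields, not Contour's actual projectcontour.io/v1 source:

// Sketch only: simplified stand-ins for the HTTPProxy route types, showing
// kubebuilder markers that would make the generated CRD schema require at
// least one service per route.
package v1

type Service struct {
	Name string `json:"name"`
	Port int    `json:"port"`
}

type Condition struct {
	Prefix string `json:"prefix,omitempty"`
}

type TimeoutPolicy struct {
	Idle     string `json:"idle,omitempty"`
	Response string `json:"response,omitempty"`
}

type Route struct {
	Conditions []Condition `json:"conditions,omitempty"`

	// Requiring the field and setting MinItems=1 would make the API server
	// reject a routes entry that carries only a timeoutPolicy.
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinItems=1
	Services []Service `json:"services"`

	TimeoutPolicy *TimeoutPolicy `json:"timeoutPolicy,omitempty"`
}

With a schema along those lines, the "Invalid YAML" manifest above would be rejected by the API server before Contour ever saw it.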

@davecheney
Contributor

davecheney commented Feb 21, 2020 via email

@davecheney
Contributor

There are two bugs here:

  1. We should adjust our validation such that each route has at least one route.services entry.

  2. The spec above would have generated a route with no backend services, and it should have been filtered out at the DAG builder and RDS visitor level. We need to figure out why that didn't happen.
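
As a rough sketch of the second point, the check could live where routes are translated for Envoy: drop any route that references no services and surface an error so the status writer can mark the proxy invalid instead of silently programming a 404 route. The names below are hypothetical and do not mirror Contour's internal dag package:

package main

import "fmt"

// Hypothetical, simplified types; not Contour's actual internal types.
type Service struct {
	Name string
	Port int
}

type Route struct {
	Services []Service
}

type HTTPProxy struct {
	Name   string
	Routes []Route
}

// programmableRoutes keeps only routes that reference at least one service
// and collects an error for each one that does not, so the enclosing proxy
// can be reported as invalid.
func programmableRoutes(p *HTTPProxy) (kept []Route, errs []error) {
	for i, r := range p.Routes {
		if len(r.Services) == 0 {
			errs = append(errs, fmt.Errorf("httpproxy %q: route %d must have at least one service", p.Name, i))
			continue
		}
		kept = append(kept, r)
	}
	return kept, errs
}

func main() {
	// Mirrors the invalid manifest above: the second route carries only a
	// timeoutPolicy and no services.
	p := &HTTPProxy{
		Name: "vfshttpproxy",
		Routes: []Route{
			{Services: []Service{{Name: "vfs-service", Port: 8080}}},
			{},
		},
	}
	kept, errs := programmableRoutes(p)
	fmt.Printf("programmed %d route(s)\n", len(kept))
	for _, err := range errs {
		fmt.Println("status: invalid:", err)
	}
}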

@davecheney davecheney added this to the 1.3.0 milestone Feb 21, 2020
@davecheney davecheney changed the title HTTPProxy routes aren't validated httpproxy: crd validations don't enforce at least one service per route Feb 23, 2020
@davecheney davecheney self-assigned this Feb 23, 2020
@davecheney
Contributor

I have a fix for this

davecheney added a commit to davecheney/contour that referenced this issue Feb 23, 2020
Fixes projectcontour#2270

Signed-off-by: Dave Cheney <dave@cheney.net>
davecheney added a commit to davecheney/contour that referenced this issue Feb 24, 2020
Fixes projectcontour#2270

Signed-off-by: Dave Cheney <dave@cheney.net>
davecheney added a commit that referenced this issue Feb 24, 2020
Fixes #2270

Signed-off-by: Dave Cheney <dave@cheney.net>