# help
d
I have a Kubernetes cluster running on AWS, and I expose it using an ingress. If I run
helm install cerbos cerbos/cerbos --version=0.26.0
, the installation goes through and I'm able to access the API documentation at the endpoint in the browser. To be able to test policy requests, I want to serve policies from a public S3 bucket, so I added a config file as described here: https://docs.cerbos.dev/cerbos/latest/installation/helm.html, but with the following configuration:
cerbos:
  config:
    server:
      httpListenAddr: :3592
      grpcListenAddr: :3593
      adminAPI: 
        adminCredentials: 
          passwordHash: <password-hash>
          username: cerbos
        enabled: true
      log:
        level: info

    storage:
      driver: s3
      blob:
        bucket: s3://<bucket-name>?region=eu-west-1
        updatePollInterval: 10s
        downloadTimeout: 30s
I tried the following:
• helm upgrade cerbos cerbos/cerbos --version=0.26.0 --values=config.yaml -> This does nothing. It states the upgrade is successful, but nothing changes and Cerbos does not seem to read the policies. The API documentation is still available.
• helm uninstall cerbos followed by helm install cerbos cerbos/cerbos --version=0.26.0 --values=config.yaml -> This breaks the system, and the ingress returns 503 when trying to access the API documentation.
• Rerunning helm uninstall cerbos and helm install cerbos cerbos/cerbos --version=0.26.0 restores the API access.
A couple of questions:
• Can I upgrade an existing Cerbos service in the cluster using Helm, and if so, what is the process for this?
• How can I read/access the deployment logs if something does not go well?
• What could be the issue here? Am I missing some required configuration?
c
Hi. The way to upgrade an existing Cerbos installation is to use helm upgrade. You can find the logs using kubectl logs -f svc/cerbos -n YOUR_NAMESPACE and inspect the deployment using kubectl describe deployment cerbos -n YOUR_NAMESPACE.
My hunch is that your pods don't have access to the S3 bucket. When you upgraded, nothing happened because the old deployment remained the primary one while the new one was failing. When you uninstalled and re-installed, there was no fallback, so the whole installation is in a failed state. You can check that with helm history cerbos -n YOUR_NAMESPACE.
You either need to add IAM roles to your k8s nodes to give them access to the S3 bucket, OR add a key/token with access to the bucket as an environment variable to the Cerbos pod.
d
Ah, okay. So setting the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY will probably solve the issue.
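(A sketch of that route, with placeholder credentials; whether the chart exposes an envFrom value for the pod depends on the chart version, so check its values.yaml:)

# Store the credentials in a Kubernetes Secret
kubectl create secret generic cerbos-aws-creds \
  --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  -n YOUR_NAMESPACE

# Then expose it to the Cerbos pod through the chart's values file,
# assuming the chart supports an envFrom list:
envFrom:
  - secretRef:
      name: cerbos-aws-creds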
c
Yes. Since you're already on EKS, it might be possible to use IAM roles as well. Unfortunately, I am not an expert on AWS so I can't help you much with that 🙂
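(For reference, the IAM-roles route on EKS usually means IAM Roles for Service Accounts: annotate the pod's ServiceAccount with a role that can read the bucket. A sketch, assuming the chart lets you set ServiceAccount annotations; the role ARN is a placeholder:)

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<cerbos-s3-reader>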
d
You helped quite a lot. It is working now 👍
But as a follow-up: with the defined setup, a request to /admin/policy returns
{
  "code": 12,
  "message": "Admin service is disabled by the configuration"
}
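(For context, a request against the Admin API looks roughly like this; it uses basic auth with the adminCredentials from the config, and the host and password here are placeholders:)

curl -u cerbos:<password> https://<ingress-host>/admin/policy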
c
Is it using the same configuration you pasted earlier?
What's the output of kubectl get cm cerbos -o yaml -n YOUR_NAMESPACE ?
d
Sorry. Got tapped on the shoulder 🙂 Went in again, ran uninstall and install again, and checked the logs. Then I was able to identify the issues. There were a couple, actually:
• log is not a valid property for server.
• s3 is not a valid driver.
• missing authentication (without AWS credentials)
c
Oh, YAML spacing issues 🙂 I should have noticed that. The correct driver is blob.
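(Putting the fixes together, the storage part of the values file would look roughly like this; the bucket name is a placeholder, and the log level is set outside the server block, e.g. via the chart's logLevel value if the chart exposes one:)

cerbos:
  config:
    storage:
      driver: blob
      blob:
        bucket: s3://<bucket-name>?region=eu-west-1
        updatePollInterval: 10s
        downloadTimeout: 30s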
d
Yeah, now it works 👍
Got a better response for the /admin/policy request as well 🙂
Thanks for the help 👍
c
Glad to help