# help
j
Copy code
# Default values for cerbos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nameOverride: "cerbos"
fullnameOverride: "cerbos"

# Number of Cerbos pods to run
replicaCount: 6

# Container image details
image:
  repository: ghcr.io/cerbos/cerbos
  pullPolicy: IfNotPresent
  # Image tag to use. Defaults to the chart appVersion.
  tag: ""

imagePullSecrets: []

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# Annotations to add to the pod.
podAnnotations: {}

# Security context for the whole pod.
podSecurityContext: {}
  # fsGroup: 2000

# Security context for the Cerbos container.
securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

# Resource limits for the pod.
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# Autoscaling configuration.
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

# Node selector for the pod.
nodeSelector: {}

# Pod tolerations.
tolerations: []

# Pod affinity rules.
affinity: {}

# Volumes to add to the pod.
volumes: []

# Volume mounts to add to the Cerbos container.
volumeMounts: []

# Environment variables to add to the pod.
env: []

# Source environment variables from config maps or secrets.
envFrom: []

# Cerbos service settings.
service:
  type: ClusterIP
  httpPort: 3592
  grpcPort: 3593

# Cerbos deployment settings.
cerbos:
  # Port to expose the http service on.
  httpPort: 3592
  # Port to expose the gRPC service on.
  grpcPort: 3593
  # Secret containing the TLS certificate.
  # Leave empty to disable TLS.
  # The secret must contain the following keys:
  # - tls.crt: Required. Certificate file contents.
  # - tls.key: Required. Private key for the certificate.
  # - ca.crt: Optional. CA certificate to add to the trust pool.
  tlsSecretName: "controlshift.io"
  # Cerbos log level. Valid values are DEBUG, INFO, WARN and ERROR
  logLevel: INFO
  # Add Prometheus service discovery annotations to the pod.
  prometheusPodAnnotationsEnabled: true
  # Cerbos config file contents.
  # Some server settings like server.httpListenAddr, server.grpcListenAddr, server.tls
  # will be overwritten by the chart based on values provided above.
  config:
    engine:
      defaultPolicyVersion: "default"
    storage:
      driver: "blob"
      blob:
        bucket: "gs://<redacted>"
        workDir: ${HOME}/tmp/cerbos/work
        updatePollInterval: 10s
    server:
      adminAPI:
        enabled: true
        adminCredentials:
          username: <redacted>
          passwordHash: <redacted>
    auxData:
      jwt:
        disableVerification: true
i am using that yaml file above and installing cerbos with helm chart
i'm seeing a "Failed to clone blob store" error in my pods.
so i'm guessing cerbos doesn't have permissions to read the gcs bucket?
d
Probably. What’s the full error message?
j
message has been deleted
d
Yes, looks like a permission problem
j
hmmmm okay
i'll go take a look and see. thanks
by the way, what is this section of the yaml for?
Copy code
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
what is the use of a service account @Dennis (Cerbos)
i'm wondering if i should be seeing an IAM account in GCP created during installation?
what account does cerbos runtime use to connect to gcs?
d
It reads that from the environment, but I don’t know whether a K8s service account running in GKE implies the credentials of a GCP service account.
I think @Charith (Cerbos) may have an answer when he comes online
j
i have already assigned storage viewer role to the default GKE compute service account
and it's still throwing the same error about being unable to clone the blob
guess i'll wait for charith
d
Is it possible to spin up a pod with GCP CLI, assign the same permissions to the pod and try to download files from the bucket?
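For reference, a minimal test pod might look something like this (just a sketch, assuming the public google/cloud-sdk image, which ships with gsutil; the pod name is only an example):
Copy code
# Throwaway pod for checking GCS access from inside the cluster.
# "sleep infinity" keeps the container running so you can
# kubectl exec into it and run gsutil against the bucket.
apiVersion: v1
kind: Pod
metadata:
  name: gsutil-test
spec:
  containers:
    - name: gsutil
      image: google/cloud-sdk:slim
      command: ["sleep", "infinity"]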
j
I have to install the GCS CLI tool called gsutil inside it. Let me see how I can do that.
it works
i created an ubuntu pod, installed the google cloud sdk, then ran gsutil ls gs://<mybucketname> and i am able to list objects in it
i can also do gsutil cp gs://<mybucketname>/myfile and i can copy files out of the bucket
into the ubuntu pod of course.
so the IAM account has sufficient permissions
d
Can you list files?
I see. You can
Can you please run the cerbos binary manually in this ubuntu pod?
j
oh....just run the binary? what about the configuration that the binary will use?
how will i specify that
d
yes
Copy code
curl -L -o cerbos.tar.gz "https://github.com/cerbos/cerbos/releases/download/v0.9.1/<BUNDLE>"
tar xvf cerbos.tar.gz
chmod +x cerbos
configuration is
cerbos.config
section from your Helm values file saved as a separate yaml file.
then you run
./cerbos server --config=/path/to/config.yaml
j
that link only tells me to download and extract the gzipped tarball
how do i configure cerbos to use my gcs bucket?
d
You have your
values.yml
file you use with Helm
copy its
cerbos.config
section to a separate conf.yaml
Copy code
server:
  httpListenAddr: ":3592"
  grpcListenAddr: ":3593"
  adminAPI:
    enabled: true
    adminCredentials:
      username: <redacted>
      passwordHash: <redacted>
engine:
  defaultPolicyVersion: "default"
storage:
  driver: "blob"
  blob:
    bucket: "gs://<redacted>"
    workDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 10s
auxData:
  jwt:
    disableVerification: true
This is your config.yml
j
ok
ok cerbos starts up fine
Copy code
root@ubuntu:~# ./cerbos server --config=2.yaml
2021-11-09T04:11:52.105Z INFO cerbos.server maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
2021-11-09T04:11:52.238Z INFO cerbos.blob Polling for updates every 10s {"bucket": "gs://<redacted>", "workDir": "/root/tmp/cerbos/work"}
2021-11-09T04:11:52.239Z INFO cerbos.grpc Starting gRPC server at :3593
2021-11-09T04:11:52.239Z INFO cerbos.http Starting HTTP server at :3592
so it doesn't look like an IAM issue?
d
Has it downloaded files?
j
well my bucket is empty
i mean it doesn't have any yaml files in it
it has a single TXT file in it
will cerbos download *.* from the bucket?
d
It looks for
*.yml, *.yaml, *.json
files
It doesn’t download anything else
j
ok let me create a dummy yaml file. it doesn't validate the content of the yaml before downloading it?
so i can put anything in the yaml?
d
it will blow up, since it will try to interpret it as a policy
but we will see if cerbos is able to download it
If you want to try real policies you can upload this folder to the bucket: https://github.com/cerbos/cerbos/tree/main/internal/test/testdata/store
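A valid policy file looks roughly like this (a minimal sketch; the resource, action and role names here are placeholders):
Copy code
# Minimal Cerbos resource policy: allows the "user" role to perform
# the "view" action on the "album" resource.
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "album"
  rules:
    - actions: ["view"]
      effect: EFFECT_ALLOW
      roles: ["user"]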
j
yes so this is what i have done. i placed a yaml file inside...it's a rubbish yaml
not cerbos policy
then i started up cerbos
Copy code
2021-11-09T04:17:55.343Z INFO cerbos.server maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
2021-11-09T04:17:55.584Z ERROR cerbos.blob Failed to build index {"bucket": "gs://abac_policies", "workDir": "/root/tmp/cerbos/work", "error": "failed to build index: missing imports=0, duplicate definitions=0, load failures=1"}
2021-11-09T04:17:55.584Z ERROR cerbos.blob Failed to initialize blob store {"bucket": "gs://<redacted>", "workDir": "/root/tmp/cerbos/work", "error": "failed to build index: missing imports=0, duplicate definitions=0, load failures=1"}
2021-11-09T04:17:55.584Z INFO cerbos.server maxprocs: No GOMAXPROCS change to reset
ERROR: failed to create store: failed to build index: missing imports=0, duplicate definitions=0, load failures=1
i'll just remove the redacted thing
it's getting annoying redacting all the time. hahaha
ok so it blows up
d
looks good
j
which is expected
then when i checked in $HOME/tmp/cerbos/work
i do see the YAML file there
so maybe the issue is something related to the helm chart....
and for that we wait for @Charith (Cerbos)?
d
yes
j
alright. much obliged. you've been very helpful.
d
no problem
@Oguzhan might help as well when he comes online.
j
Ok
c
It is k8s best practice to have dedicated service accounts for applications. That's why the Helm chart creates a Kubernetes service account (not GCP service account) by default for Cerbos. It has no cluster privileges because Cerbos doesn't need any.
The best way to deploy Cerbos with GCS access is to create a dedicated Google service account with only access to that bucket. Then, save the service account JSON key as a k8s secret, mount that secret to the Cerbos pod and set the environment variable
GOOGLE_APPLICATION_CREDENTIALS
pointing to the mounted path of that key in the pod. See https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform for an example.
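As a rough sketch, the secret could be created from the downloaded key like this (the secret name, namespace and key file name are only examples; they must match the volume mount and the GOOGLE_APPLICATION_CREDENTIALS path you set in the Helm values):
Copy code
# Kubernetes Secret wrapping the Google service account JSON key.
apiVersion: v1
kind: Secret
metadata:
  name: cerbos-svc-key
  namespace: cerbos
type: Opaque
stringData:
  # Paste the contents of the downloaded key file here.
  key.json: |
    { ... service account key JSON ... }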
The other option is to use Google Workload Identity which lets you use Google service accounts inside GKE. So, you'd then configure Helm with
serviceAccount.name
pointing to that federated service account. I haven't tried this myself so there might be some caveats there though. You can read more about it at https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
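In the Helm values that would look something like this (a sketch only, untested: the annotation comes from the Workload Identity docs, the Google service account name and project are placeholders, and the cluster must have Workload Identity enabled):
Copy code
serviceAccount:
  create: true
  # Bind the Kubernetes service account to a Google service account
  # that has read access to the policy bucket.
  annotations:
    iam.gke.io/gcp-service-account: cerbos-gcs@<project>.iam.gserviceaccount.com
  name: "cerbos"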
j
Alright let me try that.
hmmm i tried the first approach using a dedicated Google Service Account. my pods are stuck in "Pending"
i modified the helm chart's values.yaml like so
Copy code
# Volumes to add to the pod.
volumes: 
- name: cerbos-key
  secret:
    secretName: cerbos-svc-key


# Volume mounts to add to the Cerbos container.
volumeMounts:
- name: cerbos-key
  mountPath: ${HOME}/tmp/cerbos/secrets


# Environment variables to add to the pod.
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: ${HOME}/tmp/cerbos/secrets/key.json


# Source environment variables from config maps or secrets.
does that look correct?
@Charith (Cerbos)
c
Seems OK on the surface. I would not use environment variables in the path though. They can be interpreted differently. Use an absolute path like
/secret
what does
k describe pod cerbos
output?
j
Copy code
# Volumes to add to the pod.
volumes: 
- name: cerbos-key
  secret:
    secretName: cerbos-svc-key

# Volume mounts to add to the Cerbos container.
volumeMounts:
- name: cerbos-key
  mountPath: /secrets/cerbos

# Environment variables to add to the pod.
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /secrets/cerbos/key.json
let me redeploy with that instead
ok this is weird
that secret definitely exists
c
Is it in the same namespace?
j
Yes. The namespace is cerbos.
So the secret is in the cerbos ns. The helm chart is installed in the cerbos ns too.
If I remove the volume mounts for the service account then the certs volume gets mounted properly
Really odd
c
You are mounting the certs volume through Helm values as well, right?
j
Oh wait!!!
I think you are right about the ns
Jesus christ
The secret is in the default namespace. Omg I feel so stupid for missing that. My bad!
c
Ah that would be why 🙂
j
Hahahaha a. Thanks
c
You're welcome