# help
hey folks. I am running cerbos as a sidecar for a couple of services. I already deployed this into a cluster a long time ago and verified all was working. However, I just deployed to a new cluster and am getting this error:
request failed: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /sock/cerbos.sock: connect: permission denied"
I can't remember if I've seen this before or how I fixed it 🤦. I'm connecting via gRPC over the Unix socket, as described in the error
I can provide all the Helm stuff if needed
Are you running the application as an unprivileged user? Maybe it doesn't have read access to the socket file? We can try to reproduce it if you send the chart, your k8s version, and the client connection snippet.
The default permission on the socket file is …, which makes it usable only by the user. You can change that with the … setting: https://docs.cerbos.dev/cerbos/latest/configuration/server.html#_listen_addresses
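For reference, a minimal sketch of what that server config could look like, assuming the socket path from the error above and the `udsFileMode` option described on the linked docs page (verify the exact key names against your Cerbos version):

```yaml
server:
  # Listen on the Unix domain socket that the clients dial
  # (path taken from the error message above)
  grpcListenAddr: "unix:/sock/cerbos.sock"
  # Loosen the socket file mode so non-owner users can connect;
  # option name from the linked configuration docs
  udsFileMode: 0o777
```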
OK the permissions might be the issue. I'll report back with results
OK so it looks like my permissions were already set that way (…). However, one of our ops folks consolidated the Helm chart for our Cerbos integration into one file, and that's when things stopped working. Gonna try a few things on our side, but I might need some support in a bit if we can't spot the problem.
Maybe the permissions on the parent directory (…) don't give execute rights to the user? That could also be a cause.
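A quick way to check that from inside the client container. The first two commented commands use the paths from the error above; the runnable part below is a self-contained sketch of reading a directory's mode bits (GNU `stat`):

```shell
# In the pod, run as the application user:
#   ls -ld /sock && ls -l /sock/cerbos.sock
# The directory needs the execute (x) bit for that user to traverse it
# and reach the socket, even if the socket file itself is readable.

d=$(mktemp -d)       # stand-in for the socket's parent directory, e.g. /sock
chmod 751 "$d"       # o+x lets other users traverse the directory
stat -c '%a' "$d"    # prints 751
rm -rf "$d"
```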
ah great point. I will make sure that's taken into consideration
@Charith (Cerbos) you don't happen to know how to do that inside a Dockerfile, do you? I'm trying to chown the socket file to the user running in the container, but that doesn't seem to work. Do you have any service configuration (Dockerfile, I guess) that does this properly? (By this, I mean connecting to a sidecar.)
We have a Docker Compose demo that uses a socket on a tmpfs volume but it doesn't do any permission juggling. I can think of two things you could try:
• Try changing the securityContext.fsGroup for the pod
• Use an in-memory volume for the socket
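Those two suggestions could look roughly like this in a pod spec. This is a sketch, not the project's official example: the names `app-with-cerbos`, `my-app:latest`, and the GID 2000 are placeholders; `emptyDir` with `medium: Memory` is Kubernetes' standard in-memory volume, and `fsGroup` makes mounted volumes group-accessible to that GID:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cerbos            # placeholder name
spec:
  securityContext:
    fsGroup: 2000                  # volumes are made group-accessible for this GID
  volumes:
    - name: sock
      emptyDir:
        medium: Memory             # in-memory (tmpfs) volume holding the socket
  containers:
    - name: app
      image: my-app:latest         # placeholder image
      volumeMounts:
        - name: sock
          mountPath: /sock         # client dials unix:/sock/cerbos.sock
    - name: cerbos
      image: ghcr.io/cerbos/cerbos:latest
      args: ["server", "--config=/config/config.yaml"]
      volumeMounts:
        - name: sock
          mountPath: /sock
```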