# help
r
Hey guys! Question about performance. Let's say at some point FGA will be needed, and there will be Resource Policies per individual object (`kind: Parent.Entity.aFhAsd273hd2asda`). This might grow to 100k+ policies in the DB. How does Cerbos behave at that scale?
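For reference, each of those would just be a standard Cerbos resource policy, roughly like this (the `resource` kind, actions and roles here are only examples):
```yaml
# Rough sketch of one per-object resource policy; values are illustrative only.
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "Parent.Entity.aFhAsd273hd2asda"   # one policy per individual object
  rules:
    - actions: ["view", "edit"]
      effect: EFFECT_ALLOW
      roles: ["owner"]
```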
c
We have done some internal tests with a large number of policies (10K+) and didn't see any noticeable performance issues. Of course, the complexity of the policies and the number of concurrent requests will have some impact if the instance doesn't have enough memory and CPU. With such a large number of policies, it might be better to run several Cerbos instances behind a load balancer. You should also increase the compile cache size of Cerbos so that it's not constantly evicting cached policies and recompiling them over and over. It might even be worth doing some request sharding so that the working set of each Cerbos instance is small enough to be held in memory.
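As a rough sketch of the load-balanced setup on k8s (replica count, labels and image tag are placeholders; the Service does the load balancing across replicas):
```yaml
# Several Cerbos replicas behind a k8s Service -- illustrative only, not a tuned config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerbos
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cerbos
  template:
    metadata:
      labels:
        app: cerbos
    spec:
      containers:
        - name: cerbos
          image: ghcr.io/cerbos/cerbos:latest
          ports:
            - containerPort: 3592   # HTTP (default)
            - containerPort: 3593   # gRPC (default)
---
apiVersion: v1
kind: Service
metadata:
  name: cerbos
spec:
  selector:
    app: cerbos
  ports:
    - name: http
      port: 3592
    - name: grpc
      port: 3593
```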
r
Hey @Charith (Cerbos), thanks for answering. What are the recommended k8s resource requirements? Currently we allocate 0.5 CPU and 500Mi RAM to a single pod. Can you advise on an optimal configuration here?
Also, how can the compile cache size be increased?
c
`compile.cacheSize` in the configuration (https://docs.cerbos.dev/cerbos/latest/configuration/index.html#_full_configuration). On k8s, don't set a CPU `limit`, only a `request`. Cerbos doesn't need that much memory initially. It's difficult to give an exact number because it depends on a lot of factors. Just monitor the pod's resource usage and adjust as you see fit, or use the vertical pod autoscaler.
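Something along these lines (the `cacheSize` value is just a placeholder to illustrate, not a recommendation; keep your current requests as a starting point and drop the CPU limit):
```yaml
# Cerbos config snippet -- only compile.cacheSize is the setting discussed above.
compile:
  cacheSize: 4096   # placeholder; size to roughly match your working set of policies
```
```yaml
# Matching pod resources sketch: CPU/memory requests you already use, no CPU limit.
resources:
  requests:
    cpu: "500m"
    memory: 500Mi
  limits:
    memory: 500Mi   # adjust based on observed usage
```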
r
Thanks!