# help
n: Hi, is the `tenantAssignments` paradigm expressed in this recipe scalable to thousands of tenants, each represented as a 128-bit UUID? That would require ~128 kB of data for every network request. I'm worried about response time.
c: Just curious: how did you arrive at 128 kB of data per request? Does that mean you have ~8,000 tenants?

Cerbos runs inside your local network, close to the application, so I'd be quite surprised if 128 kB of data had any noticeable impact on network transfer times. The response time of Cerbos mostly depends on how complicated your policy rules are. Are they doing an expensive calculation over all of those 8K entries, for example? You'd have to run some tests with degenerate cases to get an idea of the worst response times you could expect.

Do you always need to pass all 8K entries to Cerbos? Could you filter that list down by context, for example by removing tenant IDs that are not applicable to the operation?

So the short answer is "it depends". It's hard to give you a good answer without knowing how your system works. You can schedule a free call with our team using https://go.cerbos.io/workshop and take them through the problem. They might be able to help you model the problem in a different way.