# help
c
Hmm... these numbers don't look good at all. I think there must be some configuration issue here. We've tested Cerbos internally with much higher loads and the p95 latencies were under 50ms. Granted, it wasn't on Lambda, but I can't imagine it adds that much overhead. 🧵
Are you running the test from the same region as the lambda? What's the RTT?
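A quick way to eyeball the RTT, assuming the Lambda exposes the usual Cerbos HTTP port (the URL is a placeholder; the timing fields come straight from curl):

```sh
# connect ~= network round trip, total ~= full request latency.
curl -o /dev/null -s \
  -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
  https://<your-lambda-url>/_cerbos/health
```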
Also, bear in mind that Lambdas have a startup cost because Cerbos has to fetch the policies and compile them every time. If you're hitting a Cerbos instance while it's starting up and compiling the policies, you're likely to see errors.
Something's definitely not right in this test. Unfortunately, I am not a Lambda expert so I can't give you any tuning or troubleshooting tips. I'd suggest that you run your test against a Cerbos instance that's not running on Lambda to get a baseline and then compare that with the result you got here.
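For example, the baseline could be as simple as running the container locally and pointing the same load test at http://localhost:3592 (a rough sketch; the image tag and policy directory are whatever you normally use):

```sh
# Default container config reads policies from /policies via disk storage.
docker run --rm --name cerbos \
  -v "$(pwd)/policies:/policies" \
  -p 3592:3592 -p 3593:3593 \
  ghcr.io/cerbos/cerbos:latest
```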
k
Yes, the Lambda region is the same. I haven't checked the RTT yet. Are there any suggestions on the best way to run Cerbos on AWS?
d
It looks like the second graph shows cold starts for requests after request #600. The AWS ECR repository, the Lambda itself, and the S3 bucket should be colocated in the same region for better start-up time.
1. How much memory is allocated for the Cerbos Lambda? Did you try increasing it?
2. Do you run these tests from another Lambda (in the same region)?
3. What's the `updatePollInterval` config value, if it is set? (Config sketch below.)
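For reference, that setting lives under the blob storage section of the Cerbos config when policies come from S3; roughly like this (bucket, paths, and the interval value are placeholders):

```sh
# Sketch of the relevant Cerbos config, written out as a file here for illustration.
cat > .cerbos.yaml <<'EOF'
storage:
  driver: blob
  blob:
    bucket: "s3://my-policy-bucket?region=us-east-1"
    workDir: /tmp/cerbos/work
    updatePollInterval: 15s   # example value only
EOF
```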
k
1. For now memory is 1024MB.
2. Yes, the region is the same for the AWS ECR repository, the Lambda itself, and the S3 bucket.
3. `updatePollInterval` is currently 1500s. What is the default?
d
• 1024MB is a lot more than Cerbos needs, but in AWS Lambda the amount of memory also determines the amount of virtual CPU: at 1769MB, a function gets the equivalent of one vCPU. It's worth trying that value.
  ◦ Configuring provisioned concurrency can also reduce the number of cold starts (CLI sketch below).
• 1500s for `updatePollInterval` is good. The default is 0, i.e. no polling.
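If it helps, both tweaks are one-liners with the AWS CLI (function name and qualifier are placeholders):

```sh
# Bump memory to roughly one vCPU's worth.
aws lambda update-function-configuration \
  --function-name cerbos \
  --memory-size 1769

# Keep a couple of instances warm; requires a published version or alias.
aws lambda put-provisioned-concurrency-config \
  --function-name cerbos \
  --qualifier live \
  --provisioned-concurrent-executions 2
```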