performance issues as kubernetes agent pods scale up


Jeff Knurek
We have a build pipeline that dynamically scales up agents (with the Kubernetes plugin) to execute tests on. We've noticed that as the number of agents scales up to 20 (or even over 50), the performance of each agent degrades.

Each agent's resource usage is relatively low, and the resource requests are set so that at most 2 agents end up on a single k8s node. Based on the metrics collected from Prometheus, there is no indication that the pods themselves are under high load. We have seen the Jenkins master hit high CPU levels (7 cores), but this happens more often when concurrent builds run with high agent counts.
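For context, the "at most 2 agents per node" scheduling comes from the resource requests on the pod template; roughly along these lines (label, image, and values are illustrative, not our exact config):

```groovy
// Sketch of a Kubernetes-plugin pod template (names/values illustrative).
// Resource requests are sized so at most two agent pods fit on one node.
podTemplate(
    label: 'test-agent',
    containers: [
        containerTemplate(
            name: 'node',
            image: 'node:12',
            ttyEnabled: true,
            command: 'cat',
            resourceRequestCpu: '1500m',  // roughly half a node's CPU per agent
            resourceRequestMemory: '4Gi'
        )
    ]
) {
    node('test-agent') {
        // test stages run here
    }
}
```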

To measure this, I've added `time` to each `sh` command in the pipeline. For example, a `yarn install --ignore-optional` (with packages already installed) takes approx. 2 seconds, but Jenkins reports the step as taking 1 minute 28 seconds (see screenshot).
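The measurement was done by prefixing each shell step with `time`, something like this (stage name is just for illustration):

```groovy
// Prefix each sh step with `time` so the command's own wall-clock time
// is printed in the console and can be compared against the duration
// Jenkins reports for the same step.
stage('install') {
    sh 'time yarn install --ignore-optional'
    // console prints the command's real time (~2s in our case),
    // while Jenkins shows the step taking far longer
}
```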

Is there anything that can be done/adjusted to resolve this?


Attachment: timed-commands.png (135K)