
PC conflicts

Cong Wang

Submitted

Linux kernel engineers often fine-tune sysctl values to optimize workload performance for specific scenarios. However, given the large number of tunable parameters per system and the scale of systems in operation, manual tuning rarely yields consistently ideal results. To address this, we leverage machine learning, in particular optimization algorithms, to identify the best parameter combinations for different workloads, and we streamline the process to minimize human intervention.
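As background, sysctl parameters can be read and updated programmatically through `/proc/sys`, which is how an autotuning loop can apply candidate configurations. A minimal sketch (the helper names are our own, and writing values requires root privileges):

```python
from pathlib import Path

def sysctl_path(name: str) -> Path:
    # Map a dotted sysctl name to its /proc/sys file,
    # e.g. "net.core.somaxconn" -> /proc/sys/net/core/somaxconn
    return Path("/proc/sys") / name.replace(".", "/")

def read_sysctl(name: str) -> str:
    # Read the current value of a sysctl parameter.
    return sysctl_path(name).read_text().strip()

def write_sysctl(name: str, value: str) -> None:
    # Apply a new value (equivalent to `sysctl -w name=value`; needs root).
    sysctl_path(name).write_text(value)
```

The same effect can be achieved by shelling out to `sysctl -w`, but writing `/proc/sys` directly avoids a subprocess per parameter update.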

In our Proof of Concept, which focuses on optimizing HTTP latency for an Nginx server, multiple kernel parameters are selected for the tuning experiment from the Memory Management, CPU scheduler, Networking, and Block I/O subsystems. We have also set up a data pipeline to automate the entire iterative experimentation process, which includes triggering the benchmark, collecting the evaluation metrics or benchmark scores, running the optimization algorithms, and updating the tunable kernel parameters. The benchmark results show improvements in average P99 latency as well as a clear shift in the data distribution compared to the manually tuned baseline, demonstrating the potential of the autotuning solution.
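The iterative loop described above can be sketched, for example, as a simple random search standing in for the optimization algorithms used in the experiment. The parameter names, value ranges, and the `benchmark` callback (which would apply the config, drive load against Nginx, and return the measured P99 latency) are illustrative assumptions, not the actual experimental setup:

```python
import random

# Hypothetical search space: sysctl name -> (min, max). Illustrative only.
SPACE = {
    "net.core.somaxconn": (128, 65535),
    "vm.swappiness": (0, 100),
    "net.ipv4.tcp_fin_timeout": (5, 60),
}

def sample(space, rng):
    # Draw one candidate configuration uniformly from the space.
    return {k: rng.randint(lo, hi) for k, (lo, hi) in space.items()}

def autotune(benchmark, space=SPACE, iterations=20, seed=0):
    """Iteratively sample configs, evaluate P99 latency, keep the best.

    `benchmark(cfg)` is expected to apply cfg, run the load test,
    and return the observed P99 latency (lower is better).
    """
    rng = random.Random(seed)
    best_cfg, best_p99 = None, float("inf")
    for _ in range(iterations):
        cfg = sample(space, rng)
        p99 = benchmark(cfg)
        if p99 < best_p99:
            best_cfg, best_p99 = cfg, p99
    return best_cfg, best_p99
```

In practice a sample-efficient method such as Bayesian optimization would replace the random sampler, since each benchmark run is expensive; the surrounding pipeline (apply, measure, update) stays the same.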

In this presentation, we'll delve into the experimental design, the data pipeline setup, and the benchmark results. We'll also compare the performance of manual tuning with that of automated tuning, providing an analysis and discussion of the results.


Jasmine Mou (ByteDance) <jasmine.mou@bytedance.com>

Krz Sywula (ByteDance) <krz.sywula@bytedance.com>

Submission Type
Talk
Submission Label
Moonshot
Estimated Length Of Time For Presentation (in minutes)
30
Attendance
Physically
