Thursday, June 16, 2011

Summary of Linux Symposium 2011

The Linux Symposium took place in Ottawa once again in 2011. I went there to give a talk about recovering system metrics from a kernel trace. There were many interesting talks; here is a summary.

Hitoshi Mitake explained scalability issues for virtualization on multi-core architectures with real-time constraints. He mentioned that when one virtual CPU is preempted, another virtual CPU may try to enter a critical section protected by a spin lock and waste its time slice spinning until the lock holder runs again, which decreases performance under load. An alternative way of scheduling virtual CPUs was proposed to avoid this situation.
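To make the issue concrete, here is a minimal test-and-set spin lock in C11 (a sketch for illustration, not the kernel's implementation): if the virtual CPU holding the lock is preempted by the hypervisor, every other virtual CPU entering spin_lock() burns cycles in the busy-wait loop until the holder runs again.

/* minimal test-and-set spin lock (C11 atomics) */
#include <stdatomic.h>

typedef struct {
    atomic_flag locked;
} spinlock_t;

static spinlock_t lock = { ATOMIC_FLAG_INIT };

static void spin_lock(spinlock_t *l)
{
    /* keep spinning while another CPU holds the lock; in a virtual
     * machine this wastes cycles whenever the holder's vCPU has been
     * descheduled by the hypervisor */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}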

Sergey Blagodurov talked about scheduling on NUMA multicore architectures. I learned about numactl and its underlying system calls, which can be used from userspace to move processes and memory between NUMA domains. He was developing scheduler algorithms to optimize performance on these architectures. The Linux scheduler can migrate a process between domains, but not its memory pages, so performance drops: inter-domain communication increases, and remote memory accesses are much slower than local ones. One of his goals was to increase memory locality for frequently used pages.
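As an illustration, here is a small sketch using libnuma, the library behind numactl (link with -lnuma); the node number is hardcoded only for the example. It places the current thread and a fresh allocation on the same NUMA node, which is exactly the kind of locality such a scheduler tries to preserve.

/* co-locate a thread and its memory on one NUMA node (libnuma) */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    int node = 0;                    /* target domain, hardcoded here */
    size_t size = 4 * 1024 * 1024;

    numa_run_on_node(node);          /* run on the CPUs of that node */
    char *buf = numa_alloc_onnode(size, node);  /* node-local memory */
    if (!buf)
        return 1;

    buf[0] = 42;   /* first touch: the page is backed by local memory */

    numa_free(buf, size);
    return 0;
}

From the shell, numactl --cpunodebind=0 --membind=0 ./app achieves the same binding.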

Jon C Masters gave a talk about the Tilera architecture, which packs up to 100 cores on a single chip! This piece of hardware is undoubtedly very special, and I would love to get one on my desk! I wonder how the Linux kernel can scale up to that number of cores.

Dave Jones presented Trinity, a system call fuzzer. It's basically a tool that performs system calls with randomized arguments, while still aiming at edge cases. Around 20 weird bugs have been found and fixed this way. A code coverage report could be useful in the future.
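For flavor, here is a deliberately naive fuzzing loop in C; Trinity is much smarter, since it knows the type of each argument and aims at boundary values rather than pure noise. The syscall number range below is a rough assumption for x86-64 of that era, and this should only ever be run in a disposable virtual machine.

/* naive syscall fuzzer: random syscall numbers, random arguments */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/wait.h>

int main(void)
{
    srandom(time(NULL));

    for (int i = 0; i < 100; i++) {
        long nr = random() % 330;  /* ~number of x86-64 syscalls then */
        long a = random(), b = random(), c = random();

        pid_t pid = fork();        /* isolate each call in a child so
                                      a crash doesn't kill the fuzzer */
        if (pid == 0) {
            long ret = syscall(nr, a, b, c);
            printf("syscall(%ld, %ld, %ld, %ld) = %ld\n",
                   nr, a, b, c, ret);
            _exit(0);
        }
        waitpid(pid, NULL, 0);
    }
    return 0;
}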

What about redundant data on a disk? There was a talk about block device deduplication by Kuniyasu Suzaki. The principle is that two identical blocks should not be stored twice. Since it operates at the block level, mileage varies between file systems. The presenter mainly showed how much space can be recovered depending on the block size and the file system type. I'm still curious about the performance impact and CPU overhead of performing deduplication online.
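To see the principle in code, here is a sketch that counts how many fixed-size blocks of a file or device are duplicates. It uses a non-cryptographic FNV-1a hash for brevity; a real deduplicator would use a stronger hash and compare block contents on a match before discarding anything.

/* count duplicate 4 KiB blocks of a file by hashing each one */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

static uint64_t fnv1a(const unsigned char *p, size_t n)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-or-block-device>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint64_t *hashes = NULL;
    size_t n = 0, cap = 0, r;
    unsigned char buf[BLOCK_SIZE];

    while ((r = fread(buf, 1, BLOCK_SIZE, f)) > 0) {
        if (n == cap) {
            cap = cap ? cap * 2 : 1024;
            hashes = realloc(hashes, cap * sizeof(*hashes));
            if (!hashes) { perror("realloc"); return 1; }
        }
        hashes[n++] = fnv1a(buf, r);
    }
    fclose(f);

    /* sort the hashes, then count repeats of the previous entry */
    qsort(hashes, n, sizeof(*hashes), cmp_u64);
    size_t dup = 0;
    for (size_t i = 1; i < n; i++)
        if (hashes[i] == hashes[i - 1])
            dup++;

    printf("%zu blocks, %zu duplicates (%.1f%% reclaimable)\n",
           n, dup, n ? 100.0 * dup / n : 0.0);
    free(hashes);
    return 0;
}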

The last talk I want to mention was given by Alexandre Lissy, about verification of the Linux kernel. Static checks like Coccinelle are available right now by running make coccicheck in your Linux source tree. While model checking the entire kernel would be great to get stronger guarantees about its behavior, doing so with a SAT-based model checker is another story, because of the size and complexity of the Linux source tree; it is quite a challenge. Maybe an approach like Microsoft's SLAM, which validates interface usage, could be used for Linux drivers too?
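To give a flavor of what such static checks catch, here is a hypothetical driver-style fragment with a classic bug pattern, use-after-free, of the kind the Coccinelle scripts shipped with the kernel look for (plain libc calls stand in for kernel ones so the fragment compiles anywhere):

/* use-after-free: the pattern that scripts such as
 * scripts/coccinelle/free/kfree.cocci report in kernel sources */
#include <stdlib.h>
#include <string.h>

struct device_state {
    char name[16];
};

size_t broken_release(struct device_state *s)
{
    free(s);                 /* kfree(s) in real kernel code */
    return strlen(s->name);  /* BUG: s was just freed */
}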

This Linux Symposium was an opportunity to meet many people involved in the kernel community and to share ideas. Here are a few links related to the conference.
