The x86/asm changes were merged yesterday for the Linux 7.1 kernel, bringing a few low-level improvements.
Uros Bizjak worked out the x86/x86_64 Assembly improvements for the Linux 7.1 kernel. For the most part it’s uneventful work this cycle, but there are two patches for removing some unnecessary memory clobbers. Avoiding memory clobbers in inline Assembly code can yield a minor benefit through better instruction scheduling and register allocation. A memory clobber acts as a read/write compiler barrier: it prevents the compiler from reordering memory loads/stores across the inline Assembly statement, forces any values cached in registers to be flushed back to memory, and forces values that may have been changed by the Assembly code to be reloaded rather than read from stale register copies.
Bizjak found that the memory clobber serving as a compiler barrier is unnecessary in the kernel’s FS/GS base accessors, as they only read the FS/GS base MSRs into a general-purpose register and do not access memory. The other patch removes the memory clobber from savesegment(), since it likewise does not access memory and so has no need to declare one.
That’s the highlight of the x86/asm pull request now merged for Linux 7.1.