The SpacemiT-X60 RISC-V SoC can now enjoy some healthy performance improvements thanks to scheduler definitions merged for the LLVM/Clang 21 compiler.
Merged today to LLVM Git is RISC-V compiler code adding scheduler definitions for the SpacemiT-X60 SoC, which features a multi-core, multi-cluster RISC-V RVA22 processor. As with other CPU targets, having appropriate scheduling information for instructions can yield measurable performance benefits.
Mikhail Gadelha, who authored the patch adding the SpacemiT-X60 scheduler details, commented:
“This patch adds an initial scheduler model for the SpacemiT-X60, including latency for scalar instructions only.
The scheduler is based on the documented characteristics of the C908, which the SpacemiT-X60 is believed to be based on, and provides the expected latency for several instructions. I ran a probe to confirm all of these values and to get the latency of instructions not provided by the C908 documentation (e.g., double floating-point instructions).
For load and store instructions, the C908 documentation says the latency is >= 3 for load and 1 for store. I tried a few combinations of values until I got the current values of 5 and 3, which yield the best results.
Although the X60 does appear to support multiple issue for at least some floating point instructions, this model assumes single issue as increasing it reduces the gains below.
This patch gives a geomean improvement of ~4% on SPEC CPU 2017 for both rva22u64 and rva22u64_v, with some benchmarks improving up to 18% (508.namd_r). There were a couple of execution time regressions, but only in noisy benchmarks (523.xalancbmk_r and 510.parest_r).
…
This initial scheduling model is strongly focused on providing sufficient definitions to provide improved performance for the SpacemiT-X60. Further incremental gains may be possible through a much more detailed microarchitectural analysis, but that is left to future work.”
Not bad at all: a ~4% geomean improvement, up to 18% on some SPEC tests, and further opportunities still to refine the scheduling model for even better performance.
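For readers unfamiliar with how such a "geomean improvement" figure is computed, it is the geometric mean of the per-benchmark speedup ratios, minus one. The sketch below illustrates the arithmetic in Python; the speedup values are hypothetical stand-ins, not the actual SPEC CPU 2017 data from the patch.

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product of n values,
    computed via logs to avoid overflow for long benchmark lists."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-benchmark speedups (old_time / new_time ratios),
# purely illustrative -- not the patch's measured results.
speedups = [1.18, 1.02, 1.03, 0.99, 1.01]

improvement = (geomean(speedups) - 1) * 100
print(f"geomean improvement: {improvement:.1f}%")
```

The geometric mean is the standard way to summarize SPEC results because it weights relative changes equally across benchmarks, so one very long-running test cannot dominate the summary the way it would with an arithmetic mean of raw times.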