
Sparse Activation in MoE Models: Extending ReLUfication to Mixture-of-Experts | HackerNoon


Table of Links

Abstract and 1. Introduction

2. Related Work and Background

3. Analysis

3.1 Limitations of Existing ReLUfication

3.2 dReLU

4. Are Neurons in Expert still Sparsely Activated?

5. dReLU Sparsification

6. Experiments Results

6.1 Downstream Tasks Performance

6.2 Sparsity of Sparsified Models

7. Practical Inference Speedup Evaluation

7.1 Experiments Setting

7.2 Pure CPU Inference and 7.3 Hybrid GPU-CPU Inference

7.4 Deploy LLMs on mobile phones

8. Conclusion and References

A. Appendix / supplemental material

B. Limitation

C. Broader Impact

4 Are Neurons in Expert still Sparsely Activated?

Previous work has shown that dense LLMs with different activation functions (ReLU, SwiGLU, etc.) exhibit sparse activation [69, 36, 30]. However, that analysis is limited to dense models. Despite the intuitive assumption that partitioning FFNs into different experts within an MoE model would result in denser activations within each expert, it remains unclear whether sparsity persists in MoE models. In this section, we select representative MoE models and commonly used downstream tasks to investigate whether this sparsity phenomenon still exists, using the same method as in Section 3 to control the sparsity within each expert.
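Concretely, controlling the sparsity ratio can be pictured as magnitude thresholding: for a target ratio, zero out the smallest-magnitude intermediate activations in each expert's FFN. The PyTorch sketch below illustrates this idea and is not the authors' code; the expert forward-pass shape and the per-token top-k thresholding are our assumptions.

```python
import torch

def apply_sparsity(hidden: torch.Tensor, sparsity_ratio: float) -> torch.Tensor:
    """Zero the smallest-magnitude intermediate activations of one expert.

    hidden: (batch, d_ff) intermediate activations of an expert's FFN.
    sparsity_ratio: fraction of neurons to deactivate, e.g. 0.5.
    """
    d_ff = hidden.shape[-1]
    k = max(1, int(d_ff * (1.0 - sparsity_ratio)))   # neurons kept active
    # Per-token threshold = magnitude of the k-th largest activation.
    thresh = hidden.abs().topk(k, dim=-1).values[..., -1:]
    return torch.where(hidden.abs() >= thresh, hidden, torch.zeros_like(hidden))

# Hypothetical expert forward pass with sparsity control:
#   h = act_fn(x @ W_gate) * (x @ W_up)    # SwiGLU-style intermediate
#   h = apply_sparsity(h, sparsity_ratio=0.5)
#   out = h @ W_down
```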

Models. We select Deepseek-MoE [15], Qwen1.5-MoE [5], and Mixtral [25] for our experiments, and include Llama-2-7B as a dense model for comparison.

We first study performance as a function of the sparsity ratio, as shown in Figure 5 (a)[2]. Performance drops by only about 1%-2% when the sparsity ratio is 0.5. This trend suggests that MoE models exhibit sparsity similar to that of dense models.

Further, we profile the activation patterns of Mistral and Mixtral, a popular pair of dense and MoE LLMs, as shown in Figure 5 (b). Both models show a similar pattern in which activations are concentrated around 0, consistent with previous analyses of dense LLMs. The sparsity within experts also implies that neurons in the same expert have different functionality. This finding holds across all layers and experts, as detailed in Appendix A.2. We report this interesting observation and leave deeper analysis to future work.
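Activation profiling of this kind can be approximated by attaching forward hooks to the activation submodules and histogramming their outputs. Below is a minimal sketch, assuming a HuggingFace-style model; the `act_fn` module-naming convention and the histogram range are our assumptions, not the paper's setup.

```python
import torch

@torch.no_grad()
def profile_activations(model, inputs, bins: int = 101, vmax: float = 1.0):
    """Histogram intermediate activations to check concentration around 0."""
    edges = torch.linspace(-vmax, vmax, bins + 1)
    hist = torch.zeros(bins)

    def hook(_module, _inputs, output):
        # torch.histogram ignores values that fall outside the edge range.
        hist.add_(torch.histogram(output.detach().float().cpu().flatten(),
                                  bins=edges).hist)

    handles = [m.register_forward_hook(hook)
               for name, m in model.named_modules()
               if name.endswith("act_fn")]   # assumed submodule naming
    model(**inputs)
    for h in handles:
        h.remove()
    return hist / hist.sum()                 # normalized activation density
```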

Inspired by these discoveries, we are convinced that ReLUfication can be extended to MoE models rather than being restricted to dense ones. Because FFN weights account for a larger proportion of MoE models, the FLOP reduction achieved through ReLUfication is even more pronounced there.
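As a rough back-of-envelope illustration of that last point (our numbers, not the paper's): if FFN experts account for a fraction f of total FLOPs and a fraction s of their neurons can be skipped, the overall FLOP reduction is approximately f·s, so the larger FFN share in MoE models amplifies the savings.

```python
def flop_reduction(ffn_fraction: float, sparsity: float) -> float:
    """Approximate overall FLOP reduction from skipping inactive FFN neurons."""
    return ffn_fraction * sparsity

# Hypothetical shares of total FLOPs spent in FFNs (illustrative only):
print(flop_reduction(0.67, 0.5))   # dense model, ~33% fewer FLOPs
print(flop_reduction(0.90, 0.5))   # expert-heavy MoE, ~45% fewer FLOPs
```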

:::info
Authors:

(1) Yixin Song, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(2) Haotong Xie, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(3) Zhengyan Zhang, Department of Computer Science and Technology, Tsinghua University;

(4) Bo Wen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(5) Li Ma, Shanghai Artificial Intelligence Laboratory;

(6) Zeyu Mi, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University ([email protected]);

(7) Haibo Chen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University.

:::


:::info
This paper is available on arxiv under a CC BY 4.0 license.

:::
