Teaching Big Models With Less Data: How Adapters + Active Learning Win | HackerNoon


:::info
Authors:

(1) Josip Jukic, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia ([email protected]);

(2) Jan Šnajder, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia ([email protected]).

:::

Table of Links

Abstract and 1. Introduction

  2. Related Work
  3. Preliminaries
  4. Experiments
  5. Analysis
  6. Conclusion, Limitations, and References

A. Reproducibility

Abstract

Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full fine-tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.[1]

1 Introduction

Pre-trained language models (PLMs) have quickly become a staple in the field of natural language processing. With the growing demand for data for training these models, developing efficient fine-tuning methods has become critical. This is particularly relevant for many domains and languages where obtaining large amounts of labeled training data is difficult or downright impossible. In such low-resource settings, it becomes essential to effectively leverage and adapt PLMs while minimizing the need for extensive labeled data.

Data labeling is notoriously time-consuming and expensive, often hindering the development of sizable labeled datasets required for training high-performance models. Active learning (AL) (Cohn et al., 1996; Settles, 2009) has emerged as a potential solution to this challenge. In contrast to passive learning, in which the training set is sampled at random, AL encompasses a unique family of machine learning algorithms specifically designed to reduce labeling costs by reducing label complexity, i.e., the number of labels required by an acquisition model to achieve a certain level of performance (Dasgupta, 2011). With the advent of PLMs, AL research has pivoted towards investigating training regimes for PLMs, such as task-adaptive pre-training (TAPT; Gururangan et al., 2020), that could be combined with AL to further reduce the label complexity.
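To make the AL setup concrete, the following is a minimal sketch of a pool-based AL loop with entropy-based uncertainty sampling. It illustrates the general procedure only, not the authors' implementation; `train_model`, `predict_proba`, and `oracle_label` are hypothetical placeholders for the acquisition model's training routine, its probability output, and the human annotator.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-instance entropy of the predicted class distribution; higher = more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def active_learning_loop(pool_texts, oracle_label, train_model, predict_proba,
                         seed_size=100, query_size=50, num_steps=10, seed=0):
    """Pool-based AL: repeatedly train an acquisition model on the labeled set
    and query the most uncertain unlabeled instances for annotation."""
    rng = np.random.default_rng(seed)
    labeled = set(rng.choice(len(pool_texts), size=seed_size, replace=False).tolist())
    model = None
    for _ in range(num_steps):
        model = train_model([(pool_texts[i], oracle_label(i)) for i in sorted(labeled)])
        unlabeled = [i for i in range(len(pool_texts)) if i not in labeled]
        probs = np.asarray(predict_proba(model, [pool_texts[i] for i in unlabeled]))
        scores = predictive_entropy(probs)
        queried = np.argsort(-scores)[:query_size]   # most uncertain instances first
        labeled.update(unlabeled[j] for j in queried)
    return model, labeled
```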

While AL aims at directly minimizing the label complexity of learning, training efficiency can also be improved by reducing the parameter complexity of the model. This becomes more important as PLMs grow larger, and fine-tuning becomes increasingly challenging due to the sheer number of parameters involved. To address this issue, adapters (Houlsby et al., 2019) have been introduced as compact modules that can be incorporated between the layers of PLMs. Adapters enable considerable parameter-sharing, facilitating parameter-efficient fine-tuning (PEFT) through modular learning (Pfeiffer et al., 2023). In this process, only the parameters of the adapters are updated during the tuning for a specific downstream task. Recent research (He et al., 2021; Li and Liang, 2021; Karimi Mahabadi et al., 2021) has revealed that some PEFT methods outperform full fine-tuning (FFT) in low-resource settings, potentially due to better stability and a decreased risk of overfitting. In contrast, FFT has been shown to exhibit instability in scenarios with limited data.
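As a rough PyTorch sketch of the adapter idea (the layer sizes, naming, and placement are illustrative assumptions, not the exact configuration studied in the paper), a bottleneck adapter adds a small residual sub-network between PLM layers while the backbone parameters stay frozen:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, nonlinearity, up-project, residual.
    Hypothetical sketch; hidden and bottleneck sizes are illustrative."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the backbone's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def freeze_backbone_except_adapters(model: nn.Module) -> None:
    """Only adapter (and typically classifier head) parameters remain trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or ("classifier" in name)
```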

Despite the promising results demonstrated by PEFT methods in low-resource settings, there is a striking gap in research on how parameter-efficient training interacts with AL. Given that the majority of real-world AL scenarios involve a restricted amount of data, PEFT methods emerge as strong candidates for AL acquisition models. However, there has been no exploration of AL in conjunction with adapters. Investigating this uncharted territory can further advance our understanding of AL and reveal novel strategies for optimizing performance in low-resource settings.

In this paper, we present an empirical study on the behavior of PEFT in low-resource settings for text classification tasks. We analyze PEFT with and without AL and compare it against FFT. Our results confirm that PEFT outperforms FFT in low-resource setups, and we show that this advantage carries over to AL scenarios in terms of performance gains over passive learning. Furthermore, we analyze the efficacy of TAPT in conjunction with AL and PEFT. We find that TAPT is beneficial in AL scenarios for both PEFT and fully fine-tuned models, thus representing a viable technique for improving performance in low-resource settings. Finally, aiming to illuminate why PEFT and TAPT improve AL performance in low-resource settings, we analyze the properties of PEFT and FFT via forgetting dynamics (Toneva et al., 2019) and PLMs' instance-level representations. We find that AL methods choose fewer unforgettable and more moderately forgettable examples when combined with PEFT and TAPT, where forgettability reflects the model's tendency to first learn and then forget the gold label of a particular instance. Compared to FFT, we observe that PEFT yields representations in the early and middle layers of a model that are more similar to the representations of the base PLM. We hypothesize that this property mitigates the forgetting of knowledge obtained during pre-training when fine-tuning for downstream tasks.
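In the sense of Toneva et al. (2019), a forgetting event occurs when an example that was classified correctly at one checkpoint is misclassified at a later one; examples that are learned and never forgotten are "unforgettable". A minimal sketch of tallying such events from per-epoch correctness records (illustrative only, not the authors' code):

```python
from collections import defaultdict

def count_forgetting_events(correct_per_epoch):
    """correct_per_epoch: list over epochs of {example_id: bool} correctness maps.
    Returns {example_id: number of correct -> incorrect transitions}."""
    forgetting = defaultdict(int)
    previous = {}
    for epoch in correct_per_epoch:
        for ex_id, is_correct in epoch.items():
            if previous.get(ex_id, False) and not is_correct:
                forgetting[ex_id] += 1   # example was learned, then forgotten
            previous[ex_id] = is_correct
    return dict(forgetting)

# Examples that end up with a count of zero after being learned are "unforgettable";
# moderately forgettable examples accumulate a small number of events.
```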

In summary, we show that in AL low-resource settings for text classification, (1) PEFT yields greater performance improvements compared to FFT and (2) TAPT enhances the overall classification performance of adapters and is well-suited for AL scenarios. We also show that (3) AL methods choose fewer unforgettable and more moderately forgettable examples with PEFT and that (4) PEFT produces instance-level representations of early and middle layers that are more similar to the base PLM than FFT. Our results uncover the intricacies of positive interactions between AL, PEFT, and TAPT, providing empirical justification for their combined use in low-resource settings.

2 Related Work

Our research involves combining AL with PLMs and investigating the use of PEFT techniques within the confines of low-resource settings.

AL with PLMs. Until recently, the conventional approach for integrating PLMs with AL involved performing full fine-tuning with a fixed number of training epochs and training the model from scratch in each AL step (Ein-Dor et al., 2020; Margatina et al., 2021; Shelmanov et al., 2021; Karamcheti et al., 2021; Schröder et al., 2022). However, studies by Mosbach et al. (2021) and Zhang et al. (2021) revealed that fine-tuning in low-resource setups is prone to instability, particularly when training for only a few epochs. This instability, often sensitive to weight initialization and data ordering (Dodge et al., 2020), presents a significant challenge for AL, which frequently operates in low-resource settings. Recent research has looked into the impact of PLM training regimes on AL performance (Grießhaber et al., 2020; Yuan et al., 2020; Yu et al., 2022), suggesting that the choice of training regime is more critical than the choice of the AL method. Notably, TAPT has proven particularly effective in enhancing AL performance (Margatina et al., 2022; Jukić and Šnajder, 2023).
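TAPT amounts to continuing masked-language-model pretraining on the task's unlabeled texts before fine-tuning (Gururangan et al., 2020). The sketch below uses the Hugging Face Trainer API; the model name, hyperparameters, and dataset wrapping are illustrative assumptions, not the configuration used in the cited work.

```python
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

class TextDataset(torch.utils.data.Dataset):
    """Wraps raw task texts as tokenized examples for MLM pretraining."""
    def __init__(self, texts, tokenizer, max_length=256):
        self.examples = [tokenizer(t, truncation=True, max_length=max_length)
                         for t in texts]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

def task_adaptive_pretrain(texts, model_name="bert-base-uncased",
                           output_dir="tapt-checkpoint"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    # The collator dynamically masks 15% of tokens for the MLM objective.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=TextDataset(texts, tokenizer),
            data_collator=collator).train()
    model.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)
```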

Adapters in low-resource settings. Research on adapters in low-resource settings has primarily focused on areas such as cross-lingual transfer for low-resource languages (Ansell et al., 2021; Lee et al., 2022; Parovic et al., 2022), where the emphasis lies on exploring diverse methods of fusing adapters. In monolingual settings with scarce data, adapters have been found to outperform full fine-tuning (Li and Liang, 2021; Mao et al., 2022). A study by He et al. (2021) demonstrated that adapter-based tuning exhibits enhanced stability and generalization capabilities by virtue of being less sensitive to learning rates than traditional fine-tuning methods. While incorporating task adaptation techniques, such as TAPT, has been shown to match or even improve performance over FFT in low-resource setups, Kim et al. (2021) noted an interesting caveat: the benefits of integrating TAPT with adapters tend to taper off as the amount of data increases.

Despite the established effectiveness of adapters in setups with limited resources, their integration into AL frameworks — which frequently face analogous resource constraints — remains an untapped area of research. This gap is particularly notable given that AL’s iterative learning process could significantly benefit from adapters’ parameter efficiency and transferability, especially in scenarios where data scarcity or labeling costs are primary concerns.

:::info
This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[1] Our code is available at https://github.com/josipjukic/adapter-al
