Computing

Efficient On-Device LLMs: Function Calling and Fine-Tuning Strategies | HackerNoon

News Room
Published 3 April 2025, last updated 12:15 PM

Table of Links

Abstract and 1. Introduction
2 Related works
3 Methodology and 3.1 Causal language model as a classification model
3.2 Functional token
3.3 Dataset collection
3.4 Model development and training
4 Experiments and 4.1 Android function calls
4.2 Extension to Vehicle, Yelp, and DoorDash function sets
4.3 Full and partial training datasets and 4.4 Full training and LoRA training
4.5 Parallel and nested function call and 4.6 Weighted loss function for special tokens
5 Discussion and future works and References
Appendix
A.1 Android function examples
A.2 Vehicle function examples

Deployment of on-device language models Due to memory limitations and lower inference speeds, deploying larger models on edge devices like PCs or smartphones is challenging. Nonetheless, efforts to deploy smaller-scale Large Language Models (LLMs) to edge devices are underway. Open-source models of manageable sizes, such as Gemma-2B, Gemma-7B, StableCode-3B [31], and Llama-7B [47], have been introduced. To enhance these models' inference speed on devices, research initiatives like llama.cpp [24] have been developed. The MLC LLM framework [46] allows the operation of 7B language models on mobile phones and other edge devices, demonstrating compatibility across various hardware, including AMD, NVIDIA, Apple, and Intel GPUs.
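The memory constraint the paragraph alludes to can be made concrete with a back-of-the-envelope estimate of weight storage alone (ignoring KV cache and activations); the parameter counts and bit widths below are illustrative, not measurements from any of the cited frameworks:

```python
def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the weights, in GB
    (ignores KV cache, activations, and runtime overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

# A 7B model in fp16 needs ~14 GB of weight memory, which exceeds the
# RAM of most phones; 4-bit quantization (as used by llama.cpp-style
# runtimes) brings it to ~3.5 GB, and a 2B model to ~1 GB.
print(f"7B fp16:  {model_memory_gb(7e9, 16):.1f} GB")
print(f"7B 4-bit: {model_memory_gb(7e9, 4):.1f} GB")
print(f"2B 4-bit: {model_memory_gb(2e9, 4):.1f} GB")
```

This is why quantized 2B-7B models are the practical ceiling for current edge hardware.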

Function calling in language models Rapid advancements have been observed in the function-calling capabilities of smaller-scale models. Projects such as NexusRaven [42], Toolformer [37], ToolAlpaca [44], Gorilla [30], ToolLlama [32] and Taskmatrix [20] have demonstrated that 7B and 13B models can call external APIs with efficacy comparable to GPT-4. The pioneering Octopus v1 project even enabled a 2B model to perform on par with GPT-4. This body of work uses a RAG-based method for function calling: the model first retrieves the relevant functions from a large pool based on the user's query, then generates a response using those functions as context.
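The retrieve-then-generate pattern described above can be sketched in a few lines. The function pool, descriptions, and word-overlap scoring below are all illustrative stand-ins (a real pipeline would use embedding similarity and an actual LLM call), but the two stages mirror the RAG-based approach:

```python
# Hypothetical function pool; names and descriptions are illustrative only.
FUNCTION_POOL = {
    "take_photo": "Open the camera app and capture a photo with the rear lens.",
    "set_alarm": "Create an alarm for a given time of day.",
    "send_text": "Send an SMS message to a contact.",
    "get_weather": "Fetch the current weather forecast for a location.",
}

def retrieve_functions(query: str, pool: dict, top_k: int = 2) -> list:
    """Stage 1: rank functions by word overlap with the query
    (a toy stand-in for embedding similarity in a real RAG pipeline)."""
    q = set(query.lower().split())
    scored = sorted(pool.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

def build_prompt(query: str, pool: dict) -> str:
    """Stage 2: place the retrieved descriptions in the context window,
    then the LLM would generate the actual function call."""
    names = retrieve_functions(query, pool)
    context = "\n".join(f"{n}: {pool[n]}" for n in names)
    return f"Available functions:\n{context}\n\nUser: {query}\nCall:"

print(build_prompt("what is the weather forecast for Palo Alto", FUNCTION_POOL))
```

The retrieval stage is what lets a small model handle a large function pool: only the top-k candidates need to fit in the context window.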

Fine-tuning and adapters of language models Fine-tuning language models has become a prevalent practice, with various efforts dedicated to this endeavor. LoRA [14] is often the method of choice for training models under GPU resource constraints. We use both full model training and LoRA training in our work and compare their performance. A notable benefit of LoRA is that it facilitates extending a model's functionality, suggesting its potential to adapt our current framework to a broad range of applications.
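The resource saving LoRA offers comes from replacing the full weight update with a low-rank one, W' = W + (alpha/r)·BA, where only the small matrices A and B are trained. A minimal numerical sketch (the hidden size, rank, and scaling below are illustrative values, not the paper's training configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8          # hidden size and LoRA rank (illustrative values)
alpha = 16             # LoRA scaling factor

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = x @ (W + (alpha/r) * B @ A).T, computed without ever
    materializing the merged d x d weight."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Only A and B are trained: 2*r*d parameters instead of d*d.
print(f"trainable params: {A.size + B.size} vs full fine-tuning: {W.size}")

# Because B starts at zero, the adapted model is initially identical
# to the frozen base model, which stabilizes the start of training.
x = rng.standard_normal((1, d))
assert np.allclose(lora_forward(x), x @ W.T)
```

At rank 8 this trains roughly 3% of the parameters of the full matrix, which is why LoRA fits under GPU memory budgets that full fine-tuning does not.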

Authors:

(1) Wei Chen, Stanford University, with equal contribution and a corresponding author {weichen6}@stanford.edu;

(2) Zhiyuan Li, Stanford University and a corresponding author {zhiyuan8}@stanford.edu.
