LinkedIn’s Migration Journey to Serve Billions of Users by Nishant Lakshmikanth at QCon SF

News Room · Published 26 November 2025 · Last updated 12:06 PM

At QCon San Francisco 2025, Engineering Manager Nishant Lakshmikanth provided a deep dive into how LinkedIn systematically dismantled its legacy, batch-centric recommendation infrastructure to achieve real-time relevance while dramatically improving operational efficiency.

The legacy architecture, responsible for critical products like “People You May Know” and “People You Follow,” suffered from three major constraints: low freshness, high latency, and high compute cost.

In the prior system, recommendations were precomputed for the entire user base, regardless of whether a user logged in. This resulted in significant compute wastage and static, stale results. A pipeline incident could delay updates for days, directly impacting core business metrics. The clear goal was to achieve instant personalization while drastically reducing associated costs.

LinkedIn approached this monumental migration not as a single cutover, but as an iterative process defined by four architectural phases:

  1. Offline Scoring: The starting point, characterized by heavy batch computation, high latency, and massive precomputed storage.
  2. Nearline Scoring: An intermediate step, providing hourly or daily freshness rather than a multi-day lag.
  3. Online Scoring: The critical pivot, where model inference is run on-demand in real-time, reacting to the user’s current session and intent.
  4. Remote Scoring: The final destination, moving heavy model scoring to a high-performance cloud environment, often leveraging GPU-accelerated inference.

This framework enabled two parallel migrations: Offline-to-Online Scoring and Nearline-to-Online Freshness, shifting the focus from precomputation to dynamic execution.
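The core shift behind the Offline-to-Online migration can be sketched in a few lines. This is an illustrative toy, not LinkedIn's actual code: `score_fn` and all names are hypothetical stand-ins for a real model-inference call.

```python
# Hypothetical sketch of the offline-to-online shift: the legacy pattern
# precomputes scores for every member, whether or not they ever log in;
# the online pattern scores only the requesting member, with live context.

def offline_batch_scoring(all_members, score_fn):
    """Legacy pattern: precompute and store results for the entire user
    base. Compute is spent up front, and results go stale."""
    return {m: score_fn(m, context=None) for m in all_members}

def online_scoring(member, session_context, score_fn):
    """Real-time pattern: compute on demand for the active member only,
    using current-session signals for freshness."""
    return score_fn(member, context=session_context)

def score_fn(member, context):
    """Toy deterministic scorer; a real system would run model inference."""
    base = sum(map(ord, member)) % 100
    boost = 10 if context and context.get("recent_activity") else 0
    return base + boost

# Offline: pays compute and storage for every member up front.
precomputed = offline_batch_scoring(["alice", "bob", "carol"], score_fn)

# Online: pays only for the active session, and reacts to live intent.
fresh = online_scoring("alice", {"recent_activity": True}, score_fn)
```

The difference in cost structure is the point: the offline dict grows with the full member base, while the online call's cost scales with actual traffic.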


The success hinged on a fundamental architectural decoupling: separating the Candidate Generation (CG) pipeline from the Online Scoring Service.

  • Dynamic Candidate Generation: CG moved away from static lists. It now uses real-time search index queries, Embedding-Based Retrieval (EBR) for new user and content discovery (solving the cold-start problem), and immediate user context to retrieve a relevant set of candidates on the fly.
  • Intelligent Scoring: The Online Scoring Service uses a context-rich feature store, enabling modern models like Graph Neural Networks (GNNs) and Transformer-based models for precise ranking. Crucially, the team implemented Bidirectional Modeling, which scores connections from both the sender’s and recipient’s perspectives, yielding superior results.
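The decoupling above can be sketched as two independent steps: a candidate generator queried with live context, and a separate scoring service that ranks the result. This is a minimal illustration under assumed toy data structures; the function names, the dict-based index, and the pairwise feature store are all hypothetical, and the averaging of sender-to-recipient and recipient-to-sender affinities only approximates the bidirectional modeling described in the talk.

```python
# Hypothetical sketch of the CG / Online Scoring decoupling.

def generate_candidates(member, query_index, k=3):
    """Dynamic CG: retrieve a small candidate set on the fly instead of
    reading a static precomputed list. A real system would combine
    search-index queries and embedding-based retrieval (EBR)."""
    return query_index.get(member, [])[:k]

def score_candidates(member, candidates, feature_store):
    """Online scoring with a (toy) bidirectional flavor: score each
    candidate from both directions and average the two affinities."""
    def affinity(a, b):
        return feature_store.get((a, b), 0.0)
    scored = [(c, (affinity(member, c) + affinity(c, member)) / 2)
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

index = {"alice": ["bob", "carol", "dave"]}
features = {("alice", "bob"): 0.9, ("bob", "alice"): 0.7,
            ("alice", "carol"): 0.4, ("carol", "alice"): 0.8}

candidates = generate_candidates("alice", index)
ranking = score_candidates("alice", candidates, features)
```

Because the two steps share only the candidate list, either side can be swapped out (a new retrieval index, a new ranking model) without touching the other.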

Regarding LLMs, Lakshmikanth clarified the cost-performance trade-off: due to high computational overhead, LLMs are primarily reserved for Candidate Generation and Post-Ranking flows, where they add value without crippling the latency-sensitive, real-time core ranking loop.
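That placement decision can be sketched as a serving function in which the expensive step never touches the full candidate set. The functions below are illustrative stand-ins, not LinkedIn components: a cheap scorer ranks everything inside the latency-sensitive loop, and the costly "LLM" step reorders only a small post-ranking slice.

```python
# Hypothetical sketch of confining expensive models to post-ranking.

def cheap_core_rank(candidates):
    """Fast, latency-sensitive core ranking (stand-in for a GNN or
    Transformer scorer); here we just sort by a toy key."""
    return sorted(candidates, key=len)

def expensive_llm_rerank(top_results):
    """Costly step, applied only to a small head of the ranking, never
    to the full candidate set. Toy behavior: reverse the slice."""
    return list(reversed(top_results))

def serve(candidates, post_rank_k=2):
    ranked = cheap_core_rank(candidates)          # full set, cheap model
    head, tail = ranked[:post_rank_k], ranked[post_rank_k:]
    return expensive_llm_rerank(head) + tail      # LLM touches only the head

result = serve(["dave", "al", "carol", "bob"])
```

The latency budget of the core loop stays intact because the expensive call's cost is bounded by `post_rank_k`, not by the candidate count.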

The move to real-time architecture has delivered quantifiable results and future-proofed the system:

  • Cost Reduction: The cleanup of batch dependencies yielded a more than 90% reduction in offline compute and storage costs, and an overall compute-cost reduction of up to 68% in some core flows.
  • Session-Level Freshness: The system now achieves session-level freshness, reacting to user clicks, searches, and profile views instantly, leading to significant increases in member engagement and connection rates.
  • Platform Flexibility: The compartmentalized design simplifies maintenance and enables faster model experimentation, allowing rapid deployment of cutting-edge models and seamless rollbacks.


These architectural principles are now being applied to other critical surfaces, including LinkedIn Jobs (to better interpret career intent) and Video (where content freshness is paramount).
