Distractor Robustness: RECKONING Significantly Outperforms FT-ICR in Reasoning Over Irrelevant Facts | HackerNoon

News Room | Published 24 October 2025

Table of Links

Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

4.2 Reasoning with Distractors

When multiple questions must be answered about the same knowledge set, knowledge that is relevant to one question will likely be irrelevant to another. For example, in Table 7, the fact “Charlie is white.” is not needed to answer the question “Harry is red?”. It is therefore important to evaluate the robustness of RECKONING when irrelevant information (i.e., distractors) is present in the knowledge set. In this experiment, we analyze RECKONING’s ability to focus on the correct knowledge and ignore distractors when answering questions. We use ProofWriter as the evaluation dataset since it already provides a setting with distractors included in the knowledge. For a systematic analysis, we gradually add distractors to the context, starting with 2 and going up to all available distractors (an average of 7 per question). We train RECKONING and the baseline with a multi-task objective, in which the model must (1) recall all of the facts and rules relevant to the question and (2) predict the conclusion based on the correct knowledge. Concretely, we adapt training so that, for each question x, the outer-loop CLM loss (Equation (5)) is computed only with respect to the relevant facts in K, so the model learns to recall only relevant facts during training (see the sketch below).
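The following minimal sketch illustrates this multi-task, bi-level training step under distractors. It is not the authors’ code: the helper names (`clm_loss`, `qa_loss`), the number of inner steps, and the learning rate are all assumptions for illustration.

```python
import torch

def reckoning_step(model, all_knowledge, relevant_facts, question, answer,
                   inner_lr=1e-4, inner_steps=4):
    """Hypothetical sketch of one RECKONING training step with distractors.

    Inner loop: adapt the parameters on the FULL knowledge set, distractors
    included. Outer loop (cf. Equation (5)): compute the CLM loss only on
    the facts relevant to the question, plus the answer-prediction loss, so
    the model learns to recall just the knowledge the question needs.
    """
    # Differentiable copies of the parameters ("fast weights").
    fast = {n: p.clone() for n, p in model.named_parameters()}

    # Inner loop: memorize all knowledge, including distractors.
    for _ in range(inner_steps):
        loss_in = model.clm_loss(all_knowledge, params=fast)  # assumed helper
        grads = torch.autograd.grad(loss_in, list(fast.values()),
                                    create_graph=True)
        fast = {n: w - inner_lr * g for (n, w), g in zip(fast.items(), grads)}

    # Outer loop: (1) recall only the relevant facts, (2) answer the question.
    loss_recall = model.clm_loss(relevant_facts, params=fast)   # assumed helper
    loss_answer = model.qa_loss(question, answer, params=fast)  # assumed helper
    return loss_recall + loss_answer
```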

In Figure 5, we see that RECKONING’s performance is consistently more robust to distractors than the FT-ICR baseline’s. When all of the distractors are included in the context, RECKONING achieves a significantly higher label accuracy, averaged over the 3 considered hop depths, than the baseline (82.5% vs. 70.9%). Moreover, compared with its performance with no distractors, RECKONING drops only 17.1% while the baseline drops 28.6%, exhibiting a better ability to disentangle the correct knowledge from the distractors.
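As a quick sanity check on those figures (the text does not state whether the drops are absolute or relative), both readings imply near-saturated accuracy without distractors:

```python
# Implied no-distractor accuracy under each reading of the reported drops.
reck, base = 82.5, 70.9            # avg. accuracy with all distractors (%)
print(reck + 17.1, base + 28.6)                # absolute: ~99.6%, ~99.5%
print(reck / (1 - 0.171), base / (1 - 0.286))  # relative: ~99.5%, ~99.3%
```

Under either reading, both systems start near 99% accuracy without distractors, so the gap that opens up once all distractors are included reflects robustness rather than base capability.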

Finally, we explore whether RECKONING generalizes to models with more parameters. We scale the backbone from GPT-2-small (124M parameters) to GPT-2-XL (1.5B) using a parameter-efficient fine-tuning method, LoRA [33]. For simplicity, we evaluate only in the most difficult setting, ProofWriter 5-hop with all distractors included. With GPT-2-XL and LoRA, in-context reasoning reaches 65% accuracy on the test set, while RECKONING reaches 70.2%, a gain of roughly 5 points. This result suggests that RECKONING’s advantage in the presence of distractors holds even as model size scales.
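A minimal sketch of the LoRA setup described above, using Hugging Face `transformers` and `peft`; the rank, alpha, dropout, and target modules here are assumptions, not the paper’s reported hyperparameters:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2-xl")  # 1.5B parameters

lora_cfg = LoraConfig(
    r=8,                        # low-rank dimension (assumed)
    lora_alpha=16,              # scaling factor (assumed)
    lora_dropout=0.05,          # (assumed)
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA adapters train
```

Because only the adapter weights receive gradients, fine-tuning stays cheap even at 1.5B parameters, which plausibly keeps a gradient-based inner loop tractable at this scale.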

:::info
Authors:

(1) Zeming Chen, EPFL ([email protected]);

(2) Gail Weiss, EPFL ([email protected]);

(3) Eric Mitchell, Stanford University ([email protected]);

(4) Asli Celikyilmaz, Meta AI Research ([email protected]);

(5) Antoine Bosselut, EPFL ([email protected]).

:::


:::info
This paper is available on arXiv under a CC BY 4.0 DEED license.

:::
