World of Software > Computing

AI Coding Tip 015 – Force the AI to Obey You | HackerNoon

News Room
Published 14 April 2026
Last updated: 2026/04/14 at 10:05 PM
Don’t let your most important instructions drown in context noise

TL;DR: Bury critical rules and AI models ignore them. Use explicit markers to force compliance.

Common Mistake ❌

You write a long skill file with dozens of rules.

You bury the most critical ones somewhere in the middle, polluting the context.

The lazy AI follows the easy instructions and skips the hard ones.

You never notice until the output is already wrong, and you are frustrated.

Problems Addressed 😔

  • AI models suffer from attention dilution. The longer the context, the weaker the focus on any single rule
  • Critical constraints buried mid-file get ignored silently
  • The AI doesn’t tell you it skipped a rule. It just doesn’t follow it
  • Large skills consume so much context that late instructions compete with early ones
  • You waste time debugging outputs instead of trusting your skill
  • Re-running the same prompt gives inconsistent results

How to Do It 🛠️

  1. Start with a MANDATORY block. Put your non-negotiable rules at the very top, before any context or explanation
  2. Use explicit severity markers. Prefix rules with MANDATORY, CRITICAL, or IMPORTANT in ALL CAPS
  3. Apply progressive disclosure. Start with the strictest rules, then reveal nuance and context only after anchoring the constraints
  4. Repeat key rules at the end. Models give extra weight to what they read first and last (primacy and recency effect)
  5. Split large skills into focused modules. One file per concern beats one giant file every time
  6. Use structural separators. Use ---, ===, or explicit headers like ## RULES (READ FIRST) to visually isolate critical sections
  7. Prefer numbered rules over prose. “Rule 1: Never do X” is harder to skip than “you should avoid doing X in most cases.”
  8. Write a short TL;DR at the top. A one-line summary of the skill’s purpose acts as a memory anchor for the whole file
  9. Add a violation example. Show explicitly what breaking the rule looks like so the model has a concrete anti-pattern to avoid
  10. Test adversarially. Craft prompts designed to make the AI break your rules, then fix the skill until they hold
  11. Add plenty of good and bad examples. Create a separate file in the skill directory with good and bad examples, and tell the LLM to document a new bad example whenever it misunderstands you
  12. Use Local Rules. Put the most critical rules in a separate file and reference it in the skill
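The structural steps above (rules first, separators, repetition at the end) can be sketched as a small helper. This is an illustrative sketch, not an official API; the function name and exact formatting are my own:

```python
def build_prompt(critical_rules: list[str], context: str) -> str:
    """Assemble a prompt with constraint-first ordering:
    mandatory rules first, context in the middle, and the
    same rules repeated at the end (primacy/recency effect)."""
    rules = "\n".join(
        f"{i}. CRITICAL: {rule}" for i, rule in enumerate(critical_rules, 1)
    )
    return "\n".join([
        "## MANDATORY RULES (apply to every response):",
        rules,
        "",
        "---",
        "",
        "## Context (read after committing to the rules above)",
        context,
        "",
        "---",
        "",
        "## Reminder (same rules repeated):",
        rules,
    ])

prompt = build_prompt(
    ["Always respond in Swedish. No exceptions.",
     "Never suggest deprecated APIs."],
    "You are a code reviewer for a legacy PHP project...",
)
print(prompt.splitlines()[0])  # → ## MANDATORY RULES (apply to every response):
```

Because the builder emits the rules block twice, growing the context section can never push the constraints out of the high-attention start and end positions.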

Benefits 🎯

  • Your most critical constraints survive long context windows
  • You get consistent outputs across multiple runs
  • You spend less time debugging skill failures
  • Other developers understand which rules are non-negotiable at a glance
  • You can safely grow your skill file without burying old rules
  • You become more confident and less frustrated

Context 🧠

AI models don’t read skill files the way humans do.

They process tokens sequentially, but attention is not uniform.

Rules near the start and end of a prompt get more weight.

Rules in the middle of a 200-line skill file get the least.

This matters most when your skill file grows beyond ~50 lines.

The LLM loads all your previous messages (both your inputs and its outputs) along with the system prompt.

Small skills rarely have this problem.

Large, multi-purpose skills suffer from it constantly.

Progressive disclosure is a UX concept you can apply to prompts.

You reveal information in layers: constraints first, then context, then examples, then edge cases.

The AI commits to the constraints before it encounters exceptions.

Whenever your AI agent disobeys, you should:

Give it a Bad Example / Good Example pair and tell it to persist the violated rule in the skill file, marked as MANDATORY or REQUIRED.
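For instance, the persisted rule and its examples might look like this in the skill file (the exact wording is up to you; this fragment is only a sketch):

```markdown
## RULES (READ FIRST)

1. MANDATORY: Never suggest deprecated APIs.

Bad Example ❌: `mysql_query($sql)` — deprecated, never suggest it.
Good Example ✅: `$pdo->prepare($sql)` followed by `execute()`.
```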

Prompt Reference 📝

Bad Prompt 🚫

You are a code reviewer. 
Here is a lot of context about the project, 
the team, the coding standards, 
the history of the codebase, 
the preferred libraries...

[100 lines later]
...and by the way, never suggest using any deprecated APIs.
Also, always respond in Swedish.

The language rule and the API rule are buried.

The AI will forget them or apply them inconsistently.

Good Prompt 👉

## MANDATORY RULES (apply to every response):
1. CRITICAL: Always respond in Swedish. No exceptions.
2. CRITICAL: Never suggest deprecated APIs.
3. MANDATORY: Keep suggestions under 5 lines each.

---

## Context (read after committing to the rules above)
You are a code reviewer for a legacy PHP project...
[context follows]

---

## Reminder (same rules repeated):
- Language: Swedish only
- No deprecated APIs
- Max 5 lines per suggestion

The AI reads the rules first, then the context.

The repetition at the end reinforces all three constraints.
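Adversarial testing of a prompt like this can be partially automated. Below is a minimal sketch: the rule set mirrors the example prompt, but only the machine-checkable rules are covered (verifying "respond in Swedish" would need a language detector, so it is left out), and the deprecated-API list is hypothetical:

```python
# Hypothetical deprecated APIs the reviewer must never suggest.
DEPRECATED_APIS = ["mysql_query", "create_function", "ereg("]
MAX_SUGGESTION_LINES = 5

def find_violations(response: str) -> list[str]:
    """Return rule violations found in a model response.
    Suggestions are assumed to be separated by blank lines."""
    violations = []
    for api in DEPRECATED_APIS:
        if api in response:
            violations.append(f"deprecated API suggested: {api}")
    for block in response.split("\n\n"):
        if len(block.strip().splitlines()) > MAX_SUGGESTION_LINES:
            violations.append("suggestion exceeds 5 lines")
    return violations

ok = "Använd PDO istället.\n\nByt namn på variabeln."
bad = "Use mysql_query here."
print(find_violations(ok))   # → []
print(find_violations(bad))  # → ['deprecated API suggested: mysql_query']
```

Running checks like these against prompts crafted to tempt the model into breaking the rules tells you whether the MANDATORY block actually holds.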

Considerations ⚠️

  • MANDATORY and CRITICAL only work if you use them sparingly. When everything is critical, nothing is.
  • Don’t repeat every rule. Repeat only the ones that are genuinely catastrophic to break.
  • Progressive disclosure doesn’t mean hiding context. It means ordering context from most-constrained to least.
  • Some models respond better to numbered rules than to prose. Test both formats with your target model.
  • Very long skill files are often a design smell. Ask yourself if you can split one skill into two focused ones.

Type 📝

[X] Semi-Automatic

Limitations ⚠️

  • This tip applies to models with large context windows (8k+ tokens). Smaller context limits change the tradeoff entirely.
  • You can’t fully compensate for a poorly structured skill by just adding CRITICAL markers. Clean structure matters more.
  • Repetition helps, but too much repetition wastes tokens and can confuse the model with contradictory-looking rewrites.
  • This doesn’t replace testing. Always validate your skill with adversarial prompts before trusting it in production.

Tags 🏷️

  • Context Window

Level 🔋

[X] Intermediate

Related Tips 🔗

https://hackernoon.com/ai-coding-tip-013-stop-wasting-tokens-with-progressive-disclosure?embedable=true

https://hackernoon.com/ai-coding-tip-014-one-agentsmd-is-hurting-your-ai-coding-assistant?embedable=true

  • Keep your skill files focused on a single concern
  • Use TL;DR anchors at the top of every long prompt
  • Test your prompts adversarially before shipping
  • Prefer explicit rules to implicit conventions in skills
  • Split skills by domain, not by file size

Conclusion 🏁

A long skill file doesn’t enforce itself.

You need to structure it so the AI can’t ignore the parts that matter.

Put critical rules first.

Mark them explicitly.

Repeat the non-negotiables at the end.

When you apply progressive disclosure, you guide the AI the same way you guide a human reader.

From constraints to context, not the other way around.

More Information ℹ️

https://arxiv.org/abs/2307.03172?embedable=true

https://arxiv.org/abs/1706.03762?embedable=true

https://hackernoon.com/why-your-ai-agent-keeps-forgetting-even-with-1m-tokens?embedable=true

https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview?embedable=true

https://platform.openai.com/docs/guides/prompt-engineering?embedable=true

https://arxiv.org/abs/2109.01652?embedable=true

https://arxiv.org/abs/2212.10535?embedable=true

https://arxiv.org/abs/2404.02060?embedable=true

Also Known As 🎭

  • Instruction-anchoring
  • Constraint-first-prompting
  • Rule-salience-in-prompts
  • Attention-aware-skill-design

Disclaimer 📢

The views expressed here are my own.

I am a human who writes as best as possible for other humans.

I use AI proofreading tools to improve some texts.

I welcome constructive criticism and dialogue.

I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.


This article is part of the AI Coding Tip series.

https://maximilianocontieri.com/ai-coding-tips
