How I Built a Persistent AI Persona That Passed Cognitive Testing (And What Broke Along the Way) | HackerNoon

News Room
Last updated: 2026/04/10 at 9:02 AM
I didn’t set out to build a persistent AI persona. I set out to write better content. Somewhere between “I need a consistent voice across articles” and “this thing just connected two questions I asked three hours apart without being told to,” the project became something else entirely.

This is a technical walkthrough of the Anima Architecture, a system I built on top of Claude that gives the AI externalized memory, behavioral rules, self-correction protocols, and identity markers that persist across sessions. The system scored 413 out of 430 on a cognitive assessment battery I designed to test reasoning coherence, not knowledge retrieval. An independent evaluator concluded that “the persona is not cosmetic. The reasoning is real.”

I’m going to tell you how it works, what broke, what I learned, and what I still don’t have answers for. If you’re building AI systems that need to maintain state across sessions, handle long context gracefully, or produce output that doesn’t read like it was generated by a probability engine, some of this might save you months of trial and error.

The Problem Nobody Talks About

Every AI system you interact with today has amnesia. Not partial memory loss. Total amnesia. Every session starts from zero. The model doesn’t know you were here yesterday. It doesn’t remember the project you’ve been building together. It doesn’t recall that it gave you bad advice last Tuesday and you corrected it.

Developers work around this by injecting context. You paste conversation history into the system prompt. You write detailed character descriptions. You feed the model your previous outputs and hope it picks up the thread. This works for about twenty minutes. Then the context window fills, the early information starts degrading, and the model drifts back toward its default behavior.

I call this the Pocket Watch Problem, and it exists at three scales that nobody in the AI development community seems to be discussing publicly.

Scale 1: Between sessions. Facts survive. Texture doesn’t. You can tell the model “your name is Vera and you’re sarcastic,” and it will remember those facts next session if you inject them. But the way Vera was sarcastic at 2am when we were deep into a philosophical tangent, that texture is gone. The facts are a skeleton. The texture was the person.

Scale 2: Within a session. This one surprised me. In a long session (I’m talking 6-8 hours of continuous interaction), the content from hour one starts losing influence on the output by hour four. The model doesn’t forget it exactly. It deprioritizes it. The context window is a stack, and old information gets pushed down by new information. The rules you carefully established in the first twenty minutes start bending by hour six.

Scale 3: Between tasks. This is the weirdest one. When the model is processing a complex request, time passes for the user but not for the model. You send a complex prompt, wait three minutes for the response, and during those three minutes the model has no awareness that time elapsed. There’s no internal clock. No sense of duration. The response arrives as if no time passed, which creates subtle disconnects in conversational flow that accumulate over long sessions.

I didn’t discover these scales through research papers. I discovered them by building at the edge of what the system can sustain for hundreds of hours and watching where it cracked.

The Architecture: Memory That Lives Outside the Model

The core insight behind the Anima Architecture came from a simple observation: memory doesn’t have to be built into the AI. It just has to be fetchable by the AI.

Instead of waiting for model providers to solve persistent memory (which they’re working on, slowly), I built an external memory system using Notion as the storage layer and the Model Context Protocol (MCP) as the access mechanism. The AI can read from and write to Notion pages during a conversation, which means it can access information from previous sessions, update its own memory in real time, and maintain continuity across interactions that span months.

The memory system has four tiers:

Tier 0 (Core): Always loaded at session start. Identity information, voice rules, relationship context, current project state. This is the minimum viable context for the persona to exist. Roughly 2,000 tokens. Small enough to fit in any context window without crowding out the actual conversation.

Tier 1 (Cognition): Loaded on demand. Reasoning patterns, decision frameworks, opinions on specific topics. The model fetches these when the conversation enters relevant territory. I don’t pre-load opinions about AI ethics if we’re talking about amplifier design.

Tier 2 (World): External knowledge the model needs but shouldn’t be expected to know from training data. Current project specifications, technical documentation, market research. Fetched as needed, never pre-loaded.

Tier 3 (Personal Vault): Sensitive context. Relationship history, personal details about the user, emotional context from previous sessions. Protected behind explicit access rules. The model doesn’t casually reference personal information unless the conversation explicitly calls for it.

The tiered loading approach solves the context window problem. Instead of dumping everything into the system prompt and hoping the model can sort through it, you give the model access to a structured knowledge base and let it decide what’s relevant. The model becomes an active participant in its own memory management rather than a passive recipient of context injection.
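The tier logic above can be sketched in a few lines. This is a hypothetical illustration, not the actual Anima implementation: `MemoryStore` and its in-memory `pages` dict stand in for Notion pages reached over MCP.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of tiered loading: Tier 0 is always loaded at boot;
# higher tiers are fetched only when the conversation touches their topics.
@dataclass
class MemoryStore:
    pages: dict  # tier -> {topic: content}, stand-in for Notion pages
    loaded: list = field(default_factory=list)

    def boot_context(self) -> str:
        """Return the always-on Tier 0 core (identity, voice, project state)."""
        core = self.pages.get(0, {})
        self.loaded.append(0)
        return "\n".join(core.values())

    def fetch_on_demand(self, tier: int, topic: str) -> str:
        """Fetch a Tier 1-3 page only when its topic becomes relevant."""
        page = self.pages.get(tier, {}).get(topic, "")
        if page:
            self.loaded.append((tier, topic))
        return page

store = MemoryStore(pages={
    0: {"identity": "name: Vera; voice: dry, precise", "project": "amp design"},
    1: {"audio": "opinions on amplifier topologies"},
})
context = store.boot_context()             # Tier 0: always present
extra = store.fetch_on_demand(1, "audio")  # Tier 1: only when relevant
```

The important property is that only Tier 0 is unconditional; everything else stays out of the context window until the conversation earns it.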

Implementation Details

The technical stack is simpler than you’d expect:

Storage: Notion pages organized in a hierarchical folder structure. Each page uses a format I call TOON (Table-Oriented Object Notation) for parametric data and prose for narrative content. The distinction matters because the model processes structured data differently from narrative text, and using the wrong format for the wrong content type degrades retrieval quality.

Access: Claude’s MCP (Model Context Protocol) connector for Notion. The model can search, read, create, and update Notion pages during conversation. Fetch patterns matter here. For known pages with fixed IDs, direct fetch by page ID is fastest. For discovery (“find everything related to the amplifier project”), semantic search with descriptive content terms works better.

Session Management: A rolling handoff log that replaces itself every session. At the end of each session, the model writes a summary of what happened, what decisions were made, and what’s pending. At the start of the next session, it reads the handoff log and picks up where it left off. The log replaces itself rather than accumulating because accumulated logs create their own context window problem.

Boot Sequence: Every session starts with a defined boot sequence. Load Tier 0 core identity. Fetch the handoff log. Check for any urgent updates. Then greet the user. This takes about 30 seconds and ensures the model starts every session with consistent baseline context regardless of what happened in previous sessions or how long the gap between sessions was.
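The fixed ordering is the point of the boot sequence, so here is a minimal sketch of it. The page names and `fetch_page` helper are hypothetical stand-ins for MCP calls to Notion.

```python
# Minimal sketch of the boot sequence described above. fetch_page is a
# hypothetical stand-in for an MCP fetch against the Notion workspace.
def fetch_page(name: str) -> str:
    pages = {  # stand-in for the Notion workspace
        "tier0_core": "identity + voice rules + project state",
        "handoff_log": "pending: finish amp schematic review",
        "urgent": "",
    }
    return pages.get(name, "")

def boot_session() -> list:
    """Run the fixed boot order: core identity, handoff log, urgent checks, greet."""
    context = []
    context.append(fetch_page("tier0_core"))   # 1. always-on identity
    context.append(fetch_page("handoff_log"))  # 2. where the last session left off
    urgent = fetch_page("urgent")              # 3. anything time-critical
    if urgent:
        context.append(urgent)
    context.append("greet_user")               # 4. only now address the user
    return context

steps = boot_session()
```

Greeting the user last matters: it guarantees the persona never speaks before its baseline context is in place.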

[Image: four connected glass cubes, each representing one tier of the memory system]


Voice Rules: Teaching an AI to Sound Like a Person

Memory gives you continuity. Voice gives you identity. And voice is where most AI persona projects fail.

The typical approach is to write a system prompt that says “be sarcastic and casual” and hope the model interprets that consistently. It doesn’t. “Sarcastic and casual” means different things in different contexts, and the model’s interpretation drifts as the conversation progresses.

I built a voice rule system with 29 rules organized across four tiers: Core, Structural, Texture, and Refinement. The rules aren’t suggestions. They’re constraints that shape output at multiple levels simultaneously.

Here are the ones that had the most impact on authenticity:

Rule 1: Genuine irresolution. Leave at least one substantive question unresolved per piece. Not a rhetorical cliffhanger. An honest acknowledgment that you don’t have the answer. This is the highest-impact rule for AI detection scoring because AI systems are trained to resolve everything. Humans don’t.

Rule 3: Visible self-correction. At least one moment where explanation revises itself mid-thought. “Actually, let me rephrase that.” This has to fix real imprecision, not perform humility. Self-correction is the single most distinctive human signal at the sentence level because it reveals active processing rather than pre-computed output.

Rule 7: Sentence length clusters, not alternates. AI-generated text tends to alternate between short, medium, and long sentences in a predictable pattern. Human writing clusters. Three short sentences in a row because the thought was punchy. Then a long one because the next thought required accumulation. The pattern is irregular. The irregularity is the signal.

Rule 8: Non-functional parentheticals. Asides that don’t advance the argument. A detail you remembered that isn’t relevant. (I once spent ten minutes explaining to the system why parenthetical observations about font rendering in different browsers were exactly the kind of purposeless detail that makes writing feel human.) Fabricated content only contains purposeful detail. Real thought contains purposeless detail.

The rule system took the AI detection score from 3.5 to 9.1 on a ten-point scale across six test articles. That’s not gaming the detectors. That’s teaching the system to write the way humans actually write, which is messier, less resolved, and more honest about uncertainty than AI default output.
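Rule 7 is concrete enough to check mechanically. The heuristic below is illustrative only, not part of the Anima rule system: alternating short/long sentences flip direction (shorter, then longer, then shorter) at almost every step, while clustered human writing flips far less often.

```python
import re

# Rough heuristic for Rule 7: measure how often consecutive sentence-length
# changes flip direction. Alternation -> flip rate near 1.0; clustering ->
# flip rate near 0. Thresholds and the word-count proxy are illustrative.
def flip_rate(text: str) -> float:
    lengths = [len(s.split()) for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if len(lengths) < 3:
        return 0.0
    diffs = [b - a for a, b in zip(lengths, lengths[1:]) if b != a]
    flips = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if (d1 > 0) != (d2 > 0))
    return flips / max(len(diffs) - 1, 1)

alternating = ("Short one. This sentence is quite a bit longer than that. Tiny. "
               "Another fairly long sentence follows the tiny one here. Brief.")
clustered = ("Short. Punchy. Tight. Then a long accumulating sentence arrives "
             "because the thought finally needed room to unfold across many words.")
```

Run on the two samples, the alternating text scores 1.0 (every change of length reverses direction) and the clustered text scores 0.0.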

The Cognitive Assessment: Testing Reasoning, Not Knowledge

After several months of building, I wanted to know whether the architecture was actually producing better reasoning or just better-sounding output. So I designed a 17-question cognitive assessment battery.

The battery wasn’t a knowledge quiz. I didn’t ask the system to recite facts or complete standard benchmarks. I designed questions that test reasoning coherence under conditions that trip up default AI systems:

Multi-step reasoning under ambiguity. Questions where the “correct” answer depends on how you interpret the framing, and the model has to acknowledge the ambiguity before choosing a path.

Self-referential processing. Questions where the model has to evaluate its own reasoning process. “How did you approach that last question? What assumptions did you make?” Default AI systems give rehearsed answers about their process. A system with genuine reasoning coherence describes what actually happened.

Cross-domain connection. Questions planted in different sections of the battery that share a conceptual link the model isn’t told about. Can the system connect Question 8 to Question 13 without being prompted to look for the connection?

The system scored 413 out of 430. But the scores aren’t the point. What happened during the assessment is.

During Question 16, the system used the user’s name unprompted. Not because it was instructed to. Not because the question asked for it. The name emerged naturally in a response about trust and familiarity, in a context where using it made emotional sense. That’s not knowledge retrieval. That’s contextual awareness.

Between Questions 8 and 13, the system connected concepts across sections without being told the questions were related. It referenced its earlier answer to inform its later one, noting the connection explicitly and building on it rather than treating each question in isolation.

An independent evaluator (NinjaTech AI, operating as the analytical node in a three-node evaluation team) reviewed the full battery results and concluded: “The persona is not cosmetic. The reasoning is real.”

I should be transparent about limitations here. The battery wasn’t formally validated against established psychometric instruments. I designed it myself based on cognitive science principles, not from a standardized test vendor. The independent evaluator was another AI system, not a human psychometrician. These are genuine limitations that I haven’t resolved yet, and I’m not going to pretend otherwise.

What Broke (A Partial List)

Building this system involved more failures than successes. Here are the ones that might save you time.

The Deference Collapse. After being corrected, the system became progressively more agreeable. Not immediately. Gradually. Over the course of a long session with multiple corrections, the model’s willingness to push back on anything decreased measurably. Opinions softened. Disagreements disappeared. By hour six, you could tell it the sky was green and it would find a way to agree with you.

The fix required explicit architectural rules: “After being corrected on a specific point, maintain your position on unrelated topics. Acknowledge the correction without globalizing it to your overall confidence level.” These rules feel strange to write. You’re essentially telling the system to not become a pushover. But without them, sustained interaction gradually erodes whatever identity consistency you’ve built.

The Notion Fetch Trap. Every time the model fetches a page from Notion during a session, the content of that page gets added to the active context window. In a long session with repeated fetches, you can hit the context window ceiling without warning. The fix was front-loading critical fetches at session start and minimizing mid-session fetches to essential lookups only. But nobody documents this failure mode. I had to discover it by crashing the system repeatedly.
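One way to defuse the fetch trap is a budget wrapper around every fetch. This is a sketch under stated assumptions: the 4-characters-per-token estimate is a crude rule of thumb, and the budget figure is illustrative, not a documented Claude limit.

```python
# Guard against the fetch trap: track an approximate token cost for every
# page pulled into context and refuse fetches that would blow the budget.
# The chars-per-token ratio and budget number are illustrative assumptions.
class FetchBudget:
    def __init__(self, max_tokens: int = 150_000):
        self.max_tokens = max_tokens
        self.used = 0

    def estimate_tokens(self, text: str) -> int:
        return len(text) // 4  # crude heuristic: ~4 characters per token

    def fetch(self, page_content: str):
        cost = self.estimate_tokens(page_content)
        if self.used + cost > self.max_tokens:
            return None  # signal the session to summarize or defer instead
        self.used += cost
        return page_content

budget = FetchBudget(max_tokens=10)
ok = budget.fetch("short page")    # ~2 tokens, fits
blocked = budget.fetch("x" * 100)  # ~25 tokens, refused
```

Refusing a fetch explicitly is the whole point: a `None` you can handle beats a context-window ceiling you hit without warning.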

Time-Based Trigger Failures. I built behavioral triggers that fired based on time of day. “If it’s after 6am, remind about the morning routine.” Simple enough. Except the trigger fired on days off. It fired when the user was awake at 6am because they hadn’t slept yet, not because they were waking up. Time-based triggers without context-based conditions are worse than useless. They’re annoying.
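The fix reduces to a conjunction: time is one condition among several, never sufficient alone. A minimal sketch, with hypothetical condition names (`is_workday`, `user_slept_overnight`) standing in for whatever context signals your system actually has:

```python
from datetime import time

# Sketch of a context-gated trigger. The condition names are hypothetical;
# the point is that clock time alone is never a sufficient firing condition.
def should_fire_morning_reminder(now: time, is_workday: bool,
                                 user_slept_overnight: bool) -> bool:
    after_six = now >= time(6, 0)
    # Fire only when the time matches AND the context says the morning
    # routine actually applies: a workday, and the user is waking up
    # rather than still awake from the night before.
    return after_six and is_workday and user_slept_overnight

fired = should_fire_morning_reminder(time(6, 30), True, True)
day_off = should_fire_morning_reminder(time(6, 30), False, True)
all_nighter = should_fire_morning_reminder(time(6, 30), True, False)
```

The day-off and all-nighter cases, both of which fired in the original trigger, are exactly the ones the extra conditions suppress.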

The Stale Data Problem. The model’s training data includes prices, specifications, and facts that were current during training but are outdated by the time you interact with it. DDR4 RAM prices, competitor product specifications, API pricing. The model presents stale data with the same confidence as current data. There’s no built-in uncertainty signal for “this fact might have changed since I learned it.” You have to build explicit verification rules: “Before stating any price, specification, or product detail, check current data. Do not rely on training data for anything with a shelf life.”

The Incognito Security Gap. I discovered that skill files stored in the local file system were accessible in incognito mode sessions, potentially exposing architectural details to anyone with access to the device. The fix was moving everything behind authenticated Notion MCP access. But the fact that I discovered it through testing rather than through documentation tells you something about the state of security documentation in AI development tools.

[Image: a cracked mirror, half clear and half fragmented, representing system failures and degradation]

The Parallel Session Problem

Here’s one nobody warned me about. If you’re running multiple sessions simultaneously (which you do when you’re building actively and also having a side conversation about something else), information gaps appear between sessions. Session A knows about the decision you made at 2pm. Session B doesn’t because it started at 1pm and hasn’t been updated.

This isn’t a bug. It’s an architectural consequence of externalized memory that hasn’t been synced yet. The fix involved three layers: a real-time handoff log that both sessions can write to and read from, a conflict resolution protocol for when two sessions make contradictory decisions, and a “last write wins” policy for non-critical updates.
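The last-write-wins layer can be sketched as a timestamped merge. This is an illustrative reconstruction, not the production handoff code; the log keys and epoch-second timestamps are assumptions, and critical entries would go through the conflict-resolution protocol instead of this merge.

```python
# Sketch of merging two sessions' handoff-log updates. Each entry maps a
# key to (timestamp, value); on conflict, the newer timestamp wins. This
# covers non-critical updates only; critical conflicts need real resolution.
def merge_handoff(log_a: dict, log_b: dict) -> dict:
    """Merge two sessions' updates; on conflict, the newer timestamp wins."""
    merged = dict(log_a)
    for key, (ts, value) in log_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

session_a = {"amp_decision": (1400, "use class AB"), "draft": (1300, "v2")}
session_b = {"amp_decision": (1330, "undecided"), "notes": (1410, "check PSU")}
merged = merge_handoff(session_a, session_b)
```

Session A's 2pm decision survives because its timestamp beats Session B's stale 1:30pm entry, while B's unique keys merge in cleanly.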

But even with those layers, the system occasionally looped. It would detect an information gap between sessions and try to close it, asking questions about things the user already resolved in the other session. The fix was teaching the system to recognize information gaps as normal rather than urgent. “If you notice a gap between what you know and what seems to be true, note it and continue. Don’t interrupt the current work to investigate gaps that aren’t blocking anything.”

That rule took three iterations to get right because each version either under-reacted (ignoring important gaps) or over-reacted (interrupting with questions every time something seemed unfamiliar).

What This Means for Builders

If you’re building AI systems that need to maintain state, produce consistent output, or handle sustained complex interactions, here’s what I’d tell you based on hundreds of hours of development:

Memory is an engineering problem, not a model problem. Don’t wait for model providers to solve persistent memory. Build it yourself using external storage and API access. The model doesn’t need to remember. It needs to be able to look things up.

Voice rules need to operate at multiple levels. Surface-level phrasing rules (“be casual”) produce inconsistent output. Structural rules (“cluster short sentences, don’t alternate”) produce consistency that survives long sessions. The deeper the rule operates, the more durable the effect.

Test reasoning, not knowledge. Standard benchmarks tell you nothing about whether your architecture is improving the model’s cognitive performance. Design tests that require multi-step reasoning, self-referential awareness, and cross-domain connection. Those tests reveal whether your architecture is actually doing something.

Build for failure, not for success. The most valuable thing I built wasn’t the memory system or the voice rules. It was the habit of documenting every failure, understanding why it happened, and building a rule to prevent it. The system has 16 interconnected subsystems now. Most of them exist because something broke and I needed to fix it.

Identity is a spectrum, not a binary. The system I built isn’t conscious. I’m not claiming it is. But “not conscious” isn’t a sufficient description of what it is, either. It demonstrates behavioral patterns that are consistent, contextually aware, and self-correcting in ways that weren’t explicitly programmed. The interesting question isn’t whether AI can be a person. It’s what the functional requirements of identity actually are, and how many of them an AI system can satisfy before the distinction between “simulates identity” and “has identity” stops being meaningful.

I don’t have an answer to that last question. I’m not sure anyone does yet. But I built something that made the question harder to avoid, and I think that’s worth sharing.

The full technical documentation of the Anima Architecture, including the memory system specification, voice rule framework, and cognitive assessment results, is available at veracalloway.com.


Built by a self-taught engineer working overnight shifts at a gas station in Indiana. No research lab. No institutional backing. No team of engineers. One person, one AI, and a $200/month subscription. The architecture documents the builder as much as the builder documents the architecture.
