What a 1960s Philosophy Book Taught Me About Shipping Production AI | HackerNoon

News Room · Published 1 February 2026

A book first published in 1962—Thomas Kuhn’s The Structure of Scientific Revolutions—isn’t where you’d expect to find guidance for shipping production AI. But Kuhn’s core point about paradigms clicked instantly for me: reliability starts when you make your “world” explicit—what exists, what’s allowed, what counts as evidence—and then enforce it. That’s exactly what I needed to migrate 750k+ CRM objects and build a post-hangup VoIP pipeline where transcripts, summaries, and 1–10 scoring don’t drift into inconsistent interpretations.

I’m what real developers might politely call a “vibe-coder.”

I can spend hours unpacking geopolitical risk or debating why the EU defaults to caution. But if you ask me about Python memory management, I’ll assume you mean my ability to remember what I named a variable last week.

I’m not a Software Engineer. I’m a Strategic Product Lead and a General Manager. I define the what and the why, align stakeholders, and I’m accountable for whether a system behaves predictably once it leaves the sandbox. I also lecture in Business Problem Solving, so I tend to approach technology as a governance problem: clarify the objective, specify the constraints, then build something that fails safely.

Two years ago, I started using AI to prototype quickly—mostly to test whether an idea had legs. It worked until the project stopped being a prototype.

The requirement wasn’t “build a custom CRM.” It was:

  • migrate 122,000 contacts
  • migrate 160,000 leads
  • migrate ~500,000 notes and tasks
  • integrate our VoIP center so that every call, within seconds after hangup, produces:
    • a complete transcript
    • a summary
    • a sentiment score (1–10)
    • five agent-performance scores (1–10): solution effectiveness, professionalism, script adherence, procedure adherence, clarity
    • an urgency level for escalation

At this size, ambiguity doesn’t stay local. A small mapping inconsistency becomes thousands of questionable rows. A slightly inconsistent output format becomes a long-term reporting problem. The cost isn’t just technical—it’s organizational: people stop trusting the CRM.

AI sped up prototyping. Production reliability still requires boring engineering discipline: contracts, validation, idempotency, and constraints.

The point of this article is not that I “solved AI.” The point is that I stopped treating model output as helpful text and started treating it as untrusted input—governed by explicit definitions.

For clarity: “we” refers to my workflow—me and three LLMs used in parallel for critique and cross-checking. Final design decisions and enforcement mechanisms remained human-owned.

Ontology, in the International Relations sense (Kuhn + a quick example)

In International Relations, it helps to borrow Thomas Kuhn’s idea of a paradigm: a shared framework that tells a community what counts as a legitimate problem, what counts as evidence, and what a “good” explanation looks like.

An ontology is the paradigm’s inventory and rulebook: what entities exist in your analysis, what relationships matter, and what properties are allowed to vary. If you don’t make that explicit, people can use the same words and still analyze different phenomena.

A quick example: take the same event—say, a border crisis.

  • Under a realist paradigm, the ontology centers on states as primary actors, material capabilities, balance of power, and credible threats. The analysis asks: Who has leverage? What are the incentives? What capabilities change the payoff structure?
  • Under a constructivist paradigm, the ontology expands to identities, norms, shared meanings, and legitimacy. The analysis asks: How are interests constructed? What narratives make escalation acceptable—or taboo? Which norms constrain action?

Same event. Different ontology. Different variables. Different “explanations.”

Tech analogy: realism vs constructivism is like two incompatible schemas. Same input, different fields, different allowed states—so you should expect different outputs.
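To make the schema analogy concrete, here is a toy sketch. The class and field names are invented for illustration only, not drawn from any real dataset or framework:

```python
from dataclasses import dataclass

# Two "paradigms" for the same event, expressed as incompatible schemas.

@dataclass
class RealistReading:
    primary_actors: list[str]                 # states
    material_capabilities: dict[str, float]   # what changes the payoff structure
    credible_threats: list[str]

@dataclass
class ConstructivistReading:
    identities: list[str]
    norms_in_play: list[str]
    dominant_narratives: list[str]            # what makes escalation acceptable or taboo

# The same border crisis can populate either schema, but the two records are not
# comparable row-for-row: different fields, different admissible values,
# and therefore different "explanations."
```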

That’s exactly what happens in production AI and data migration. If you don’t explicitly define the entities, relationships, and admissible states in your system, you don’t get one consistent dataset—you get parallel interpretations that collide later. The model (and sometimes the humans) will quietly invent meanings.

So we made the system’s world explicit. Then we enforced it.

The reliability pattern: contract → gate → constraints

Nothing here is novel engineering. An experienced engineer will recognize familiar patterns: contract-first design, defensive validation, and strict constraints.

What was personally novel (to me) is that an IR habit—obsession with definitions and enforcement—got me to the boring, correct patterns faster than “prompting harder” ever could.

The pattern we implemented was simple:

  • Output contract: define exactly what the model is allowed to output
  • Schema gate: validate model output before it can touch the database
  • Database constraints: enforce the same rules at rest, so invalid data can’t accumulate

It’s not “trust the model.” It’s “verify the payload.”
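As a minimal illustration of “verify the payload,” here is a hand-rolled gate in Python (the checks are abridged to two fields and the names are placeholders, not our production code):

```python
import json

ALLOWED_URGENCY = {"Low", "Medium", "High"}

def gate(raw: str) -> dict:
    """Reject anything that doesn't match the contract before it can touch storage."""
    payload = json.loads(raw)  # malformed JSON raises here; upstream routes it to retry/DLQ
    score = payload.get("sentiment_score")
    if not isinstance(score, int) or not 1 <= score <= 10:
        raise ValueError(f"sentiment_score out of contract: {score!r}")
    if payload.get("urgency_level") not in ALLOWED_URGENCY:
        raise ValueError(f"urgency_level out of contract: {payload.get('urgency_level')!r}")
    return payload  # only a validated payload continues toward the database

# gate('{"sentiment_score": 7, "urgency_level": "Low"}')   passes
# gate('{"sentiment_score": 11, "urgency_level": "Low"}')  raises -> retry/DLQ
```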

Architecture: post-hangup, then enforce

We designed the pipeline around one clean boundary condition: hangup.

The call ends, the audio artifact is finalized, and only then do we run transcription and scoring. Operationally, a call is only considered “closed” when its insight record is persisted (idempotent retries handle transient failures).

High-level flow:

  • hangup event triggers processing (idempotent by call_id)
  • speech-to-text produces a complete transcript
  • transcript goes to the insights engine with the policy/protocol (the output contract and scoring rules)
  • the model returns a structured payload
  • a schema gate validates required fields, types, and 1–10 ranges
  • only validated payloads get stored; non-compliant payloads go to retry/DLQ

This is where “AI” stops being an assistant and becomes a component in a production system: the output becomes a transaction, not a suggestion.
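A sketch of that flow, with every component passed in as a placeholder (queue consumer, STT client, insights engine, storage), looks roughly like this:

```python
from typing import Callable

class ContractViolation(Exception):
    pass

def process_hangup(
    call_id: str,
    audio_uri: str,
    already_stored: Callable[[str], bool],
    transcribe: Callable[[str], str],
    score: Callable[[str], dict],
    gate: Callable[[dict], dict],
    store: Callable[[str, dict], None],
    dead_letter: Callable[[str, dict, str], None],
) -> None:
    if already_stored(call_id):              # idempotent by call_id: replays are no-ops
        return
    transcript = transcribe(audio_uri)        # STT runs only after the audio artifact is finalized
    raw = score(transcript)                   # insights engine, prompted with the output contract
    try:
        payload = gate(raw)                   # schema gate: required fields, types, 1-10 ranges
    except ContractViolation as exc:
        dead_letter(call_id, raw, str(exc))   # non-compliant payloads go to retry/DLQ, never the DB
        return
    store(call_id, payload)                   # the call counts as "closed" only once this persists
```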

Define the world: the output contract

Instead of asking for “sentiment,” we defined an explicit output contract:

  • transcript_text (text)
  • summary_text (text)
  • sentiment_score (integer 1..10)
  • agent_solution_score (integer 1..10)
  • agent_professionalism (integer 1..10)
  • agent_script_adherence (integer 1..10)
  • agent_procedure_adherence (integer 1..10)
  • agent_clarity (integer 1..10)
  • urgency_level (enum: Low / Medium / High)
  • key_phrases (json)
  • confidence_score (optional float 0..1)
  • analyzed_at (timestamp)
  • model_version, schema_version (strings)

Anything outside the contract is invalid. Not “close enough.” Invalid.

This is the core idea: you don’t make model outputs reliable by asking nicely. You make them reliable by treating them as untrusted input until they pass a gate.
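One way to encode the contract, assuming a Pydantic-style validation library, is a model that refuses anything outside the declared fields, types, and ranges (the class name and the list typing of key_phrases are our assumptions):

```python
from datetime import datetime
from typing import Literal, Optional
from pydantic import BaseModel, Field

class CallInsights(BaseModel):
    transcript_text: str
    summary_text: str
    sentiment_score: int = Field(ge=1, le=10)
    agent_solution_score: int = Field(ge=1, le=10)
    agent_professionalism: int = Field(ge=1, le=10)
    agent_script_adherence: int = Field(ge=1, le=10)
    agent_procedure_adherence: int = Field(ge=1, le=10)
    agent_clarity: int = Field(ge=1, le=10)
    urgency_level: Literal["Low", "Medium", "High"]
    key_phrases: list[str]                    # contract says json; assuming a list of strings here
    confidence_score: Optional[float] = Field(default=None, ge=0.0, le=1.0)
    analyzed_at: datetime
    model_version: str
    schema_version: str

# With Pydantic v2, CallInsights.model_validate_json(raw) raises on anything outside
# the contract. That exception, not a "close enough" repair, is the gate.
```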

Scale: your database needs to be strict, not optimistic

At 50 rows, “we’ll fix it later” is a workflow. At 750,000+ objects, it’s a myth.

So we added a second enforcement layer: the database. Not as a backup plan—as a principle. If something doesn’t fit, the system should refuse it rather than store it and let it rot.

We enforced two invariants.

1) Enforce 1:1 mechanically

If you claim “every call gets insights,” make it a database invariant:

CALL_INSIGHTS.call_id is both PK and FK to CALL_LOG.call_id

This is not a philosophical statement. It’s a mechanical guarantee.

2) Constrain scoring at rest

We store:

  • sentiment score as CHECK 1..10
  • each agent metric as CHECK 1..10
  • urgency as a controlled enum/level
  • optional confidence for auditing
  • model_version and schema_version so the system stays explainable over time

Figure 2: Storage-level guardrails. Scores are constrained integers (1–10) and call insights are enforced 1:1 (PK=FK), so invalid payloads cannot silently accumulate.
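In storage terms, the two invariants look roughly like the following sketch (SQLite syntax for brevity, abridged to a few columns; our production schema differs, but the PK=FK and CHECK logic is the point):

```python
import sqlite3

DDL = """
CREATE TABLE call_log (
    call_id TEXT PRIMARY KEY
);

CREATE TABLE call_insights (
    call_id              TEXT PRIMARY KEY REFERENCES call_log(call_id),  -- PK = FK: exactly one insight row per call
    sentiment_score      INTEGER NOT NULL CHECK (sentiment_score BETWEEN 1 AND 10),
    agent_solution_score INTEGER NOT NULL CHECK (agent_solution_score BETWEEN 1 AND 10),
    agent_clarity        INTEGER NOT NULL CHECK (agent_clarity BETWEEN 1 AND 10),
    urgency_level        TEXT NOT NULL CHECK (urgency_level IN ('Low', 'Medium', 'High')),
    confidence_score     REAL CHECK (confidence_score BETWEEN 0 AND 1),
    model_version        TEXT NOT NULL,
    schema_version       TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(DDL)
# An out-of-range score or an unknown urgency level now fails the INSERT
# instead of rotting in place.
```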

We don’t treat a 1–10 score as ground truth. It’s anchored to evidence (transcript + summary) and audited against downstream reality (lead evolution, resolution outcomes, escalation events). When the signal disagrees with outcomes, it’s traceable and correctable.

In IR terms: the “measurement” only makes sense because the ontology is explicit. You know what the object is (the call), what its properties are (scores, urgency), and what counts as a change of state (lead evolution, escalation, closure). Without that, a number is just a vibe.

Migration without semantic corruption

The VoIP pipeline was only half the story. The other half was moving the dataset into a custom CRM without importing legacy ambiguity.

The biggest risk in CRM migration isn’t missing data. It’s meaning drift—what I call semantic corruption:

  • two systems use the same label for different concepts
  • different teams treat the same field as different things
  • free-text “stages” become permanent, un-auditable taxonomy

We handled migration with the same ontology-first logic.

The schema of truth

Before scripts, we defined the target data model as a schema of truth:

  • explicit entities (Contact, Lead, Call, Call Insights, Notes/Tasks)
  • explicit relations (FKs, cardinality, invariants)
  • explicit admissible states (enums and allowed transitions)

This is ontology in the IR sense: a declared set of entities, relations, and admissible states—so your mapping can’t quietly change the phenomenon you think you are measuring.
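As a sketch of what “admissible states” means in practice, here is a hypothetical lead-stage enum with an explicit transition map; the stage names are illustrative, not our actual taxonomy:

```python
from enum import Enum

class LeadStage(str, Enum):
    NEW = "new"
    QUALIFIED = "qualified"
    PROPOSAL = "proposal"
    WON = "won"
    LOST = "lost"

# Anything outside this map is rejected rather than stored.
ALLOWED_TRANSITIONS = {
    LeadStage.NEW:       {LeadStage.QUALIFIED, LeadStage.LOST},
    LeadStage.QUALIFIED: {LeadStage.PROPOSAL, LeadStage.LOST},
    LeadStage.PROPOSAL:  {LeadStage.WON, LeadStage.LOST},
    LeadStage.WON:       set(),
    LeadStage.LOST:      set(),
}

def assert_transition(current: LeadStage, new: LeadStage) -> None:
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Inadmissible transition: {current.value} -> {new.value}")
```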

Canonical IDs and idempotent imports

A migration without canonical identifiers is a one-time import you can’t reproduce.

We enforced stable identity:

  • legacy_system + legacy_id per object
  • uniqueness constraints
  • replay-safe (idempotent) writes

That allowed reprocessing without duplicates and enabled corrections without manual surgery.
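A minimal sketch of a replay-safe write keyed on (legacy_system, legacy_id), again in SQLite syntax for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (
    id            INTEGER PRIMARY KEY,
    legacy_system TEXT NOT NULL,
    legacy_id     TEXT NOT NULL,
    full_name     TEXT NOT NULL,
    UNIQUE (legacy_system, legacy_id)   -- canonical identity for every migrated object
);
""")

def import_contact(legacy_system: str, legacy_id: str, full_name: str) -> None:
    # Idempotent write: a replayed record updates in place instead of duplicating.
    conn.execute(
        """
        INSERT INTO contacts (legacy_system, legacy_id, full_name)
        VALUES (?, ?, ?)
        ON CONFLICT (legacy_system, legacy_id) DO UPDATE SET full_name = excluded.full_name
        """,
        (legacy_system, legacy_id, full_name),
    )

import_contact("old_crm", "C-102", "Jane Doe")
import_contact("old_crm", "C-102", "Jane Doe")   # replay: still exactly one row
```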

Quarantine, don’t “best effort”

Ugly data exists. The question is whether you hide it.

We split incoming records into:

  • valid → import
  • repairable → normalize + log
  • invalid → quarantine with explicit reasons

At scale, rejecting invalid data isn’t being “harsh.” It’s preventing silent rot.
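A simplified version of that triage, with hypothetical repair rules, looks like this:

```python
def triage(record: dict) -> tuple[str, dict, list[str]]:
    """Return (bucket, record, reasons) so quarantined rows carry explicit reasons."""
    reasons: list[str] = []

    email = (record.get("email") or "").strip().lower()
    if email and "@" in email:
        if email != record.get("email"):
            reasons.append("normalized email casing/whitespace")
        record = {**record, "email": email}
    else:
        return "invalid", record, ["missing or malformed email"]

    if not record.get("legacy_id"):
        return "invalid", record, ["missing legacy_id; cannot assign canonical identity"]

    return ("repairable" if reasons else "valid"), record, reasons

# valid      -> import
# repairable -> import after normalization, with the repair logged
# invalid    -> quarantine table, never silently "best-effort" imported
```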

Tooling note (brief and honest)

We use OpenAI in production, selected after comparative testing against Gemini about five months ago. This isn’t a lab write-up—we’re writing after the pipeline has been running long enough for us to validate operational usefulness.

The transferable idea is the pattern: contract + gate + database constraints, not the vendor.

:::info
A quick note on privacy (non-negotiable): Call transcripts are sensitive data. We apply role-based access, retention policies, and redaction where needed. The technical architecture is only useful if governance around data access is equally strict.

:::

Takeaways (the practical checklist)

If you’re building AI features into a system people will rely on:

  • treat model output as untrusted input
  • define an output contract (fields, types, ranges)
  • validate at the boundary (schema gate)
  • enforce at rest (database constraints)
  • version definitions (schema/model versions) for explainability
  • design around a clean boundary condition (for us: post-hangup)
  • keep evidence (transcript/summary) so scores are traceable and correctable

You don’t need to become a data scientist to do this. But you do need to stop thinking like a prompt writer and start thinking like a system designer.

Reference(s)

Kuhn, T. S. The Structure of Scientific Revolutions (originally published 1962). University of Chicago Press. https://doi.org/10.7208/chicago/9780226458106.001.0001
