By David Wong and Joel Hron
When ChatGPT launched in late 2022, it was a wake-up call for many companies. For us, it wasn’t just a signal; it was a catalyst. It validated the long-held ambitions of our engineers and product leaders to apply AI in solving the kinds of customer problems we had struggled to crack with earlier-generation technologies.
Traditional chat-based interfaces, while useful for reactive tasks, often struggle to stay aligned with user goals, handle multistep reasoning or take meaningful action. They can be more like teammates waiting for the next assignment than ones who anticipate needs.
Now, a new chapter is emerging as agentic AI takes center stage. We have access to tools that don’t just respond, but act. These agents can interpret complex objectives, plan multistep tasks, adapt in real time, and execute workflows alongside human professionals.
The evolution of AI in professional domains has demanded platforms that prioritize trust, accuracy and domain expertise — values we’ve spent years integrating into our systems. Our work includes agentic capabilities in products such as CoCounsel Tax, a vertical-specific AI for tax and accounting, and CoCounsel Legal, the legal industry’s first professional-grade agentic AI research tool.
CoCounsel can interpret complex objectives, plan and execute multistep workflows, and deliver results with the same precision as an experienced lawyer, accountant or compliance officer. It draws on industry-leading tools — including Westlaw for legal research, Practical Law for procedural guidance, and Checkpoint for tax and accounting expertise — allowing it to do real work, like using a calculator to complete a tax return.
This means that instead of simply receiving information, professionals can delegate complete assignments — like working through a tax return or drafting and reviewing legal motions — knowing the work will be handled to industry standards. Behind these capabilities is deep expert guidance from thousands of legal, tax and compliance specialists, ensuring that outputs are not only technically accurate but also aligned with the way seasoned practitioners actually work.
Watching this transformation from our vantage points as chief product officer and chief technology officer at a company with a 150-year-plus history of providing trusted expertise to professionals, we saw early on that our roles and products would never be the same.
This shift is especially significant in the high-stakes domains we support, including law, tax, compliance and risk. In these industries, accuracy, transparency and trust are paramount. AI must perform reliably, align with regulation and support nuanced human judgment.
Transitioning from answers to outcomes
Three years ago, our AI journey looked very different. We were early adopters of tools like GitHub Copilot, and today, more than 80% of our engineers use them weekly. But that was just the beginning. Now, we’re focused on full agentic systems — tools that can reference internal documentation, interact with servers, retrieve live data, triage and fix bugs, and even build applications from scratch.
Unlike software code, where outcomes can be tested and verified, many expert domains allow a range of acceptable answers, some better than others. That is where human judgment is pivotal.
We’ve learned that embedding AI engineers within domain expert teams accelerates iteration and trust. Our 250-plus AI engineers work alongside more than 4,500 domain experts, including lawyers, accountants and compliance leaders, to shape AI into real-world capabilities.
Imagine a system for lawyers that doesn’t just suggest clauses in a contract, but compares documents, identifies legal risks and escalates complex issues for expert judgment. Or a tool for tax professionals that goes beyond retrieving tax codes to flagging compliance risks, adapting to real-time data, and completing multistep workflows. These are not abstract concepts. They’re embedded, outcome-focused systems that are starting to redefine how professionals work.
Intelligence is only half the battle
Tech culture has long celebrated moving fast and breaking things. But in law and tax, breaking things isn’t an option. Speed is important, but trust is even more valuable. No matter how advanced agentic capabilities get, they won’t be adopted if professionals can’t trust them. Intelligent systems need to go beyond results to provide transparency, consistency and oversight.
Designing effective agentic products is as much a human challenge as a technical one. Agentic systems need to know when to escalate decisions, how to explain their reasoning, and how to adapt without straying from the user’s standards.
Human-in-the-loop controls are central to this process. Experts guide development, stress-test edge cases, and ensure performance in the contexts that matter most.
When systems are empowered to plan and act, small misalignments in goals, context or data quality can lead to significant errors. Overreliance on automation without clear guardrails can sideline human judgment when nuance matters most.
Some of the most powerful features of agentic systems are invisible, such as their ability to maintain context, reason over multiple sources, or decline to act when they lack sufficient information. These are what allow professionals to work with greater confidence.
Rebuilding systems and teams
In our leadership roles, we focus on adaptability. We value curiosity, the ability to learn quickly, and cross-disciplinary work, rather than specialization alone.
We have restructured for small, highly aligned teams and empowered subject matter experts to shape AI behavior. We iterate quickly without sacrificing rigor or trust.
The future of true agentic capabilities is not about the fastest or most autonomous system. It is about building the most useful one: a system that can reliably assist professionals in moments where stakes are high and time is short. The most meaningful agentic AI will expand what professionals can achieve, especially when margins for error are small.
David Wong is the chief product officer at Thomson Reuters, where he leads the product management, editorial, content and design, and product analytics teams. He oversees product strategy and product development of Thomson Reuters’ global software and business information services portfolio, which includes creating new AI software and technology for the company’s professional customers. Wong has more than 15 years of experience building business-to-business software, information services, and machine learning and AI systems. Previously, he served as a senior product leader at Facebook. He has also held senior roles at Nielsen and worked at McKinsey & Co. as a management consultant. Wong holds a degree in engineering science from the University of Toronto and is a named inventor on four patents.
Joel Hron is the chief technology officer at Thomson Reuters, where he leads product engineering and AI R&D across legal, tax, audit, trade, compliance and risk. He joined Thomson Reuters in 2022 through the acquisition of ThoughtTrace, where he served as CTO. Since then, he has helped transform Thomson Reuters’ technology strategy. While leading TR Labs and AI, his teams launched seven generative AI products in just 18 months, including AI assistants for legal research, tax research and contract drafting. Hron holds a master’s degree in mechanical engineering from the University of Texas at Austin and a bachelor’s in engineering from Texas Christian University.
Illustration: Dom Guzman