Thomson Reuters forms Trust in AI Alliance network focused on agentic artificial intelligence

The group’s inaugural session will concentrate on high-stakes professional environments

By Jacqueline So
Jan 18, 2026

Thomson Reuters Labs has convened artificial intelligence researchers and engineers under the Trust in AI Alliance to advance the development of agentic AI systems.

The alliance aims to foster collaboration among technical leaders in AI innovation, with the goal of improving trust in AI. Network members are expected to exchange insights, identify common challenges, and develop coordinated approaches to building reliable, accountable AI systems.

The alliance highlighted reliability, interpretability, and verification as key factors in securing human confidence.

“As AI systems become more agentic, building trust in how agents reason, act, and deliver outcomes is essential. The Trust in AI Alliance brings together the builders at the forefront of this work to align on principles and technical pathways that ensure AI serves people and institutions responsibly, and at pace,” said Joel Hron, Thomson Reuters’ chief technology officer, in a statement.

Senior engineering and product leaders from Anthropic, AWS, Google Cloud, and OpenAI join Thomson Reuters’ experts as founding participants in the alliance. Anthropic’s head of product – enterprise, Scott White, said that establishing trust in AI systems was vital with advanced technology assuming more autonomous functions “in high-stakes settings and industries.”

“Building trusted agents requires grounding models in 'enterprise truth,' connecting them to the fresh, verifiable data that businesses run on,” said Michael Gerstenhaber, Google Cloud’s vice president of product management for Vertex AI.

The Trust in AI Alliance identified context integrity, immutable provenance, and security against adversarial prompts as the three foundational challenges that determine whether agentic systems deserve professional trust, according to a Thomson Reuters blog post. The first examines whether AI models can preserve critical decision criteria when compressing or segmenting information; the second concerns ensuring that cited source texts remain unmodified and auditable; and the third addresses protecting workflows from malicious inputs without compromising system usability.

The alliance’s inaugural session will concentrate on agentic AI systems used in high-stakes professional environments.
