Apple has taken a bold step in its artificial intelligence strategy by building an internal ChatGPT-like app called Veritas to test its next-generation Siri features.
The move signals a major transformation for the voice assistant, whose full upgrade has been pushed back until early 2026 as the company adopts new large language models and revises its technical plans.
The Veritas testing app is part of Apple's AI division's effort to prepare Siri for more advanced and human-like interactions.
Employees can run extended conversations on various topics, save and reference chats, and provide feedback, helping Apple fine-tune its technology before public release.
What is Apple’s Veritas testing app?
Veritas is a purpose-built internal app for experimenting with new voice assistant capabilities. It acts similarly to popular chatbots, allowing testers to conduct multiple chat sessions and review past messages for accuracy and context.
By limiting Veritas to in-house teams, Apple gathers targeted feedback and rapidly iterates on emerging AI features without exposing unfinished products to public scrutiny.
Testing within Veritas focuses on how a chatbot format might enhance or replace classic voice assistant functionality.
Apple engineers use the app to benchmark extended multi-turn conversations, probe new planning algorithms, and analyze the response accuracy required for a robust Siri overhaul. The approach allows technology risks to be evaluated safely away from the public eye.
Why did Apple delay Siri’s overhaul?
Apple’s original plan was to debut a vastly improved Siri as part of iOS 18, but several engineering setbacks prompted a rethink. Reports suggested some prototype features failed up to one-third of the time, undermining reliability.
Apple opted to abandon piecemeal upgrades and instead deploy a redesigned Siri powered by advanced language models. This requires more time and testing, pushing the launch to spring 2026 with iOS 26.4.
This delay also provides Apple breathing room to ensure the underlying AI meets both performance and privacy standards.
Transitioning from the first-generation Apple Intelligence Siri architecture to a second-generation LLM-driven model is complex. Veritas acts as the proving ground, helping the company learn from missteps before release.
How does Veritas compare to ChatGPT and Gemini?
While inspired by leading AI tools like ChatGPT, Claude, and Gemini, Veritas is purpose-built for testing Apple’s Siri integration instead of offering a standalone chatbot product.
Unlike consumer chatbots, Veritas experiments with multi-modal inputs, device context, and personalized data access to replicate what Siri users expect from an iOS experience.
Veritas’s design enables Apple’s engineers to simulate continuous conversations across apps and devices, supporting more realistic assistant scenarios.
It not only mirrors ChatGPT’s conversational abilities but also incorporates tasks requiring broader context and privacy protection, which remain Apple’s unique priorities.
What partnerships and technologies power the upgrade?
Apple is not building everything alone but is forming key partnerships with AI leaders. It is trialing Google’s Gemini model for web summarization functions and engaging Anthropic to assess Claude’s planning algorithms.
While Gemini and Claude may help with external content, Apple insists its own foundation models will handle searches of personal data to better protect privacy.
Formal agreements with Google anchor the summarization component of next-gen Siri, while talks with Anthropic show a willingness to adopt best-in-class planning tools.
Apple aims to combine third-party strengths for greater utility, yet keeps private user content processing strictly within its own secure ecosystem.
What will the new Siri do differently?
The upcoming Siri system is architected with three core engines: a planner to interpret spoken or typed input, a search engine that scans web and device information, and a summarizer for final responses.
These modules will enable Siri to handle continuous conversations, perform complex tasks, and return human-like answers in context, setting a new standard for user interaction on Apple devices.
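The three-engine split described above can be pictured as a simple pipeline. The sketch below is purely illustrative: every function and class name is hypothetical, the logic is a toy stand-in for what would really be LLM-driven components, and only the planner → search → summarizer flow and the public/private data split are taken from the reported architecture.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """Structured interpretation of a user request (hypothetical format)."""
    intent: str
    search_terms: list[str]
    use_private_data: bool

def plan(user_input: str) -> Plan:
    # Planner engine: interpret spoken or typed input.
    # A toy heuristic stands in for an LLM-based planner here.
    words = user_input.lower().split()
    private = any(w in words for w in ("my", "me", "mine"))
    return Plan(intent="lookup", search_terms=words, use_private_data=private)

def search(p: Plan) -> list[str]:
    # Search engine: scan web or on-device information.
    # Per the article, personal data would stay with Apple's own models,
    # while third-party models would only see public content.
    source = "on-device index" if p.use_private_data else "web"
    return [f"result for '{t}' from {source}" for t in p.search_terms[:3]]

def summarize(results: list[str]) -> str:
    # Summarizer engine: condense retrieved results into a final response.
    return "; ".join(results)

def assistant(user_input: str) -> str:
    # The full pipeline: plan, then search, then summarize.
    return summarize(search(plan(user_input)))
```

Routing a request like "show my calendar" through this sketch would keep the search on-device, while a public query such as "weather tomorrow" would go to the web path, mirroring the privacy boundary Apple is reported to enforce.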
Privacy remains central to the redesign. Apple will employ its proprietary Foundation Models for device and personal account searches, ensuring third-party tools process only public data.
The integrated approach advances Siri’s practical usefulness while aligning with Apple’s commitment to privacy.
The next Siri will be capable of understanding nuanced requests, managing multi-step workflows, and delivering information relevant to each user’s needs.
Apple’s broader AI strategy, reflected in the Veritas app and key partnerships, could push digital assistants into a new era for real-world productivity.
As Apple continues to test and refine its platform in the coming months, the industry will watch how successfully the company combines its privacy-first foundation with powerful AI models.
If Apple’s cautious approach succeeds, Siri may finally catch up with the promise of true conversational intelligence, transforming everyday device usage for millions of users worldwide.