Apple CEO Tim Cook holds an iPhone 17 Pro and an iPhone Air.
Image: REUTERS/Manuel Orbegozo
Apple has quietly built a ChatGPT-style iPhone app, internally dubbed Veritas, to accelerate development of the next generation of Siri, according to a widely cited Bloomberg report.
Rather than releasing Veritas to consumers, Apple is using it as a sandbox for designers and engineers to test more advanced conversational features, essentially turning Siri’s next evolution into a living lab.
Apple’s decision to embed the future of Siri in a standalone chatbot-like interface is striking.
The company has long eschewed aggressive AI branding in favour of seamless, embedded intelligence.
Yet internal pressure and Siri’s lag behind rival AI assistants appear to be shifting that posture.
Veritas gives Apple engineers a way to iterate on features like memory, context tracking, and multi-step conversations without first shoehorning them into Siri’s voice-driven shell.
Because Veritas mimics popular chatbot affordances (maintaining multiple conversation threads, saving and referring to prior chats, and following up on earlier prompts), Apple can capture real usage data and identify friction points earlier, according to AppleInsider.
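To make those affordances concrete: they roughly amount to a persistent, multi-threaded conversation store. The Swift sketch below is purely illustrative, with types and names that are assumptions rather than anything reported about Veritas itself; it shows the minimal state a chatbot-style test app would need to keep multiple threads, save them, and replay earlier turns so follow-up prompts can resolve prior references.

```swift
import Foundation

// Hypothetical sketch only: the structure below is an assumption for
// illustration, not Apple's actual Veritas implementation.
struct Message: Codable {
    enum Role: String, Codable { case user, assistant }
    let role: Role
    let text: String
    let timestamp: Date
}

struct ConversationThread: Codable, Identifiable {
    let id: UUID
    var title: String
    var messages: [Message] = []

    // Prior turns are kept in order so a follow-up like
    // "What was that movie again?" can be answered against earlier context.
    mutating func append(_ role: Message.Role, _ text: String) {
        messages.append(Message(role: role, text: text, timestamp: Date()))
    }
}

final class ThreadStore {
    private(set) var threads: [UUID: ConversationThread] = [:]

    // Multiple threads can coexist, mirroring the "multiple conversation
    // threads" affordance attributed to Veritas.
    func newThread(title: String) -> UUID {
        let thread = ConversationThread(id: UUID(), title: title)
        threads[thread.id] = thread
        return thread.id
    }

    // Persisting threads lets testers return to and refer back to prior chats.
    func save(to url: URL) throws {
        let data = try JSONEncoder().encode(Array(threads.values))
        try data.write(to: url)
    }
}
```

None of this is hard to build, which is rather the point: a dedicated chat surface makes these behaviours far easier to exercise and instrument than Siri’s voice-first flow.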
Business Times also reported that Veritas is built on the same underlying system as the next-generation Siri, dubbed Linwood, which fuses Apple’s custom foundation models with third-party models.
This enables Apple to calibrate how different LLM components respond under real-world conditions before folding them into Siri.
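As a rough illustration of what calibrating different LLM components against each other could look like in practice, the sketch below routes a single prompt to several interchangeable backends and records each response with its latency. The protocol and type names are hypothetical assumptions for this example and do not reflect the actual Linwood design.

```swift
import Foundation

// Assumed abstraction for illustration: any model backend (an in-house
// foundation model, a third-party model, etc.) exposed behind one interface.
protocol LanguageModelBackend {
    var name: String { get }
    func respond(to prompt: String, history: [String]) async throws -> String
}

struct CalibrationRecord {
    let backend: String
    let prompt: String
    let response: String
    let latency: TimeInterval
}

struct ModelRouter {
    let backends: [any LanguageModelBackend]

    // Send the same prompt through every backend so testers can compare
    // behaviour under identical real-world conditions.
    func calibrate(prompt: String, history: [String]) async -> [CalibrationRecord] {
        var records: [CalibrationRecord] = []
        for backend in backends {
            let start = Date()
            if let response = try? await backend.respond(to: prompt, history: history) {
                records.append(CalibrationRecord(
                    backend: backend.name,
                    prompt: prompt,
                    response: response,
                    latency: Date().timeIntervalSince(start)
                ))
            }
        }
        return records
    }
}
```

The real system would layer privacy, consent, and on-device constraints on top of anything like this; the sketch only shows why an internal chat app is a convenient harness for side-by-side model evaluation.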
Despite what feels like an AI arms race, Apple reportedly has no intention of releasing Veritas broadly. The company appears to be deferring a public chatbot identity in favour of refining Siri as the interface.
Siri’s best-known shortcomings, and how an internal test bed like Veritas could address them, break down roughly as follows:

| Problem | Symptoms | How Veritas Might Help |
| --- | --- | --- |
| Context breakdown / poor conversational memory | Siri often fails to carry context from one turn to the next; asking “What was that movie again?” can lose the reference to the prior query. | Veritas’s ability to reference prior messages and manage multi-threaded dialogues gives Apple a testing ground for stable context retention. |
| Limited in-app actions and cross-app context | Siri struggles to execute complex tasks like “Edit my last photo based on this prompt” or “Find that song I mentioned earlier”. | Veritas is already testing capabilities like editing photos, searching emails and music, and triggering in-app actions. |
| Generic or shallow responses | Compared with ChatGPT or Gemini, Siri often offers terse or surface-level answers. | Veritas offers a more conversational, generative format that Apple can use to stress-test deeper reasoning. |
| Stability and regressions | New Siri features reportedly failed in early builds, delaying the public launch. | Iterating within Veritas first reduces the risk of shipping regressions to customers. |
| User trust and privacy concerns | Users worry about voice assistants having too much access to personal data. | Testing Veritas internally lets Apple calibrate data handling, consent flows, and local-versus-cloud trade-offs before exposing them broadly. |