VentureBeat | Apr 7, 03:00 PM
As models converge, the enterprise edge in AI shifts to governed data and the platforms that control it
Presented by Box
As frontier models converge, the advantage in enterprise AI is moving away from the model and toward the data it can safely access. For most enterprises, that advantage lives in unstructured data: the contracts, case files, product specifications, and internal knowledge.
For enterprise leaders, the question is no longer which model to use, but which platform governs the content those models are allowed to reason over.
"It's not what the model does anymore, it's the enterprise's own unstructured data – their content, how it's organized, how it's governed, and how it's made accessible to the AI," says Yash Bhavnani, head of AI at Box.
"The organizations that will lead in AI are the ones that built the governance infrastructure to make any model trustworthy, with the right permissions in place, the right content accessible, and a clear audit trail for every action taken," says Ben Kus, CTO of Box.
Enterprise AI must be grounded in secure systems of record
As the advantage in AI shifts from models to governed content, systems of record are becoming the foundation that makes enterprise AI trustworthy.
Employees use frontier models to summarize documents, draft reports, and answer questions, but when those tools are disconnected from authoritative internal repositories, the results are difficult to trust, impossible to audit, and potentially dangerous. AI that cannot trace its outputs back to a governed source of record becomes a liability.
"It's not a theoretical concern," Bhavnani says. "For an insurance enterprise using AI to analyze client claims, low accuracy is simply not acceptable, and untraceable output can't be acted upon."
Systems of record provide authoritative, version-controlled content with embedded permissions and compliance controls already built in, and RAG pipelines retrieve data from live repositories at inference time, connecting responses directly to current, traceable sources.
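To make the idea concrete, the pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not Box's implementation: documents carry embedded permissions and a version number, and retrieval filters on the user's group membership before ranking, so every answer can be traced back to a specific, current version of a governed document. The keyword-overlap scoring stands in for the vector similarity a real RAG pipeline would use.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    version: int                                  # version-controlled content
    allowed_groups: set = field(default_factory=set)  # embedded permissions

def retrieve(query: str, docs: list, user_groups: set, top_k: int = 3) -> list:
    """Permission-aware retrieval: filter by ACL first, then rank by
    naive keyword overlap (a stand-in for vector similarity)."""
    visible = [d for d in docs if d.allowed_groups & user_groups]
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.text.lower().split())), d) for d in visible]
    scored = [(score, d) for score, d in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Each result carries its doc_id and version, keeping the answer traceable.
    return [d for _, d in scored[:top_k]]

corpus = [
    Document("claims-1042", "client claim for water damage policy 88-A", 3, {"claims"}),
    Document("hr-17", "employee compensation bands", 9, {"hr"}),
]
results = retrieve("water damage claim", corpus, user_groups={"claims"})
```

A claims analyst in the `claims` group retrieves the claim file; the same query from a user outside that group returns nothing, because the permission filter runs before retrieval, not after generation.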
Without integration into systems of record, employees build their own workarounds, content gets duplicated across tools that don't talk to each other, and shadow knowledge stores accumulate outside the visibility of IT and compliance teams.
"Customers tell us employees are uploading sensitive documents to personal accounts and running their own AI workflows, with no visibility from the enterprise into what is being shared or what is being generated," he says. "It's not just a security risk, it's an organizational one."
Permission-aware access is a requirement for agentic AI
As AI moves into agentic territory, executing multi-step tasks autonomously across documents, workflows, and enterprise systems, the risk profile changes entirely. Agents act faster than humans, often without the contextual judgment needed to decide what data they should access, making permission-aware access essential.
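One common way to enforce this, sketched below as a hypothetical example (the names and structure are illustrative, not a real Box API), is to route every agent action through a gate that checks the agent's grants before executing and records each decision, allowed or denied, in an audit log:

```python
import datetime

AUDIT_LOG = []  # every attempted action is recorded, allowed or not

class PermissionDenied(Exception):
    pass

def guarded_action(agent_id: str, action: str, resource: str,
                   grants: dict) -> str:
    """Check the agent's grants before executing; log the decision so
    there is a clear audit trail for every action taken."""
    allowed = action in grants.get(agent_id, {}).get(resource, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionDenied(f"{agent_id} may not {action} {resource}")
    return f"{action} executed on {resource}"

# An agent may summarize a claim file but not delete it.
grants = {"claims-agent": {"claims/1042.pdf": {"read", "summarize"}}}
```

Because the check happens before execution rather than after, a fast-moving agent cannot outrun the policy, and the log answers the audit question of who did what, when, and whether it was permitted.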
"An AI platform without permissions-aware access is too dangerous to use," Kus says. "It's a precondition for safe enterprise AI deployment, and the mo