January 7, 2026
Part 2: The Evolving CFO Tech Stack – Owning the Logic
Type: Deep Dives
Contributors: Murat Kilicoglu
In our previous discussion on the evolving CFO stack, we mapped the migration from static ledgers to dynamic systems of intelligence. We established that the modern finance function is rapidly moving from a historical reporting agency to a predictive engine.
As we see these technologies enter the wild, a new structural tension is developing. In addition to what AI can do, we need to consider where the thinking happens. For the last decade, the primary architectural debate in finance IT has been about where the data lives (data warehouses vs. inside systems of record). Today, that debate is shifting to where the logic lives.
Does the intelligence sit inside your applications, processing transactions locally? Or does it sit above them, analyzing the business holistically? This is the debate between distributed intelligence and centralized intelligence. The industry is currently bifurcating, and for the CFO, the choice involves significant trade-offs regarding vendor leverage, auditability, and the very definition of truth within the organization.
The Argument for Distributed Intelligence
The path of least resistance, and arguably the one with the highest immediate utility, is distributed intelligence. In this model, AI is not a separate layer but a feature embedded directly into the edge applications where work happens.
We see this with the aggressive roadmaps of major ERP and CRM players. They are embedding copilots directly into the UI. The logic here is grounded in context. An AI native to a billing platform implicitly understands the complex parent-child relationships of your customer hierarchy. It understands the specific configuration of your revenue recognition rules (ASC 606).
Generic LLMs are notoriously bad at specific, rule-based math unless heavily fine-tuned. A distributed model keeps the brain close to the hands. When an AP clerk reviews a flagged invoice, an embedded AI has access to the vendor master data, purchase order history, and specific payment terms without needing a complex data pipeline to fetch that information.
The Risk: Logic Lock-in and Limited Explainability
The hidden danger here is silos, of logic as well as data. If you rely on your ERP’s proprietary AI to forecast cash flow and your CRM’s proprietary AI to forecast bookings, you are outsourcing your business logic to two different vendors. You cannot easily see how they calculate those numbers. You cannot tweak the weights of their models.
If you eventually want to migrate off that ERP, you are not just moving data rows; you are leaving behind the intelligence that understands your business. Distributed intelligence deepens vendor lock-in by making the software’s operational logic opaque and proprietary.
The Argument for Centralized Intelligence
The counterargument, championed by modern data platforms, is centralized intelligence. This involves decoupling the reasoning layer from the transaction layer. Data is extracted from source systems into a unified environment (like Snowflake or Databricks) where a central brain processes it.
A CFO’s most difficult questions are usually cross-functional: “How does the engineering team’s Jira velocity correlate with our cloud compute spend?” “What is the impact of our recent marketing pivot on the expansion ARR of the cohort acquired in Q2 2025?”
An embedded ERP agent cannot answer these questions because it cannot see the engineering data or the marketing metadata. A centralized model is the only architecture that allows for cross-functional reasoning, combining financial data, operational logs, and unstructured text in a single inference pass.
The Risk: Context Collapse
The standard criticism of centralization is speed, but in finance, a few minutes of latency is acceptable. The real killer is context collapse.
For instance, when you extract data from an ERP into a data warehouse, you often leave the metadata behind, stripping the numbers of their nuance. The ERP knows that a specific deferred revenue entry is linked to complex delivery milestones stored in a PDF contract; the data warehouse just sees a number. To make the centralized AI smart, your data team has to rebuild that business logic from scratch.
This creates what we’d like to call a translation tax. Every time the finance team changes a process, the data team has to update the central model. This friction often leads to the centralized dashboard constantly drifting away from the operational reality.
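The translation tax starts with the extraction step itself. A minimal sketch of how context collapses in a naive extract (all field names here are hypothetical, not from any real ERP schema):

```python
# Illustrative sketch of "context collapse": a naive extraction keeps the
# numbers but drops the metadata that gave them meaning.
# Field names are hypothetical, not from any real ERP schema.

erp_entry = {
    "account": "deferred_revenue",
    "amount": 48_000.00,
    "currency": "USD",
    # Context that rarely survives a typical numeric extract:
    "recognition_rule": "milestone-based (ASC 606)",
    "contract_ref": "contracts/acme-2025.pdf",
    "milestones_remaining": 3,
}

def naive_extract(entry: dict) -> dict:
    """Keep only the columns a typical warehouse fact table stores."""
    return {k: entry[k] for k in ("account", "amount", "currency")}

warehouse_row = naive_extract(erp_entry)
# The warehouse row still has the amount, but it no longer knows *why*
# the revenue is deferred; the data team must rebuild that logic centrally.
```

Every process change on the ERP side silently invalidates the rebuilt central logic, which is exactly where the drift comes from.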
The Trade-Off Matrix
To help navigate this, we have broken down the structural differences. This is both about IT preferences and operational realities for the finance team.
| Vector | Distributed Intelligence | Centralized Intelligence |
| --- | --- | --- |
| Value proposition | Process efficiency: reducing clicks and manual entry | Strategic correlation: finding patterns across departments |
| Logic ownership | Vendor-owned: the model is a black box provided by the software | Company-owned: the company writes the prompts and controls the logic |
| Data fidelity | High: the AI sees the raw transaction and UI context | Normalized: the AI sees cleaned, aggregated rows |
| Inconsistency risk | Siloed realities: CRM AI and ERP AI may confidently disagree on bookings | Drift or tangential inference: central logic may reason on stale or misunderstood definitions |
| Security perimeter | Fragmented: access controls and policies must be managed separately in every app | Unified: a single governance layer controls access to all insights |
The Rise of the Semantic Vault
So, where does the market go? We believe the most sophisticated finance organizations will move toward a concept we call the semantic vault.
In the old world, we protected data. In the AI world, we must protect definitions. The biggest risk to the CFO in the age of AI is probabilistic drift. If you ask three different AI agents to calculate Gross Margin, you might get three different answers based on how they interpret the data.
The semantic vault is a governance layer that sits between your data and your AI agents. It houses the physics of the business, the immutable definitions of its metrics: What is ARR? What is churn? What constitutes deferred revenue? When the distributed ERP agent needs to report on operating margin, it must query the semantic vault for the definition; when the centralized strategy agent runs a forecast, it uses that same definition. This approach lets finance teams use distributed tools for their speed and context without losing the coherence of a centralized brain.
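To make the idea concrete, here is a minimal sketch of what a semantic vault could look like in code: a single versioned registry of metric definitions that every agent must resolve before computing a number. All names (`MetricDefinition`, `SemanticVault`) are illustrative, not a real product or API.

```python
# Minimal sketch of a "semantic vault": one registry of metric definitions
# that every AI agent resolves before computing a number. Names are
# illustrative assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    formula: str   # the single, governed definition agents must use
    owner: str     # the team accountable for this definition
    version: int

class SemanticVault:
    def __init__(self) -> None:
        self._metrics: dict[str, MetricDefinition] = {}

    def register(self, metric: MetricDefinition) -> None:
        # Definitions are immutable; only a higher version may supersede one.
        existing = self._metrics.get(metric.name)
        if existing is not None and metric.version <= existing.version:
            raise ValueError(
                f"{metric.name} v{metric.version} does not supersede v{existing.version}"
            )
        self._metrics[metric.name] = metric

    def resolve(self, name: str) -> MetricDefinition:
        # Agents never hardcode a formula; they resolve it here.
        return self._metrics[name]

vault = SemanticVault()
vault.register(MetricDefinition(
    name="gross_margin",
    formula="(revenue - cogs) / revenue",
    owner="FP&A",
    version=1,
))

# Both the distributed ERP agent and the centralized strategy agent pull
# the same definition, so their answers can differ only on data, never on
# what "gross margin" means.
definition = vault.resolve("gross_margin")
```

The design choice that matters is the version gate in `register`: definitions change through deliberate, auditable supersession, never through a silent overwrite by whichever agent wrote last.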
Strategic Considerations for the CFO
As we noted in our first article, there is no single right architecture, and most companies will end up with a hybrid. However, as CFOs evaluate technology investments and internal roadmaps, we believe the following considerations are useful for framing the thinking.
Owning the Logic vs. Renting the Logic
The most critical diligence item in evaluating AI-enabled finance software is transparency. A black box that auto-categorizes expenses is convenient, but it introduces a potential weakness in internal controls if the reasoning is opaque. It is essential to be able to inspect the AI’s chain of thought. Organizations should prioritize platforms that expose their reasoning or allow for bring-your-own-prompt architectures. This ensures that the core business logic remains an internal asset that can be audited and refined, rather than a rented service that disappears if the vendor relationship ends.
Builders Who Actually Understand the Books
We often see centralized data initiatives stall not because of the technology, but because of a language barrier. Handing a general ledger to a generalist data science team rarely yields a GAAP-compliant forecast out of the box. A data scientist might be brilliant at Python, but they may not intuitively grasp the nuance between bookings and revenue, or why a specific accrual exists. Bridging this gap likely requires a specific kind of profile, what some are calling a finance engineer. These are the builders who can navigate a data pipeline in the morning and would be happy to see a balance sheet that actually balances in the afternoon. For a centralized model to really work, it helps to have people who both know code and understand the business.
Managing Agent-to-Agent Conflict
As intelligence becomes distributed, software systems will begin to produce conflicting conclusions. A CRM agent optimized for pipeline growth may predict revenue figures that diverge significantly from an ERP agent optimized for conservative recognition. Resolving this does not require a third AI, but rather a rigid human governance framework. Organizations must proactively define which system holds the system of truth designation for specific metrics. In the absence of this, distributed intelligence leads to fragmented realities.
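One hedged sketch of what that governance framework can reduce to in practice: a human-maintained table that designates, per metric, which system’s number wins when agents disagree. The system names and metrics below are hypothetical examples, not a prescribed schema.

```python
# Illustrative sketch of a human-defined "system of truth" table: when
# agents disagree on a metric, the designated source wins. No third AI
# is involved. All names here are hypothetical.

SYSTEM_OF_TRUTH = {
    "bookings": "crm",             # pipeline-oriented metric: CRM wins
    "recognized_revenue": "erp",   # conservative recognition: ERP wins
    "cash_balance": "erp",
}

def reconcile(metric: str, agent_reports: dict[str, float]) -> float:
    """Return the value from the metric's designated system of record.

    agent_reports maps a system name to the number its agent produced.
    """
    source = SYSTEM_OF_TRUTH.get(metric)
    if source is None:
        raise KeyError(f"No system of truth designated for {metric!r}")
    if source not in agent_reports:
        raise KeyError(f"Designated source {source!r} did not report {metric!r}")
    return agent_reports[source]

# The CRM agent and ERP agent disagree; governance, not a third model, decides.
bookings = reconcile("bookings", {"crm": 1_250_000.0, "erp": 1_180_000.0})
```

The point of writing the table down explicitly is that the tie-break becomes auditable: anyone can see which system was designated, and a disagreement surfaces as a logged discrepancy rather than two dashboards quietly diverging.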
Conclusion: Building Institutional Memory
Ultimately, the shift to systems of reasoning offers finance leaders a chance to solve a problem older than software itself: the loss of institutional knowledge.
In the past, when a Controller or FP&A leader left the company, their mental model of the business walked out the door with them. They knew why the gross margin dipped in Q3 or why a specific accrual was necessary, but that logic was rarely captured in the ledger.
By moving toward a semantic vault and carefully architecting where logic resides, whether embedded in the ERP or centralized in a data platform, companies are doing more than automating tasks. They are digitizing the decision-making process itself. The winning finance organizations of the next decade will be those that successfully capture this logic, turning transient human judgment into permanent, scalable, institutional memory.
