Insights

Batch vs. Continuous: The Reconciliation Model Your Treasury Team Didn't Know It Was Choosing

Trezh · 2026-03-11 · 6 min read

Every treasury team that reconciles intercompany transactions has made an architectural decision — most without realizing it. They've chosen batch processing. Not because someone evaluated the alternatives and selected batch as the optimal model. Because batch is what the tools default to, and nobody questioned it.

Bank statements arrive once a day, sometimes once a week. The TMS ingests them on a schedule. Matching runs happen at defined intervals. Discrepancies are compiled into exception reports that land on someone's desk at month-end. The entire workflow assumes that reconciliation is a periodic event — something you do after the fact, in arrears, on a cadence dictated by data availability and process convention.

This isn't wrong in the way that a broken system is wrong. It's wrong in the way that processing insurance claims by mail in 2026 would be wrong. The underlying model was designed for a world with different constraints, and those constraints have changed. The model hasn't.

How batch became the default

Batch processing in treasury reconciliation isn't arbitrary. It reflects real historical limitations.

Twenty years ago, bank statement data arrived via SWIFT MT940 files delivered once daily — sometimes less frequently. ERP systems exported intercompany data in scheduled batch runs. Network connectivity between systems was limited and expensive. Processing power was scarce enough that running matching algorithms continuously would have been impractical.

In that world, batch reconciliation made perfect sense. You collected all available data, ran your matching logic against it, produced your exceptions, and worked through them. The monthly close was the natural aggregation point where everything came together.

The problem is that every one of those constraints has evaporated. Banks now offer intraday reporting and real-time APIs. ERP systems can publish events as they occur. Cloud compute makes continuous processing trivially cheap. The data can flow continuously. The matching can happen continuously. But the process still operates as if data arrives once a month in a manila envelope.

What batch processing actually costs you

The cost of batch reconciliation isn't just the analyst hours spent on month-end close. It's the compound cost of delayed discovery.

Consider a straightforward scenario. On March 3rd, Entity A in Brazil records an intercompany payable to Entity B in Argentina. The amount is BRL 12.4 million. Entity B records the corresponding receivable on March 5th, but at an FX rate that produces a $47,000 variance from what Entity A booked.

In a batch model, this discrepancy sits undetected until the monthly matching run — typically in the first week of April. By then, three to four weeks have passed. The treasury analyst investigating it needs to reconstruct what rate was used, why it differed, whether a hedge was in place, and whether the difference represents a real economic exposure or a booking error. The people who initiated the transaction may not remember the specifics. The FX desk may have rotated. The email that explained the adjusted rate is somewhere in a thread with 40 replies.
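The arithmetic behind a variance like that is simple. The article doesn't state the actual rates, so the figures below are illustrative assumptions chosen to reproduce a roughly $47,000 gap:

```python
# Hypothetical illustration: the same BRL 12.4M intercompany flow booked
# at two different FX rates. Both rates are assumptions, not figures
# from the article.

amount_brl = 12_400_000

rate_entity_a = 5.0000   # BRL per USD used by Entity A on March 3rd (assumed)
rate_entity_b = 5.0966   # BRL per USD used by Entity B on March 5th (assumed)

usd_a = amount_brl / rate_entity_a
usd_b = amount_brl / rate_entity_b

variance_usd = usd_a - usd_b   # roughly $47,000 with these assumed rates
print(f"Entity A books: ${usd_a:,.0f}")
print(f"Entity B books: ${usd_b:,.0f}")
print(f"Variance:       ${variance_usd:,.0f}")
```

A rate difference of under 2% is easy to miss in isolation, which is exactly why the variance sits undetected until something forces a comparison.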

Now multiply this by every entity pair, every corridor, every month. The investigation cost per discrepancy increases with the time delay. The probability that an analyst resolves it correctly decreases. And the audit trail — to the extent one exists — is a patchwork of after-the-fact documentation assembled under the pressure of a close deadline.

This isn't hypothetical. At companies with 20 or more entities operating across multiple currencies, the first week of every month is consumed by this exact process. The headcount dedicated to it is real. The opportunity cost of those people not doing higher-value work is real. And the risk of discrepancies slipping through — resolved incorrectly or simply accepted as immaterial — is real.

What continuous matching actually means

Continuous matching is not "faster batch." It's a different operational model.

In a continuous model, transaction data is processed as it arrives. When Entity A's payable is recorded in the ERP and ingested by the reconciliation system, the system immediately checks whether a corresponding entry from Entity B exists. If it does, the pair is evaluated: Do the amounts match? Is the FX rate within tolerance? Are the entity references consistent? If everything checks out, the transaction is marked as matched and the audit trail is written. If something is off, it surfaces immediately — not in an exception report four weeks later.

The practical implications of this shift are more significant than they appear.

Investigation while context is fresh. When a discrepancy surfaces on March 3rd instead of April 5th, the people involved still remember the transaction. The FX rate is still relevant. The communication trail is still active. The cost of resolution drops dramatically — not because the system does the investigation, but because it triggers it at the right time.

Unmatched transactions become visible immediately. In a batch model, a missing counterparty leg — where Entity A has booked a transaction but Entity B hasn't — is invisible until the matching run. In a continuous model, the system can flag that Entity A's payable has been sitting without a corresponding receivable for 48 hours. That's a signal. Maybe Entity B hasn't booked it yet. Maybe the transaction was cancelled. Maybe there's a communication gap. Whatever the reason, it's visible now, not at month-end.
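The 48-hour check described above reduces to a simple aging pass over the open entries. A minimal sketch, assuming each open leg carries its booking timestamp:

```python
from datetime import datetime, timedelta

MAX_OPEN_AGE = timedelta(hours=48)   # threshold from the article's example

def flag_stale_legs(open_entries: dict, now: datetime) -> list[str]:
    """Return references whose single leg has waited past the threshold
    without a counterparty booking. open_entries maps reference -> booked_at
    (a simplifying assumption for this sketch)."""
    return [
        ref for ref, booked_at in open_entries.items()
        if now - booked_at > MAX_OPEN_AGE
    ]

# Entity A's payable booked on March 3rd and still unmatched on March 6th
open_entries = {"IC-2026-0312": datetime(2026, 3, 3, 9, 0)}
stale = flag_stale_legs(open_entries, now=datetime(2026, 3, 6, 9, 0))
# stale identifies the missing counterparty leg three days in, not at month-end
```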

Month-end close becomes confirmation, not discovery. This is the operational shift that matters most to the treasury team. In a batch model, month-end close is when you discover what happened. In a continuous model, month-end close is when you confirm that what you already know is complete. The first week of April stops being a reconciliation sprint and starts being a review of a process that's been running all month.

The audit trail writes itself. When matching happens continuously, every match decision, every state change, every flag, and every resolution is recorded in sequence as it occurs. There's no after-the-fact reconstruction. The audit trail is a byproduct of the process, not a separate documentation exercise.
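In code terms, "the audit trail is a byproduct" can be as simple as appending one immutable record per state change as the engine acts. A sketch, with the event names and payload shape as assumptions:

```python
import json
from datetime import datetime, timezone

def record_event(log: list, event_type: str, payload: dict) -> None:
    """Append one audit record at the moment the match engine acts,
    so the trail accumulates in arrival order as processing happens."""
    log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "payload": payload,
    })

audit_log = []
record_event(audit_log, "entry_ingested",
             {"ref": "IC-2026-0312", "entity": "BR01"})
record_event(audit_log, "pair_evaluated",
             {"ref": "IC-2026-0312", "flags": ["fx rate outside tolerance"]})

# Serialising the log yields the period's audit trail with no
# after-the-fact reconstruction step
trail = "\n".join(json.dumps(e) for e in audit_log)
```

A production system would persist this to an append-only store rather than a list, but the principle is the same: the record is written when the decision is made, not assembled later.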

The objection: "Our data doesn't arrive continuously"

This is the most common pushback, and it's partially valid. Not every bank delivers real-time statements. Not every ERP publishes events as they occur. Many companies still receive MT940 files on a daily or weekly schedule.

But continuous matching doesn't require real-time data from every source. It requires processing data as it arrives, whatever the cadence. If bank statements arrive daily, match daily. If ERP exports run twice a day, process twice a day. If one bank offers intraday reporting while another delivers weekly, handle each on its own schedule.

The key architectural difference isn't the frequency of data arrival — it's what happens when data arrives. In a batch system, incoming data sits in a staging area until the scheduled matching run. In a continuous system, incoming data is processed immediately, matched against whatever counterparty data already exists, and flagged if something is missing or inconsistent.

The practical result is that even with imperfect data frequency, discrepancies surface days or weeks earlier than they would in a monthly batch cycle. You don't need real-time banking to get 80% of the value of continuous matching. You just need a system that doesn't wait until month-end to do its job.

The transition isn't as hard as it sounds

The word "continuous" sounds like it requires a massive infrastructure overhaul — real-time APIs everywhere, event-driven architecture end-to-end, a complete rethinking of how data flows through the organization.

It doesn't. Not at the start.

The same file-based data sources that feed a batch process can feed a continuous one. The difference is on the processing side, not the data delivery side. An MT940 file that arrives every morning at 6 AM can be ingested and matched at 6:01 AM instead of sitting in a queue until the monthly run. An ERP export that runs nightly can be processed nightly. The data sources don't need to change. The system that consumes them does.
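That "6:01 AM instead of month-end" shift can be as small as triggering the matcher whenever a file lands in a drop directory. A minimal sketch; the directory layout and the `.mt940` naming are assumptions:

```python
from pathlib import Path

INBOX = Path("statements/inbox")          # assumed drop directory
PROCESSED = Path("statements/processed")  # assumed archive directory

def process_new_files(match_fn) -> None:
    """Feed each statement file to the matcher as soon as it appears,
    instead of staging it for a scheduled monthly run. match_fn is
    whatever matching entry point the reconciliation system exposes."""
    PROCESSED.mkdir(parents=True, exist_ok=True)
    for f in sorted(INBOX.glob("*.mt940")):
        match_fn(f.read_text())            # run matching immediately
        f.rename(PROCESSED / f.name)       # mark the file as consumed
```

Run on a short timer (or wired to a filesystem watcher), this consumes exactly the same daily MT940 delivery a batch process would, but the matching happens minutes after arrival rather than weeks.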

Over time, as confidence grows and the value becomes clear, the integration can deepen. Scheduled file delivery becomes automated file delivery. Automated delivery becomes API integration. API integration becomes event-driven streaming. Each step increases the frequency and reduces the latency — but the first step is just processing what you already have, faster.

The question isn't whether to switch. It's when.

The treasury teams that have moved to continuous matching — or are actively evaluating it — share a common profile. They have enough entities and enough intercompany volume that the month-end reconciliation burden is a measurable cost center. They operate across currencies and jurisdictions where FX discrepancies are frequent enough to require real investigation. And they've hit the ceiling of what their current process can handle without adding headcount.

If that sounds like your team, the batch model isn't just suboptimal. It's actively costing you — in analyst hours, in delayed discovery, in audit risk, and in the opportunity cost of a treasury team that spends its first week of every month looking backward instead of forward.


Trezh is building AI-powered intercompany reconciliation infrastructure for multinational treasury teams.