Kelly Friedman

Partner

April 30, 2026

 

Digital evidence often looks authoritative. A system-generated report, an email chain, a spreadsheet, or a chat export can appear objective and complete. But in litigation, the real question is not whether digital evidence looks reliable. It is whether its reliability can be proven.

 

As litigation becomes more data-heavy, and as AI-generated or AI-assisted content becomes more common, the reliability of digital evidence deserves closer attention. A digital record is not trustworthy simply because it looks technical, precise, or system-generated. Its reliability depends on the system that created it, the controls that preserved it, and the surrounding information that allows it to be tested.

 

The consequences of assuming the reliability of digital evidence can be disastrous. Few examples make the point more clearly than the UK Post Office Horizon scandal. For years, the UK Post Office relied on the Horizon computer system to identify supposed accounting losses in local branches. The system was presumed accurate. Those losses were treated as evidence of wrongdoing, leading to prosecutions and devastating personal consequences for sub-postmasters. It later emerged that the software itself was flawed, contributing to one of the gravest miscarriages of justice in modern British legal history. The central failure was not only technological but evidentiary: the system was trusted before it was properly tested.

 

Horizon is not just a foreign cautionary tale. It exposes a risk that exists in Canadian evidence law. Under the Canada Evidence Act, an electronic record may be presumed authentic and accurate where the underlying system is shown to have operated properly. That rule serves an important function. Modern cases cannot grind to a halt while every electronic record is proven from first principles. But the shortcut has consequences. In practice, the party relying on the record often controls the system that generated it, while the opposing party may see only the output, not the information needed to assess whether the system was functioning properly, whether the record changed, or whether crucial context is missing.

 

That is why metadata matters so much. Metadata is often dismissed as technical detail, when in fact it is central to reliability. It can show when a record was created, modified, transmitted, or stored. Audit trails may reveal who accessed it, what changed, and when. System documentation may help establish whether the record emerged from a stable and controlled process. Without that surrounding information, a digital record may appear clear while being harder to authenticate, harder to place in context, and harder to challenge.

 

For organizations, this is not merely a litigation problem. It is an information governance issue. Records are more defensible when the underlying systems are well managed, when key metadata is preserved, and when there is a reliable way to explain how important information was created and maintained. In a dispute, that preparation can materially affect the strength of the evidence.

 

AI makes the issue more urgent, not because it changes the legal question, but because it sharpens it. AI-assisted outputs can appear polished, objective, and data-driven while raising new concerns about inputs, validation, limitations, and reproducibility. The right question is rarely whether every internal step of the system can be fully explained. More often, the question is whether the result can be meaningfully tested. What data went in? Was it complete? Was the output validated? Were limitations disclosed? Can the result be checked against the source material?

 

Those same questions become especially important where experts rely on AI tools. An expert opinion is only as sound as the material and methodology behind it. If an AI tool is fed selective, incomplete, or unreliable information, the resulting analysis may be equally flawed, however sophisticated it looks. The use of AI does not lessen the need for a proper evidentiary foundation. It makes that foundation more important.

 

The practical lesson is straightforward. Counsel should ask early for the metadata, audit history, and system information that make digital records testable. Organizations should ensure that important records can later be explained and defended. Where AI is involved, attention should remain fixed on inputs, methodology, validation, and limits.

 

The central problem is not simply new technology. It is the tendency to confuse system output with trustworthy evidence. Electronic records do not prove themselves. Their reliability must be demonstrated, and that will often require demanding from the producing party the metadata, audit history, and system context needed for meaningful scrutiny.

 
