Imaging Isn’t Done When the Hash Matches

January 2, 2026
By Stephen Bunting

I want to be clear at the outset: this is not a criticism of FTK Imager or any other forensic imaging tool. I have used FTK Imager since it first became available—more than two decades ago—and continue to trust it. What follows is not a software failure story. It is a process failure story, rooted in physics, hardware fatigue, and human assumptions that software cannot see or correct.

Over the course of my career, I have encountered this scenario twice. The two events were separated by more than a decade, involved different examiners, different environments, and different cases. In both instances, the forensic image completed “successfully.” Hashes matched. The image previewed cleanly. Nothing appeared wrong—until much later, when it became clear that large portions of the acquired data consisted entirely of zero-filled space.

[Image: FTK Imager verification results showing matching MD5 and SHA1 hashes and no reported bad blocks]
Figure 1. A completed forensic image reporting matching hashes and no bad blocks. This is the confirmation most examiners rely on before moving forward.

Case One: The Image That Looked Fine

In the first instance, a seasoned examiner traveled across the country to acquire a laptop. The image was completed in the field, previewed, and appeared normal. Hash values matched. The examiner returned with confidence that the acquisition was sound.

When the image was later examined more closely, it became clear that many files were unreadable—filled with zeros. Not every file was affected, but enough were compromised to render the image unusable for meaningful analysis. The metadata was present. Directory structures looked intact. But the underlying file content simply was not there.

The examiner was understandably stunned. He had previewed the image. It looked good. The software reported success. Yet the data told a different story.

Case Two: Five Terabytes of Nothing

The second instance occurred many years later, in 2024, during active litigation. I was brought into the matter long after the original acquisition had been completed by a large e-discovery firm. The device in question was an 8 TB drive. Of that, approximately 5 TB consisted of zero-filled space.

At first glance, this did not immediately raise alarms. The custodian had been actively scrubbing data. Missing information was expected. Keyword searches returned few results, which aligned with the theory that relevant data had been deleted.

But that expectation itself became the blind spot.

No one had questioned whether the absence of data reflected deletion by the user—or silent failure during acquisition. For more than two years, multiple examiners worked on this case with the assumption that the image was sound because it had completed successfully and passed hash verification. No one looked deeper.

The downstream consequences became apparent only years later, when I discovered and reported that the original acquisition had failed to collect approximately five terabytes of data. The implications required no elaboration.

What Actually Failed (and What Didn’t)

In neither case was the failure attributable to the imaging software. The tools behaved as designed. What failed was the physical layer.

Hard drives heat up during sustained read operations. Cables flex. Connectors fatigue. Write blockers—especially well-used or abused equipment—experience micro-disconnects that may not register as catastrophic failures. When this happens, the imaging process may continue, filling unreadable sectors with zeros while still producing a consistent, verifiable image file.

The result is an image that is internally consistent but externally untrue.

Crucially, much of the metadata that gives an image the appearance of validity lives at the beginning of the drive. Directory structures, file names, and allocation tables are acquired early—before thermal stress and hardware instability are most likely to manifest. Previewing the image reinforces confidence at precisely the wrong moment.

Why This Goes Undetected

Several factors conspire to keep these failures hidden:

  • Software reports successful image completion.
  • Hashes confirm consistency, not correctness.
  • The tool reports no bad blocks.
  • Preview tools surface metadata, not underlying content integrity.
  • Keyword searches return no hits in zero-filled space, reinforcing expectations.
  • High-volume workflows reward completion, not skepticism.
  • Scenario fulfillment takes hold: the image completed, therefore it must be good.
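The distinction between consistency and correctness is easy to demonstrate. The sketch below is a simplified model, not any tool's actual pipeline: it simulates an acquisition in which one sector becomes unreadable and is written as zeros. Because the verification hash is computed over the acquired image, the image verifies cleanly even though it no longer matches the source.

```python
import hashlib

SECTOR = 512  # a typical logical sector size, in bytes

def acquire(source: list[bytes], bad: set[int]) -> bytes:
    """Simulate imaging: unreadable sectors are written out as zeros."""
    out = bytearray()
    for i, sector in enumerate(source):
        out += b"\x00" * SECTOR if i in bad else sector
    return bytes(out)

# A four-sector "drive"; sector 2 becomes unreadable mid-acquisition.
source = [bytes([i]) * SECTOR for i in range(4)]
image = acquire(source, bad={2})

# Verification hashes the image at acquisition time, then re-hashes the
# same image later. It confirms the copy is internally consistent --
# not that it matches what was actually on the drive.
acq_hash = hashlib.md5(image).hexdigest()  # stored when the image completes
ver_hash = hashlib.md5(image).hexdigest()  # recomputed at verification

print("hashes match:", acq_hash == ver_hash)               # True
print("image matches source:", image == b"".join(source))  # False
```

The hash comparison passes because both sides of it describe the same zero-filled image; nothing in that check ever touches the original media again.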

In litigation contexts, this is especially dangerous. Silence is often interpreted as absence. In reality, silence may be the artifact.

The Imaging Log: Where the Truth Lives

In both cases, the warning signs were present—but they lived in a place that is too often ignored: the imaging log.

Imaging logs routinely record read errors, retries, sector timeouts, and other low-level anomalies that do not necessarily cause an acquisition to abort. In high-volume workflows, logs are frequently archived, skimmed briefly, or not reviewed at all once an image reports successful completion and hashes verify.

[Image: FTK Imager acquisition log indicating unreadable sectors replaced with zeros on an 8 TB drive]
Figure 2. Imaging log from an 8 TB drive showing unreadable sectors replaced with zeros—amounting to more than five terabytes of missing data, despite a successful image completion.

That is where the devil hides.

A careful review of the logs in these cases revealed indicators that something was not right during acquisition. The software had not failed, but it had clearly struggled. Those details mattered—but only if someone took the time to look for them.

[Image: FTK Imager log from earlier case showing unreadable sectors replaced with zeros]
Figure 3. Imaging log from an earlier, unrelated case showing unreadable sectors replaced with zeros under different conditions and equipment.

When examiners stop at the success banner, the log becomes an afterthought. When they read the log, the story often changes.
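A first-pass log triage can even be automated. The sketch below greps an imaging log for common anomaly phrases; the patterns and the sample log lines are illustrative only, since the actual wording varies by tool and version, and no script replaces reading the log itself.

```python
import re

# Illustrative patterns -- real log wording varies by tool and version.
ANOMALY = re.compile(
    r"(unreadable|bad sector|read error|retry|timeout|replaced with zero)",
    re.IGNORECASE,
)

def scan_log(lines):
    """Return (line_number, text) for every anomalous line in an imaging log."""
    return [(n, line.strip()) for n, line in enumerate(lines, 1)
            if ANOMALY.search(line)]

# A hypothetical log excerpt, not output from any specific tool.
sample = [
    r"Acquisition started: \\.\PhysicalDrive1",
    "Read error at sector 10485760, retrying",
    "Sector 10485760 unreadable, replaced with zeros",
    "Image created successfully. Hashes verified.",
]
hits = scan_log(sample)
for n, text in hits:
    print(f"line {n}: {text}")
print(f"{len(hits)} anomalies found")
```

Even a crude pass like this would have flagged both cases above long before anyone ran a keyword search against the image.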

The Takeaway

The lesson here is not to distrust tools. It is to distrust assumptions.

Imaging is not finished when the hash matches. Verification must extend beyond completion banners and previews. Cables should be treated as consumables. Hardware should be rotated aggressively. And when large gaps or absences appear—especially when they conveniently align with expectations—they deserve scrutiny, not dismissal.
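One cheap extra check is to measure how much of a completed image is actually zero-filled before treating it as sound. A minimal sketch over a raw (dd-style) image follows; the block size is an arbitrary choice, and what counts as an alarming percentage is a judgment call for the examiner, not a fixed threshold.

```python
def zero_fill_report(path: str, block: int = 1 << 20):
    """Count how many bytes of a raw image are zero-filled, block by block."""
    zero_block = b"\x00" * block
    total = zeros = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
            if chunk == zero_block[:len(chunk)]:
                zeros += len(chunk)
    return zeros, total

# Build a small stand-in "image": 3 MiB of data, 2 MiB of it zeros.
with open("demo.raw", "wb") as f:
    f.write(b"\xab" * (1 << 20))
    f.write(b"\x00" * (2 << 20))

zeros, total = zero_fill_report("demo.raw")
print(f"{zeros / total:.0%} of {total} bytes are zero-filled")
```

A high percentage is not proof of failure—a scrubbed or sparsely used drive will legitimately read this way—but a number like "62% zeros" on a drive believed to be full is exactly the kind of absence that deserves scrutiny rather than dismissal.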

Software will always tell you when it succeeds. It cannot tell you when physics intervenes.

Always, always, read the logs!

That responsibility remains ours.
