Case Files

Case File: Everything Encrypted by Morning

At 6:14 AM, a trading company's finance manager received an email from her own address. Three people opened it before anyone questioned it. By 7:30 AM, the network was gone. This is the story of what happened next.

SysOps Team
April 6, 2026
8 min read

First Contact

The email arrived at 6:14 AM.

It came from their own finance manager's address, which is why three people opened it before anyone thought to question it. The subject line read: Updated Payment Terms, Q2 Suppliers. The attachment was a PDF. Or something that looked like one.

By 7:30 AM, the network was gone.


We got the call at 8:02 AM. The voice on the other end belonged to the operations director of a local trading company — fifty-three staff, two warehouses, a head office. He had been in the building since 7:45, walking floor to floor, watching people stare at screens showing the same message on repeat.

Your files have been encrypted. To recover access, follow the instructions below.

He read us the ransom note over the phone. We were on site by 9:15.

The first thing you notice when you walk into an active ransomware incident is the quiet. No keyboards. No phones ringing. Staff sitting at desks with nothing to do — not because the day hasn't started, but because every system they use to do their jobs has been taken from them simultaneously. It is a specific kind of organisational paralysis that you don't fully appreciate until you've stood inside it.

We asked everyone to stop touching their machines. Then we went to the server room.


What the Malware Had Done

Ransomware at this tier follows a playbook. It isn't random. It is patient, sequential, and surgical about what it targets first.

The initial compromise — the opened attachment — had executed a macro-enabled script that established a persistent connection to an external command-and-control server. That connection was made over port 443, standard HTTPS traffic, which the firewall had no reason to flag. From that foothold, the malware spent the next hour moving quietly through the network: enumerating file shares, identifying backup locations, escalating privileges using a known vulnerability in an unpatched Windows service, and mapping every domain-joined machine.
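
Port 443 is exactly why the firewall stayed quiet: from the network's side, the beacon looked like ordinary HTTPS. What gives this pattern away is the process holding the connection, which is host-level telemetry rather than perimeter telemetry. The sketch below is a minimal illustration of that idea, assuming the psutil library is installed; it stands in for what a proper EDR agent does continuously, not a tool we deployed in this incident.

```python
# Minimal sketch: list established outbound TCP connections to port 443 along
# with the owning process. Assumes psutil is installed; an EDR agent does this
# continuously and correlates destinations against known-good infrastructure.
import psutil

def outbound_https_connections():
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        # Only established connections with a remote endpoint on port 443
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.port != 443:
            continue
        proc_name = "unknown"
        if conn.pid:
            try:
                proc_name = psutil.Process(conn.pid).name()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        findings.append((proc_name, conn.pid, f"{conn.raddr.ip}:{conn.raddr.port}"))
    return findings

if __name__ == "__main__":
    # An Office process or script host holding a long-lived HTTPS connection
    # to an unfamiliar address is the pattern this incident produced.
    for name, pid, remote in outbound_https_connections():
        print(f"{name} (pid {pid}) -> {remote}")
```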

Then, at precisely 7:22 AM, it detonated.

The encryption ran in parallel across every mapped share and every reachable endpoint. Files were locked with keys that could only be recovered using a private key held on infrastructure the attackers controlled. The ransomware specifically targeted volume shadow copies, deleting them on every machine before encrypting. It targeted the backup server first, before the file server, before the workstations.

Whoever built this knew exactly what order to work in.

By the time the first employee arrived and saw the ransom note, 94% of the organisation's accessible data was already ciphertext.
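
The shadow-copy deletion is worth pausing on, because it leaves a loud trail. Commands like vssadmin delete shadows have almost no legitimate reason to run on a workstation, which makes them one of the cheapest early-warning signals available. A rough sketch of that check, assuming you export process-creation command lines to a plain text file (the file format here is an assumption, not what this company had), might look like this:

```python
# Illustrative sketch: scan an exported process-creation log (one command line
# per line) for commands ransomware commonly runs before encrypting.
# The input format is hypothetical; adapt it to whatever your endpoint
# logging actually exports, e.g. Sysmon process-creation command lines.
import re
import sys

PRECURSOR_PATTERNS = [
    r"vssadmin\s+delete\s+shadows",       # wipe volume shadow copies
    r"wmic\s+shadowcopy\s+delete",        # same operation via WMI
    r"wbadmin\s+delete\s+catalog",        # destroy the Windows backup catalog
    r"bcdedit\s+.*recoveryenabled\s+no",  # disable the recovery environment
]

def suspicious_lines(path):
    compiled = [re.compile(p, re.IGNORECASE) for p in PRECURSOR_PATTERNS]
    with open(path, encoding="utf-8", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if any(pattern.search(line) for pattern in compiled):
                yield lineno, line.strip()

if __name__ == "__main__":
    for lineno, line in suspicious_lines(sys.argv[1]):
        print(f"line {lineno}: {line}")
```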


Triage Before Recovery

The instinct in a ransomware incident is to start recovering immediately. That instinct is almost always wrong.

Before you recover anything, you contain. Before you contain, you understand. If you restore systems into an environment where the attacker still has persistence, you hand them back everything you just cleaned — usually faster than the first time.

We isolated the network at the perimeter first, cutting all external traffic. Then, segment by segment, we took systems offline, not powered off but disconnected from the network, preserving volatile memory where possible for forensic purposes. We identified patient zero, the finance manager's workstation, physically disconnected it, and bagged it for analysis.

We then asked three questions that every ransomware response hinges on.

First: was there a backup that the malware hadn't reached? Second: what data had left the network before encryption, because modern ransomware operators frequently exfiltrate before they encrypt, turning the incident into a double-extortion case? Third: were the attackers still inside?

The answer to the first question was complicated. The answer to the second was deeply uncomfortable. The answer to the third was yes.


The Backup Situation

The company ran a daily backup job to a NAS in the server room. The ransomware had encrypted it at 7:22 AM, same as everything else. They also had an older tape rotation — a practice the previous IT manager had insisted on — which most of the current staff hadn't known was still running. The tapes lived in a fireproof cabinet bolted to the wall in the warehouse manager's office.

Tape is slow. Tape is unglamorous. Tape had not been touched in months.

It was also entirely, perfectly, untouched by the ransomware.

The most recent viable tape was seventy-nine days old. That meant seventy-nine days of transactions, communications, contracts, and operational data simply didn't exist anymore in any recoverable form. The business would have to reconstruct what it could from email threads, supplier records, and memory. For a trading company that moves inventory daily, seventy-nine days is not a minor gap.

But it was not zero. And zero was what the attackers were banking on.


The Exfiltration Finding

The forensic image of the patient zero workstation told a story that started well before the morning of the attack.

The initial compromise hadn't happened through the attachment that morning. That attachment was the detonator, not the entry point. Six weeks earlier, the same machine had visited a convincing replica of a local supplier's web portal, entered credentials, and received what appeared to be a login error. In fact, the credentials had been harvested. The attacker had used those credentials to access the company's email system remotely, over legitimate channels, and spent six weeks reading internal communications before choosing their moment.

They knew which supplier names to put in the subject line. They knew which employee's address would be trusted. They knew the finance team opened external documents in the morning before the rest of the office was fully staffed.

They also knew where the sensitive documents lived, because they had read the emails discussing them.

Before the encryption ran, a staged archive of approximately twelve gigabytes had been exfiltrated to external infrastructure. Supplier pricing agreements, import documentation, and staff payroll records.
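
Twelve gigabytes leaving a fifty-three-person network is not subtle, but it is only visible if someone is measuring outbound volume at all. As a rough illustration only (the flow-log layout, the internal address range, and the threshold below are all assumptions, not anything this company was running), a daily check along these lines would have surfaced the staging traffic:

```python
# Rough sketch: total outbound bytes per internal host from a flow-log export
# and flag anything above a threshold. The CSV layout (src,dst,bytes), the
# 10.x internal range, and the 5 GB threshold are illustrative assumptions.
import csv
from collections import defaultdict

THRESHOLD_BYTES = 5 * 1024**3  # flag hosts sending more than ~5 GB outbound

def flag_heavy_senders(flow_csv_path, internal_prefix="10."):
    outbound = defaultdict(int)
    with open(flow_csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            src, dst = row["src"], row["dst"]
            # Count only traffic leaving the internal range
            if src.startswith(internal_prefix) and not dst.startswith(internal_prefix):
                outbound[src] += int(row["bytes"])
    return {host: total for host, total in outbound.items() if total > THRESHOLD_BYTES}

if __name__ == "__main__":
    for host, total in flag_heavy_senders("flows.csv").items():
        print(f"{host} sent {total / 1024**3:.1f} GB outbound")
```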

We reported this to the operations director in a private meeting with no one else in the room. He sat with it for a long moment. Then he asked who he needed to call.

We gave him the list.


The Recovery

Restoration from tape took three days of continuous work.

The sequence mattered. Domain controllers first, then DNS, then file services, then endpoints — each one verified clean before it touched the restored network. Every machine that had been domain-joined was assessed: those with confirmed encryption and no forensic value were wiped and reimaged from clean media. Credentials for every account, user and service, were reset across the board. The firewall was replaced and the rule set rebuilt from scratch — this time with outbound filtering, blocked execution paths for macro-enabled documents, and a SIEM receiving logs from every segment.

The email gateway was reconfigured with attachment sandboxing. Macro execution in Office applications was disabled by Group Policy across all workstations. Multi-factor authentication was enforced on every remote access point and on the email platform — a control that, had it been in place six weeks earlier, would likely have stopped the attacker's initial credential use cold.
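
The macro control is also easy to verify on an endpoint, because Group Policy writes it into the registry. A minimal compliance check, assuming Office 2016 or later (the 16.0 version key and value name below are the parts to confirm against your own Office build), looks something like this:

```python
# Sketch of a check for the "block macros from the internet" policy that
# Group Policy writes into the registry. Windows-only (uses winreg).
# The Office 16.0 key path and the value name are assumptions to verify
# against the Office version actually deployed.
import winreg

APPS = ["Word", "Excel", "PowerPoint"]
POLICY_TEMPLATE = r"Software\Policies\Microsoft\Office\16.0\{app}\Security"
VALUE_NAME = "blockcontentexecutionfrominternet"

def macro_policy_status():
    status = {}
    for app in APPS:
        key_path = POLICY_TEMPLATE.format(app=app)
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path) as key:
                value, _ = winreg.QueryValueEx(key, VALUE_NAME)
                status[app] = (value == 1)  # 1 means internet-sourced macros are blocked
        except FileNotFoundError:
            status[app] = False  # policy not set at all
    return status

if __name__ == "__main__":
    for app, blocked in macro_policy_status().items():
        print(f"{app}: {'blocked' if blocked else 'NOT blocked'}")
```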

Seventy-two hours in, the first workstation came back online. By the end of day four, the company was operational.

Not whole, not yet, but operational.


The Cost Nobody Talks About

The ransom demand was for forty thousand US dollars, payable in cryptocurrency, with a seventy-two-hour countdown before the amount doubled.

They didn't pay it.

Not because they were certain the attackers would honour the decryption. Not because they had taken a principled stand. They didn't pay because we told them the tapes existed on the first morning, and they made the call before the clock became a source of pressure. That decision — made early, before panic compounded the problem — is the thing that determined everything that followed.

The actual cost of the incident, when tallied — including lost productivity, consultant fees, hardware replacement, legal and compliance engagement, and staff overtime — came to more than twice the ransom demand. That is typical. It is almost always more expensive to recover than to prevent, and almost always cheaper than paying and hoping.

What doesn't get added to that figure is the seventy-nine days of data that simply ceased to exist. You can't invoice for what's gone. You can only decide what you'll do differently with what remains.


What They Did Next

Three months after the incident, the company had a tested backup strategy with off-site replication and a monthly restore drill. They had endpoint detection and response tooling across every machine. They had a documented incident response procedure — printed, laminated, posted in the server room and in the operations director's office. They had run a phishing simulation with their own staff and used the results to design a thirty-minute awareness session, delivered in person, not via an online module nobody finishes.
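
The restore drill is the part most organisations skip, and it is the control that catches the quiet failure this company lived through: backups that exist on paper but haven't been exercised in months. It doesn't need to be elaborate. As a sketch, with illustrative paths and a hypothetical manifest format, it can be as simple as restoring a sample set and comparing hashes against a manifest written at backup time:

```python
# Minimal sketch of a restore drill: restore a sample of files from backup
# into a scratch directory, then verify their hashes against a manifest
# captured at backup time. Paths and the manifest format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restore_dir, manifest_path):
    # Manifest is a JSON map of relative path -> expected SHA-256 hex digest.
    manifest = json.loads(Path(manifest_path).read_text())
    failures = []
    for rel_path, expected in manifest.items():
        restored = Path(restore_dir) / rel_path
        if not restored.exists() or sha256_of(restored) != expected:
            failures.append(rel_path)
    return failures

if __name__ == "__main__":
    failed = verify_restore("/tmp/restore-drill", "manifest.json")
    print("restore drill PASSED" if not failed else f"FAILED for: {failed}")
```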

They also called us before they signed any new software contract, any new vendor agreement, any new connectivity arrangement.

That last one matters more than any technical control. The question isn't only whether your systems are secure. It's whether you've built the habit of asking the question before something forces you to.

A ransomware attack is a violent answer to questions the organisation never thought to ask.

The quieter work — the audit, the segmentation, the tested backup — is how you make sure you never need to find that out firsthand.