Last week, the UK Information Commissioner’s Office announced that it was fining the Independent Inquiry into Child Sexual Abuse (IICSA) £200,000 (over $260,000 US) for a 2017 data breach.
The breach exposed the email addresses, and in some cases the full names, of 90 individuals. The information of these people, presumably sexual abuse victims and family members, was released through mishandled email notifications that used the “To” field instead of the “BCC” field.
There’s been a fair amount of coverage on this, with reactions ranging from shock at the size of the fine to discussion of how the vendor or the organization should have handled the situation differently. But I want to focus on something else, something relevant to every organization with privacy and security concerns – that is, every organization.
There is no technical fix to this type of breach, and there’s no way to apply technology to solve this problem. That’s important enough to bear repeating.
There is no technical fix to this type of breach, and there’s no way to apply technology to solve this problem.
Oh, it’s possible to create a system that couldn’t have sent the notification this way. And that’s probably something the IICSA should have done, especially given the sensitive nature of their communications.
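To make that concrete, here's a minimal sketch (in Python, purely illustrative — the original system and its design are not public) of what such a notification sender might look like. The idea is structural: the API only ever builds one message per recipient, so there is no bulk "To" list that a hurried operator could expose by mistake.

```python
from email.message import EmailMessage

def build_notifications(sender, subject, body, recipients):
    """Build one message per recipient, so no recipient list can ever
    appear in a visible header. The To-vs-BCC mistake becomes
    structurally impossible rather than a matter of operator care."""
    messages = []
    for addr in recipients:
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = addr  # exactly one recipient is ever visible
        msg["Subject"] = subject
        msg.set_content(body)
        messages.append(msg)
    return messages

# Each generated message exposes only its own recipient's address.
msgs = build_notifications(
    "inquiry@example.org",          # hypothetical sender address
    "Update",
    "A hearing date has changed.",
    ["a@example.org", "b@example.org", "c@example.org"],
)
```

The design choice here is the point: instead of trusting every operator to pick the right header field, the safe behavior is the only behavior the tool offers.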
But at its core, this breach and the resulting fine were caused by an individual just doing their job to the best of their ability and in good faith – and it just went horribly wrong.
It’s probably impossible to completely eliminate that risk, but there are ways to reduce it.
Have staff who are empowered to do their jobs, with all the right tools available. Foster a culture, not just of due caution and care for personal privacy and sensitive information, but of cooperation and compassion. Make uncertainty and hesitation reasons to seek assistance and confirmation, rather than just compounded sources of stress.
Periodically review processes and procedures and test not just for the expected failure scenarios, but the unexpected as well. Sure, there are procedures in the manual for handling a fire – that’s why you have fire drills – but how would your team respond to a sudden infestation of wombats?
Angry wombats.
I bet your disaster manual doesn’t even have a “marsupials” section.
The point is, whenever one of these incidents occurs, it generally turns out that acceptable solutions were available the whole time. What makes the incident serious is not a failure of technology, or process, or regulatory control; it’s that the people involved didn’t have the tools or knowledge to respond appropriately, and weren’t comfortable seeking help from others who might.
In information security we tend to be uncomfortable with cultural solutions. Problems should be technical or, failing that, procedural. And if that fails, well, it’s time to find someone to blame.
That doesn’t work. It never has, and it never will.
We need to think beyond the firewalls and check-box compliance programs and start focusing on risk reduction, impact mitigation, and building a culture of care that puts solving problems and optimum outcomes over and above the assignment of blame.
It’s time to stop celebrating individual heroics and hunting for someone to fault. It’s time to work together, combining skills and experiences to improve the way we do things in this industry. Before the next accident that harms victims of abuse through wayward email.
Or a thundering herd of wombats.