Well, 2016 is almost over and it’s had no shortage of “cyber” events. Right now you might be thinking about the 200 million Yahoo accounts that were compromised, the Democratic National Committee hack, or the warnings about potential election hacking. Compromises of confidentiality, integrity, and availability are so commonplace now that the average person is probably pretty desensitized to the whole thing – even when they are the victim.
The irony is that average people – you and I – continue to be the single largest threat vector for organizations. By some estimates, up to 70 percent of breaches stem from insider threats, most of them unintentional. That’s why finding a way to help individuals understand their role in cybersecurity is a challenging but crucial effort.
As I’ve mentioned in other articles on insider threat, it’s hard for people to internalize rules that don’t have any immediate bearing on their welfare or safety. So, what can the C-Suite do about it? Training is the obvious answer, but consider a recent study of security-savvy internet users and how often they violated their own rules.
Dr. Zinaida Benenson and researchers from the computer science department at Friedrich-Alexander University sent 1,700 FAU students emails or Facebook messages under a false name, with text claiming that a link would take the recipient to pictures from a recent party. In a second variation, the message did not address the recipient by name as the first had, but it gave more specific information about the supposed photos behind the link (e.g. a New Year’s party). Afterward, subjects were surveyed to self-assess their own awareness of online security and then asked why they did or did not click on the link.
In the variation that addressed the recipient by name, 56 percent of email and 38 percent of Facebook recipients clicked the link. In the nameless variation, clicks from email recipients dropped to 20 percent while clicks from Facebook users rose to 42 percent. The survey results indicated 78 percent of participants were aware of the risk of accessing links from unknown sources. Yet only 20 and 16 percent of participants in the respective studies admitted to clicking the unknown links, whereas the technical logs showed 45 and 25 percent (overall) had actually clicked in each study. The main reason given – curiosity.
This example should send chills up the spine of any network administrator – a simple phishing attack on an organization of any size is almost guaranteed to net at least one wayward user, even if they’ve passed their quarterly training.
So again, what can the C-Suite or managers in charge of protecting the realm do? You can never fully eliminate risk but here are four solid ways you can reduce it:
First and foremost, define what you need to defend. Is everything super sensitive and critical to your business? Or, can you narrow down critical assets to processes, products, property, and the like? This is an essential element of any insider threat program – consider who the various stakeholders are and what they define as critical assets, be they physical or virtual. Once the organization has defined what is critical (don’t forget people!) it can prioritize defense and mitigation mechanisms.
Next, you’ve heard it before and I’ll say it again – assign least privilege to users and programs and enforce separation of duties. Least privilege does everyone a favor. Not only does it act as a layer of defense, but to a degree, it protects the user from even looking like they tried to violate company protocols, accidentally or otherwise. Separation of duties is similar: it reduces the power of any one user to make significant changes to the system and can be viewed as reducing culpability. Will this mean it sometimes takes a bit longer to get things done? Yes, but would you rather be able to access or modify item X right now at the potential cost of losing credibility with your client? Security and speed always seem to be at odds, but in hindsight the correct choice always seems obvious.
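To make the idea concrete, here is a minimal sketch of what least privilege and separation of duties can look like in application code. The role names, permissions, and helper functions are assumptions invented for this example; in practice you would lean on the access controls already built into your operating system, directory service, or cloud identity platform rather than hand-rolled checks.

```python
# Illustrative sketch only: roles, permissions, and helpers are hypothetical.

# Each role gets only the permissions it needs to do its job (least privilege).
ROLE_PERMISSIONS = {
    "analyst":  {"read_reports"},
    "editor":   {"read_reports", "draft_changes"},
    "approver": {"read_reports", "approve_changes"},
}

def can_perform(role: str, action: str) -> bool:
    """Allow an action only if the role was explicitly granted it."""
    return action in ROLE_PERMISSIONS.get(role, set())

def approve_change(requested_by: str, approved_by: str) -> bool:
    """Separation of duties: whoever drafts a change cannot also approve it."""
    return requested_by != approved_by

if __name__ == "__main__":
    print(can_perform("analyst", "approve_changes"))  # False: never granted
    print(approve_change("alice", "alice"))           # False: same person
    print(approve_change("alice", "bob"))             # True: duties separated
```

The point is simply that every permission is granted explicitly, and no single person can both request and approve a sensitive change.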
Point three, do not assume employees will report anomalies, be they indicators of insider threat, “cyber” events, or social engineering attempts. Some will, some won’t, and there are a variety of valid behavioral reasons for the latter. Obviously, we should still train employees on threat indicators and means of reporting, and consider alternate reporting streams, but we shouldn’t expect this to replace systems that detect, monitor, and mitigate threats. I’m talking about firewalls, intrusion detection systems, well-designed networks, secured ports, logs of device access to the network, user logs, and rule-based log analysis (e.g. Splunk). Some of these are lower lift than others, but few of them are subject to the biases of human nature, which means you’ll get consistent results.
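As a rough illustration of what rule-based log analysis means in practice, here is a minimal sketch that flags repeated failed logins from a single source address. The record fields, threshold, and time window are assumptions made up for this example; a tool like Splunk would express the same kind of rule as a search over your actual log format.

```python
# Illustrative sketch only: field names, threshold, and window are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 5        # alert after this many failures...
WINDOW = timedelta(minutes=10)    # ...within this time window

def flag_repeated_failures(events):
    """Return source IPs with too many failed logins inside the window."""
    failures = defaultdict(list)
    for event in events:
        if event["action"] == "login_failed":
            failures[event["src_ip"]].append(event["time"])

    flagged = set()
    for ip, times in failures.items():
        times.sort()
        for i, start in enumerate(times):
            count = sum(1 for t in times[i:] if t <= start + WINDOW)
            if count >= FAILED_LOGIN_THRESHOLD:
                flagged.add(ip)
                break
    return flagged

if __name__ == "__main__":
    now = datetime.now()
    sample = [{"time": now + timedelta(seconds=s),
               "src_ip": "10.0.0.7",
               "action": "login_failed"} for s in range(6)]
    print(flag_repeated_failures(sample))  # {'10.0.0.7'}
```

Even a simple rule like this runs the same way every time, regardless of whether anyone remembers to report something odd.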
Last, but certainly not least, when you have all of this in place – test it. Test it often. In my military days we had something called an after action review (AAR). During an AAR, we talked about what went right and what went wrong during testing, and how we would correct those errors moving forward. It’s not enough to just test and read a report – have an AAR with stakeholders, then take steps to make systems better.
When you do test, make sure to include stakeholders from security, IT, and human resources at a minimum. You might want to include social engineering as part of the test, both the classic (e.g. holding the door and following someone inside a secure space) and the newer versions (e.g. phishing).
As the cost and time required to enter the cyber criminal market decline, and critical assets become increasingly information based, the threat to businesses will only become more commonplace. Take the time not only to secure your networks, but to build resilience within your processes AND your people. Because only they – and you – can secure the network.