Social Engineering: Humans are the Weakest Link

When it comes to security within an organization, humans are almost always the weakest link.

One reason social engineering is so effective is that it evades traditional IT countermeasures entirely.

Instead, the best defense against social engineering attacks is user training and awareness. When personnel understand the purpose of cybersecurity, how it supports strategic goals, and what impact it has, they can develop confidence in their ability to evaluate threats that seem out of place and contribute to the organization-wide goal of security.

It is reported that over 90% of all security incidents involve human error. Identifying a social engineering attack can be extremely difficult, so the first thing we need to know is what kinds of attack methods exist. With extensive security training and awareness, an organization can mitigate some of the risk that insider threats and social engineering pose.

Leadership has to set the tone for the organization and should model effective security habits based on sound guidelines.

All users can be trained to recognize common social engineering attacks. I recently visited a large financial company’s cybersecurity center and was told that, through persistent phishing exercises and clearly labeling all mail from non-company addresses as “External Source,” they have reduced their phishing failure rate to less than 10%.
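To make that labeling control concrete, here is a minimal sketch of how external-mail tagging might work at a mail gateway. The domain name, tag text, and addresses below are hypothetical placeholders, not the company’s actual configuration.

```python
# Minimal sketch of external-mail tagging, assuming a gateway that can rewrite
# subject lines. The internal domain and tag text are hypothetical placeholders.

INTERNAL_DOMAINS = {"example-corp.com"}   # assumed internal domain(s)
EXTERNAL_TAG = "[External Source]"

def tag_external_subject(sender_address: str, subject: str) -> str:
    """Prepend a warning tag to the subject if the sender is outside the organization."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain not in INTERNAL_DOMAINS and not subject.startswith(EXTERNAL_TAG):
        return f"{EXTERNAL_TAG} {subject}"
    return subject

# Example: a message from an outside address gets the visible warning
print(tag_external_subject("attacker@phish.example", "Urgent: reset your password"))
# -> [External Source] Urgent: reset your password
```

The value of the tag is less about filtering and more about giving users a constant visual cue, which reinforces the training the exercises provide.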

Incentivizing Cybersecurity Awareness

One interesting idea I have heard is to incentivize employees to report suspicious behavior or security flaws they come across in their company. For example, many companies now offer public bug bounty programs as a way of crowdsourcing their security, so why not do something similar inside your own organization? If an employee recognizes a security flaw within the organization, what is the motivation to report it unless it impacts them directly? By rewarding these findings with a ‘bounty,’ we can crowdsource our security oversights to vigilant employees.

Another way to incentivize cybersecurity training is to reward those who successfully participate in security exercises. For example, if an organization runs an internal phishing exercise to teach personnel how to spot suspicious e-mail, perhaps a reward is in order for those who report the suspect mail during the exercise.
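As a rough illustration, tallying such an exercise could look like the sketch below. The employee names, report log, and reward amount are all made-up assumptions, not data from any real program.

```python
# Minimal sketch of scoring an internal phishing exercise, assuming we know who
# received the simulated e-mail, who reported it, and who clicked the link.
# All names and the reward value are hypothetical.

recipients = {"alice", "bob", "carol", "dave"}
reported = {"alice", "carol"}    # reported the suspect mail to security
clicked = {"bob"}                # clicked the simulated phishing link

reward_per_report = 25           # e.g. a gift-card amount

for person in sorted(recipients):
    if person in reported:
        print(f"{person}: reported the exercise e-mail -> reward ${reward_per_report}")
    elif person in clicked:
        print(f"{person}: clicked the link -> enroll in refresher training")
    else:
        print(f"{person}: no action recorded")

failure_rate = len(clicked) / len(recipients)
print(f"Phishing failure rate: {failure_rate:.0%}")
```

Tracking both reports and clicks lets the organization reward the desired behavior while also measuring the failure rate it is trying to drive down.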

Top Security Trends of 2019

I would like to touch on one of these trends that seems especially important to the security world: the rise and spread of disinformation.

With “deep fake” videos becoming a reality and AI becoming effective enough to create convincingly realistic fakes, it seems we will soon be living in a world where a zero trust policy is necessary.

Zero trust security is a term found in the network security world that refers to an IT security model that requires strict identity verification for every person and device trying to access resources on a private network, regardless of whether they are sitting within or outside of the network perimeter.

Traditional security models are sometimes referred to as “castle and moat” models where it is very difficult to obtain access from outside of a network, but everyone inside is trusted by default. Zero trust security means that nobody is trusted by default from either the inside or outside. Verification is required from everyone trying to gain access to resources on the network.
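To make the contrast concrete, the sketch below shows the zero trust idea in miniature: every request must present a verifiable identity, whether it originates inside or outside the perimeter. The token table and the naive “internal network” check are simplified stand-ins, not a production design.

```python
# Minimal sketch contrasting "castle and moat" with zero trust. The in-memory
# token table and the internal-network check are simplified, hypothetical stand-ins.

from ipaddress import ip_address, ip_network
from typing import Optional

VALID_TOKENS = {"token-abc": "alice", "token-xyz": "bob"}   # assumed issued credentials
INTERNAL_NET = ip_network("10.0.0.0/8")                     # the "castle" behind the moat

def castle_and_moat(source_ip: str, token: Optional[str]) -> bool:
    # Traditional model: anyone already inside the perimeter is trusted by default.
    return ip_address(source_ip) in INTERNAL_NET or token in VALID_TOKENS

def zero_trust(source_ip: str, token: Optional[str]) -> bool:
    # Zero trust: location is irrelevant; every request must present a valid identity.
    return token in VALID_TOKENS

# An attacker who has made it onto the internal network, with no credentials:
print(castle_and_moat("10.1.2.3", None))   # True  -- trusted just for being inside
print(zero_trust("10.1.2.3", None))        # False -- still has to verify identity
```

The point of the contrast is that in the zero trust check, the source address never appears: being “inside” buys nothing without verification.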

How does this relate to disinformation campaigns built on Photoshopped images and deep fakes? Well, it may become necessary to trust nothing by default until it has been thoroughly vetted.