The workplace has never been the same since the coronavirus emerged. Hybrid working, or working entirely from home, has become the new norm for delivering projects, strategizing, and planning budgets. That is genuine progress to come out of the pandemic, but it brings its own set of difficulties.
For example, when an employee works with critical company data at home, how secure is their internet connection? How many people (especially children) share that personal computer, and how many of them can be trusted not to tamper with official documents?
“While its intellectual origins predate the industry by several decades, if not centuries, for our present purposes we need go back no further than the beginning of this millennium,” writes Mc Mahon of the “humans are the weakest link” cliché in his July 2020 Frontiers in Psychology article, In Defence of the Human Factor. “Since then, cybersecurity discourse has been awash with this cliché.”
Most businesses have security processes in place for an external attack, but they often overlook the fact that the greatest threat comes from within. Almost every security failure is attributed to human error, implying that employees are largely to blame. In part, this reflects company culture and the absence of a proactive cybersecurity strategy.
Targeting front-line personnel, and even CEOs, is a sophisticated and highly effective approach used by cyber attackers. The information they need is publicly available: LinkedIn profiles and even the company website reveal email addresses, employment history, connections, education, and more, making it easy to single out individuals.
Attackers can use an employee as a point of entry to steal important company information, and a person who is not well-versed in cybersecurity can easily fall victim to spear-phishing. Detecting a cyber attack is also far harder than preventing one in the first place. Human error is blamed for 95% of security breaches, a statistic often cited as proof that people are the weakest link in cybersecurity.
What is the definition of human error?
Returning to the claim that individuals are the weakest link, the most common explanation offered is human error. There are many competing definitions of human error to choose from; the following is taken from Wikipedia:
That may appear straightforward, yet plenty of academics would tell you that human error is a useless concept. In his paper The NO view of ‘human error,’ Erik Hollnagel, Ph.D., a respected safety expert, makes the following suggestion:
As an example, Hollnagel points to software detection of phishing attacks. He argues that a well-trained user is more likely than technology to spot a novel phishing scam.
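To see why, consider a minimal Python sketch of a rule-based filter of the kind many tools rely on. The phrase list, regular expression, and function name here are purely illustrative assumptions, not taken from any real product or from Hollnagel’s paper; the point is simply that a filter built on known patterns catches what it has already seen and misses a fresh, well-crafted scam.

```python
# Illustrative, hypothetical rule-based phishing filter (not a real product).
# It flags messages that match previously known bad patterns, which is exactly
# why a novel, well-targeted spear-phishing message can slip straight through.

import re

SUSPICIOUS_PHRASES = [
    "verify your account",
    "password will expire",
    "urgent wire transfer",
]

SUSPICIOUS_URL = re.compile(r"https?://\S*(login|verify|secure)\S*", re.IGNORECASE)


def looks_like_phishing(message: str) -> bool:
    """Return True only if the message matches a known suspicious pattern."""
    text = message.lower()
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        return True
    return bool(SUSPICIOUS_URL.search(message))


# A known pattern is caught...
print(looks_like_phishing("Please verify your account at http://secure-login.example"))  # True

# ...but a fresh scam that avoids the known patterns is not.
print(looks_like_phishing("Hi Sam, here is the Q3 budget you asked for: http://files.example/q3"))  # False
```

A human who knows that Sam never sends budgets by unsolicited link can question the second message; the filter, by design, cannot.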
But Hollnagel doesn’t let us, the humans, off the hook. “Of course, we still need to account for human performance variability,” Hollnagel says. “The ETTO Principle serves as an example of this.”
Questions to think about when it comes to humanity and cybersecurity
Mc Mahon is adamant about not pointing fingers. Whenever we hear someone say that humans are the weakest link, he suggests asking the following questions:
- What other links are there in the chain, and how secure are they?
- Is the individual in question no longer a part of the system?
- Is it possible that I’m blaming the victim of a crime? Am I treating my customers fairly and openly?
- Why are we advocating for such a dismal view of human capability? With such a message, who exactly are we serving?
Rather than being the weakest link, humans may be the most important link when it comes to attacks that are constantly evolving, particularly those aimed directly at people, as both Mc Mahon and Hollnagel point out.
The efficiency-thoroughness trade-off (ETTO) principle, as Wikipedia describes it, holds that there is a trade-off between efficiency or effectiveness on the one hand and thoroughness (such as safety assurance and human reliability) on the other: demands for productivity tend to reduce thoroughness, while demands for safety tend to reduce efficiency.
Organizations spend a lot of money on cybersecurity: VPNs, encryption, anti-virus software, scanning, and so on. But how much do they invest in their workforce? A once-a-year cybersecurity seminar has proven largely ineffective, and blasting people with information at a time when they are already stressed is not a good idea.
Organizations must devise new ways of building employee cybersecurity awareness. In this digital age, employees expect the organization to have adequate security measures in place, and many are unaware that clicking on rogue links or opening unverified attachments can result in a security breach.
While technology can screen out the majority of threats, it cannot eliminate all of them. Employees are the last line of defense, so they should be trained in cybersecurity: how to handle potential risks and how to report them. The challenge for business leaders is to deliver this information in a way that is easy to understand and remember, because making sound cybersecurity decisions is the last thing on an already overworked employee’s mind.
The most significant takeaway from this discussion, however, is that employees should be viewed as security assets rather than threats. Companies can no longer rely solely on reacting after the fact; to achieve stronger overall protection, they must foster a workplace culture of awareness and proactivity.