The Quick and Dirty History of Cybersecurity


Cybersecurity has a long history that dates back to the 1970s. Words like ransomware, spyware, virus, worm, and logic bomb didn’t exist back then. Due to the explosive growth of cybercrime, however, such words now appear in news headlines on a daily basis.

Every company now considers cybersecurity to be a top priority. In the coming years, cybercrime is expected to cost the world trillions of dollars.

But how did cybersecurity get to this point? This article traces the history of cybersecurity from its inception to the present day.

From Academia to Criminality

Computer security threats were easily identifiable for the majority of the 1970s and 1980s, when computers and the internet were still in their infancy.

The majority of the threats came from malicious insiders who gained access to documents they shouldn’t have. As a result, security built into software programmes evolved separately from security governance concerned with risk and compliance.

Network breaches and viruses did exist at the time, but they were employed for reasons other than profit.

The Soviets, for example, used hacking as an instrument of espionage. Markus Hess, a German computer hacker, broke into an internet gateway in Berkeley and used that connection to piggyback onto the ARPANET. He then gained access to some 400 military computers, including mainframes at the Pentagon. Hess’ main goal was to gather information to sell to the KGB, the Soviet intelligence agency. Clifford Stoll, an astronomer, however, used honeypot systems to detect the intrusion and thwart the plot.

Notably, this attack marked the beginning of a wave of serious computer crimes involving network and virus intrusions. Viruses were no longer employed only for research.

Back in the early 1970s, Robert Thomas, a researcher at BBN Technologies, realised that a piece of software could be designed to move through a network, leaving a trace wherever it went. This insight led to the first computer worm. Named Creeper, it was built to migrate between machines running the TENEX operating system, displaying the message: “I’M THE CREEPER: CATCH ME IF YOU CAN.”

The creation of viruses and worms such as the Morris worm, discussed below, had serious consequences: they came dangerously close to crippling the early internet. Virus attacks also sparked a massive expansion of the antivirus sector.

The 1980s – The Era of Computer Worms

In the history of cybersecurity, the creation of the first destructive computer worm was a watershed moment. Robert T. Morris, a graduate student at Cornell University, is credited with creating it. Curious about the size of the internet, Morris built a worm in 1988 to measure it. The worm was designed to infect UNIX systems and count the total number of connections on the internet. It would spread across a network, enter UNIX machines through a known vulnerability, and then replicate itself.

This, however, proved to be a major blunder. Due to a programming fault, the worm infected computer after computer, replicating so aggressively that networks became jammed, connected systems failed, and the internet slowed to a crawl. It was one of the first programmes designed to exploit system flaws and one of the first to attract widespread media attention.

The worm’s impact outlasted the failures of the internet and its connected systems. Morris became the first individual to be successfully charged under the Computer Fraud and Abuse Act. He was fined $10,050, sentenced to three years of probation, and expelled from Cornell (although he went on to become a tenured professor at MIT). The incident also paved the way for the formation of a Computer Emergency Response Team, which served as the forerunner to US-CERT.

The Morris worm opened up a brand-new field in computer security. It prompted others to investigate how to make worms and viruses that were deadlier and more powerful. As worms evolved, their impact on networks and computer systems grew, which in turn spurred the development of antivirus software to combat worm and virus attacks.

The 1990s – The Rise of Computer Viruses

The Morris worm, as noted above, paved the path for newer sorts of dangerous software. The viruses that appeared in the 1990s were more aggressive programmes. Viruses like Melissa and ILOVEYOU infected tens of millions of computers, causing email systems to fail all over the world. The majority of virus attacks were motivated by financial gain or strategic goals, but the inadequate security solutions of the time produced a large number of unintended victims as well. As prominent news outlets in many regions of the world covered the attacks, they became front-page news.

Cyber threats and attacks had suddenly become a major worry demanding a quick remedy, and antivirus software emerged in response. These programmes were designed to detect viruses and prevent them from carrying out their functions. Since malicious email attachments were the primary technique of virus dissemination, the attacks also raised awareness about opening email from unknown senders.

The Antivirus Industry

The number of firms developing and selling antivirus software exploded in the early 1990s. These products scanned computer systems for viruses and worms by testing files against signatures stored in a database. The signatures were originally computed hashes of files; they were later extended to include byte strings similar to those found in known malware.
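The hash-based approach can be sketched in a few lines. This is a simplified illustration, not any vendor’s actual engine; the sample bytes and function names are invented for the example, and SHA-256 stands in for the weaker hashes of the era.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Compute a digest of a file's contents (SHA-256 here for illustration)."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes, signature_db: set) -> bool:
    """Flag a file whose exact hash appears in the signature database."""
    return file_hash(data) in signature_db

# Build a toy signature database from one stand-in "malicious" sample
# (an invented byte string, not real malware).
bad_sample = b"EICAR-STYLE-TEST-SAMPLE"
db = {file_hash(bad_sample)}

print(is_known_malware(bad_sample, db))         # True: exact match is flagged
print(is_known_malware(b"harmless notes", db))  # False: anything else is missed
```

The weakness is visible immediately: change a single byte of the sample and the hash no longer matches, which is why per-file hashes could not keep up with malware variants.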

Two major issues, however, significantly limited the effectiveness of these early antivirus solutions, and some current products still share them: intensive resource utilisation and a substantial number of false positives. The former caused the most trouble for users, because scans consumed a large portion of available resources, interrupting user activity and productivity.

Over the same period, the number of malware samples produced each day grew. In the 1990s, there were only a few thousand known malware samples; by 2007, the number had risen to at least 5 million. Older antivirus solutions could not handle such a workload, because security experts could not write signatures fast enough to keep up with new threats as they arose. A novel method was required, one that could provide adequate protection for all systems.

Endpoint protection platforms gradually proved to be more effective security solutions for combating the rising number of virus and other malware attacks. Rather than relying on a static signature per sample, researchers used signatures that identified whole malware families, on the premise that new samples tend to resemble other samples from the same family. This approach was more effective: with only a signature derived from related malware, a platform could detect and stop previously unknown variants.
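The family-signature idea can be contrasted with the per-file hash above in a small sketch. The family name, the shared byte string, and the sample "variants" here are all invented for illustration; real engines use far more sophisticated pattern matching.

```python
# Hypothetical byte-string signature shared across a malware "family".
# Any file containing the string is caught, even if its overall hash
# differs from every previously seen sample.
FAMILY_SIGNATURES = {
    "ExampleFamily": b"CATCH ME IF YOU CAN",
}

def classify(data: bytes, signatures: dict):
    """Return the name of the family whose signature appears in the file,
    or None if no signature matches."""
    for family, sig in signatures.items():
        if sig in data:
            return family
    return None

variant_a = b"...payload v1... CATCH ME IF YOU CAN ..."
variant_b = b"...rewritten payload v2... CATCH ME IF YOU CAN ..."

print(classify(variant_a, FAMILY_SIGNATURES))   # ExampleFamily
print(classify(variant_b, FAMILY_SIGNATURES))   # ExampleFamily (new hash, same family)
print(classify(b"clean file", FAMILY_SIGNATURES))  # None
```

Because both variants share the family signature, a single database entry covers them both, which is what let endpoint platforms detect malware no analyst had seen before.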

Secure Sockets Layer

In light of the growing number of virus and worm attacks, security professionals needed ways to protect people browsing the internet. In 1995, the Secure Sockets Layer (SSL) was created. SSL is an internet protocol that lets users establish encrypted connections for activities such as online purchasing. Netscape developed the SSL protocol shortly after the National Center for Supercomputing Applications released Mosaic, one of the first widely used web browsers. The secure protocol Netscape shipped in 1995 laid the foundation for successor protocols such as Transport Layer Security (TLS) and for HyperText Transfer Protocol Secure (HTTPS).
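What SSL (and its successor TLS) adds over plain HTTP can be seen in Python’s standard `ssl` module: before any application data is exchanged, the client verifies the server’s certificate and hostname. The sketch below is illustrative; the function is a hypothetical helper and is not called here, since it would require network access.

```python
import socket
import ssl

def fetch_tls_version(hostname: str, port: int = 443) -> str:
    """Open a TLS connection, verifying the server's certificate chain and
    hostname (the guarantees SSL introduced), and return the negotiated
    protocol version, e.g. "TLSv1.3"."""
    context = ssl.create_default_context()  # cert verification on by default
    with socket.create_connection((hostname, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=hostname) as tls:
            return tls.version()

# The default context refuses unverified certificates -- exactly the
# protection SSL/TLS layered on top of plain HTTP.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

A connection to a server presenting an invalid or mismatched certificate would raise `ssl.SSLCertVerificationError` instead of silently exchanging data.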

The rise of the first hacker group

Today, there are numerous hacker and organised cybercrime groups. They are made up of people with specialised hacking skills who mount cyberattack campaigns with a variety of goals. Anonymous, which emerged around 2003, was the first hacker collective to make major headlines. The group has no clear leader, and its members come from a variety of online and offline communities. It first drew widespread attention in 2008, when it used distributed denial-of-service (DDoS) attacks against websites belonging to the Church of Scientology. Anonymous has been tied to a number of high-profile cyberattacks, and other groups such as Lazarus and APT38 have since carried out large-scale attacks of their own.

Credit card hacks in the 2000s

Cyberattacks became more targeted in the 2000s, the new millennium. One of the most noteworthy attacks of the period was the first known case of serial data breaches targeting credit cards. Between 2005 and 2007, Albert Gonzalez ran a cybercriminal network dedicated to hacking credit card systems. The gang successfully compromised at least 45.7 million cards[1], stealing the personal information of TJX customers who shopped at its stores.

The massive retailer suffered a loss of $256 million as a result of the hack. The incident drew the attention of US authorities, especially because it involved the compromise of regulated data, and the company was forced to set aside funds to compensate the victims. Gonzalez was sentenced to 20 years in prison for his crimes. TJX had been defenceless when the attack happened; other companies took notice and implemented sophisticated cybersecurity measures to defend themselves.

EternalBlue: Lateral movement attack techniques

Lateral movement attack tactics let cybercriminals run software, issue commands, and expand their foothold throughout a network. System administrators are familiar with the underlying procedures because they have been in legitimate use for years. Lateral movement vulnerabilities have long existed in several operating system protocols, allowing attackers to carry out stealthy lateral attacks. The EternalBlue exploit is a good illustration.

The EternalBlue flaw lets an attacker abuse the Server Message Block (SMB) protocol, which is used to share files and data across a network, making the protocol attractive to adversaries. The exploit was leaked by the Shadow Brokers group on April 14, 2017, and was used by the notorious Lazarus group in the infamous WannaCry attack on May 12, 2017. WannaCry was a global ransomware campaign that hit health institutions in Europe especially hard, bringing some health services to a halt for about a week.

Other high-profile hacks have employed the EternalBlue exploit as well. The vulnerability was exploited in the NotPetya attacks on June 27, 2017, which targeted banks, ministries, electricity companies, and media organisations across Ukraine; France, the United States, Russia, Poland, Italy, Australia, and the United Kingdom were also hit. The exploit was also used to deliver the Retefe banking trojan.

Cybersecurity laws and regulations

Cyber laws have emerged alongside the advance of technology across industries. These laws are intended to safeguard systems and confidential information. One of the best-known cybersecurity laws in history is the Health Insurance Portability and Accountability Act (HIPAA), signed into law on August 21, 1996, with the goal of improving the portability and accountability of health insurance coverage. Over time, the statute has been amended to place greater emphasis on protecting individuals’ health information.

In addition, in 1999, the Gramm-Leach-Bliley Act (GLBA), also known as the Financial Modernization Act, was enacted to protect the personal data of financial institutions’ customers. The law requires a financial institution to disclose in detail the measures it takes to protect a customer’s personal information and to notify clients about how their personal information will be shared. Customers have the right to opt out of having their sensitive information disclosed to third parties. In addition, each financial institution must maintain a written information security programme to secure the sensitive data of its customers.

The Federal Information Security Management Act (FISMA) followed in 2002, giving federal agencies direction on how to secure their information systems. The law establishes a comprehensive framework for protecting government IT assets, data, and operations against natural and man-made threats. FISMA was enacted as Title III of the E-Government Act (Public Law 107-347), which identified the primary vulnerabilities facing information systems and emphasised the importance of implementing appropriate security measures to protect against attacks.

Under FISMA, all federal agencies must design and publish agency-wide programmes for protecting their information systems. To be FISMA compliant, an agency must follow these guidelines:

  • Inventory current security measures on a regular basis.
  • Assess current and potential threats.
  • Create practical security plans.
  • Designate security professionals to oversee the implementation of security plans and to check their efficacy regularly.
  • Monitor security plans and assess security activities on an ongoing basis.

Other regulations have been established more recently, such as the General Data Protection Regulation (GDPR). This regulation sets mandatory guidelines for organisations handling personally identifiable information (PII) and imposes stiff penalties for non-compliance. The GDPR safeguards personal data belonging to European Union residents. Its core principle is to ensure that businesses have proper data protection controls in place, including encryption for data both in transit and at rest.

Furthermore, before using personal data for any purpose, every entity must obtain the explicit consent of the data owners. For failing to properly secure PII, accessing consumer data without permission, or suffering a breach caused by insufficient security measures, businesses face fines of up to 4% of their annual global turnover.

Cybersecurity frameworks

In addition to cybersecurity laws and regulations, various frameworks have been proposed to help federal and commercial entities improve the security of their information systems. The US Department of Homeland Security’s cybersecurity strategy, for example, was unveiled in 2018. It provides guidance for detecting and identifying risks in an organisation, and discusses ways of minimising cyber vulnerabilities, lowering threat levels, and dealing with the aftermath of a cyberattack.

The Federal Cybersecurity Research and Development (R&D) programme, meanwhile, has been in operation since 2012 and is updated every four years. This framework starts from the premise that being completely safe from cyberattacks is nearly impossible, so it guides federal agencies on how to detect and respond to risks effectively, including instructions for examining risk history and categorising risks by severity. Both frameworks are commonly used by businesses to create and update solid cybersecurity programmes.

Recent cybersecurity attacks

Cybercrime has now become commonplace, and cybercriminals launch attacks for a wide range of monetary and strategic gains. The field has evolved considerably since the 1980s and 1990s, when worm and virus attacks were used mainly to gain illegal access.

The sections that follow contain some information about recent cyberattacks. The implications of the attacks for the future of cybersecurity are then discussed.

Yahoo was the target of one of the worst cyberattacks of the era in 2013 and 2014, when accounts belonging to nearly 3 billion users were compromised[2]. The attackers exploited flaws that had not yet been patched, using spear-phishing techniques to install malware on Yahoo’s systems and gain unrestricted backdoor access. They reached Yahoo’s backup databases and stole sensitive data such as names, email addresses, passwords, and password recovery questions and answers.

State-sponsored attacks: there have been several instances of attacks perpetrated by governments. In 2018, a total of 144 universities around the United States were found to have been targeted by attacks of various kinds. The campaign lasted three years and resulted in the loss of $3 billion in intellectual property and at least 31 terabytes of data[3]. Investigations traced the attacks to Iran, and nine Iranian hackers were identified and indicted by the US government.

There have been numerous other state-sponsored attacks. North Korea backed the Lazarus Group, which hacked Sony in 2014, leaking trailers for forthcoming films as well as images of actors. Lazarus has also targeted other countries, focusing primarily on their financial institutions; its greatest heist was the Bangladesh Bank theft, in which the group stole more than $80 million.

Gmail and Yahoo hacking: in 2018, Iranian hackers successfully broke into the Gmail and Yahoo accounts of prominent US activists, journalists, and government officials. After studying their targets’ habits, the attackers used spear-phishing emails to trick them into typing their login credentials into fake pages controlled by the hackers. Even widely praised two-factor authentication systems were not immune to the attacks.

The future of cybersecurity

Understanding the history of cybersecurity shows how the field has progressed from modest experiments and academic studies. Cybersecurity efforts are now aimed at preventing catastrophic attacks, and current figures suggest the prevalence of cybercrime will continue to rise. Cybercriminals are predicted to employ novel stealth attack tactics built on developing technologies such as artificial intelligence, blockchain, and machine learning.

Furthermore, as recent cyberattacks have demonstrated, attackers are capable of circumventing well-known security measures such as two-factor authentication. Such attacks show that we still have a long way to go before we are truly cyber secure, and organisations and security firms must reconsider their cybersecurity strategies.

In the future, academics and security specialists will have to focus their efforts on maximising the benefits of emerging technology, reducing both the number of cyberattacks and their consequences when they do occur.

Artificial intelligence is already being built into antivirus and firewall products to improve detection and response times. At the same time, because most firms’ processes have been automated, cyberattacks increasingly aim to disrupt those processes, locking out system users or stealing essential data to prevent normal operations.

As technological advances drive cyberattacks to grow in scale, the rise of 5G networks is expected to further automate key infrastructure such as transportation.

Security teams should anticipate these developments by actively building countermeasures.





Jennifer Thomas
Jennifer Thomas is the Co-founder and Chief Business Development Officer at Cybers Guards. Prior to that, she was responsible for leading its Cyber Security Practice and Cyber Security Operations Center, which provided managed security services.