
The Top 5 Data Security Breaches of 2022 (and How to Avoid Them)

Today’s leading organizations use personal data to create eerily accurate insights into user behaviors, preferences, and conversations. While the primary goal is often to improve customer experience, the stakes are higher when sensitive or confidential information is involved. 

Malicious actors are always on the hunt for fresh exploitation opportunities; one might even say data is the new oil of espionage! User credentials, medical records, and financial information have all come under attack in recent years, leading to millions of dollars in costs.

This article will highlight the most prominent data security breaches of 2022. We'll also share how each organization responded, so we can learn from their experiences. Let's get started:

5 Lessons Learned From 2022’s Biggest Security Breaches


Unfortunately, 2022 was no exception to breach activity. 

According to Statista, approximately 24 million data records were exposed worldwide during the year’s first three quarters. Has data taken over for oil as the most valuable commodity of the modern age?

1. Crypto.com Witnesses Widespread Theft

Crypto.com is a cryptocurrency trading exchange based in Singapore. On the 17th of January 2022, it became the latest (at the time) high-profile victim of hackers targeting crypto wallets and making off with customers’ crypto tokens.

What Happened?

According to an official report from the exchange company, its risk monitoring systems detected transactions from customer accounts that were approved without two-factor authentication (2FA) from the account holders. The attack targeted 500 customers’ accounts and saw the actors steal up to $33 million worth of bitcoin and Ethereum.

The Aftermath

Crypto.com put its withdrawal services on hold for 14 hours and upgraded to a new 2FA infrastructure. It revoked existing 2FA tokens and required users to create new ones compatible with the new infrastructure.

The exchange also maintained that it conducted a full-scale audit of its network infrastructure and improved its security posture.

It also contracted with external security firms to carry out security checks and provide threat intelligence services.

What about the poor customers whose crypto tokens got filched? Despite initially claiming that “No funds were lost,” Crypto.com acknowledged that money had been stolen and reimbursed its customers.


2. International Committee of the Red Cross Gets Attacked

The Red Cross is a reputable international organization that provides essential medical and humanitarian aid to vulnerable persons worldwide. 

However, in January 2022, the organization fell victim to a data breach after cyberattackers gained entry to its network through a security system that had been patched too late. The attack led to the breach of records of 515,000 vulnerable people, containing their names, locations, and other personal data.

What Happened?

The attack on the Red Cross’s servers was a deliberate, targeted operation that featured sophisticated techniques and code designed to run on specific ICRC servers.

The cyberattackers gained access to the Red Cross’s network on the 9th of November 2021 through an unpatched vulnerability in an authentication module. Upon gaining entry, they deployed security tools that helped them pose as authorized users and admins.

From there, the attackers could access the sensitive information they wanted despite the data encryption.

To date, there’s been no evidence that the information stolen in this attack has been traded or used for illicit purposes. And despite speculation that the responsible actors may be state-sponsored, the identity of the people behind the attack and their motives are still anyone’s guess.

The Aftermath

After determining on the 18th of January that their systems had been compromised, the Red Cross worked with security experts to investigate and secure the vulnerability through which the attackers gained entry.

For a time, the affected systems were taken offline and were only brought back online after several penetration tests had been carried out to prevent a recurrence.

The organization also took extensive measures to communicate the breach to those affected.


3. Whistleblower Reveals Suisse Secrets

Switzerland is world-famous for three things: the Alps, staying neutral during conflicts, and banking secrecy laws. The latter forms the background of this data breach incident.

At its center was Credit Suisse, one of the world’s biggest financial institutions, whose clients’ financial details, covering accounts holding $108.5 billion in assets, were publicly revealed.

What Happened?

The leak was an intentional attempt by a person or group to expose the bank’s alleged lucrative business of helping clients hide their wealth. Financial details dating from the 1940s to the 2010s were revealed to a network of 163 journalists from 48 media organizations worldwide.

It is believed that the attack came from an insider threat, as the source was most likely an employee of the bank who gained access through their legitimate credentials.

Although the bigger story is definitely about how some of the bank’s clients controversially acquired their wealth, there is no shying away from the fact that the data breach itself is a significant concern for the organization’s security integrity.

This is particularly so when one considers that, as the whistleblower themselves admitted, owning a Swiss bank account is not a crime, and many of the bank’s clients had gotten their wealth through honest means.

The Aftermath

Credit Suisse denied any wrongdoing and maintained that the information revealed was history taken out of context.

As for the data breach itself, the information had already become publicly available, so remediation was not really possible.

What the bank could do, however, was to review and reinforce its internal processes and data security protocols. All of which they, of course, said they did.


4. The North Face Data Breach

The North Face is one of the world’s leading apparel companies and has been supplying outdoor adventurers with everything they need to get out into nature since 1968. However, in August 2022, they became one of the companies that fell victim to a data breach.

What Happened?

The attackers used credential-stuffing tactics to gain access to about 200,000 customers’ accounts, where they acquired names, emails, billing and shipping addresses, phone numbers, and more. Notably, though, no financial information was compromised in the attack.

The public learned of the data breach through a notification the company sent to customers who may have been affected. In it, The North Face mentioned that the attack was launched on July 26 and was detected and blocked on August 11 and August 19, respectively.

The Aftermath

Upon detection, The North Face moved quickly to contain the attack, resetting the passwords of all affected accounts and erasing payment card tokens. The company maintained that the compromised payment card tokens did not put customers at risk, as the information in them is only usable on The North Face’s website. Customers were also encouraged to choose new passwords they had not used on other accounts.

5. Toyota Exposed by Contractor Mistake

Think all data breaches boil down to malicious intent? Think again.

Toyota is arguably the biggest name in the automotive industry, so we can skip the introductions. In October 2022, Toyota experienced a significant data breach due to an error made by a third-party contractor.

What Happened?

Sometime in 2017, Toyota hired a website development subcontractor for its T-Connect service. The subcontractor then mistakenly posted some of the source code to a publicly accessible GitHub repository. This granted third parties access to almost 300,000 customers’ email addresses and customer control numbers.

This remained in place for five years and was discovered in 2022.

The Aftermath

As soon as Toyota made the discovery, it immediately changed the access key and made the source code private. It assured customers that there was no possibility of data such as names, telephone numbers, or credit card information being compromised, as the affected servers held no such information.

Toyota also urged customers to remain vigilant against phishing or spoofing attacks, and it set up a help center where customers could confirm whether their email address was among those exposed.

How to Reduce Your Risk of Data Breaches

If there’s one lesson these events provide, it’s that you can never be too careful: the data security space is unpredictable. Data breaches can happen at any time and can stem from insider threats, malicious external actors, and even human error.

Here are a few measures you can take to minimize the risk:

  • Implement multi-factor authentication (MFA) systems for all sensitive accounts and services.
  • Ensure that all software is up to date and patched with the latest security updates.
  • Restrict employee access to sensitive data and use encryption software whenever possible.
  • Perform regular security audits and risk assessments to identify any possible weak points in your data security.
  • Use a reputable cloud provider for all of your data storage needs.
  • Make sure all passwords are strong, unique, and changed regularly.

Following these measures will help you stay one step ahead of the bad guys and keep your data safe. And as hackers become more sophisticated, we must become even more vigilant and update our security strategies accordingly.

Beef Up Security With JumpCloud

The JumpCloud Directory Platform boosts IT admin and MSP peace of mind by unifying their most integral security tools in one place. From MFA to single sign-on (SSO) to mobile device management (MDM), JumpCloud provides a comprehensive solution to keep organizational data safe and secure from nefarious hackers. 

It provides time-saving capabilities like automated patch management, wipe and lock, and one-touch deployment. The best part? Most users saved money after switching to JumpCloud and reduced their IT stacks. Stay steps ahead of making the news for the wrong reasons. Sign up for a free trial today.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About JumpCloud
At JumpCloud, our mission is to build a world-class cloud directory. Not just the evolution of Active Directory to the cloud, but a reinvention of how modern IT teams get work done. The JumpCloud Directory Platform is a directory for your users, their IT resources, your fleet of devices, and the secure connections between them with full control, security, and visibility.

JumpCloud Linux Capabilities Roundup in 2022

At JumpCloud, we are constantly investing in and developing our Linux infrastructure and capabilities for our customers. We want to give admins the flexibility to manage and control Linux devices on the same platform as any other OS (i.e., Mac, Windows, iOS, and Android) so they can continue to utilize the speed, stability, and security of Linux-based systems wherever they need them.

At the beginning of 2022, we planned to increase the velocity and focus of our Linux capabilities. Some of the key areas of focus for Linux included:

  • Enable Remote Security Management
  • Improve and Strengthen Security Posture 
  • Provide Simple & Scalable Patch Policies
  • Introduce New Popular Linux Distros

Just take a look at what our customers have been leveraging this year. 

Security Commands

JumpCloud Commands let you quickly and easily automate tasks across multiple servers, launch those tasks based on a number of different types of events, and get full auditing of all command results. To that end, we added more security commands that allow Linux devices to remotely execute management commands, such as:

  • Lock
  • Restart
  • Shutdown
  • Erase
  • Screensaver/Inactivity Lock based on timeout period
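
As an illustration, the Lock command above essentially boils down to something an admin could also script by hand. Here is a minimal sketch, assuming a systemd-based distribution, of what such a command body might run on a device; JumpCloud’s actual command implementation may differ.

```python
#!/usr/bin/env python3
"""Minimal sketch of what a remote "Lock" command body might do on a
systemd-based Linux device. The reliance on loginctl is an assumption;
adapt to your distribution before using."""
import shutil
import subprocess
import sys


def lock_all_sessions() -> int:
    # loginctl ships with systemd and can lock every active login session.
    if shutil.which("loginctl") is None:
        print("loginctl not found; is this a systemd-based distro?", file=sys.stderr)
        return 1
    result = subprocess.run(["loginctl", "lock-sessions"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"lock-sessions failed: {result.stderr.strip()}", file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(lock_all_sessions())
```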

New Linux Policies

We added new Linux policies to help organizations manage and secure their deployed Linux endpoints more efficiently while improving their overall security posture. They include:

  • Partition Options
  • File Ownership and Permissions
  • Network Parameters
  • Disable Unused Filesystems
  • Additional Process Hardening
  • Configure RSyslog
  • Forbidden Services
  • Secure Boot Settings
  • Service Clients
  • SSH Root Access
  • SSH Server Security
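
For a sense of what a policy like SSH Root Access checks under the hood, here is a small Python sketch that audits whether PermitRootLogin is disabled in sshd_config. The config path and the simple parsing are assumptions for illustration; the actual policy logic is more thorough (for example, handling included drop-in configuration files).

```python
#!/usr/bin/env python3
"""Rough sketch of an "SSH Root Access" style check: confirm that
PermitRootLogin is explicitly set to "no" in sshd_config. Illustrative only."""
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")


def root_login_disabled(config_path: Path = SSHD_CONFIG) -> bool:
    """Return True if PermitRootLogin is explicitly set to 'no'."""
    for line in config_path.read_text().splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        parts = stripped.split(None, 1)
        if parts[0].lower() == "permitrootlogin" and len(parts) == 2:
            # sshd honors the first occurrence of a keyword.
            return parts[1].strip().lower() == "no"
    # Unset: OpenSSH's default ("prohibit-password") still allows key-based root login.
    return False


if __name__ == "__main__":
    print("compliant" if root_login_disabled() else "non-compliant")
```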

Patch Management

JumpCloud Patch Management was launched in Q1 2022 with initial support for Windows and iOS. Our Linux (Ubuntu) support was a fast-follow in April. The Ubuntu default policies come preconfigured with conservative defaults for the following settings: 

  • Defer Rollup/Patch Updates: The number of days to defer the availability of future minor OS updates. For Deferral Days, specify how many days to defer a minor OS update after it’s released.
  • Defer Major Updates to Ubuntu LTS versions only: Specify how many days to defer the availability of future major LTS OS updates. For Deferral Days, specify how many days to defer a major OS update after it’s released.
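
Conceptually, a deferral setting like the ones above simply shifts an update’s availability date. The short sketch below illustrates that behavior; the function and parameter names are illustrative and not JumpCloud’s actual schema.

```python
#!/usr/bin/env python3
"""Illustration of how a deferral window behaves: an update is only offered
once its release date plus the deferral period has passed."""
from datetime import date, timedelta
from typing import Optional


def update_available(release_date: date, deferral_days: int,
                     today: Optional[date] = None) -> bool:
    """Return True once the deferral window for an update has elapsed."""
    today = today or date.today()
    return today >= release_date + timedelta(days=deferral_days)


# A minor OS update released on 1 June with a 14-day deferral becomes
# available to devices on 15 June.
print(update_available(date(2022, 6, 1), 14, today=date(2022, 6, 10)))  # False
print(update_available(date(2022, 6, 1), 14, today=date(2022, 6, 15)))  # True
```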

Expanded Linux Agent Support

JumpCloud continues to build out our support across Linux-based systems to give IT administrators the flexibility to manage all of their deployed devices. As we expand to a variety of new distributions, the JumpCloud agent can be deployed to secure, manage, and view these systems in the admin portal. Our supported Linux distros include:

  • Amazon Linux and Amazon Linux 2
  • CentOS 
  • Debian 
  • Fedora 
  • Mint 
  • Rocky Linux 
  • Ubuntu 
  • RHEL and more

What’s Next?

Exciting new capabilities are already in the pipeline for Linux. Perhaps a sneak peek is allowed as we bring good cheer to the new year. Linux support is coming to JumpCloud Remote Assist! Admins will be able to remotely access (view and control) a Linux laptop or desktop to help troubleshoot and resolve issues.

If you have not tried any of our Linux capabilities, sign up for a free account for up to 10 users and 10 devices. Support is available 24×7 within the first 10 days of your account’s creation!


ChatGPT Storms Onto the Cybersecurity Scene

Anyone perusing this site has probably also read more than a few articles about ChatGPT, the latest “AI writer” that can turn user prompts into text that faithfully mimics human writing. I would venture to guess many readers here have even tried the tool for themselves (it’s free to experiment with if you haven’t). ChatGPT has dominated the conversation in tech over the last few weeks. It has been hard to escape, frankly.

Among the countless think pieces written about whether ChatGPT will spell the death of the college essay or usher in the end of creativity and critical thinking as we know them have been plenty of articles focused on cybersecurity specifically. Now that AI can instantaneously produce endless amounts of writing for almost any purpose, there are serious implications, both good and bad, for the future of digital defense.

Of course, the bad would seem to seriously outweigh the good (more on that soon). But amidst all the doom and gloom thrown at ChatGPT, it’s important to also acknowledge how this technology could be an asset to developers, security teams, or end users. Let’s look at it from three angles.

The Good

Cybersecurity suffers from a serious information deficiency. New attacks, techniques, and targets appear all the time, requiring the broad security community to keep constantly updated. On the other hand, average users need better information about cyber safety best practices, especially considering that years of consistent training and warnings haven’t cured deep-seated problems like password recycling. In both of these cases and others, I can see ChatGPT or a similar tool being extremely helpful for quickly yet effectively encapsulating information.

Of course, documentation hasn’t exactly been cybersecurity’s biggest problem, and I question how much an AI writer can actually do to prevent or lessen attacks. Nonetheless, knowledge is power in cybersecurity, and the scale of the issue stands in the way, so I can see automated writers playing a role in a host of different security tools, defensive techniques, and training strategies. They can (and arguably must) be a force for good.

The Bad

Almost the minute ChatGPT went live, the naysayers and doomsday prognosticators started to come out of the woodwork, which is neither surprising nor troubling. ChatGPT is just the latest example of how artificial intelligence will transform the world in ways that we can’t predict, will struggle to control, and in some cases would never want.

Cybersecurity is a prime example. ChatGPT can generate passable (if not perfect) code just as it can prose. This could be a boon for developers of all kinds – including those that develop malware and other attacks. What’s to stop a hacker from using ChatGPT to expedite development and iterate endlessly, flooding the landscape with new threats? Similarly, why write your own phishing emails when ChatGPT, trained on countless past phishing emails, can generate thousands of them in seconds?

Automated writers lower the barrier to entering cybercrime while helping established criminals and gangs scale their efforts. More alarming, new technology always has unexpected, often unintended consequences, meaning that ChatGPT is sure to surprise us with how it gets weaponized, which is to say that the worst is yet to come.

The Ugly

To emphasize my previous point, let me outline a scenario I haven’t yet seen addressed in the ChatGPT conversation. Business email compromise (BEC) attacks are phishing emails, texts, or other communications personalized with personal information so they seem to come from the recipient’s boss, a close colleague, or another trusted source. They also contain careful social engineering to inspire the recipient to act without considering risk or applying good judgment. They are basically phishing attacks carefully calibrated to succeed. Back in June, Wired wrote that they were “poised to eclipse ransomware” because they have proven so lucrative and also so resistant to security measures.

The saving grace was that BEC messages took time. Someone had to first do research on the targets and then turn that into fine-tuned copy. Therefore, they were hard to scale and difficult to get just right (many of these attacks still failed). There was a practical, if not definitive, upper limit.

From my perspective, ChatGPT obliterates that obstacle. Imagine if an attacker trained automation to comb LinkedIn for data about people’s professional relationships, then fed that data into ChatGPT to create convincing BEC emails customized for hundreds or thousands of different recipients. If attackers can automate both the research and the writing, at massive scale and with uncanny precision, they can grow BEC campaigns to any size.

And then what? Will every email seem suspect? The cloud of doubt hanging over the authenticity of any piece of information or string of communication (did this come from someone real?) may prove as disruptive as the attacks themselves, or more so. I’m just speculating. These doomsday scenarios, like so many others, may never materialize… Or BEC attacks could prove to be the least of our concerns.

That puts it on us – probably most people reading this site – to somehow ensure the good outweighs the rest.

 


About Topia
TOPIA is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Why OT Research Is Controversial – But Necessary

I want to discuss a subject that doesn’t get enough attention in the world of OT/ICS cyber security considering how fundamental it is, and one that sparks a surprising amount of controversy. The topic is the importance of conducting ongoing research into OT endpoint device vulnerabilities, particularly for legacy devices.

It should be a unanimous opinion that this research is important. The more we know about vulnerabilities and the more CVEs we generate, the better for everyone involved. However, I frequently encounter industry analysts and self-styled experts who repeatedly question the need for and validity of research in the OT sector. Their argument is that legacy equipment is guaranteed to have vulnerabilities and is flawed by design, and that advanced endpoint research is therefore unnecessary. I find this argument ironic because these same experts are often involved in creating products that help detect and manage the vulnerabilities found by researchers. They state publicly that there is no point in doing research and then in the same breath talk about how their product can help mitigate the problems.


Why do you need both IDS and IPS, or maybe the NGFW too?

I would like to round out the defense of the web application by talking about Intrusion Detection and Prevention Systems (IDS and IPS) as the third member of this security trio: WAF, RASP, and IDPS. In previous articles, I talked about the security defense technologies Runtime Application Self-Protection (RASP) and the Web Application Firewall (WAF).

What are IDS and IPS?

Intrusion Detection Systems and Intrusion Prevention Systems are used to detect intrusions and, if an intrusion is detected, to protect against it.

First, I will focus on explaining the differences between the WAF, RASP, and IDPS.

What is the difference between WAF, RASP, and IDPS?

I have already explained in previous articles the difference between WAF and RASP. Still, I will introduce IDPS and show you exactly why a combination of this trio is the best security choice.

Summary: IDPS is used to detect intrusions and protect against them. WAF detects and blocks attacks based on rules, patterns, algorithms, etc. RASP monitors the application’s runtime behavior using algorithms.

Why is it best to use both IDS and IPS?

To better understand why it is important to use both systems, we need to know what each of them does and doesn’t do and how combining them gives more effective protection. Each of those systems has its own types, which will be explained below.

Location and Range

These two types of security systems operate in different locations and have different ranges.

Facts:

·   IDS works across the enterprise network in real-time by monitoring and analyzing network traffic.

·   IPS works in the same network location as a firewall by intercepting network traffic.

·   IPS can use IDS to expand the range of monitoring.

Knowing this, using both IDS and IPS lets you cover a wider range.

Host-based IDS and IPS

There are a few types of IDS and IPS. I will mention them so you can know which one targets what, but there is plenty of online documentation for more information.

Host-based IDS (HIDS) is used for protecting individual devices. It is deployed at the endpoint level. It checks network traffic in and out of a device, and it can examine logs and running processes. HIDS protects only the host machine. It does not scan complete network data. Similar to this type, IPS has its own Host-based IPS (HIPS). HIPS is deployed on clients/servers, and it monitors the device level as well.

Network-based IDS and IPS

Network-based IDS (NIDS) works on monitoring the entire network. It looks out at every network device and analyzes all the traffic to and from those devices. On the other side, IPS has its own type, called Network-based IPS (NIPS), deployed within the network infrastructure. It monitors the complete network and, if needed, tries to protect it.

NIDS and NIPS are very important to network forensics and incident response because they compare incoming traffic to malicious signatures and differentiate good traffic from suspicious traffic.
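
To make the signature idea concrete, here is a toy Python sketch of the kind of matching a NIDS performs, using the scapy library to sniff traffic and flag payloads containing known-bad patterns. The signatures are invented for the example, the script needs root privileges to sniff, and a real NIDS uses much richer rule languages and protocol decoding.

```python
#!/usr/bin/env python3
"""Toy NIDS-style signature matcher: sniff TCP traffic and alert on packets
whose payload contains a known-bad byte pattern. Requires scapy
(pip install scapy) and root privileges; the signatures below are made up."""
from scapy.all import Raw, sniff

SIGNATURES = {
    b"/etc/passwd": "possible path traversal attempt",
    b"' OR '1'='1": "possible SQL injection probe",
}


def inspect(packet) -> None:
    if not packet.haslayer(Raw):
        return
    payload = bytes(packet[Raw].load)
    for pattern, description in SIGNATURES.items():
        if pattern in payload:
            print(f"[ALERT] {description}: {packet.summary()}")


if __name__ == "__main__":
    # Passive monitoring only; a real NIPS would sit inline and drop the packet.
    sniff(filter="tcp", prn=inspect, store=False)
```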

Wireless IPS

IPS also has a Wireless IPS (WIPS) type that monitors radio waves (wireless LAN) for unauthorized access points and can be used to automate wireless network scanning. The TechTarget site describes ways of using WIPS in the enterprise in this article. Check it out!

Protocol-based intrusion detection systems (PIDS) and Application protocol-based intrusion detection systems (APIDS)

Both protocol-based systems are types of IDS, and both monitor traffic to and from devices. The only difference is that PIDS monitors a single server while APIDS monitors a group of servers.

Network behavioral analysis (NBA)

Network behavioral analysis (NBA) is a type of IPS that looks for unexpected behavior within the traffic patterns of the network itself.

IDS and IPS modes

IDS is generally deployed out of band, passively monitoring a copy of network traffic. IPS, by contrast, sits in the network behind the firewall and works inline, though it can also operate at the end host (as HIPS) rather than in the network path.

Most used IDS/IPS tools in 2022

According to softwaretestinghelp.com, the most used IDS tools are:

·   SolarWinds Security Event Manager

·   Bro

·   OSSEC

·   Snort

·   Suricata

·   Security Onion

·   Open WIPS-NG

·   Sagan

·   McAfee Network Security Platform

·   Palo Alto Networks

For more info regarding pricing, pros, cons, and features of these tools, check out the softwaretestinghelp site.

Also, spiceworks.com provided the list of the most used IDPS tools:

·   AirMagnet Enterprise

·   Amazon Web Services (AWS) GuardDuty

·   Azure Firewall Premium IDPS

·   Blumira

·   Cisco Secure IPS (NGIPS)

·   Darktrace Enterprise Immune System

·   IBM Intrusion Detection and Prevention System (IDPS) Management

·   Meraki MX Advanced Security Edition

·   NSFocus Next-Generation Intrusion Prevention System

·   Snort

For more info regarding pricing, pros, cons, and features of these tools, check out the spiceworks site. This research will also help you choose the right IDPS solution based on these tools’ features.

What is Next-Generation Firewall (NGFW) or Unified Threat Management (UTM)?

There is a modern type of technology that combines IDS and IPS with firewalls called Next-Generation Firewall (NGFW) or Unified Threat Management (UTM).

NGFW includes:

·   Standard firewall features (packet filtering, stateful inspection, and VPN awareness)

·   Integrated Intrusion Prevention (IPS)

·   Application awareness of threats

·   Detect and block risky apps

·   Threat intelligence

·   Upgrading security features (such as future information feeds)

·   New techniques that help to address new security threats

Researchers for the nomios site have gathered information and compiled a list of the top five NGFW vendors for 2022. They also gave suggestions on what you should look for when choosing the right NGFW tool. Check it out!

Conclusion

You should combine IDS and IPS because of three things: response, protection, and impact. If you use IDS alone, the process stops at the detection phase; with IPS, depending on its settings and policies, prevention is included as well. Because IPS reacts immediately, it adds a layer of protection beyond simply detecting malicious activity. However, IPS can produce false positives that end up shutting down parts of your network.

Organizations often set up Intrusion Detection Systems to handle the logs and notifications/alerts from routers, firewalls, and servers in order to fight threats.

A better solution would be using a combination of IDPS and setting it up when planning security. In the future, when the organization grows and needs better protection, it will be possible to use IDS/IPS solutions for additional networks, servers, or devices.

Also, depending on the organization’s security needs and cost restrictions, NGFW can be a good choice too!


#IPS #IDS #IDPS #NGFW


5 Simple Security Measures for SME Compliance on a Budget

Did you know that nearly half of small businesses experienced cybersecurity breaches in 2021? 

The information comes from a 2021 AdvisorSmith survey of 1,122 small business owners and managers. Yet, a whopping 61% of them aren’t concerned about falling victim to cyberattacks. They think they’re “too small to be a target.” 

Bad actors target small businesses and small-to-medium-sized enterprises (SMEs) just as frequently as (if not more frequently than) established organizations. Websites get hacked, email accounts get compromised, and sometimes, employees even steal sensitive information. 

While it’s understandable for budget-conscious SMEs to put cybersecurity measures on the back burner, it just isn’t worth the risk. Especially when there are simple actions organizations of all sizes can take to improve their security tenfold. 

Before we dive into our top five cybersecurity tips for SMEs, let’s take a moment to better understand what factors might make your organization an easy target. 

Why SMEs Are Easy Targets for Cybercrime 


As previously mentioned, many folks assume adversaries solely target enterprise companies because they provide larger opportunities for blackmail profits.

What they don’t realize is that SMEs are often targeted by chance, not by choice. Cybercriminals may impersonally wade through lists including hundreds of business names without doing much research into organizational holdings. 

With that said, SMEs and enterprise-level companies alike are often chosen for the following reasons: 

1. Money

Most cybercriminals carry out attacks for financial benefit. Naturally, receiving direct payments from victims is the most efficient way to profit from an attack. They usually lock down assets before demanding a ransom to unlock them. 

Intellectual property (IP) is a highly motivating asset to steal. Criminals know that an SME will pay big to get it back, as leaked IP can bring a small business to its knees. Some hackers also sell breached assets, data, and information on the black market for profit.

2. Company Damage

Alternatively, some attacks are politically, competitively, or ideologically motivated. Though it may sound like the plot of a thriller movie, disgruntled former partners, business rivals, and unhappy employees have all been known to hijack organizational systems. 

A successful cyberattack can cause major damage: it can wipe data, cause downtime, or even drive a total business shutdown. In addition to depleting the bottom line, it can ruin consumer trust. Breached SMEs also risk compliance ramifications, especially if the breach affected consumers and other third parties. 

3. Access to Resources

Cyberattacks can also be aimed at leveraging the company’s resources and relationships. For example, cybercriminals may target your business as part of a larger DDoS attack, to steal customers’ personally identifiable information (PII) for financial fraud, or just to hijack your computer resources for crypto mining.

4. Testing Tactics

Software engineers aren’t the only ones who run tests! Cybercriminals sometimes experiment with new tactics and attack vectors on smaller businesses before targeting the big fish in the pond. 

SMEs are an easy target in such cases because the criminals expect their defenses to be weak. Don’t allow your organization to be someone’s stepping stone to a more high-impact target.

5. Becoming a Casualty in a Supply Chain Attack

Finally, SMEs are sometimes victims of circumstances. An attack may target a large vendor’s asset and infect the entire supply chain, spreading out to customers, other third parties, and even SMEs that interact with the compromised assets or parties. 

These unintentional attacks may still end up crippling businesses. There are many other reasons why SMEs make easy targets for criminals. But the bottom line is that SMEs’ resource limitations can make them attractive and impactful targets to cybercriminals. 

Read Combining Business Priorities and Security: Choose Your Own Adventure.

5 Simple Security Measures for SMEs


Whether you’re the target of an intentional attack or a victim of an unintentional attack, the implications of a security breach can be dire. 

It’s better to take a proactive approach to cybersecurity than deal with potential financial, legal, and reputational challenges down the line. Below are five simple measures that can help you to improve your business’s cybersecurity even on a budget: 

1. Implement Multi-Factor Authentication

Compromised credentials such as passwords are involved in 61% of data breaches. Implementing multi-factor authentication can help reduce these breaches.

Multi-factor authentication (MFA) is a security method for protecting access to online resources by using multiple (often two) factors to verify a user’s identity. MFA requires an additional proof of identity besides a password. This can be a security key, biometric data, a one-time passcode (OTP) via email or SMS, or a push notification from a supported smartphone or tablet app. 

Implementing MFA has many benefits, including securing your resources even if your passwords have been compromised. 
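
As a concrete illustration of the OTP factor described above, here is a minimal Python sketch using the pyotp library. It shows how a time-based one-time passcode is provisioned and verified; it is a teaching example, not a production MFA implementation, and the account names are made up.

```python
#!/usr/bin/env python3
"""Minimal TOTP (time-based one-time passcode) sketch using pyotp
(pip install pyotp). Illustration only."""
import pyotp

# Generated once per user at enrollment and stored server-side; the same
# secret is loaded into the user's authenticator app, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="Example SME"))

# At login, the user submits the six-digit code currently shown in their app.
submitted_code = totp.now()  # stand-in for user input
print("Second factor accepted:", totp.verify(submitted_code))
```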

Read How Effective Is Multi-Factor Authentication.

2. Stay on Top of Patch Management

Antivirus software is great at stopping known malware threats. But admins must keep systems up to date in order for them to work properly. This is why it’s important to stay on top of patch management. Your computers, servers, and operating systems should always be patched. 

System patch management is critical because patches often fix bugs and address security vulnerabilities in operating systems. For the modern business with distributed workforces and a variety of work devices and operating systems, manual patching can be a headache. Consider cloud patch management solutions within unified toolkits like the JumpCloud Directory Platform. 
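
For a sense of what patch management automates, here is a small Python sketch that performs a manual patch audit on a single Debian or Ubuntu host by listing packages with pending updates. It assumes the apt tool is present; a cloud patch management solution runs this kind of check continuously across an entire fleet.

```python
#!/usr/bin/env python3
"""List packages with pending updates on a Debian/Ubuntu host via apt."""
import subprocess


def pending_updates() -> list:
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line of output is a header ("Listing..."); the rest are packages.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]


if __name__ == "__main__":
    updates = pending_updates()
    print(f"{len(updates)} package(s) have pending updates")
    for line in updates:
        print(" ", line)
```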

Here’s how JumpCloud cloud patch management works for Mac and Windows systems. 

3. Use Firewalls

A firewall is a security system that filters network traffic and prevents unauthorized access to your network. Besides blocking unwanted traffic, firewalls also protect your systems from malicious software infections and prevent unauthorized access to sensitive company data. They are an invaluable tool in web traffic management.

With a dependable firewall in place, only trusted sources and IP addresses can access your systems. Firewalls differ based on their structure, functionality, and traffic filtering methods, ranging from simple packet-filtering firewalls to stateful inspection, proxy, and next-generation firewalls.

Firewalls are crucial components of any perimeter-based cybersecurity. For your network and devices to be protected, you need to properly set up and maintain your firewall. Always ensure your firewalls are up to date.
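
As a hedged example of the “properly set up” part, the sketch below applies a simple default-deny baseline with ufw on an Ubuntu host from Python. The allowed ports are illustrative assumptions only; review them for your environment and run the script with root privileges.

```python
#!/usr/bin/env python3
"""Apply an illustrative default-deny host firewall baseline using ufw.
Adjust the allowed ports before using; requires root privileges."""
import subprocess

BASELINE = [
    ["ufw", "default", "deny", "incoming"],   # drop unsolicited inbound traffic
    ["ufw", "default", "allow", "outgoing"],  # let the host reach the internet
    ["ufw", "allow", "22/tcp"],               # keep SSH reachable for admins
    ["ufw", "allow", "443/tcp"],              # allow HTTPS to a local web service
    ["ufw", "--force", "enable"],             # enable without the interactive prompt
]

for command in BASELINE:
    subprocess.run(command, check=True)
print("Baseline firewall rules applied.")
```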

4. Enforce Strong Password Policies 

All your cybersecurity efforts can go to waste if you have ineffective password policies. Besides emphasizing strong passwords that are difficult to crack, you should also encourage your employees to change their passwords regularly and not share them with other people. Implement multi-factor authentication as discussed above.
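
The sketch below shows the kind of checks a password policy might enforce at reset time: minimum length, character variety, and no reuse of previous passwords. The thresholds are illustrative assumptions, not a formal standard.

```python
#!/usr/bin/env python3
"""Illustrative password policy check: length, character variety, and no reuse."""
import string


def meets_policy(password: str, previous_passwords: set) -> tuple:
    if password in previous_passwords:
        return False, "password was used before"
    if len(password) < 12:
        return False, "use at least 12 characters"
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    if sum(classes) < 3:
        return False, "mix at least three character classes"
    return True, "ok"


print(meets_policy("Winter2022!", {"Summer2021!"}))    # fails: too short
print(meets_policy("horse-Battery-staple-42", set()))  # passes
```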

Read Best Practices for IT Security Passwords. 

5. Implement the Principle of Least Privilege

People within your organization can pose significant security risks too. Insider threats happen when people with access and privileges abuse them. This is why it’s crucial to carefully consider who needs access to what.

Implementing the principle of least privilege will protect your resources from insider threats. Additionally, it makes compliance easier to monitor and makes it easier for your employees to access the resources they need instead of having to sift through everything.
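
Here is a toy illustration of least privilege in code: each role is granted only the resources it needs, and every request is checked against that grant with a default-deny fallback. The roles and resources are made-up examples.

```python
#!/usr/bin/env python3
"""Toy least-privilege check: roles map to the minimal set of resources they need."""

ROLE_GRANTS = {
    "engineer": {"source-repo", "ci-pipeline"},
    "finance": {"billing-system"},
    "support": {"ticketing-system", "customer-directory"},
}


def can_access(role: str, resource: str) -> bool:
    # Default-deny: anything not explicitly granted is refused.
    return resource in ROLE_GRANTS.get(role, set())


print(can_access("engineer", "source-repo"))     # True
print(can_access("engineer", "billing-system"))  # False: not needed for the job
```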

Read Your Guide to Privileged Access Management. 

Simplify Security With JumpCloud

For SMEs with lean budgets, cybersecurity can feel unattainable. But you can’t afford to skip security altogether. 

The five simple, cost-effective actions outlined above can significantly improve cybersecurity without breaking the bank. There are also affordable tools such as JumpCloud, with a la carte options, that can help SMEs streamline security efforts in a centralized platform. 

Simplify your security with JumpCloud.


CISA BOD 23-01: Why vulnerability scanners miss the mark on asset inventory

On October 3, 2022, the Cybersecurity and Infrastructure Security Agency (CISA) issued Binding Operational Directive (BOD) 23-01: Improving Asset Visibility and Vulnerability Detection on Federal Networks. The directive requires that federal civilian executive branch (FCEB) departments and agencies perform automated discovery every 7 days and identify and report potential vulnerabilities every 14 days. Additionally, it requires the ability to initiate on-demand asset discovery to identify specific assets or subsets of vulnerabilities within 72 hours of receiving a request from CISA.

To meet these requirements, agencies will need to start with an accurate asset inventory. Most agencies will attempt to leverage existing solutions, like their vulnerability scanners, to build their asset inventories. It seems reasonable to do so, since most vulnerability scanners have built-in discovery capabilities and can build asset inventories. However, they will quickly learn that vulnerability scanners are not up to the task and cannot help them sufficiently and effectively meet the requirements laid out by CISA.

Let’s take a look at why agencies need a solution solely focused on asset inventory, in addition to their vulnerability scanner, if they want to tackle CISA BOD 23-01.

Asset inventory is a foundational building block

Every effective security and IT program starts with a solid asset inventory. CISA BOD 23-01 reinforces that imperative. Specifically, it states, “Asset discovery is a building block of operational visibility, and it is defined as an activity through which an organization identifies what network addressable IP-assets reside on their networks and identifies the associated IP addresses (hosts). Asset discovery is non-intrusive and usually does not require special logical access privileges.”

What does this mean? FCEB agencies looking to meet the requirements outlined by CISA BOD 23-01 must be able to discover managed and unmanaged devices connected to their networks. Internal and external internet-facing assets must be cataloged with full details and context. All within the timeframe outlined by CISA.

So now, the question is why vulnerability scanners can’t be used to meet the requirements laid out in the directive.

The challenges of asset inventory with vulnerability scanners

As the number of devices connecting to networks continues to grow exponentially, agencies need to stay on top of these devices; otherwise, they could provide potential footholds for attackers to exploit. However, common issues like shadow IT, rogue access, and oversight continue to make it difficult to keep up with unmanaged devices. BOD 23-01 highlights the importance of identifying unmanaged assets on the network. That’s why the need for a fully comprehensive asset inventory is the key to adequately addressing the directive.

So, why can’t vulnerability scanners deliver on asset inventory? Most vulnerability scanners combine discovery and assessment together, resulting in slower discovery times, delayed response to vulnerabilities, and limited asset details. As a result, most agencies are left wondering how they can do a better job building their asset inventories.

Combining discovery and assessment slows everything down

Vulnerability scanners typically combine asset discovery and assessment into one step. While on the surface this appears to be efficient, it is actually quite the opposite. In regard to asset discovery, CISA BOD 23-01 specifically requires that FCEB agencies perform automated discovery every 7 days and initiate on-demand discovery of specific assets or subsets of vulnerabilities within 72 hours of receiving a request from CISA.

Because vulnerability scanners leverage a lot of time-consuming checks, they’re not able to scan networks quickly enough. Add in the complexity of highly-segmented networks and maintenance windows, and it is nearly impossible to effectively utilize vulnerability scanners for discovery and meet the timing requirements outlined by CISA.

Under the new directive, assessing the potential impact of vulnerabilities becomes even more urgent. Agencies will need to perform on-demand discovery of assets that could be potentially impacted within 72 hours, if requested by CISA. When security news breaks, agencies need to respond as quickly as possible, but vulnerability scanners slow down the process. In a scenario like this, it would be more efficient to have a current asset inventory that agencies can search, without rescanning the network. This is particularly useful when agencies know there are specific assets they need to track down: they can query their existing asset inventory to identify them immediately.

For example, let’s say a new vulnerability is disclosed. Vendors will need some time to develop the vuln checks, and agencies will need to wait for the vuln checks to become available. Once they’ve been published, agencies can finally start rescanning their networks. Imagine waiting for the vuln check to be released, and then delaying the rescan due to scan windows. Without immediate insight into the potential impact of a vulnerability, agencies are playing the waiting game, instead of proactively being able to assess the risk.

How agencies can speed up discovery

So, what can agencies do? Let vulnerability scanners do what they do best: identify and report on vulnerabilities. Complement them with a dedicated solution that can automate and perform the discovery of assets within the timeframe set by the directive. In order to accomplish this, the asset inventory solution must be able to quickly and safely scan networks without a ton of overhead, be easy to deploy, and help security teams get ahead of new vulnerabilities.

Agencies need to have access to their full asset inventory, on-demand, so they can quickly zero in on any asset based on specific attributes. This information is invaluable for tracking down assets and investigating them, particularly when new zero-day vulnerabilities are uncovered. When the new zero-day is announced, agencies can find affected systems by searching across an existing asset inventory–without rescanning the network.

Meet CISA BOD 23-01 requirements with a dedicated asset inventory solution

It is increasingly evident that decoupling discovery and assessment is the most effective way to ensure that agencies have the data needed to accelerate vulnerability response and meet the requirements outlined in the directive. Because let’s face it: vulnerability scanners are really good at vulnerability enumeration–that’s what they’re designed to do. However, they really miss the mark when it comes to discovering assets and building comprehensive asset inventories. Because vulnerability scanners combine discovery and assessment, they aren’t able to scan entire networks quickly, and at times, they don’t fingerprint devices accurately.

As a result, many agencies are wondering how to meet the requirements outlined in CISA BOD 23-01 if they can’t depend on their vulnerability scanner for discovery. Agencies will need to start looking for a standalone asset inventory solution that is capable of performing unauthenticated, active discovery, while also enriching data from existing vulnerability management solutions.

How runZero can help agencies focus on asset discovery

runZero separates the discovery process from the vulnerability assessment stage, allowing agencies to perform discovery on-demand. Because runZero only performs discovery, it can deliver the data about assets and networks much faster than a vulnerability scanner. Customers have found that runZero performs scans about 10x faster than their vulnerability scanner, allowing them to:

  • Get a more immediate day one response to new vulnerabilities.
  • Gather as much information as possible about assets while waiting for vulnerability scan results.

That means, while waiting for vulnerability assessments to complete, agencies can already start digging into their asset inventory and identifying assets that may be impacted by a vulnerability. runZero regularly adds canned queries for assets impacted by newly disclosed vulnerabilities and highlights them via Rapid Response. Users can take advantage of these canned queries to instantly identify existing assets in the inventory that match specific identifiable attributes. For example, querying by hardware and device type can narrow down assets to a specific subset that may be affected by a vulnerability. All of the canned queries can be found in the Queries Library.
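
For illustration, here is a rough Python sketch of that idea: filtering a previously exported asset inventory by attributes such as hardware and OS rather than launching a new scan. The field names and records are made up for the example and do not reflect runZero's actual schema or query syntax.

```python
#!/usr/bin/env python3
"""Query an already-collected asset inventory by attribute instead of rescanning."""

inventory = [
    {"address": "10.0.1.5", "hardware": "APC UPS", "os": "AOS", "type": "UPS"},
    {"address": "10.0.2.17", "hardware": "Dell PowerEdge", "os": "Ubuntu 20.04", "type": "Server"},
    {"address": "10.0.3.9", "hardware": "Cisco Catalyst", "os": "IOS 15.2", "type": "Switch"},
]


def query(assets, **criteria):
    """Return assets whose fields contain every given substring (case-insensitive)."""
    def matches(asset):
        return all(value.lower() in str(asset.get(field, "")).lower()
                   for field, value in criteria.items())
    return [asset for asset in assets if matches(asset)]


# Example: a vulnerability drops for a particular switch line; find potentially
# impacted assets without launching a new scan.
for asset in query(inventory, hardware="catalyst"):
    print(asset["address"], asset["hardware"], asset["os"])
```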

All in all, runZero is the only asset inventory solution that can truly help FCEB agencies stay on top of their ever-changing networks. By decoupling asset discovery from vulnerability assessment, agencies will gain visibility and efficiencies, while meeting the requirements set by CISA BOD 23-01. Learn more about runZero.


About runZero
runZero, a network discovery and asset inventory solution, was founded in 2018 by HD Moore, the creator of Metasploit. HD envisioned a modern active discovery solution that could find and identify everything on a network–without credentials. As a security researcher and penetration tester, he often employed benign ways to get information leaks and piece them together to build device profiles. Eventually, this work led him to leverage applied research and the discovery techniques developed for security and penetration testing to create runZero.

Backup Strategy and the 3-2-1 Principle

Data loss comes in all sizes: small (individual files), medium (a SharePoint site), and large (ransomware and disaster recovery). No matter the size of the loss, none of it is fun, and even the smallest data loss event could leave you lacking your most critical data. That one spreadsheet or that one hard disk drive could hold what you and your business rely on most, and it’s not always something someone can “just create again” on a whim; data loss is indiscriminate in its impact. Every data loss event negatively impacts workflow, and every one is a risk and data protection concern that is ultimately a business imperative.

Proactive data protection through backup and data management is at the forefront of all of our minds, or at least should be. Why is that? Years ago, the assumption prevailed that once you signed up for a cloud service, the provider would “take care of everything,” with backup lumped in. But now, more than ever, as awareness grows of shared responsibility models for SaaS applications (which make clear that it is the user who is responsible for their data), it’s clear the onus is on you to have that backup strategy in place.

That’s why the 3-2-1 backup rule, a principle established for on-premises infrastructure that requires multiple copies of backup data on different devices and in separate locations, is still relevant to today’s cloud-based infrastructures, providing essential data protection guidelines.

Why Back Up Cloud SaaS Data, and Why Now?

Your data is critical to your business operations, and in many cases, maintaining control of and access to it is required by law. (Read more about how third-party security keeps companies in control of their data here.)

SaaS Shared Responsibility Model

Software-as-a-service providers have established documentation that clarifies the areas of responsibility they hold and those that are retained by the customer. Microsoft, well known for its Microsoft 365 SaaS offering, delineates the boundaries of shared responsibility in the cloud. While Microsoft does provide some degree of data protection, many people are not aware of the limitations of this protection. The short of it is that Microsoft does not provide suitable backup and restore functionality to customers. Learn more about why your M365 is not backed up (and how to fix it) in our in-depth article here.
And it’s not only Microsoft that has a shared responsibility model for its SaaS services. Google (this applies to backing up files to Google Drive as well) has what it refers to, almost ominously, as “shared fate” in its Google Cloud shared responsibilities documentation. Likewise, Amazon Web Services (AWS) has its own shared responsibility model. It’s vital that customers know and understand the extent of their agreement.

Risks to Data Security

In the days of on-premises backup, the only credible risks were acts of mother nature and hardware failure. That is, of course, if you ignore software issues: problems anywhere from the firmware on RAID adapters to drivers, operating system filesystem implementations, and user applications could cause data loss and a need for restore, from the system level down to the file level. (That’s one thing I don’t miss about the ’90s.) However, in the cloud-computing era, the risks have evolved as much as the ways in which we create, share, and store data, so things are much more complicated now. With the prevalence and penetration of ransomware and cybercrime, not to mention the increased access users have in order to streamline collaboration and boost productivity, data, the lifeblood of a company, has in many ways never been more susceptible to loss, regardless of whether the cause is intentional (malicious actors, ransomware, etc.) or unintentional (human error, accidental deletion). Sometimes going back to basics can be the place to start in developing or hardening security.

3-2-1 Backup Method

The 3-2-1 principle comes from the days of on-premises data storage. It is still commonly referenced today in the modern cloud-computing era. Even though it isn’t directly applicable, word for word, to cloud data, this well-known and widely used principle can still guide security decision makers as they improve their security infrastructure against today’s data risks.
Roughly speaking, the 3-2-1 backup rule requires 3 copies of data, across two types of storage media, with one copy stored off-site.

What Is the Origin of the 3-2-1 Rule?

Backup and recovery solutions have existed since long before cloud computing. However, the methodologies have shifted due to the modernization of infrastructures, behaviors, needs, and of course many more variables (which we won’t get into here), resulting in some discrepancies between best-practice principles and their application to modern data infrastructures. This is also the case with the 3-2-1 backup rule, with the biggest change being the shift in how (or rather where) data is created and stored. Formerly, production data was created on site and stored on on-premises hardware alongside one backup copy, with the third copy stored off premises, typically on tapes. ComputerWeekly has a feature on whether the cloud has made 3-2-1 obsolete. In the cloud era, data is created in numerous places by remote workers in SaaS applications, is often transferred around the globe, and is stored “somewhere else” from a business’s physical office. More than likely, the extent of an answer to the question of “where is your data stored” is that it’s in the cloud. But is that backup? And what is true backup in the cloud?

How Does the Rule Apply to Cloud Backup?

We often see iterations of this backup principle in fancy infographics that almost forget to translate the rules to today’s scenarios. With a few tweaks, however, the principle still offers plenty of relevant guidance toward a successful, modern data security setup.
Let’s look at the rules through a modern lens:

3 Copies of Your Data

The ‘3’ in the rule refers to the number of copies of your data: one is the primary dataset in the production environment, and the remaining two are backups. This still applies to modern data protection best practices.

2 Administrative Domains

As mentioned, the ‘2’ can be understood as two administrative domains, meaning the copies are managed independently of one another and stored in separate logical environments. You often see this written as “two types of media,” a relic of the on-premises past when that meant disks and tapes. Now it’s about having copies across two administrative domains so that a single data-loss event cannot, or is at least extremely unlikely to, impact all copies of the data. This is known as a logical gap. When working “in” the cloud, the building you sit in isn’t of any real consequence to the data; what matters is the cloud you are working in and storing data in. In many regards this step could envelop the step below, “1 copy external,” but for the sake of the principle it serves us to keep it a separate consideration. Should there be a cloud-wide compromise (such as a breach) or a data loss event in the cloud where your primary data lives, following the rule means your data is still available to you. Without it, you’ve lost access to your data (or even lost it permanently), with massive potential for business disruption and cost. One of the best-known examples is the Danish shipping giant Maersk and the infamous NotPetya cyberattack, dubbed “the most devastating cyberattack in history” in the full Wired story here.

1 Copy External

Formerly the ‘1 off-site storage copy,’ this still applies for the same reasons it did in the past: You don’t want to store all of your data in the exact same location, and whether everyone is aware of it or not, the cloud lives in physical data centers. In the on-premises days, this meant literally keeping a copy of disks and/or tapes in a different location from your business, in case someone, something, or some event destroyed the building. Let’s call this the “in case of fire” step. In cloud computing, it means having a backup copy outside the cloud of the production environment and outside the administrative domain of the other backup. Remember, the cloud is ‘just’ physical data centers, so when you work in the cloud, the data centers storing your data matter a great deal. What if the data center of the cloud you work in is the same data center holding your backup data? Should there be a data loss event at that center, all of your data would be at risk from that single event. That’s bad.

Use Case: What would this look like in real life?

If, for example, you are working on a Microsoft Word document and you save it to OneDrive with OneDrive Backup turned on, you’re totally protected, because it says “backup,” right? This is an example where the 3-2-1 principle still sheds light on modern data protection in the cloud. Following the rule above, one can deduce that this isn’t backup (and neither is a lot of what SaaS providers offer as ‘backup’), because true backup requires a logical infrastructure separate from the primary data. As the “in case of fire” step requires, you must have one copy outside of the administrative domain. By working in and backing up OneDrive data to Microsoft’s cloud services, the data remains in the same administrative domain. What if something were to happen to Microsoft’s servers? You’d lose access to your primary data and the copies “backed up,” since they all relied on the same cloud. What’s even worse is that since the backup is configured by “you” (i.e., the admin), a compromise of your account can unconfigure it, too. So a simple case of ransomware could completely and automatically disable or work around such in-service protections, even leading to immediate deletion of backup data. Keepit, on the other hand, aside from being separate (and therefore unlikely to be compromised at the same time by the same mechanism) as a dedicated backup solution, will actually prevent even the administrator from quickly or immediately deleting backup data. In this respect, Keepit offers some of the most desirable features of “the tape in an off-site vault” in a modern cloud service solution.
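To make the principle concrete, here’s a minimal sketch (in Python, with purely illustrative copy names and domains, not any vendor’s real API) that checks a set of data copies against the modernized 3-2-1 rule: at least three copies, at least two administrative domains, and at least one copy outside the production provider’s domain. It mirrors the OneDrive example above.

```python
# Minimal sketch: check a set of data copies against the modernized 3-2-1 rule.
# The copy list mirrors the OneDrive example above and is purely illustrative.
from dataclasses import dataclass

@dataclass
class Copy:
    name: str
    admin_domain: str   # who administers/controls this copy
    is_primary: bool = False

def satisfies_321(copies):
    primary_domains = {c.admin_domain for c in copies if c.is_primary}
    domains = {c.admin_domain for c in copies}
    return (
        len(copies) >= 3                      # 3 copies of your data
        and len(domains) >= 2                 # 2 administrative domains
        and bool(domains - primary_domains)   # 1 copy outside the production domain
    )

# OneDrive plus "OneDrive Backup": everything lives in one administrative domain.
same_cloud = [
    Copy("OneDrive (production)", "microsoft", is_primary=True),
    Copy("OneDrive Backup", "microsoft"),
]
# Add an independent backup cloud (e.g., a dedicated backup vendor).
with_independent = same_cloud + [Copy("Independent backup", "independent-vendor")]

print(satisfies_321(same_cloud))        # False: no logical gap
print(satisfies_321(with_independent))  # True
```

The point of the sketch is simply that the rule fails whenever every copy sits under the same administrative domain, no matter what the copies are called.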

[Infographic: how to use the 3-2-1 backup rule to ensure you’re covered with an independent cloud]

If you’re interested in further reading, check out our e-Guide on SaaS data security for a thorough look at leading SaaS data security methodologies and how companies can raise the bar for their data protection in the cloud era. Convinced you need backup but want to know more about data protection and management for your particular SaaS application? Then explore how Keepit offers cloud data backup coverage for the main SaaS applications here.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit’s mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on more than 20 years of experience in building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

Cyber Kill Chain

Intro

This is an important concept, and I want to give you a quick overview of what kill chains are, what threat modelling is, why we do these things, and why we need them. This understanding is crucial to creating a stable and strong security posture.

One other thing to note is that all these frameworks are generally made to complement other frameworks. For example, the UKC (Unified Kill Chain) is designed to complement MITRE ATT&CK.

Cyber Kill Chain – what is it?

The term is originally a military concept describing the structure of an attack. Lockheed Martin (a security and aerospace company) established the Cyber Kill Chain in 2011, based on that military concept. The idea of the framework is to define the steps adversaries take when attacking your organization. In theory, to be successful, an adversary has to pass through all the phases of the Kill Chain.

Our goal here is to understand what the Kill Chain means from an attacker’s perspective, so that we can put our defences in place and either pre-emptively mitigate attacks or disrupt them in progress.

Why do we need to understand the (Cyber) Kill Chain?

Understanding the Cyber Kill Chain can help you protect against a myriad of attacks, ransomware for example. It can also help you understand how APTs operate. Through this understanding, as a SOC analyst or incident responder, you can infer an attacker’s goals and objectives by comparing their activity to the Kill Chain (see the small mapping sketch after the phase list below). It can also be used to find gaps and remediate missing controls.

The attack phases within the Cyber Kill Chain are:

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command & Control
  • Actions on Objectives (Exfiltration)
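
As a rough illustration of how a defender might use these phases, here’s a minimal sketch that maps hypothetical SIEM alert categories to Kill Chain phases, so you can see at a glance which phases your current detections cover. The alert names and the mapping are assumptions for illustration, not an official taxonomy.

```python
# Minimal sketch: map alert categories from a SIEM to Cyber Kill Chain phases
# so coverage gaps become visible. Alert names and mapping are illustrative.
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command & Control", "Actions on Objectives",
]

ALERT_TO_PHASE = {
    "port_scan_detected": "Reconnaissance",
    "phishing_email_reported": "Delivery",
    "macro_document_blocked": "Exploitation",
    "new_service_installed": "Installation",
    "dns_beaconing_suspected": "Command & Control",
    "large_outbound_transfer": "Actions on Objectives",
}

def phase_coverage(alerts):
    """Return which Kill Chain phases the given alerts touch, in chain order."""
    seen = {ALERT_TO_PHASE[a] for a in alerts if a in ALERT_TO_PHASE}
    return [phase for phase in KILL_CHAIN if phase in seen]

if __name__ == "__main__":
    print(phase_coverage(["phishing_email_reported", "dns_beaconing_suspected"]))
    # ['Delivery', 'Command & Control']
```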

Reconnaissance

As we all know, this means searching for and collecting information about the target system(s). In this phase, our adversaries are doing their planning. This is also where OSINT comes in; it’s usually the first step an adversary takes before going further down the chain. They will try to collect any possible piece of info on our organization: employees, emails, phone numbers, you name it.

This can be done, for example, through email harvesting: the process of collecting email addresses from online (public, paid, or free) services. These addresses can later be used for a phishing campaign, or anything else.

Within the recon step, they might also collect the social media profiles of the organization’s employees, especially if a particular employee seems of interest or looks like an easier target. All this information goes into the mix and is nothing new. First step: recon.

Weaponization

After the initial recon phase, our adversary goes on to create their weapons! Usually, this entails some sort of malware/exploit combination bundled into a payload. Some adversaries will simply buy malware on dark web marketplaces, but more sophisticated attackers, as well as APTs, will usually write their own. This is advantageous for them, as custom malware is more likely to evade your detection systems.

They can go about this in numerous ways. Some examples include creating a malicious MS Office document with bad macros or VBA scripts, or building in Command & Control logic so that an affected machine calls back to the command server for more malicious payloads. (Yikes!) They could also add a backdoor, some other type of malware, or anything really.
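
As one small, hedged example of a defensive control against this kind of weaponized document, here’s a minimal sketch that flags attachment names whose extensions suggest macro-enabled Office files. The extension list and example attachments are assumptions, and real mail filtering inspects far more than file names.

```python
# Minimal sketch: flag email attachments whose extensions suggest macro-enabled
# Office documents, a common weaponization format. Purely illustrative.
from pathlib import PurePosixPath

MACRO_ENABLED_EXTENSIONS = {".docm", ".xlsm", ".pptm", ".dotm", ".xlam"}

def flag_macro_attachments(attachment_names):
    """Return attachment names that look like macro-enabled Office files."""
    return [
        name for name in attachment_names
        if PurePosixPath(name.lower()).suffix in MACRO_ENABLED_EXTENSIONS
    ]

if __name__ == "__main__":
    print(flag_macro_attachments(["invoice.docm", "report.pdf", "Q3-figures.xlsm"]))
    # ['invoice.docm', 'Q3-figures.xlsm']
```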

Delivery

This step entails the attacker choosing a way to deliver the payload/malware to their victim. There are many options here, but in general the most used one is the good old phishing email.

With a phishing email sent after a successful reconnaissance phase, the attacker can target a specific person (spearphishing) or a group of employees at your organization. Embedded within the email is the payload.

Other ways to distribute the payload include planting infected USB drives in public places, like parking lots or streets. Or the attackers could use a so-called watering hole attack, which targets a specific group of people by compromising a website they regularly use and redirecting them from it to an attacker-controlled site.

The attacker exploits that website and then tries to get unsuspecting users to browse to it, where the victims unintentionally download the malware/payload.

Exploitation

Before finally getting access to our systems, the attackers need to carry out an actual exploit. Assuming the previous steps worked and the user downloaded or somehow ran the malicious payload, the attacker is ready for the next steps, or whatever lies in between: they can try to move laterally, get to your servers, escalate privileges; anything goes.

This step boils down to one of the following:

  • The victim opening the malicious file, thus triggering the exploitation
  • The adversary exploiting our systems through a server, or some other way
  • The use of a zero-day exploit

Whatever the vector, it comes down to them exploiting our systems and gaining access.

Installation

This step comes after exploitation, and it usually involves the adversary trying to maintain a connection of sorts to our systems. This can be achieved in many ways: they might install a backdoor on the compromised machine, modify our services, install a web shell on the web server, or do anything else that helps them achieve persistence. Persistence is the key to the Installation phase.

The persistent backdoor is what lets the attacker access and interact with the systems that were already compromised.

In this phase, they might also try to cover their tracks from your blue team by making the malware look as if it were a legitimate app or program.
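
For illustration, here’s a minimal sketch of one way a defender might look for simple persistence, assuming a Windows endpoint: it lists autorun entries under the current user’s Run registry key and flags anything not on a small allowlist. The allowlist entries are hypothetical, and real persistence hunting covers many more locations (services, scheduled tasks, web shells, and so on).

```python
# Minimal sketch: list autorun entries under the current user's Run key and
# flag anything not on a simple allowlist. Windows-only (standard-library
# winreg); the allowlist below is purely illustrative.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
ALLOWLIST = {"OneDrive", "SecurityHealth"}  # hypothetical known-good entries

def suspicious_autoruns():
    findings = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        _, value_count, _ = winreg.QueryInfoKey(key)
        for i in range(value_count):
            name, command, _ = winreg.EnumValue(key, i)
            if name not in ALLOWLIST:
                findings.append((name, command))
    return findings

if __name__ == "__main__":
    for name, command in suspicious_autoruns():
        print(f"Unexpected autorun entry: {name} -> {command}")
```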

Command & Control

Command & Control, also known as C2, C2 beaconing, or C&C, is the penultimate part; this is where the adversary uses the previously installed malware on the victim’s device to control it remotely. This capability is usually built into the malware itself, with some sort of logic through which it calls back home to its control server.

As the infected device calls back to the C2 server, the adversary now has full control over the compromised device. Remotely!

The most used C2 channels these days are:

  • HTTP on port 80 and HTTPS on port 443 – HTTPS is interesting because the malicious traffic can hide within the encrypted stream and potentially evade firewalls
  • DNS – the infected host makes constant DNS queries, calling out to its C2 server (a small detection sketch for this case follows below)

An interesting fact: in the past, adversaries used IRC to send C2 traffic (beaconing), but nowadays it has become obsolete, as this type of malicious traffic is much more easily detected by modern security solutions.
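
To make the DNS channel above a bit more concrete, here’s a minimal detection sketch that counts how often each source host queries each domain in a DNS log and flags unusually chatty pairs as possible beaconing. The log format (a CSV with src_ip and query columns) and the threshold are assumptions for illustration.

```python
# Minimal sketch: flag possible DNS beaconing by counting how often each
# source host queries each domain in a DNS log. Log format and threshold
# are assumptions made for illustration.
import csv
from collections import Counter

THRESHOLD = 500  # queries per host/domain pair; tune for your environment

def possible_beacons(dns_log_path: str):
    counts = Counter()
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            # Collapse e.g. "abc123.c2-domain.example" to its last two labels
            domain = ".".join(row["query"].rstrip(".").split(".")[-2:])
            counts[(row["src_ip"], domain)] += 1
    return [(host, dom, n) for (host, dom), n in counts.items() if n >= THRESHOLD]

if __name__ == "__main__":
    for host, domain, n in possible_beacons("dns.csv"):
        print(f"{host} queried {domain} {n} times; investigate for C2 beaconing")
```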

Actions on Objectives (Exfiltration)

This is your exfiltration (or exfil) step, where the adversary goes after all the goodies: harvesting user credentials, messing with backups and/or shadow copies, and corrupting, deleting, overwriting, or exfiltrating data. They can also escalate to domain admin, for the keys to the kingdom, or move laterally through the organization. They could also hunt for vulnerabilities in your software internally, and much more.

This step depends on their specific goals and objectives, and it is where all the action happens, hence “actions on objectives.”

Conclusion

I hope I’ve managed to give a brief overview of this incredibly important concept. I will cover it a bit more in the future, but for now, I felt like this was a good (traditional) start. I do hope to cover the Unified Kill Chain soon, though. So – stay tuned!

PS: You’ll notice there are a couple of other frameworks and variations aside from the Cyber Kill Chain, and I will try to explain the distinctions. Just remember that these are models/methodologies and there’s no silver bullet; they should be used in conjunction with other security controls.

Image source – https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html

Cover image by Linus Sandvide

#kill-chain #cyber #C2 #threat-modelling


About Topia
TOPIA is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Zero Trust Guidance Rewrites US Cyber Strategy

“Our adversaries are in our networks, exfiltrating our data, and exploiting the Department’s users.”

So reads the humbling introduction to zero trust guidance recently released by the Department of Defense (DoD). It acknowledges in the very first line that cybersecurity has failed on almost every front. Then it makes a complete commitment to zero trust as the solution.

Many were waiting on this guidance and wondering what, exactly, it would entail. It comes following an order from the Biden administration 18 months ago to strengthen America’s cybersecurity in a big way. Many changes and long-overdue improvements have come out of that order. But by far the most significant is a commitment on the part of all federal agencies to adopt a complete zero-trust posture by 2027.

We now have a road map for how the government plans to get there. I will cover that shortly. Before that, let me highlight a few reasons I think the latest guidance (and the strategy that prompted it) are worth paying attention to.

First, that strategy will form the backbone of U.S. cybersecurity, which in turn will play a critical role in, or may even be the cornerstone of, continued national security. Cyber attacks will be the most accessible, most common, and most devastating kinds of attacks in the future, so how countries defend themselves against this massive risk really matters. I have been writing about national cyber defenses from a few angles recently. What makes the US approach unique, from my perspective, is the insistence on not just applying a cyber strategy consistently across all agencies but focusing it so specifically on zero trust. Some will call it practical, even mandatory, to make zero trust the guiding principle of cybersecurity in a decentralized world. Others, however, might view it as putting too many eggs in one basket. Time will tell.

Which brings me to my second observation: the US government is embarking on the biggest experiment in zero trust ever undertaken. Keep in mind that the phrase “zero trust” has barely existed for more than a decade, and few large-scale, trust-free environments are actually up and running. Even as zero trust adoption spreads across the private sector, the government is by far the biggest trailblazer on this front, and the road ahead will be illustrative for all. What will it take to eliminate trust from the whole of the federal government? And once 2027 arrives, how secure will the government really be? This test case could cement zero trust as the centerpiece of cybersecurity moving forward – or it could reveal zero trust to be just the latest flawed fad. I suspect the answer will land somewhere in the middle. But unpredictability is the dominant feature of cybersecurity, so who knows what will happen? It will be important no matter what.

The Next Five Years in Zero Trust

A 2027 deadline to standardize zero trust across all federal agencies leaves a lot of work to finish in a short five years. To its credit, the DoD seems fully aware of that fact, because the roadmap is systematic and comprehensive to an extreme degree. Since there are so many different agencies with so many levels of cyber maturity (along with existing zero trust deployments), the guidance aims at being accessible and universal, and largely succeeds. That’s a bonus for the private sector, because companies can easily adopt the government’s zero trust strategy as their own.

The roadmap has four distinct goals:

  • Zero Trust Cultural Adoption – Everyone in the DoD understands and commits to zero trust principles (trust nothing, verify everything, encrypt automatically, segment risks, etc.).

  • DoD Information Systems Secured & Defended – All new and legacy systems follow the DoD zero trust framework and put prescribed capabilities in place. Further guidance on this is forthcoming.

  • Technology Acceleration – The DoD and its vendors get faster at scaling, innovating on, or replacing technologies as new threats and new tools emerge in the coming years.

  • Zero Trust Enablement – The zero trust framework has the resources and support it needs to remain a robust and consistent effort.


Each of the goals has multiple objectives considered imperative for achieving the desired outcome. Overall, the DoD identifies 45 capabilities and 152 total activities required for framework compliance. I would encourage anyone to peruse the framework – it’s heavy on jargon but also a valuable visualization of how the disparate components of zero trust fit together to form a cohesive security strategy. It’s not just MFA and encryption (though the framework calls for both of those things). Perhaps more important to realize, it’s not just about security or IT either – it’s a whole new way for information to move.
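
To make “trust nothing, verify everything” a little more concrete for developers, here’s a minimal, purely illustrative sketch of per-request zero trust checks in application code. The token values, device-posture flag, and policy table are assumptions, not anything prescribed by the DoD framework.

```python
# Minimal, illustrative sketch of per-request zero trust checks: every request
# is authenticated, device posture is checked, and authorization is evaluated
# per resource. All names and policies here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    user_token: str
    device_compliant: bool
    resource: str
    action: str

# Hypothetical least-privilege policy: (role, action, resource) tuples.
POLICY = {("analyst", "read", "incident-reports"), ("admin", "write", "incident-reports")}

def verify_token(token: str) -> Optional[str]:
    """Stand-in for real token verification (e.g., OIDC); returns the role or None."""
    return {"token-analyst": "analyst", "token-admin": "admin"}.get(token)

def authorize(req: Request) -> bool:
    role = verify_token(req.user_token)           # 1. verify identity on every request
    if role is None or not req.device_compliant:  # 2. check device posture every time
        return False
    return (role, req.action, req.resource) in POLICY  # 3. least-privilege policy check

if __name__ == "__main__":
    print(authorize(Request("token-analyst", True, "incident-reports", "read")))   # True
    print(authorize(Request("token-analyst", True, "incident-reports", "write")))  # False
```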


As such, what the DoD has set out to do (and the timeline they have committed to) is fairly remarkable. Whether it will succeed is debatable. Whether it’s interesting, important, and impactful for everyone in America isn’t. It will be a fascinating five years.



