
Important update for our clients

On February 19, 2023, we will be updating our blockpage certificate.
This certificate is not a required part of the filter; it is used to display the blockpage for HTTPS webpages. HTTP webpages are not affected.

Don’t worry, the filtering will continue to work without the new certificate.

You need to install a new version if:
1. You have manually installed the certificate before.
2. You are using SafeDNS Agent and want the blockpage to be displayed on HTTPS webpages.

Here is a step-by-step guide on how to download and install the certificate.

Direct link to the certificate file.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About SafeDNS
SafeDNS works to make the internet safer for people all over the world with solutions ranging from AI- and ML-powered web filtering and cybersecurity to threat intelligence. Moreover, we strive to create the next generation of safer and more affordable web filtering products. Endlessly working to improve our users’ online protection, SafeDNS has also launched an innovative system powered by continuous machine learning and user behavior analytics to detect botnets and malicious websites.

ChatGPT Storms Onto the Cybersecurity Scene

Anyone perusing this site has probably also read more than a few articles about ChatGPT, the latest “AI writer” that can turn user prompts into text that faithfully mimics human writing. I would venture to guess many readers here have even tried the tool for themselves (it’s free to experiment with if you haven’t). ChatGPT has dominated the conversation in tech over the last few weeks. It has been hard to escape, frankly.

Among the countless think pieces written about whether ChatGPT will spell the death of the college essay or usher in the end of creativity and critical thinking as we know them have been plenty of articles focused on cybersecurity specifically. Now that AI can instantaneously produce endless amounts of writing for almost any purpose, there are serious implications, both good and bad, for the future of digital defense.

Of course, the bad would seem to seriously outweigh the good (more on that soon). But amidst all the doom and gloom thrown at ChatGPT, it’s important to also acknowledge how this technology could be an asset to developers, security teams, or end users. Let’s look at it from three angles.

The Good

Cybersecurity suffers from a serious information deficiency. New attacks, techniques, and targets appear all the time, requiring the broad security community to keep constantly updated. On the other hand, average users need better information about cyber safety best practices, especially considering that years of consistent training and warnings haven’t cured deep-seated problems like password recycling. In both of these cases and others, I can see ChatGPT or a similar tool being extremely helpful for quickly yet effectively encapsulating information.

Of course, documentation hasn’t exactly been cybersecurity’s biggest problem, and I question how much an AI writer can actually do to prevent or lessen attacks. Nonetheless, knowledge is power in cybersecurity, and the scale of the issue is what stands in the way, so I can see automated writers playing a role in a host of different security tools, defensive techniques, and training strategies. They can (and arguably must) be a force for good.

The Bad

Almost the minute ChatGPT went live, the naysayers and doomsday prognosticators started to come out of the woodwork. Which is neither surprising nor troubling. ChatGPT is just the latest example of how artificial intelligence will transform the world in ways that we can’t predict, will struggle to control, and in some cases would never want.

Cybersecurity is a prime example. ChatGPT can generate passable (if not perfect) code just as it can prose. This could be a boon for developers of all kinds – including those that develop malware and other attacks. What’s to stop a hacker from using ChatGPT to expedite development and iterate endlessly, flooding the landscape with new threats? Similarly, why write your own phishing emails when ChatGPT, trained on countless past phishing emails, can generate thousands of them in seconds?

Automated writers lower the barrier to entering cybercrime while helping established criminals and gangs scale their efforts. More alarming, new technology always has unexpected, often unintended consequences, meaning that ChatGPT is sure to surprise us with how it gets weaponized, which is to say that the worst is yet to come.

The Ugly

To emphasize my previous point, let me outline a scenario I haven’t yet seen addressed in the ChatGPT conversation. Business email compromise (BEC) attacks are phishing emails, texts, or other communications personalized with details that make them seem to come from the recipient’s boss, a close colleague, or another trusted source. They also rely on careful social engineering to get the recipient to act without considering risk or applying good judgment. They are basically phishing attacks carefully calibrated to succeed. Back in June, Wired wrote that they were “poised to eclipse ransomware” because they have proven so lucrative and so resistant to security measures.

The saving grace was that BEC messages took time. Someone had to first do research on the targets and then turn that research into fine-tuned copy. Therefore, these attacks were hard to scale and difficult to get just right (many still failed). There was a practical, if not definitive, upper limit.

From my perspective, ChatGPT obliterates that obstacle. Imagine if an attacker trained automation to comb LinkedIn for data about people’s professional relationships, then fed that data into ChatGPT to create convincing BEC emails customized for hundreds or thousands of different recipients. If both the research and the writing can be automated, at massive scale and with uncanny precision, hackers can grow BEC campaigns to any size.

And then what? Will every email seem suspect? The cloud of doubt hanging over the authenticity of any piece of information or string of communication (did this come from someone real?) may prove as much or more disruptive than the attacks themselves. I’m just speculating. These doomsday scenarios, like so many others, may never materialize… Or BEC attacks could prove to be the least of our concerns.

That puts it on us – probably most people reading this site – to somehow ensure the good outweighs the rest.

 


About Topia
TOPIA is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Why OT Research Is Controversial – But Necessary

I want to discuss a subject that doesn’t get enough attention in the world of OT/ICS cyber security, considering how fundamental it is, and one that sparks a surprising amount of controversy. The topic is the importance of conducting ongoing research into OT endpoint device vulnerabilities, particularly for legacy devices.

It should be a unanimous opinion that this research is important. The more we know about vulnerabilities and the more CVEs we generate, the better for everyone involved. However, I frequently encounter industry analysts and self-styled experts who repeatedly question the need for, and validity of, research in the OT sector. Their argument is that legacy equipment is guaranteed to have vulnerabilities, that it is flawed by design, and that advanced endpoint research is therefore unnecessary. I find this argument ironic because these same experts are often involved in creating products that help detect and manage the vulnerabilities found by researchers. They state publicly that there is no point in doing research and then, in the same breath, talk about how their product can help mitigate the problems.


Why do you need both IDS and IPS, or maybe the NGFW too?

I would like to round out the defense of web applications by talking about Intrusion Detection and Prevention Systems (IDS and IPS) as the third member of this security trio: WAF, RASP, and IDPS. In previous articles, I covered Runtime Application Self-Protection (RASP) and the Web Application Firewall (WAF).

What are IDS and IPS?

Intrusion Detection Systems and Intrusion Prevention Systems are used to detect intrusions and, if an intrusion is detected, to protect against it.

First, I will focus on explaining the differences between the WAF, RASP, and IDPS.

What is the difference between WAF, RASP, and IDPS?

I have already explained in previous articles the difference between WAF and RASP. Still, I will introduce IDPS and show you exactly why a combination of this trio is the best security choice.

Summary: IDPS detects intrusions and protects against them at the network level. A WAF detects and blocks attacks against web applications based on rules, patterns, and algorithms. RASP monitors the application’s runtime behavior from within the application itself.
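To make the distinction more concrete, here is a minimal, illustrative sketch in Python of the kind of rule-based matching a WAF performs. The patterns and function are my own simplified assumptions; a real WAF uses far richer rule sets and request parsing.

    import re

    # Hypothetical, simplified WAF-style rules: block requests matching known attack patterns.
    RULES = [
        re.compile(r"(?i)<script"),         # naive XSS signature
        re.compile(r"(?i)union\s+select"),  # naive SQL injection signature
        re.compile(r"\.\./"),               # naive path traversal signature
    ]

    def waf_allows(request_data: str) -> bool:
        # Return False (block) if any rule matches, mimicking signature-based filtering.
        return not any(rule.search(request_data) for rule in RULES)

    print(waf_allows("id=1 UNION SELECT password FROM users"))  # False: request is blocked
    print(waf_allows("id=1"))                                   # True: request passes

RASP, by contrast, would make a similar decision from inside the running application, with access to its runtime context.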

Why is it best to use both IDS and IPS?

To better understand why it is important to use both systems, we need to know what each of them does and doesn’t do and how combining them gives more effective protection. Each of those systems has its own types, which will be explained below.

Location and Range

These two types of security systems operate in different locations and have different ranges.

Facts:

·   IDS works across the enterprise network in real time by monitoring and analyzing network traffic.

·   IPS works in the same network location as a firewall by intercepting network traffic.

·   IPS can use IDS to expand the range of monitoring.

Knowing this, deploying both IDS and IPS lets you cover a wider range of the network.

Host-based IDS and IPS

There are a few types of IDS and IPS. I will mention them so you can know which one targets what, but there is plenty of online documentation for more information.

Host-based IDS (HIDS) is used for protecting individual devices and is deployed at the endpoint level. It checks network traffic in and out of a device, and it can examine logs and running processes. HIDS protects only the host machine; it does not inspect traffic across the entire network. Similarly, IPS has its own host-based variant, Host-based IPS (HIPS). HIPS is deployed on clients/servers and also monitors at the device level.
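To make the host-based idea concrete, here is a minimal sketch of my own in Python (not a real HIDS): it scans an SSH auth log for repeated failed logins per source IP. The log path and alert threshold are assumptions and vary by system.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"  # assumed log location; varies by distribution
    THRESHOLD = 5                   # arbitrary alert threshold for this example

    failed_logins = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            # Count failed SSH password attempts per source IP address.
            match = re.search(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)", line)
            if match:
                failed_logins[match.group(1)] += 1

    for ip, count in failed_logins.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed SSH logins from {ip}")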

Network-based IDS and IPS

Network-based IDS (NIDS) monitors the entire network. It watches every network device and analyzes all the traffic to and from those devices. On the other hand, IPS has its own network-based type, Network-based IPS (NIPS), deployed within the network infrastructure. It monitors the complete network and, if needed, tries to protect it.

Note: NIDS and NIPS are very important to network forensics and incident response because they compare incoming traffic to malicious signatures and differentiate good traffic from suspicious traffic.

Wireless IPS

IPS also has a Wireless IPS (WIPS) type that monitors radio waves (wireless LAN) for unauthorized access points and can be used to automate wireless network scanning. TechTarget covers ways of using WIPS in the enterprise in this article. Check it out!

Protocol-based intrusion detection systems (PIDS) and Application protocol-based intrusion detection systems (APIDS)

Both protocol-based systems are types of IDS, and both monitor traffic to and from devices. The only difference is that PIDS monitors a single server, while APIDS monitors a group of servers.

Network behavioral analysis (NBA)

Network behavioral analysis (NBA) is a type of IPS that looks for unexpected behavior within the traffic patterns of the network itself.

IDS and IPS modes

IDS is generally set to work passively, out of band, monitoring a copy of network traffic as an end host. As for IPS, it sits in the network behind the firewall and can operate in both modes: as an end host (detection only) or inline, where traffic passes through it and can be blocked.
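For a feel of what passive, out-of-band monitoring looks like, here is a toy NIDS-style sniffer of my own in Python using the third-party scapy library (run with root privileges). The “suspicious ports” logic is an illustrative assumption; real NIDS such as Snort or Suricata use full signature engines.

    # Requires the third-party scapy library (pip install scapy) and root privileges.
    from scapy.all import IP, TCP, sniff

    SUSPICIOUS_PORTS = {23, 2323, 4444}  # toy list: telnet and a port often used by backdoors

    def inspect(pkt):
        # Flag TCP SYNs to ports we consider suspicious; a real NIDS matches full signatures.
        if pkt.haslayer(TCP) and pkt[TCP].flags == "S" and pkt[TCP].dport in SUSPICIOUS_PORTS:
            print(f"ALERT: SYN from {pkt[IP].src} to port {pkt[TCP].dport}")

    # store=False keeps memory flat; the sniffer only observes traffic, it cannot block it.
    sniff(filter="tcp", prn=inspect, store=False)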

Most used IDS/IPS tools in 2022

According to softwaretestinghelp.com, the most used IDS tools are:

·   SolarWinds Security Event Manager

·   Bro (now known as Zeek)

·   OSSEC

·   Snort

·   Suricata

·   Security Onion

·   Open WIPS-NG

·   Sagan

·   McAfee Network Security Platform

·   Palo Alto Networks

For more info regarding pricing, pros, cons, and features of these tools, check out the softwaretestinghelp site.

Also, spiceworks.com provided the list of the most used IDPS tools:

·   AirMagnet Enterprise

·   Amazon Web Services (AWS) GuardDuty

·   Azure Firewall Premium IDPS

·   Blumira

·   Cisco Secure IPS (NGIPS)

·   Darktrace Enterprise Immune System

·   IBM Intrusion Detection and Prevention System (IDPS) Management

·   Meraki MX Advanced Security Edition

·   NSFocus Next-Generation Intrusion Prevention System

·   Snort

For more info regarding pricing, pros, cons and features of these tools check out the spiceworks site. This research will also help you choose the right IDPS solution based on these tools’ features.

What is Next-Generation Firewall (NGFW) or Unified Threat Management (UTM)?

There is a modern type of technology that combines IDS and IPS with firewalls called Next-Generation Firewall (NGFW) or Unified Threat Management (UTM).

NGFW includes:

·   Standard firewall features (packet filtering, stateful inspection, and VPN awareness)

·   Integrated Intrusion Prevention (IPS)

·   Application awareness of threats

·   Detection and blocking of risky apps

·   Threat intelligence

·   Upgrade paths for security features (such as future information feeds)

·   New techniques that help to address new security threats

Researchers for the Nomios site have gathered information and compiled a list of the top five NGFW vendors for 2022, along with suggestions on what to look for when choosing the right NGFW tool. Check it out!

Conclusion

You should combine IDS and IPS because of three things: response, protection, and impact. If you use IDS alone, the process stops at the detection phase; with IPS, depending on settings and policy, it also includes prevention. Because IPS reacts immediately, it adds a layer of protection beyond detecting malicious activity. However, IPS can produce false positives that end up shutting down parts of your network.

Organizations often set up Intrusion Detection Systems alongside their logs, notifications/alerts, routers, firewalls, and servers to fight threats.

A better solution is to use a combined IDPS and set it up when planning security. Later, as the organization grows and needs better protection, IDS/IPS solutions can be extended to additional networks, servers, or devices.

Also, depending on the organization’s security needs and cost restrictions, NGFW can be a good choice too!

Cover photo by krakenimages

#IPS #IDS #IDPS #NGFW


CISA BOD 23-01: Why vulnerability scanners miss the mark on asset inventory

On October 3, 2022, the Cybersecurity and Infrastructure Security Agency (CISA) issued Binding Operational Directive (BOD) 23-01: Improving Asset Visibility and Vulnerability Detection on Federal Networks. The directive requires that federal civilian executive branch (FCEB) departments and agencies perform automated discovery every 7 days and identify and report potential vulnerabilities every 14 days. Additionally, it requires the ability to initiate on-demand asset discovery to identify specific assets or subsets of vulnerabilities within 72 hours of receiving a request from CISA.

To meet these requirements, agencies will need to start with an accurate asset inventory. Most agencies will attempt to leverage existing solutions, like their vulnerability scanners, to build their asset inventories. It seems reasonable to do so, since most vulnerability scanners have built-in discovery capabilities and can build asset inventories. However, they will quickly learn that vulnerability scanners are not up for the task and cannot help them sufficiently and effectively meet the requirements laid out by CISA.

Let’s take a look at why agencies need a solution solely focused on asset inventory, in addition to their vulnerability scanner, if they want to tackle CISA BOD 23-01.

Asset inventory is a foundational building block

Every effective security and IT program starts with a solid asset inventory. CISA BOD 23-01 reinforces that imperative. Specifically, it states, “Asset discovery is a building block of operational visibility, and it is defined as an activity through which an organization identifies what network addressable IP-assets reside on their networks and identifies the associated IP addresses (hosts). Asset discovery is non-intrusive and usually does not require special logical access privileges.”

What does this mean? FCEB agencies looking to meet the requirements outlined by CISA BOD 23-01 must be able to discover managed and unmanaged devices connected to their networks. Internal and external internet-facing assets must be cataloged with full details and context. All within the timeframe outlined by CISA.

So now, the question is why vulnerability scanners can’t be used to meet the requirements laid out in the directive.

The challenges of asset inventory with vulnerability scanners

As the number of devices connecting to networks continues to grow exponentially, agencies need to stay on top of these devices; otherwise, they could provide potential footholds for attackers to exploit. However, common issues like shadow IT, rogue access, and oversight continue to make it difficult to keep up with unmanaged devices. BOD 23-01 highlights the importance of identifying unmanaged assets on the network. That’s why the need for a fully comprehensive asset inventory is the key to adequately addressing the directive.

So, why can’t vulnerability scanners deliver on asset inventory? Most vulnerability scanners combine discovery and assessment together, resulting in slower discovery times, delayed response to vulnerabilities, and limited asset details. As a result, most agencies are left wondering how they can do a better job building their asset inventories.

Combining discovery and assessment slows everything down

Vulnerability scanners typically combine asset discovery and assessment into one step. While on the surface this appears efficient, it is actually quite the opposite. With regard to asset discovery, CISA BOD 23-01 specifically requires that FCEB agencies perform automated discovery every 7 days and initiate on-demand discovery of specific assets or subsets of vulnerabilities within 72 hours of receiving a request from CISA.

Because vulnerability scanners leverage a lot of time-consuming checks, they’re not able to scan networks quickly enough. Add in the complexity of highly-segmented networks and maintenance windows, and it is nearly impossible to effectively utilize vulnerability scanners for discovery and meet the timing requirements outlined by CISA.

Under the new directive, assessing the potential impact of vulnerabilities becomes even more urgent. Agencies will need to perform on-demand discovery of assets that could be potentially impacted within 72 hours, if requested by CISA. When security news breaks, agencies need to respond as quickly as possible, but vulnerability scanners slow down the process. In a scenario like this, it would be more efficient to have a current asset inventory that agencies can search without rescanning the network. This is particularly useful when agencies know there are specific assets they need to track down: they can query their existing asset inventory to identify them immediately.

For example, let’s say a new vulnerability is disclosed. Vendors will need some time to develop the vulnerability checks, and agencies will need to wait for those checks to become available. Once they’ve been published, agencies can finally start rescanning their networks. Now imagine waiting for the vuln check to be released and then delaying the rescan further due to scan windows. Without immediate insight into the potential impact of a vulnerability, agencies are playing a waiting game instead of proactively assessing the risk.

How agencies can speed up discovery

So, what can agencies do? Let vulnerability scanners do what they do best: identify and report on vulnerabilities. Complement them with a dedicated solution that can automate and perform the discovery of assets within the timeframe set by the directive. In order to accomplish this, the asset inventory solution must be able to quickly and safely scan networks without a ton of overhead, be easy to deploy, and help security teams get ahead of new vulnerabilities.

Agencies need to have access to their full asset inventory, on-demand, so they can quickly zero in on any asset based on specific attributes. This information is invaluable for tracking down assets and investigating them, particularly when new zero-day vulnerabilities are uncovered. When the new zero-day is announced, agencies can find affected systems by searching across an existing asset inventory–without rescanning the network.

Meet CISA BOD 23-01 requirements with a dedicated asset inventory solution

It is increasingly evident that decoupling discovery and assessment is the most effective way to ensure that agencies have the data needed to accelerate vulnerability response and meet the requirements outlined in the directive. Because let’s face it: vulnerability scanners are really good at vulnerability enumeration–that’s what they’re designed to do. However, they really miss the mark when it comes to discovering assets and building comprehensive asset inventories. Because vulnerability scanners combine discovery and assessment, they aren’t able to scan entire networks quickly, and at times, they don’t fingerprint devices accurately.

As a result, many agencies are wondering how to meet the requirements outlined in CISA BOD 23-01 if they can’t depend on their vulnerability scanner for discovery. Agencies will need to start looking for a standalone asset inventory solution that is capable of performing unauthenticated, active discovery, while also enriching data from existing vulnerability management solutions.

How runZero can help agencies focus on asset discovery

runZero separates the discovery process from the vulnerability assessment stage, allowing agencies to perform discovery on-demand. Because runZero only performs discovery, it can deliver the data about assets and networks much faster than a vulnerability scanner. Customers have found that runZero performs scans about 10x faster than their vulnerability scanner, allowing them to:

  • Get a more immediate day one response to new vulnerabilities.
  • Gather as much information as possible about assets while waiting for vulnerability scan results.

That means, while waiting for vulnerability assessments to complete, agencies can already start digging into their asset inventory and identifying assets that may be impacted by a vulnerability. runZero regularly adds canned queries for assets impacted by newly disclosed vulnerabilities and highlights them via Rapid Response. Users can take advantage of these canned queries to instantly identify existing assets in the inventory that match specific identifiable attributes. For example, querying by hardware and device type can narrow down assets to a specific subset that may be affected by a vulnerability. All of the canned queries can be found in the Queries Library.
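As a sketch of that workflow (this is not runZero’s actual API or query syntax; the file name and field names are hypothetical), the snippet below filters a JSON export of an asset inventory by hardware and device type using only the Python standard library.

    import json

    # Hypothetical export of an existing asset inventory; field names are assumptions.
    with open("asset_inventory.json") as f:
        assets = json.load(f)

    # Narrow the inventory to a subset that may be affected by a newly disclosed vulnerability.
    affected = [
        asset for asset in assets
        if asset.get("hardware") == "ExampleVendor Model-X" and asset.get("device_type") == "router"
    ]

    for asset in affected:
        print(asset.get("ip"), asset.get("hostname"))

The point is the shape of the operation: a search over data you already have, rather than a fresh scan of the network.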

All in all, runZero is the only asset inventory solution that can truly help FCEB agencies stay on top of their ever-changing networks. By decoupling asset discovery from vulnerability assessment, agencies will gain visibility and efficiencies, while meeting the requirements set by CISA BOD 23-01.

Learn more about runZero


About runZero
runZero, a network discovery and asset inventory solution, was founded in 2018 by HD Moore, the creator of Metasploit. HD envisioned a modern active discovery solution that could find and identify everything on a network–without credentials. As a security researcher and penetration tester, he often employed benign ways to get information leaks and piece them together to build device profiles. Eventually, this work led him to leverage applied research and the discovery techniques developed for security and penetration testing to create runZero.

Backup Strategy and the 3-2-1 Principle

Data loss comes in all sizes: small (individual files), medium (a SharePoint site), and large (ransomware and disaster recovery). No matter the size of the loss, none of them are fun, and even the smallest data loss event could leave you without your most critical data. That one spreadsheet or that one hard disk drive could hold what you and your business rely on most, and it’s not always something someone can “just create again” on a whim; data loss is indiscriminate in its impact. All data loss events negatively impact workflow, and all are risk and data protection concerns that are ultimately a business imperative.

Proactive data protection through backup and data management is at the forefront of all of our minds, or at least it should be. Why is that? Years ago, the assumption prevailed that cloud services would “take care of everything” once you signed up, with backup being lumped in. But now, more than ever, as awareness grows of the shared responsibility models for SaaS applications (which make clear that the user is responsible for their data), it’s clear the onus is on you to have a backup strategy in place. That’s why the 3-2-1 backup rule, a principle established for on-premises infrastructure that requires multiple copies of backup data on different devices and in separate locations, is still relevant to today’s cloud-based infrastructures, providing essential data protection guidelines.

Why Back Up Cloud SaaS Data, and Why Now?

Your data is critical to your business operations, and in many cases, maintaining control of and access to it is required by law. (Read more about how third-party security keeps companies in control of their data here.)

SaaS Shared Responsibility Model

Software-as-a-service providers have established documentation that clarifies the areas of responsibilities they have and also those responsibilities that are retained by the customer. Microsoft, well known for its Microsoft 365 SaaS offering, delineates the boundaries of shared responsibility in the cloud. While Microsoft does provide some degree of data protection, many people are not aware of the limitations of this protection. The short of it is that Microsoft does not provide suitable backup and restore functionality to customers. Learn more about why your M365 is not backed up (and how to fix it) in our in-depth article here.
And it’s not only Microsoft that has a shared responsibility model for its SaaS services. Google (see backing up files to Google Drive) has what it refers to, almost ominously, as “shared fate” in its Google Cloud shared-responsibility documentation. Likewise, Amazon Web Services (AWS) has its own shared responsibility model. It’s vital that customers know and understand the extent of their agreement.

Risks to Data Security

In the days of on-premises backup, the only credible risks were acts of Mother Nature and hardware failure. That is, of course, if you ignore software issues: plenty of software problems (from firmware on RAID adapters to drivers to operating system filesystem implementations and user applications) would cause data loss and a need for restore, from the system level down to the file level. (That’s one thing I don’t miss about the ‘90s.) However, in the cloud-computing era, the risks have evolved as much as the ways in which we create, share, and store data, so things are much more complicated now. With the prevalence and penetration of ransomware and cybercrime, not to mention the increased access users have in order to streamline collaboration and boost productivity, data, the lifeblood of a company, has in many ways never been more susceptible to loss, regardless of whether it’s intentional (malicious actors, ransomware, etc.) or unintentional (human error, accidental deletion). Sometimes going back to basics can be the place to start in developing or hardening security.

3-2-1 Backup Method

The 3-2-1 principle comes from the days of on-premises data storage. It is still commonly referenced today, in the modern cloud-computing era. Even though it isn’t directly applicable, word for word, to cloud data, this well-known and widely used principle can still guide security decision makers as they improve their security infrastructure against today’s data risks.
Roughly speaking, the 3-2-1 backup rule requires 3 copies of data, across two types of storage media, with one copy stored off-site.
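As a toy illustration of the rule (the paths are placeholders, and real backup software adds versioning, scheduling, and verification), this Python sketch maintains three copies of a file: the primary, one on a second storage volume, and one in an “off-site” location.

    import shutil
    from pathlib import Path

    primary = Path("data/critical.xlsx")               # copy 1: the production data
    second_media = Path("/mnt/backup/critical.xlsx")   # copy 2: a different device/volume
    offsite = Path("/mnt/offsite/critical.xlsx")       # copy 3: an external/off-site location

    for target in (second_media, offsite):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(primary, target)  # copy2 also preserves file metadata
        print(f"Backed up {primary} -> {target}")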

What Is the Origin of the 3-2-1 Rule?

Backup and recovery solutions have existed since long before cloud computing. However, the methodologies have shifted due to the modernization of infrastructures, behaviors, needs, and of course a lot more variables (but we won’t get into that here), which has resulted in some discrepancies between best-practice principles and their application to modern data infrastructures. This is also the case with the 3-2-1 backup rule, with the biggest change being the shift in how (or rather where) data is created and stored. Formerly, production data was created on site and stored in on-premises hardware, alongside one backup copy, with the third copy stored off premises, typically on tapes. ComputerWeekly has a feature on whether the cloud has made 3-2-1 obsolete. In the cloud era, data is created in numerous places by remote workers in SaaS applications, where it is often transferred around the globe, and is stored “somewhere else” from a business’s physical office. More than likely, the extent of an answer to the question of “where is your data stored” is that it’s in the cloud. But is that backup? And what is true backup in the cloud?

How Does the Rule Apply to Cloud Backup?

We often see iterations of this backup principle in fancy infographics that almost forget to translate the rules to apply to the current scenarios. However, with a few tweaks, there’s plenty of relevant guidance that can help lead to a successful, modern, data security system.
Let’s look at the rules with a modern lens:

3 Copies of Your Data

The ‘3’ in the rule refers to the number of “copies of your data,” with one being the primary dataset in the production environment while the remaining two copies are backups. This is still applicable to modern data protection best practices.

2 Administrative Domains

As mentioned, the ‘2’ can be understood as “two administrative domains,” so that copies are managed independently of each other or are stored within separate logical environments. You often see this written as “two types of media,” which is a relic from the on-prem past of disks and tapes. Now it’s about having copies across multiple disks and across two administrative domains, so that a single data loss event cannot possibly (or is extremely unlikely to) impact all copies of the data. This is known as a logical gap. Without it, should there be a cloud-wide compromise (such as a breach) or a data loss event in the cloud where your primary data lives, your data would not be available to you; by following the rule, it would be. One of the best-known examples of this is the Danish shipping giant Maersk and the infamous NotPetya cyberattack, dubbed “the most devastating cyberattack in history” in the full Wired story here. When working “in” the cloud, the building you are in isn’t of any real consequence to the data; rather, it’s the cloud you are working and storing data in that matters. In many regards, this step could envelop the step below, “1 copy external,” but in respect to the principle, it serves us to keep it a separate consideration here. Losing access to your data (or losing your data permanently) carries massive potential for business disruption and costs, as in the case of Maersk.

1 Copy External

Formerly the ‘1 off-site storage copy,’ this still applies for the same reasons as it did in the past: You don’t want to store all of your data in the same exact location, and whether all are aware or not, the cloud is located in physical data centers. From the on-premises days, this meant literally having a copy of disks and/or tapes in a different location from your business in case someone, something, or some event with the power to destroy the building did so. Let’s call this the “in case of fire” step. In cloud computing, this means having a backup copy outside the cloud of the production environment and outside the administrative domain of the other backup. Remember, the cloud is ‘just’ physical data centers, so by working in the cloud, the centers you are storing your data in are of real importance to the data. What if the data center of the cloud you are working in is also the same data center that your backup cloud data is stored in? Should there be a data loss event at that center, all of your data would be at risk from that event. That’s bad.

Use Case: What would this look like in real life?

If, for example, you are working on a Microsoft Word document and you save it to OneDrive with OneDrive Backup turned on, you’re totally protected, because it says “backup,” right? This is an example where the 3-2-1 principle still helps shed light on modern data protection in the cloud. By following the 3-2-1 rule above, one can deduce that this isn’t backup (but neither is a lot of what SaaS providers offer as “backup”), because true backup requires a logical infrastructure separate from the primary data. As the “in case of fire” step requires, you must have one copy outside of the administrative domain. By working in and backing up OneDrive data to Microsoft’s cloud services, the data remains in the same administrative domain. What if something were to happen to Microsoft’s servers? You’d lose access to your primary data and to the copies “backed up,” since they all rely on the same cloud. What’s even worse is that since the backup is configured by “you” (i.e., the admin), a compromise of your account can unconfigure it, too. So a simple case of ransomware could completely and automatically disable or work around such in-service protections, even leading to immediate backup data deletion. Keepit, on the other hand, is a dedicated backup solution that is separate (and therefore unlikely to be compromised at the same time by the same mechanism) and will actually prevent even the administrator from quickly or immediately deleting backup data. In this respect, Keepit offers some of the most desirable features of “the tape in an off-site vault” in a modern cloud service solution.

Here’s how to use the 3-2-1 backup rule to ensure you’re covered: use an independent cloud.

If you’re interested in further reading, check out our e-Guide on SaaS data security for a thorough look into leading SaaS data security methodologies and how companies can raise the bar for their data protection in the cloud era. If you’re convinced you need backup but want to know more about data protection and management for your particular SaaS application, explore how Keepit offers cloud data backup coverage for the main SaaS applications here.


About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit’s mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on 20+ years of experience in building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

Cyber Kill Chain

Intro

This is an important concept, and I want to provide you with a quick overview of what kill chains are, what threat modelling is, why we do these things, and why we need them. This understanding is crucial in creating a stable and strong security posture.

One other thing to note is that all these frameworks are generally made to complement other frameworks. For example, the UKC (Unified Kill Chain) is made to complement MITRE ATT&CK.

Cyber Kill Chain – what is it?

The term comes from a military concept relating to the structure of an attack. Lockheed Martin, the security and aerospace company, established the Cyber Kill Chain in 2011, basing it on that military concept. The idea of the framework is to define the steps adversaries take when attacking your organization. In theory, to be successful, the adversary would pass through all the phases of the Kill Chain.

Our goal here is to understand what the Kill Chain means from an attacker’s perspective, so that we can put our defences in place and either pre-emptively mitigate attacks or disrupt them.

Why do we need to understand the (Cyber) Kill Chain?

Understanding the Cyber Kill Chain can help you protect against a myriad of attacks, ransomware for example. It can also help you understand how APTs operate. Through this understanding, as a SOC analyst or incident responder, you can potentially work out the attacker’s goals and objectives by comparing their activity to the Kill Chain. It can also be used to find gaps and remediate missing controls.

The attack phases within the Cyber Kill Chain are:

  • Reconnaissance
  • Weaponization
  • Delivery
  • Exploitation
  • Installation
  • Command & Control
  • Actions on Objectives (Exfiltration)

Reconnaissance

As we all know, this means searching for and collecting information about a system (or systems). In this phase, our adversaries are doing their planning. This is also where OSINT comes in; it is usually the first step an adversary will take before going further down the chain. They will try to collect any possible piece of information on our organization: employees, emails, phones, you name it.

This can be done, for example, through email harvesting, the process of collecting email addresses from online (public, paid, or free) services. These can later be used for a phishing campaign, or anything else.

Within the recon step, they also might collect the social media profiles of the org’s employees, especially if a particular employee seems of interest or looks like an easier target. All this information goes into the mix and is nothing new. First step: recon.

Weaponization

After the initial recon phase, our adversary goes on to create their weapons! Usually, this entails some sort of malware/exploit combination bundled into a payload. Some adversaries will simply buy malware on dark web marketplaces, but more sophisticated attackers, as well as APTs, will usually write their own malware, which can help them evade detection systems.

They can go about this in numerous ways. Some examples include creating a malicious MS Office document with bad macros or VBA scripts, or using Command & Control techniques so that the affected machine calls the command server for more malicious payloads. (Yikes!) Or they could add a backdoor, some other type of malware, or anything else, really.

Delivery

This step entails the attacker choosing a way to deliver the payload/malware to their victim. There are many options here, but in general, the most used one is the good old phishing email.

With a phishing email that’s sent after the successfully completed reconnaissance phase, the attacker can target a specific person (spearphishing), or a group of employees at your organization. Within the email would be the embedded payload.

Other ways to distribute the payload include planting infected USB drives in public places like parking lots and streets. Or they could use a so-called watering hole attack, which targets a specific group of people by sending them to an attacker-controlled website, redirecting them there from a site they normally use that the attacker has compromised.

The attacker exploits the website and then tries to get unsuspecting users to browse to the malicious site, where the victim unintentionally downloads the malware/payload.

Exploitation

Before finally getting access to our systems, the attackers need to carry out an actual exploit. Assuming the previous steps worked and the user downloaded or somehow ran the malicious payload, the attacker is ready for the next steps... or whatever’s in between! They can try to move laterally, get to your server, escalate privileges: anything goes.

This step boils down to

  • Victim opening the malicious file, thus triggering the exploitation
  • The adversary exploits our systems through a server, or some other way
  • They use a zero-day exploit

Whatever the vector, it comes down to them exploiting our systems and gaining access.

Installation

This step comes after exploitation, and it usually involves the adversary trying to keep a persistent connection to our system. This can be achieved in many ways: they might install a backdoor on the compromised machine, modify our services, install a web shell on the web server, or do anything else that helps them achieve persistence. Persistence is the key goal of the Installation phase.

The persistent backdoor is what lets the attacker interact with and access the systems that were already compromised.

In this phase, they also might try to cover their tracks from your blue team by making the malware look like a legitimate app/program.

Command & Control

Command & Control, also known as C2 or C&C, is the penultimate phase, and this is where the adversary uses the previously installed malware on the victim’s device to control it remotely. This capability is usually built into the malware itself, with some sort of logic through which it calls back home to its control server. (The periodic call-home traffic is known as C2 beaconing.)

As the infected device calls back to the C2 server, the adversary now has full control over the compromised device. Remotely!

The most used C2 channels these days are:

  • HTTP on port 80 and HTTPS on port 443 – HTTPS is interesting because malicious traffic can hide within the encrypted stream and potentially evade firewalls
  • DNS – the infected host makes constant DNS queries to its C2 server (a toy detection sketch follows below)
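On the defender’s side, that constant calling home is exactly the pattern to hunt for. Here is a toy sketch of my own in Python (the log file and its one-domain-per-line format are assumptions) that counts DNS queries per domain and flags unusually chatty ones, a crude proxy for beaconing.

    from collections import Counter

    THRESHOLD = 100  # arbitrary: domains queried this often in one log window look beacon-like

    queries = Counter()
    with open("dns_queries.log") as log:  # assumed format: one queried domain per line
        for line in log:
            domain = line.strip().lower()
            if domain:
                queries[domain] += 1

    for domain, count in queries.most_common():
        if count >= THRESHOLD:
            print(f"Possible C2 beaconing: {domain} queried {count} times")

Real detections also weigh the regularity of the query intervals, not just the volume.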

An interesting fact: in the past, adversaries used IRC to send C2 traffic (beaconing), but nowadays it has become obsolete, as this type of bad traffic is much more easily detected by modern security solutions.

Actions on Objectives (Exfiltration)

This is your exfiltration (or exfil) step, where the adversary goes after the goodies: harvesting user credentials, messing with backups and/or shadow copies, and corrupting, deleting, overwriting, or exfiltrating data. They can also escalate to domain admin (the keys to the kingdom) or move laterally through the organization. They could also look for vulnerabilities in your software internally, and much more.

This step depends on their specific goals/objectives, and is where all the action will happen, thus actions on objectives.

Conclusion

I hope I’ve managed to give a brief overview of this incredibly important concept. I will cover it a bit more in the future, but for now, I felt like this was a good (traditional) start. I do hope to cover the Unified Kill Chain soon, though. So – stay tuned!

PS: You’ll notice there are a couple of other frameworks/variations aside from the Cyber Kill Chain, and I will try to explain the distinctions. Just remember these are models/methodologies and there’s no silver bullet. They should be used in conjunction with other security controls.

Image source – https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html

Cover image by Linus Sandvide

#kill-chain #cyber #C2 #threat-modelling


Zero Trust Guidance Rewrites US Cyber Strategy

“Our adversaries are in our networks, exfiltrating our data, and exploiting the Department’s users.”

So reads the humbling introduction to zero trust guidance recently released by the Department of Defense (DoD). It acknowledges in the very first line that cybersecurity has failed on almost every front. Then it makes a complete commitment to zero trust as the solution.

Many were waiting on this guidance and wondering what, exactly, it would entail. It comes following an order from the Biden administration 18 months ago to strengthen America’s cybersecurity in a big way. Many changes and long-overdue improvements have come out of that order. But by far the most significant is a commitment on the part of all federal agencies to adopt a complete zero-trust posture by 2027.

We now have a road map for how the government plans to get there. I will cover that shortly. Before that, let me highlight a few reasons I think the latest guidance (and the strategy that prompted it) are worth paying attention to.

First, that strategy will form the backbone of U.S. cybersecurity, which in turn will play a critical role in, or may even be the cornerstone of, continued national security. Cyber attacks will be the most accessible, most common, and most devastating kinds of attacks in the future, so how countries defend themselves against this massive risk really matters. I have been writing about national cyber defenses from a few lenses recently. What makes the US approach unique, from my perspective, is the insistence on not just applying a cyber strategy consistently across all agencies but focusing it so specifically on zero trust. Some will call it practical, even mandatory, to make zero trust the guiding principle of cybersecurity in a decentralized world. Others, however, might view it as putting too many eggs in one basket. Time will tell.

Which brings me to my second observation, which is that the US government is embarking on the biggest experiment in zero trust ever undertaken. Keep in mind that the phrase “zero trust” has barely existed for more than a decade, and few large-scale, trust-free environments are actually up and running. Despite widespread zero trust adoption across the private sector, the government is by far the biggest trailblazer on this front, and the road ahead will be illustrative for all. What will it take to eliminate trust from the whole of the federal government? And once 2027 arrives, how secure will the government really be? This test case could cement zero trust as the centerpiece of cybersecurity moving forward – or it could reveal zero trust to be just the latest flawed fad. I suspect the answer will land somewhere in the middle. But unpredictability is the dominant feature of cybersecurity, so who knows what will happen? It will be important no matter what.

The Next Five Years in Zero Trust

A 2027 deadline to standardize zero trust across all federal agencies creates a lot of work to finish in a short five years. To its credit, the DoD seems to be fully aware of that fact because the roadmap is systematic and comprehensive to an extreme degree. Since there are so many different agencies with so many levels of cyber maturity – along with existing zero trust deployments – the guidance aims (and largely succeeds) at being accessible and universal. Which is a bonus for the private sector because companies can then easily adopt the government’s zero trust strategy as their own.

The roadmap has four distinct goals:

  • Zero Trust Cultural Adoption – Everyone in the DoD understands and commits to zero trust principles (trust nothing, verify everything, encrypt automatically, segment risks, etc.).

  • DoD Information Systems Secured & Defended – All new and legacy systems follow the DoD zero trust framework and put prescribed capabilities in place. Further guidance on this is forthcoming.

  • Technology Acceleration – The DoD and its vendors get faster at scaling, innovating, or replacing technologies as new threats and new tools emerge in the coming years.

  • Zero Trust Enablement – The zero trust framework has the resources and support it needs to remain a robust and consistent effort.


Each of the goals has multiple objectives considered imperative for achieving the desired outcome. Overall, the DoD identifies 45 capabilities and 152 total activities required for framework compliance. I would encourage anyone to peruse the framework – it’s heavy on jargon but also a valuable visualization of how the disparate components of zero trust fit together to form a cohesive security strategy. It’s not just MFA and encryption (though the framework calls for both of those things). Perhaps more important to realize, it’s not just about security or IT either – it’s a whole new way for information to move.


As such, what the DoD has set out to do (and the timeline they have committed to) is fairly remarkable. Whether it will succeed is debatable. Whether it’s interesting, important, and impactful for everyone in America isn’t. It will be a fascinating five years.





Zero Trust: What Is It and How to Implement

Due to the surge of ransomware attacks, the increased risks for data loss, and the continuous adverse effects cybercrime poses, many organizations have adopted the zero-trust principle to harden the security of their systems, thereby increasing their cyber resiliency.

Cyberattacks have become so ubiquitous that the Biden White House issued a statement urging American business leaders to strengthen their organization’s cybersecurity measures.

As it stands, GlobeNewswire reported that zero trust security is expected to reach a market value of $29 billion USD by the end of 2022 and increase to $118.7 billion USD by 2032. This significant growth over the coming decade comes from the value zero trust brings companies.

 

The simple fact is that business leaders are following its principles, like consistent monitoring and validation, because these principles help prevent data breaches and mitigate data loss.

This post will dive into what the zero-trust principle is, as well as its capacity to tighten workplace data and security, effectively ushering in what Microsoft calls:

A new security model that more effectively adapts to the complexity of the modern environment, embraces the hybrid workplace, and protects people, devices, apps, and data wherever they’re located.

What are the cybercrime trends that zero trust can help curb?

One trend that’s risen in recent years is ransomware. Ransomware cripples businesses by locking their computer systems until a sum of money is paid. These attacks are expected to have a price tag of $265 billion USD annually by 2031, according to Cybersecurity Ventures.

With how easy it has become for ransomware gangs to deploy ransomware on a multinational scale, businesses need to deploy enhanced cybersecurity solutions to lessen system vulnerabilities, because “when it comes to ransomware attacks, it’s a matter of when, not if.” Read more from the Keepit blog article on how to prepare for ransomware.

It should come as no surprise that ransomware attacks result in operational downtime. A Statista report put the average interruption after a ransomware attack at 20 days.

Even minor disruptions can decrease employee productivity, impede communication with clients, and threaten business continuity, to say nothing of regulatory consequences such as the significant fines Marriott faced. It is hard to overstate what 20 days of downtime would mean for a business.


Why Zero Trust?

Zero trust, in a nutshell, is guided by the principle of “never trust, always verify.” It’s a modern security architecture that assumes internal and external threats exist on the network at all times due to the pervasiveness of cybercrime. As such, it requires all network users to undergo verification and validation before they can access network resources.
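
To make “never trust, always verify” concrete, here is a minimal sketch, in Python with placeholder in-memory data stores, of how a zero-trust gateway might evaluate each request. A real deployment would query an identity provider, a device-management agent, and a policy engine instead of the hypothetical sets shown here:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    resource: str
    mfa_code: str

# Placeholder data stores -- hypothetical stand-ins for an identity
# provider, an MDM/EDR posture service, and a policy engine.
VALID_MFA = {("alice", "123456")}
HEALTHY_DEVICES = {"laptop-42"}
PERMISSIONS = {("alice", "sharepoint-finance")}

def verify_identity(user_id: str, mfa_code: str) -> bool:
    return (user_id, mfa_code) in VALID_MFA

def check_device_posture(device_id: str) -> bool:
    return device_id in HEALTHY_DEVICES

def is_authorized(user_id: str, resource: str) -> bool:
    return (user_id, resource) in PERMISSIONS

def authorize(req: AccessRequest) -> bool:
    # Every request is evaluated from scratch: being "inside" the network,
    # or having been validated earlier, confers no trust by itself.
    return (
        verify_identity(req.user_id, req.mfa_code)
        and check_device_posture(req.device_id)
        and is_authorized(req.user_id, req.resource)
    )

print(authorize(AccessRequest("alice", "laptop-42", "sharepoint-finance", "123456")))  # True
```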

Is zero trust really needed?

Generally, employees within a company access multiple networks simultaneously. Depending on the complexity of the company’s IT infrastructure, that can mean a great many data exchanges between user devices across numerous networks.

This architecture boosts productivity through increased collaboration. However, it carries a hidden risk when the zero-trust security model isn’t followed.

Zero trust use cases

What might that risk look like? Let’s suppose that one employee working on a single device is validated as “trusted.” But that device becomes infected with malware after the user opens a dangerous email. (Learn how to identify a dangerous email.)

Since this user’s device was previously validated and is still assumed harmless, it retains the same access to users and networks that it had before being infected, without having to provide or verify any credentials.

The result is unrestricted freedom to spread malware from this “trusted” device to other users on the network and to devices on overlapping networks, letting the malicious actor expand their reach and gain access to more and more of the company’s business-critical data.

This example is the main reason zero trust architecture refuses to assume any device is safe. Instead, the system reduces risk through continuous authentication: always verifying, always validating. According to TechTarget:

This protects your organization in ways other models can’t. It stops malware from entering your network; gives remote workers more protection without affecting productivity; simplifies management of security operations centers with enhanced automation; and extends visibility into potential threats to improve proactive remediation and response.

TechTarget
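
The infected-laptop scenario above also shows why long-lived trust is dangerous. One common mitigation is to tie access to short-lived grants that must be re-earned, so a device flagged as compromised is cut off quickly. A minimal sketch under that assumption, with hypothetical names and a deliberately short TTL:

```python
import time

SESSION_TTL_SECONDS = 300               # short-lived grant; must be re-earned
QUARANTINED_DEVICES: set[str] = set()   # fed by EDR/MDM alerts in practice

class Session:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.issued_at = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.issued_at > SESSION_TTL_SECONDS

def access_resource(session: Session) -> bool:
    # Re-checked on every request: an expired grant, or a device flagged
    # as infected since the last check, loses access immediately.
    if session.expired() or session.device_id in QUARANTINED_DEVICES:
        return False
    return True

s = Session("laptop-42")
print(access_resource(s))               # True while healthy and fresh
QUARANTINED_DEVICES.add("laptop-42")
print(access_resource(s))               # False once the device is flagged
```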

How to Adopt Zero Trust  

According to a Microsoft zero trust business plan, “digital transformation forces re-examination of traditional security models.” As such, there are many companies offering guidance. Microsoft alone has helped thousands of organizations deploy zero trust, with insightful (and practical) guides on how to adopt a zero-trust business plan.

Global cybersecurity leader Palo Alto Networks shares that there are three crucial steps you need to follow to deploy zero trust architecture in your business:

  1. Define your protected surface: Zero trust architecture can be costly and complicated. As such, identify your protected surface, including components like company applications and assets, rather than trying to secure a large network area all at once.

    If your business utilizes Microsoft 365, then you’ll know that documents, email, SharePoint data, and Teams chat must be secured against cyberattacks. Attackers can breach an account with access to that data or hijack your system admin account, making it imperative to find a SaaS data backup solution that maintains multiple backup copies with the needed granularity of data and metadata.

  2. Map your data flow: Plan your business’ flow of instructions and data as this will provide you with information on overlapping networks.

    For instance, where and in what formats is data stored? Whether your employees work across desktop, mobile, or cloud platforms, identify each so you can see how data moves and is shared.

  3. Design your architecture: Essentially, the network architecture should prevent unauthorized access by individuals who aren’t part of your company (a sketch tying these three steps together follows this list).

    This is especially relevant if you want to encrypt data before it moves to cloud storage devices. If you want to back up your company’s Microsoft 365 data, for instance, we offer blockchain-based encryption technology that guarantees your backups will remain immutable to ransomware threats and data loss. At Keepit, we also offer comprehensive coverage for M365 applications such as SharePoint, OneDrive, Groups and Teams, and Exchange Online.
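
As one illustration of what steps 1 through 3 can produce, here is a minimal, hypothetical sketch in Python of a protect-surface inventory and the default-deny policy derived from it. The asset names, groups, and structure are purely illustrative, not any vendor’s format:

```python
# Step 1: enumerate the protected surface -- the data, applications,
# assets, and services that matter most (hypothetical examples).
PROTECTED_SURFACE = {
    "m365-exchange": {"owner": "it-admins", "classification": "confidential"},
    "sharepoint-finance": {"owner": "finance", "classification": "restricted"},
}

# Step 2: record the mapped data flows -- who legitimately needs which asset.
ALLOWED_FLOWS = {
    ("finance", "sharepoint-finance"),
    ("it-admins", "m365-exchange"),
}

# Step 3: the architecture enforces default deny -- anything not
# explicitly mapped in step 2 is rejected.
def allow(group: str, asset: str) -> bool:
    return asset in PROTECTED_SURFACE and (group, asset) in ALLOWED_FLOWS

assert allow("finance", "sharepoint-finance")
assert not allow("marketing", "sharepoint-finance")  # not an allowed flow
```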

Of course, implementation isn’t as simple as one, two, three: It is a massive undertaking that requires focused effort to roll out and maintain, and there are many other variables and considerations.

For instance, you can also adopt multi-factor authentication (MFA) and ensure devices stay updated:

  • MFA is especially relevant for companies that store their digital information in cloud computing systems. With MFA, you can prevent unauthorized users from accessing your organization’s resources (see the sketch after this list).
  • Similarly, encourage your workforce to keep device firmware up to date, as updates typically include security patches for known vulnerabilities.
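
On the MFA point, here is a minimal sketch using the open-source pyotp library. A real deployment would hang this off your identity provider; the user name and issuer below are made up:

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret and hand it to the user's
# authenticator app, usually by rendering this URI as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: a password alone is never enough -- the user must also supply
# the current six-digit code from their device.
def second_factor_ok(user_code: str) -> bool:
    return totp.verify(user_code)  # checks against the current time window

print(second_factor_ok(totp.now()))  # True: code from the current window
```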

Finally, continuously monitor your network and device attributes. Adopting zero trust architecture can prove futile if nobody audits activity or maintains logs of network traffic.
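
Even a simple structured log of every access decision gives a security team something to audit. A minimal sketch using Python’s standard logging module (the field names are illustrative):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("zerotrust.audit")

def record_decision(user: str, device: str, resource: str, allowed: bool) -> None:
    # One structured line per decision makes denied-access spikes and
    # unusual resource requests easy to spot in a SIEM.
    audit_log.info(json.dumps({
        "user": user,
        "device": device,
        "resource": resource,
        "allowed": allowed,
    }))

record_decision("alice", "laptop-42", "sharepoint-finance", allowed=False)
```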

Do I still need to back up my SaaS data?

Ultimately, zero trust makes it much more difficult for external threats to gain access to an organization’s business-critical data – but not impossible. Nor does it protect you against internal threats or human errors such as accidental overwrites and deletions.

Data protection best practices tell us to always have a backup. That is a fundamental responsibility for you, the data creator and customer of a SaaS service like Microsoft 365, under the well-documented yet often misunderstood shared responsibility model. Securing an independent backup is still the best way to ensure 24/7 availability of your data.

With the offerings from specialized third-party backup and data management providers, peace of mind can be had quickly and cost-effectively. This is why Keepit was created: Your data, here today, here tomorrow.

Want backup now?

Learn more about Keepit’s SaaS data backup service offerings here.

If you’d like to explore more about backing up a particular SaaS workload like Microsoft 365, find the relevant Keepit blog posts below, as Keepit offers a suite of cloud SaaS data protection services:

  • Read our blog about why you need to back up M365
  • If you’re using Salesforce, read that blog article here
  • Why back up Active Directory (Azure) here
  • And for Google Workspace
  • Finally, read why to back up Zendesk here


About Keepit
At Keepit, we believe in a digital future where all software is delivered as a service. Keepit’s mission is to protect data in the cloud. Keepit is a software company specializing in cloud-to-cloud data backup and recovery. Drawing on 20+ years of experience building best-in-class data protection and hosting services, Keepit is pioneering the way to secure and protect cloud data at scale.

Why adding “End of Life” to your cybersecurity vocabulary is a good idea



Life seems to be moving at a blazingly fast pace, and so does technology, maybe even more so. It is no wonder we sometimes feel overwhelmed and question whether we can keep up. Yes, it is hard to keep up with new technological advances and the threats accompanying them. But precisely because technology is moving so fast, staying on top of the latest cybersecurity knowledge and solutions matters all the more.

The saying “New is always better” is clearly not always true, but when it comes to securing our devices, there is some truth to it. We trust what we know, and with technology changing rapidly, we may prefer to keep using outdated but trusted products. There are a few things to consider, though, especially in the field of digital security: malicious actors are constantly testing and honing exploitation techniques against software products – especially older versions.

Upgrading to new software can be a difficult decision, especially when a business has invested heavily in a particular product or when funds to ensure continuity after an upgrade are scarce. Some businesses may not want to update at all. Yet sometimes the manufacturer or software provider can press the issue by bringing products to their End of Life. Also known as a product sunset, this is the communicated end of the manufacturer’s support for a product (or service), generally preceded by a period of limited support. In basic terms, it means that change is afoot.

What is EOL?
End of Life is a policy change applying to a platform or product that has reached the end of its useful life. The decision is made by the manufacturer and typically comes many years after the software or hardware entered production.

EOL policies exist to reduce the number of older product versions that demand constant attention and maintenance. Why do providers do this? To focus time and resources on newer products so that those products get the attention they need to protect customers against newly arising threats. Progress cannot be stopped, but new threats constantly try to interrupt the journey forward. ESET is here to protect progress, so instead of resisting this momentum, we should embrace not only the new technology but also an understanding of the new threats. The newer the product, the better it is adapted to the current threat environment, which allows for better protection and a smoother experience for our business customers.

It is very important, and we strongly advise our users, to always run the latest version of ESET products. Users should also ensure that other critical software, especially the device’s operating system (OS), is up to date and fully supported. The status of your OS matters because it has implications for core functions and for security. For example, there have recently been changes to Windows’ End of Life policy. To read more, click on this link.
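
To make this actionable, here is a minimal sketch of an automated EOL check against a hand-maintained table of support-end dates. The product names and dates below are hypothetical; real entries should come from the vendor’s published EOL policy pages:

```python
from datetime import date

# Hypothetical table -- populate from the vendor's official EOL policy.
END_OF_LIFE = {
    "ExampleProduct 7": date(2022, 12, 31),
    "ExampleProduct 8": date(2025, 6, 30),
}

def support_status(product: str, today: date | None = None) -> str:
    today = today or date.today()
    eol = END_OF_LIFE.get(product)
    if eol is None:
        return "unknown -- check the vendor's EOL policy"
    if today > eol:
        return "end of life -- upgrade to a supported version"
    return f"supported until {eol}"

print(support_status("ExampleProduct 7"))
```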

Upgrading to the latest ESET product versions has always been at “no cost,” and that is still the case to this day: access to new product versions is included in the price of your valid license. In this way, updates let users employ the most advanced security technologies, high performing and easy to use, all of which make our products more effective for you. To check ESET’s End of Life policy, click this link.


About ESET
For 30 years, ESET® has been developing industry-leading IT security software and services for businesses and consumers worldwide. With solutions ranging from endpoint security to encryption and two-factor authentication, ESET’s high-performing, easy-to-use products give individuals and businesses the peace of mind to enjoy the full potential of their technology. ESET unobtrusively protects and monitors 24/7, updating defenses in real time to keep users safe and businesses running without interruption. Evolving threats require an evolving IT security company. Backed by R&D facilities worldwide, ESET became the first IT security company to earn 100 Virus Bulletin VB100 awards, identifying every single “in-the-wild” malware without interruption since 2003.