
Announcement: Pandora FMS and CVE-2021-44228, the critical Apache Log4j vulnerability

In response to the vulnerability tagged as CVE-2021-44228, known as “Log4Shell”, we at Artica PFMS confirm that Pandora FMS does not use this Apache logging component and is therefore not affected.

Discovered by the Alibaba security team, the problem is an unauthenticated remote code execution (RCE) flaw in any application that uses this open-source utility, and it affects unpatched versions from Apache Log4j 2.0-beta9 up to 2.14.1.

If we used it, we would indeed be exposed; fortunately, it is not a dependency required for the operation of our product.

However, we must also note that the Elasticsearch component used by the log collection feature is potentially affected by CVE-2021-44228.

Recommended solution

There is, however, a solution recommended by the Elasticsearch developers:

1) Upgrade to a JDK newer than version 8 to achieve at least partial mitigation.

2) Follow the developers’ instructions and upgrade to Elasticsearch 6.8.21, or to 7.16.1 or later.

Additional solution

If you cannot upgrade your version, here is an additional method to mitigate the same problem:

  • Disable message lookups (formatMsgNoLookups) as follows:
  1. Stop the Elasticsearch service.
  2. Add -Dlog4j2.formatMsgNoLookups=true to the JVM options in /etc/elasticsearch/jvm.options.
  3. Restart the Elasticsearch service.
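For reference, step 2 can be scripted so it is safe to run more than once. This is a hedged sketch, not an official Elastic procedure: the path assumes a default package install, and the script should be run between stopping and restarting the service.

```python
# Hedged sketch: idempotently append the Log4j mitigation flag to
# jvm.options. Run between stopping and restarting Elasticsearch.
from pathlib import Path

FLAG = "-Dlog4j2.formatMsgNoLookups=true"

def ensure_mitigation(jvm_options: str = "/etc/elasticsearch/jvm.options") -> bool:
    """Append FLAG if it is missing; return True if the file was modified."""
    path = Path(jvm_options)
    lines = path.read_text().splitlines()
    if FLAG in lines:
        return False  # flag already present, nothing to do
    path.write_text("\n".join(lines + [FLAG]) + "\n")
    return True
```

Running it a second time returns False and leaves the file untouched, so it can be included in configuration management without duplicating the flag.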

Should anything else arise, we will keep you informed.

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About PandoraFMS
Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.
Of course, one of the things that Pandora FMS can control is the hard disks of your computers.

Scoring Security Vulnerabilities: Introducing CVSS for CVEs

Similar to how software bugs are triaged by severity, security vulnerabilities must be assessed for impact and risk, which aids vulnerability management. The Forum of Incident Response and Security Teams (FIRST) is an international organization of trusted security researchers tasked with creating best practices and tools for incident response teams, as well as standardizing security methodologies and policies.
One of FIRST’s initiatives is the Special Interest Group (SIG) responsible for developing and maintaining the Common Vulnerability Scoring System (CVSS) specification, which helps security teams understand and prioritize the severity of security vulnerabilities.

Scoring Vulnerabilities

CVSS is a standard measurement system for organizations, industries and governments that need consistent and accurate vulnerability impact scores. The quantitative model of CVSS ensures accurate and repeatable measurement while allowing users to see the underlying vulnerability characteristics that were used to generate the scores. CVSS is normally used to prioritize vulnerability remediation activities and to assess the severity of vulnerabilities discovered on one’s systems.

Challenges with CVSS

Missing Applicability Context 

Vulnerability scores do not always account for the context in which a vulnerable component is used by an organization. The scoring of a Common Vulnerabilities and Exposures (CVE) entry can factor in many variables, but in some cases other considerations affect how a vulnerability is handled, regardless of the score assigned to it.

For instance, a high-severity vulnerability, as classified by CVSS, found in a component used only for testing purposes, such as a test harness, might receive little or no attention from security experts. One reason is that such a component is used as an internal tool and is not exposed through any publicly accessible interface.

Additionally, vulnerability scores do not extend their context to account for material consequences, such as when a vulnerability applies to cars, utility grids or medical devices. Each firm needs to triage and account for the specific implications of the vulnerable components that are relevant to its own products.

Incorrect Scoring 

A vulnerability score encompasses a wide range of major characteristics, and without supporting information, proper guidance and experience, mistakes can easily be made. It is not rare to find false positives in a CVE, or inaccuracies in the scores assigned to any of the metric groups, which introduces a risk of losing trust in a CVE or of creating panic in organizations.

CVSS has a score range of 0 to 10 that maps to severity levels from None up to Critical. Inaccurate variables may lead to a score that maps to the wrong severity level. CVSS v3.0 can be used for evaluating and communicating security vulnerability characteristics and their impact. Security research teams take part in discovering new vulnerabilities across ecosystems; additionally, they work to triage CVE scores so that severities are properly represented, balancing scoring inaccuracies made by other CVE-issuing authorities.

A vulnerability database can provide supporting metadata beyond the CVE details for each vulnerability. Security experts curate each vulnerability with information such as the type of vulnerability or an overview of the vulnerable components, enriched with reference links and examples of commits, fixes or other material related to the vulnerability.

How CVSS Works 

There are three versions in CVSS’s history, from its first release in 2004, through the widespread adoption of CVSS v2.0, to the present working specification, CVSS v3.0. The specification offers a structure that standardizes the way vulnerabilities are scored, grouped to showcase individual areas of concern.

The metrics for a CVSS score are allocated in different groups:

  1. Base: Impact and exploitability metrics that do not depend on the timing of a vulnerability or on a user environment, such as the ease with which the vulnerability can be exploited. For instance, if access to a vulnerable component is totally denied because of a vulnerability, it will score a high availability impact.

CVSS base metrics are composed of exploitability and impact metric sub-groups, and they are assessed against a vulnerable software component, which may in turn impact other components (hardware, software or networking devices).

  2. Temporal: This metric accounts for circumstances that change over time and affect a vulnerability score. For instance, if there is a known exploit for a vulnerability, the score will increase; however, if there is a patch or fix available, the score will decrease.

The main purpose of the temporal score is to offer context according to the timing of a CVE severity. For example, if there are known public exploits for a security vulnerability, this raises the severity and criticality for the CVE because of the considerably easy access to resources for employing such attacks. 

A complete CVSS score includes the temporal part only when there is temporal information, with unassigned temporal metrics defaulting to the highest-risk value. Consequently, any temporal values that are assigned keep the overall CVSS score at or below the base score.

  3. Environmental: This metric enables customizing the score to the impact on a user’s or company’s environment. For instance, if the organization values the availability of a vulnerable component, it may set a high availability requirement and increase the overall CVSS score.

In conclusion, the base metrics form the basis of a CVSS vector. If temporal or environmental metrics are available, they are incorporated into the overall CVSS score.
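The structure described above can be illustrated with a small sketch. The severity bands and the "round up to one decimal" rule follow the CVSS v3.0 specification; the full base and environmental formulas are more involved, so this only shows the qualitative mapping and how temporal multipliers (each at most 1.0) can only hold a score at or below the base score.

```python
import math

def severity(score: float) -> str:
    """Map a CVSS v3.0 score to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' helper."""
    return math.ceil(x * 10) / 10

def temporal_score(base: float, exploit: float,
                   remediation: float, confidence: float) -> float:
    # Each temporal multiplier is <= 1.0, so the temporal score can
    # only stay at or drop below the base score, as described above.
    return roundup(base * exploit * remediation * confidence)
```

For example, a base score of 9.8 combined with a functional exploit (0.97), an official fix (0.95) and confirmed reports (1.0) yields a temporal score of 9.1, which still maps to Critical.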


About Topia
TOPIA is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.

What is Role-Based Access Control?

Most of us have visited a hotel at some point in our lives. We arrive at reception: if we request a room, they give us a key; if we are going to visit a guest, they lead us to the waiting room as a visitor; if we are going to have dinner at their restaurant, they label us as a customer; and if we attend a technology conference, we go to their conference room. We would never end up in the pool or in the laundry room, for a very important reason: we were assigned a role upon arrival.

Do you know what Role-Based Access Control or RBAC is?

In the field of computing, too, all of this has been taken into account since the beginning; but remember that the first machines were extremely expensive and limited, so we had to settle for simpler resources before Role-Based Access Control (RBAC) arrived.

Access Control List

In 1965 there was a time-sharing operating system called Multics (created by Bell Laboratories and the Massachusetts Institute of Technology), which was the first to use access control lists (ACLs). I wasn’t even born at that time, so I will trust what Wikipedia has to say about this topic. What I do know first-hand is the filesystem access control list (filesystem ACL) that Novell NetWare® used in the early 1990s, which I already told you about in a previous article on this blog.

But let’s go back to the access control list: what is an access control? This is the easiest thing to explain: it is nothing more and nothing less than a simple restriction on a user’s access to a resource, whether by means of a password, a physical key, or even biometric values, such as a fingerprint.

An access control list, then, consists of writing down each of the users who can access something (explicitly allowed) or not (explicitly denied, under no circumstances). As you may imagine, this becomes tedious: constantly keeping track of users one by one, and also of the operating system’s processes or the programs that run on it… You can see what a mess it is to write down all the entries, known as access control entries (ACEs).
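A toy sketch of such a list, where each access control entry explicitly allows or denies one user on one resource (an explicit denial always wins, and no matching entry means no access); the names used here are invented for illustration:

```python
from typing import NamedTuple

class ACE(NamedTuple):
    """One access control entry: a user, a resource, and allow or deny."""
    user: str
    resource: str
    allow: bool

def is_allowed(acl: list[ACE], user: str, resource: str) -> bool:
    # An explicit deny always wins; with no matching entry, access is refused.
    decision = False
    for ace in acl:
        if ace.user == user and ace.resource == resource:
            if not ace.allow:
                return False
            decision = True
    return decision
```

Even in this tiny form, you can see why the approach becomes tedious: every user and every resource needs its own entry, which is exactly the burden that groups, and later roles, were invented to reduce.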

Following the example of rights over files, directories and beyond (even full resources: optical disks or entire hard drives), I came to work, last century, with Novell NetWare® and its filesystem ACL. Then came the turn of the millennium and NFSv4 ACLs, which picked up and expanded, in a standardized way, everything we had used since 1989, when RFC 1094 established the Network File System Protocol Specification. I have summarized a lot, and should at least mention the use MS Windows® makes of ACLs through its Active Directory (AD), networking ACLs for network hardware (routers, hubs, etc.), and the implementations found in some databases.

All these technologies, and more, make use of the concept of access control lists, and as everything in life evolves, the concept of groups of users sharing certain similarities emerged, which saved work in maintaining the lists. Now imagine that you have one or more access control lists that only support groups. Well, in 1997 a man named John Barkley demonstrated that this type of list is equivalent to a minimal Role-Based Access Control, but RBAC at the end of the day, which brings us to the core of the issue…

Role-Based Access Control (RBAC)

The concept of a role in RBAC goes beyond permissions; it can also include well-defined skills. In addition, a subject may have several assigned roles, depending on the needs of the protagonist (user, software, hardware…). Take a billing department as an example: a salesperson, who already has the corresponding role, could also be given a collections role to analyze customer payments and focus their sales on solvent customers. With roles, this is relatively easy to do.

Benefits of RBAC

• First of all, RBAC dramatically reduces the risks of breaches and data leaks. If the roles were created and assigned rigorously, the return on investment of the work done in RBAC is guaranteed.

• It reduces costs by assigning more than one role to a user. It is unnecessary to buy new virtual computers if existing ones can be shared with groups already created. Let Pandora FMS monitor and provide you with information to make decisions about redistributing the hourly load or, if (and only if) necessary, acquiring more resources.

• Federal, state, or local regulations on privacy or confidentiality can be required of companies, and RBACs can be a great help in meeting and enforcing those requirements.

• RBACs not only help efficiency in companies when new employees are hired, they also help when third parties perform security work, audits, etc. because beforehand, and without really knowing who will come, they will already have their work space well defined in one or more combined roles.

Disadvantages of RBAC

• The number of roles can grow dramatically. If a company has 5 departments and 20 functions, we can have up to a maximum of 100 roles.

• Complexity. Perhaps this is the most difficult part: identifying and mapping all the mechanisms established in the company and translating them into RBAC. This requires a lot of work.

• When someone needs to temporarily extend their permissions, RBAC can become a difficult chain to break. For this, Pandora FMS proposes an alternative that I explain in the next section.

RBAC Rules

To take full advantage of the RBAC model, developing the concept of roles and authorizations always comes first. It is also important that the identity management used to assign these roles be done in a standardized way; the ISO/IEC 24760-1 standard of 2011 attempts to address this.

There are three golden rules for RBAC that must be applied in order and enforced in due course:

1. Role assignment: Someone may exercise a permission only if they have been assigned a role.

2. Role authorization: The active role of a person must be authorized for that person. Along with rule number one, this rule ensures that users can only assume the roles for which they are authorized.

3. Permission authorization: Someone can exercise a permission only if the permission is authorized for the person’s active role. Along with rules one and two, this rule ensures that users can only exercise the permissions for which they are authorized.
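The three rules above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation; the role names and the permission table are invented, echoing the salesperson/collections example from earlier.

```python
# Invented permission table for illustration only.
ROLE_PERMISSIONS = {
    "salesperson": {"create_quote", "view_customer"},
    "collections": {"view_payments", "view_customer"},
}

class Session:
    def __init__(self, assigned_roles):
        # Rule 1 (role assignment): permissions are only reachable
        # through roles that have been assigned to the subject.
        self.assigned_roles = set(assigned_roles)
        self.active_role = None

    def activate(self, role):
        # Rule 2 (role authorization): the active role must be one
        # the subject is actually authorized for.
        if role not in self.assigned_roles:
            raise PermissionError(f"role {role!r} not assigned")
        self.active_role = role

    def can(self, permission):
        # Rule 3 (permission authorization): a permission is exercised
        # only through the currently active role.
        if self.active_role is None:
            return False
        return permission in ROLE_PERMISSIONS.get(self.active_role, set())
```

A salesperson session can activate only the salesperson role, and through it exercise only that role's permissions; attempting to activate an unassigned collections role fails at rule 2.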

The Enterprise version of Pandora FMS has a very complete RBAC implementation and authentication mechanisms such as LDAP or AD, as well as two-factor authentication with Google® Auth. In addition, with the tag system that Pandora FMS handles, you may combine RBAC with ABAC. Attribute-based access control is similar to RBAC, but instead of roles it is based on user attributes; in this case, assigned tags, although they could be other values, such as location or years of experience within the company.

But we leave that for another article…

Before finishing this article, remember Pandora FMS is a flexible monitoring software, capable of monitoring devices, infrastructures, applications, services and business processes.

Would you like to find out more about what Pandora FMS can offer you? Find out clicking here: https://pandorafms.com/

If you have more than 100 devices to monitor, you may contact us through the form: https://pandorafms.com/contact/

Also, remember that if your monitoring needs are more limited, you have Pandora FMS OpenSource version available. Learn more information here: https://pandorafms.org/

Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!


How Application Management Needs are Driving Edge Computing

Last month, Scale Computing’s CEO and co-founder Jeff Ready joined up with Rob High, IBM Fellow, VP, CTO IBM Network and Edge Computing for a video meetup with Spiceworks.

This past Summer, Scale Computing and IBM announced a collaboration to help organizations adopt an edge computing strategy designed to enable them to move data and applications seamlessly across hybrid cloud environments, from private data centers to the edge.

In this informative and wide-ranging conversation, Jeff and Rob explore some of the trends driving the edge computing market — from the proliferation of connected devices generating voluminous amounts of data and the need to have greater application resiliency to ensuring compliance with an ever evolving regulatory environment — it’s no longer a question of ‘if’ edge computing will transform how we work and live, but when.

What follows are some of the highlights from their conversation. You can watch the video meetup in its entirety here.

What impact has the abrupt shift to remote work had on the edge computing market?

Jeff Ready (JR): First, it’s probably worth defining what we mean by edge computing, which we can sum up as any place where you’re going to run a mission-critical application outside of the data center; the edge just means not in the data center. What’s happened through the pandemic is that, all of a sudden, you have to run these applications in all sorts of different places.

The big challenge here is that the ‘edge’, by that definition, has some fundamental differences from the data center, where you have redundant internet connectivity and reliable power, and where, when something breaks, someone can walk into a room and fix it relatively quickly. But what if I have to do that same task across 500 locations, and those locations are only online sometimes? This problem of horizontal scalability, in which you have to replicate infrastructure tasks across a lot of locations, is a serious issue and an area where we’re seeing a lot of very interesting use cases, especially in industries like manufacturing where, for instance, industrial robots are generating tons of data.

Gartner says that today less than 10% of all data is generated at the edge, or outside of a data center but over the next four years, they expect 75% of the data to be generated at edge locations, which is a radical shift. This is the big wave that’s coming.

Rob High (RH): Much of what we’ve been talking about lies within the context of knowledge workers where our place of work has traditionally been the office. However, the vast majority of businesses are not about housing knowledge workers – they’re about running factories and retail stores and distribution centers. These businesses are fundamentally physical. And so when we think about the edge, we ought to be thinking about those kinds of places almost as much, if not more than remote office workers.

Not only is a tremendous amount of data being generated at these locations; all that data is being used to make decisions. And the question becomes: how much data is being generated, and how much are we having to transmit across the network? What’s the cost of that transfer? The latency of that transfer? What privacy issues are we exposed to? All of these are places where there is an opportunity to take advantage of the increased volume of data, and to do it locally, so we can make better and faster decisions.

Since the cloud is everywhere, why not just go full cloud?

JR: There are a number of reasons why some of these applications are running out at the edge. On a practical level, it just makes more sense – think of a point of sale system in a retail store. You could run it in a cloud but in most retail stores, the internet is one of the least reliable components within that environment. The point of sale system is pretty critical obviously and it’s often linked to an EBT system, which is the food stamp system. And if both systems go down there are two compounding problems.

If cash registers are running slow people will abandon their shopping carts which is bad in its own right. If there are refrigerated items in that cart, by law they can’t be put back onto the shelves and that’s typically the most expensive stuff. The other thing is that if the EBT system goes down, by Federal law in the US, the food is now free so they’re losing money there as well. An hour of downtime across their stores can quickly result in hundreds of thousands of dollars in lost revenue.

Then there’s the issue of latency, which comes down to the physics problem of moving packets of data 2,000 miles to a data center. Until we can figure out how to go faster than the speed of light, the only solution is to move the decision-making closer. Finally, there’s the issue of data privacy regulations, which we haven’t seen as much here in the US as in Europe, but which will likely become more of an issue in the near future. For instance, there was recently a story in the news in Australia, in which a convenience store had a kiosk where you could take a survey, and it took a picture of you at the beginning and end of the survey to help the retailer gauge consumers’ facial expressions. They then sent those images to the Azure cloud for processing, but that was a big no-no, as sending an image with personal data to the cloud is against the law.

We’re moving to a true hybrid kind of world. In this context, hybrid simply means run the application where it makes the most sense to run the applications – whether it’s cloud, at the edge, or in a traditional data center, shouldn’t really matter.

RH: It’s important to remember that the edge is not just one thing. There are multiple potential tiers where you can locate compute which might be in a server in a retail store or on the factory floor. Most IoT equipment these days now includes some kind of general purpose compute embedded in the device itself – we’re seeing this with everything from cameras to industrial robots.

That becomes important to think about because, on the other end, you’ve got a number of metro hosting environments: basically data centers located in the metropolitan areas where the majority of businesses and users live. So compute lives in between, because it’s an edge to the data center. So now we can go back to the line of business, understand the application requirements, and make choices about where it makes the most sense to place these applications, considering the trade-offs of latency, network throughput, resiliency and privacy that they might care about. And it’s not going to be a one-size-fits-all approach.


Can you tell us about the partnership between Scale Computing and IBM? How will the combination of your solutions really help some organizations out?

JR: The magic of the Scale Computing platform is in its self-healing capabilities. The challenge, as it relates to edge and on-premise computing, often comes down to manageability. The Scale Computing platform lets you manage thousands of sites just as easily as a single site, all through a centralized portal. You can see exactly what’s going on, deploy an application to multiple sites at once, update the application, or spin up new locations. Take, for example, the grocery store chain I was talking about earlier. They don’t have to send a tech on-site to deploy a new cluster. Someone can literally just plug it in, and it will automatically reach back out to the management portal, download configuration files and applications, and report back when it’s done. Our goal is to really simplify management while maintaining that high availability.

The IBM edge application manager is the tool that allows you to manage these applications in the cloud, whether it’s a Kubernetes app or a legacy virtual machine, and deploy them to the location of your choice – whether that’s on-premises, on AWS, or the IBM cloud.

RH: The beauty of this partnership is that we both share a common understanding of the edge marketplace and the needs that are there, particularly the need to get the right software to the right place at the right time. Scale Computing has been working on this for VM-based applications, and we’ve been concentrating on that problem for containerized applications. So we brought those two things together, and now, on the Scale Computing platform, you can do both. You can manage both your VM-based applications and your containers from a single, centralized control point. There’s no need for IT specialists to be present at a remote location to manage this process.

Any parting thoughts?

JR: I think there is a natural inclination to think that edge computing is only suitable for a large enterprise or some big deployment, and that is just not the case. It certainly applies there, right? I mean, an 8,000-store deployment is one thing. But then, I’ve got a manufacturing customer with just a single location: a large factory that has about a dozen different edge computing deployments. There are a lot more use cases out there than you might naturally think of.

RH: The cost of delaying automation far exceeds the cost of actually putting the automation in place, even for the first deployment: getting to know it from day one, and organizing your practices and processes around using the automation system to manage these edge environments.


About Scale Computing
Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Scale Computing HC3 software eliminates the need for traditional virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully integrated, highly available system for running applications. Using patented HyperCore™ technology, the HC3 self-healing platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime. When ease-of-use, high availability, and TCO matter, Scale Computing HC3 is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate and TrustRadius.

Detecting & Alerting Log4J with the SCADAfence Platform

Until two weeks ago, Log4j was just a popular Java logging framework, one of the numerous components that run in the background of many modern web applications. But since a zero-day vulnerability (CVE-2021-44228) was published, Log4j has made a huge impact on the security community as researchers found that it’s vulnerable to arbitrary code execution. 

The good news is that the Apache Software Foundation has already fixed and rolled out the patch for the vulnerability. On top of the patch, thanks to SCADAfence’s research and R&D team, our latest build supports the detection of Log4j exploit attempts.

Quick Recap of CVE-2021-44228 in Log4j

Log4Shell is an unauthenticated remote code execution (RCE, code injection) vulnerability in the popular Log4j logging framework for Java. By exploiting it, an attacker can easily execute arbitrary code from a remote source on the attacked target. NIST has given this vulnerability (CVE-2021-44228) a score of 10 out of 10, which reflects its criticality.

Over 3 billion devices run Java, and because there are only a handful of logging libraries, many of them are likely to run Log4j. Worse still, many internet-exposed target applications can be exploited by external users without authentication. 

Over the past two weeks, major OT vendors disclosed the security impact of this vulnerability on their software and equipment, and additional disclosures will continue as vendors work to identify the use of Log4j across their product lines. Initially, the Log4j vulnerability made it challenging to identify potentially impacted servers on a given network. For OT networks that have implemented network segmentation, the risk can be mitigated to an extent.

How To Ensure That Your Systems Are Safe

First, it’s important to understand that the root cause of this issue lies within the Log4j library. The Apache Software Foundation released an emergency patch for the vulnerability. You should upgrade your systems to Log4j 2.15.0 immediately or apply the appropriate mitigations.
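As a rough triage aid (not an official Apache or SCADAfence tool), a check along these lines can flag Log4j 2.x versions below 2.15.0. It deliberately ignores the 1.x branch and any back-ported fixes, so treat a hit as a prompt to investigate, not a verdict:

```python
def log4j_looks_vulnerable(version: str) -> bool:
    """Flag Log4j 2.x versions below 2.15.0 as potentially exposed to
    CVE-2021-44228. Triage hint only: ignores 1.x and back-ported fixes."""
    if version.startswith("2.0-beta"):
        return True  # 2.0-beta9 onward is in the affected range
    try:
        major, minor = (int(p) for p in version.split(".")[:2])
    except ValueError:
        return True  # unparsable version string: investigate manually
    return major == 2 and minor < 15
```

For example, 2.14.1 is flagged as within the affected range, while 2.15.0 and later are not.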

Our OT security threat intelligence database learns the different behaviors used to exploit this vulnerability, in order to highlight activities attempting to leverage it and to provide remediation guidance. Our customers are notified of Log4j exploit attempts, as well as of any anomaly detected by our anomaly engine; indeed, they are already protected simply based on the efficacy of our anomaly detection.

The SCADAfence Platform, the Governance Portal, and the Multi-Site Portal do not use Log4J or the Apache server, and thus SCADAfence product installations are updated and secure from the Log4J vulnerability. Customers do not need to take action for any of our on-prem or hosted web solutions.

At SCADAfence, we felt network segmentation wasn’t enough to fight off this critical vulnerability. The latest build of the SCADAfence Platform detects exploit attempts and lets SCADAfence customers leverage our OT security threat intelligence service to patch and mitigate this exploit on any of their OT devices.


The SCADAfence Platform Detects & Alerts if an OT Asset is Vulnerable to the Log4Shell Vulnerability

We’ve updated the Log4Shell/Log4j exploit detection inside the SCADAfence Platform. We added CVE signatures to our database that detect and alert on RCE (Remote Code Execution) exploit attempts.

The following CVEs were added to the SCADAfence database to correlate and alert on vulnerable OT assets:

  1. CVE-2021-44228   
  2. CVE-2021-45046 
  3. CVE-2021-4104
  4. CVE-2020-9488
  5. CVE-2019-17571
  6. CVE-2017-5645
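As a simplified illustration of the kind of signature matching that CVE-based detection relies on, the sketch below flags log lines containing the JNDI lookup strings used by Log4Shell exploits. The patterns and function name are illustrative only, not SCADAfence's actual detection engine:

```python
import re

# Plain exploit strings look like ${jndi:ldap://...}.  Obfuscated variants
# wrap individual letters in ${lower:x} / ${upper:x} lookups to evade naive
# filters, so those wrappers are normalised away before matching.
OBFUSCATION = re.compile(r"\$\{(?:lower|upper):(.)\}", re.IGNORECASE)
JNDI = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_like_log4shell(line):
    """Heuristic signature check for a Log4Shell probe in one log line."""
    normalised = OBFUSCATION.sub(r"\1", line)
    return bool(JNDI.search(normalised))
```

Real engines correlate many more evasion variants and protocol contexts; a single regex like this is a teaching aid, not a defense.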

How Can You Deploy The Latest Version of SCADAfence

The latest version of the SCADAfence Platform which detects the CVE signatures relating to the vulnerability is available in build 6.6.1.167. To get the latest version, please contact your customer success representative.

If your organization is looking into securing its industrial networks, the experts at SCADAfence are seasoned veterans in this space and can show you how it’s done. 

To learn more about SCADAfence’s array of OT & IoT security products, and to see short product demos, click here: https://l.scadafence.com/demo

 

About Version 2 Limited
Version 2 Limited is one of the most dynamic IT companies in Asia. The company develops and distributes IT products for Internet and IP-based networks, including communication systems, Internet software, security, network, and media products. Through an extensive network of channels, point of sales, resellers, and partnership companies, Version 2 Limited offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which include Global 1000 enterprises, regional listed companies, public utilities, Government, a vast number of successful SMEs, and consumers in various Asian cities.

About SCADAfence
SCADAfence helps companies with large-scale operational technology (OT) networks embrace the benefits of industrial IoT by reducing cyber risks and mitigating operational threats. Our non-intrusive platform provides full coverage of large-scale networks, offering best-in-class detection accuracy, asset discovery and user experience. The platform seamlessly integrates OT security within existing security operations, bridging the IT/OT convergence gap. SCADAfence secures OT networks in manufacturing, building management and critical infrastructure industries. We deliver security and visibility for some of world’s most complex OT networks, including Europe’s largest manufacturing facility. With SCADAfence, companies can operate securely, reliably and efficiently as they go through the digital transformation journey.


What’s New Pandora FMS 759

What’s new in the latest Pandora FMS release, Pandora FMS 759

Let’s take a look together at the features and improvements included in the new Pandora FMS release: Pandora FMS 759.

NEW FEATURES AND IMPROVEMENTS

State change thresholds based on percentage

This is an improvement that many users had requested. It allows a module's status change to be triggered by a percentage change in the value of the last data received. That way, it is more intuitive to define the situations in which a module should go into WARNING or CRITICAL status.
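A minimal sketch of such a percentage-based status rule might look like the following (the threshold names and values are illustrative, not Pandora FMS defaults):

```python
def status_from_change(previous, current, warning_pct=20.0, critical_pct=50.0):
    """Classify a module by the percentage change of its latest value.

    Illustrative thresholds: a change of 20% or more is WARNING,
    50% or more is CRITICAL.
    """
    if previous == 0:
        return "NORMAL"  # no baseline to compare against
    change = abs(current - previous) / abs(previous) * 100.0
    if change >= critical_pct:
        return "CRITICAL"
    if change >= warning_pct:
        return "WARNING"
    return "NORMAL"
```

Expressing thresholds as relative change keeps the rule meaningful whether the metric hovers around 10 or 10,000.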

New trend modules 

These new modules compare the current average with the average of the previous period and return the difference as an absolute value or as a percentage. They are useful to express in a simple way that, for example, your network usage is currently 25% higher than last week, or that the temperature is 2 °C above normal compared to the previous month.
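The comparison these modules perform can be sketched in a few lines (this mirrors the idea, not Pandora FMS's actual API or implementation):

```python
def trend(current_window, previous_window, as_percentage=True):
    """Compare the mean of the current period against the previous one.

    Returns the difference as a percentage of the previous mean, or as
    an absolute value when as_percentage is False.
    """
    cur = sum(current_window) / len(current_window)
    prev = sum(previous_window) / len(previous_window)
    if as_percentage:
        return (cur - prev) / prev * 100.0
    return cur - prev
```

For instance, a current-week mean of 125 against a previous-week mean of 100 yields a +25% trend.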

Capacity planning modules

These modules make predictions over the time frame specified by the user, assuming roughly linear behavior of the target module. This type of predictive module lets you find out how many days remain until a disk is completely full, or how many database requests you will receive within a month if the current trend continues. These modules replace the old prediction modules.
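Under the stated linearity assumption, a "days until full" estimate reduces to fitting a line to recent usage samples and extrapolating. The sketch below is illustrative only, not Pandora FMS's prediction algorithm:

```python
def days_until_full(samples, capacity):
    """Estimate days until `capacity` is reached, assuming linear growth.

    `samples` is a list of (day, used) points.  A least-squares line is
    fitted and extrapolated; returns None if usage is flat or shrinking.
    """
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None  # all samples on the same day; no trend to fit
    slope = (n * sxy - sx * sy) / denom
    if slope <= 0:
        return None  # usage flat or decreasing; disk never fills
    intercept = (sy - slope * sx) / n
    last_day = max(d for d, _ in samples)
    full_day = (capacity - intercept) / slope
    return max(0.0, full_day - last_day)
```

With usage growing 10 units per day from 10, a 100-unit disk fills about a week after the last sample.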

Network Configuration Manager

We improved the NCM features. In addition to executing commands manually on different devices, you can now upload firmware, verify configuration changes, and schedule backups.

More IPAM improvements

Following in the wake of the previous release, we added further IPAM features, such as the ability to define sites: isolated islands that allow duplicate addresses. We also added the option to choose a location from a predefined list, order the lists, and edit from the supernet browser, among other improvements.

Editor of special days in Metaconsole

The special days editor has been added to the Metaconsole, and the way special days are assigned to alert definitions has been improved, making it more intuitive.

Policy auto-enforcement by group 

A system for the automatic application of policies has been designed: policies can be assigned to groups, and when a new agent is added to a group, the associated policies are applied automatically. This improves current auto-provisioning operations and enables large numbers of agents to be handled quickly.

Metaconsole inventory

An inventory display feature has been added to the Metaconsole.

Secondary IP macros in network components and plugins

From now on, through the _address_n_ macro, you may use any of the secondary (or successive) IP addresses of an agent in network modules or remote plugins.

New alert action history report

This report complements the existing event and alert reports, offering the details of each action (send email, SMS, log) for each alert triggered on an agent/module. It is based on events, which now store that information when an alert is fired.

If you have to monitor more than 100 devices, you may also enjoy a Pandora FMS Enterprise FREE 30-day TRIAL. Installation on Cloud or On-Premise – you choose! Get it here!

Last but not least, remember that if you have a reduced number of devices to monitor, you can use Pandora FMS OpenSource version. Find more information here.

Do not hesitate to send us your questions. Pandora FMS team will be happy to help you!


About PandoraFMS
Pandora FMS is a flexible monitoring system, capable of monitoring devices, infrastructures, applications, services and business processes.
Of course, one of the things that Pandora FMS can control is the hard disks of your computers.

Industry on The Edge: 3 Use Cases That Show How Industry is Putting Edge Computing to Work Today

Industry watchers have signaled Edge Computing as one of the major IT trends to watch over the next decade. What many people don’t fully appreciate is that Edge Computing is not yet one more over-hyped, future-state technology, but rather something that is being embraced in a number of industries today. And perhaps most surprisingly, it’s being embraced in many staid ‘old-school’ industries such as steel manufacturing, brick-and-mortar retailers, and even container shipping that one might not typically associate with the bleeding-edge.

The term ‘Edge Computing’ simply refers to the paradigm of bringing computation and data storage closer to the location where it’s needed as a way to improve response times and mitigate bandwidth constraints. A new generation of Edge Computing platforms can literally be held in the palm of your hand, can be placed practically anywhere since they have no special cooling or power requirements, and can be easily scaled by simply connecting them into clusters to quickly bring more compute and storage resources online as needed.

Back to the Future

Edge Computing represents another swing of the pendulum in the decades-long journey that has shaped the way IT resources are consumed and delivered – from the highly centralized mainframe computing paradigm to distributed models of client-server and the Cloud and now, we are seeing organizations shift the heavy lifting of compute and storage back on-site.

According to a September 2020 forecast report by IDC, the worldwide edge computing market will reach $250.6 billion in 2024, while Gartner predicts that by 2025, three-quarters of enterprise-generated data will be created and processed at the edge – outside a traditional centralized data center or cloud – up from just 10% in 2018.

There are a number of reasons why we are seeing a renewed interest in moving IT systems closer to home base. Among the most salient benefits, Edge offers greater resiliency, flexibility, and simplified management. And as more businesses introduce new IoT devices and sensors into their environment that produce a high volume of data, the ability to process and feed this data back into local systems can be a major driver of innovation.

Three Edge Use Cases in the Real World

What follows are three use cases from the manufacturing, retail, and shipping industries that showcase how companies are applying Edge Computing not just to simplify and improve the efficiency of their IT operations, but also to enable innovation.

1. Manufacturing at the Edge: Uniting Atoms and Bits in Real-Time

While the manufacturing industry has readily embraced automation and other technologies to boost productivity and improve efficiency, many manufacturers continue to struggle under the weight of having to manage complex and unwieldy systems. However, the extreme simplicity of a hyperconverged infrastructure makes it most beneficial in use cases where IT staff is limited – which is often the case for the tens of thousands of small and mid-sized manufacturing businesses operating across the U.S. And many are now investing in the Edge to optimize the performance of their plant machinery.

One needs to look no further than Harrison Steel, an Indiana-based manufacturer of engineered steel castings. Founded over a century ago, Harrison is an industrial manufacturer that operates several massive electric arc furnaces alongside other precision machinery across more than 650,000 square feet of its sprawling factory floor. Because their facility is so large, networking these machines together was cost prohibitive, forcing their IT staff to spend a good portion of their day transferring machine data back on USB drives for analysis. With a small cluster of hyperconverged machines, they were able to put a system in the middle of their shop floor and collect all of this machine data at regular intervals to keep their systems and machines fully calibrated.

2. Retail at the Edge: When Downtime is Not an Option

Traditional retailers across all categories are under increasing pressure to apply technology that improves the customer experience. Unfortunately, the legacy IT architecture typically found in brick-and-mortar stores – Point of Sale terminals, servers that collect transactions and track inventory – is often rigid, convoluted, and slow.

Jerry’s Foods, a regional chain of 50 retail, grocery, liquor and hardware stores, is one example of how traditional retail is being transformed by the Edge. With 50 storefronts dispersed across three states and no IT staff available within their store locations, the complexity of their IT systems had become a source of persistent disruption that was negatively impacting their customers’ experience. With a centralized IT staff of five supporting all of their branch stores, the majority of their time was spent remotely troubleshooting issues. Implementing an Edge computing strategy has enabled them to deploy hyperconverged clusters within each store, improving the reliability of their existing systems, allowing them to be managed remotely, and in the event of a disruption, seamlessly failing over to keep critical applications online.

3. The Edge at Sea: An Extreme Edge Scenario

The global shipping industry represents one of the most important links in the global supply chain, transporting roughly 90 percent of the world’s goods from port to port on a daily basis. While the ships themselves are towering hulks of steel and diesel, IT and the specialized applications they run are the orchestration engine that makes it all work.

Until only recently, once a ship left port, it was more or less isolated from communication with resources at shore. And since these ships are limited by connectivity and don’t typically have an IT expert aboard, when a pivotal IT component goes offline on a ship hundreds of miles from shore, redundancy and resiliency become all the more critical.

Telford Offshore, an international offshore service provider to the oil and gas industry, operates a fleet of vessels that require 24/7 availability — and must do so in some of the world’s most extreme environments. Without reliable Internet connectivity, Telford’s IT leadership understood that significant cost and operational efficiencies would be realized by unifying their IT infrastructure into a single appliance that could be stationed on each individual vessel in its expanding fleet. Now if there is a system failure, they don’t need to spend tens of thousands of dollars to fly IT support staff out to swap a simple part.

Technology innovations continue to make our world smaller, more connected, and consequently, more vulnerable to disruption when one of those links becomes disconnected. As demonstrated by some of the examples above, bringing converged infrastructure back to local operating environments where more and more data is being generated, and making it easier and more cost-effective to manage, is creating a wealth of new opportunities for innovation. And unlike so many other over-hyped technologies, it’s already here.


About Scale Computing
Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Scale Computing HC3 software eliminates the need for traditional virtualization software, disaster recovery software, servers, and shared storage, replacing these with a fully integrated, highly available system for running applications. Using patented HyperCore™ technology, the HC3 self-healing platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime. When ease-of-use, high availability, and TCO matter, Scale Computing HC3 is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate and TrustRadius.

What is Configuration Management?

Configuration management is an essential foundation for a successful technology platform, and leaders in the tech space will want to know what it takes to implement it. If that is what you are searching for, we discuss some important points in this article.


Vicarius & Log4Shell: What You Need to Know

Has Vicarius Been Affected by Log4Shell?

Along with the rest of the cybersecurity community, we have been continuously monitoring for any evidence of Log4Shell exploit attempts in our digital environment. So far, we have found no evidence that TOPIA or any of our systems have been affected by CVE-2021-44228 or CVE-2021-45046. It is also our current understanding, based on data gathered from extensive testing, that we are not vulnerable to either CVE.
