At the edge, no one can hear your IoT devices scream…


Sponsored Feature If you’ve ever wondered what edge computing looks like in action, you could do worse than study the orbiting, multi-dimensional challenge that is the multi-agency International Space Station (ISS).

It’s not really news that communication and computing are difficult in a physically isolated environment 400 km above the Earth, but every year scientists give the station new and more complex scientific tasks to justify its existence, and that quickly becomes a big challenge. Latency is always high, and sensor data can take minutes to reach Earth, slowing decision making on any task.

That’s why the ISS was designed with enough computing power on board to cope with these delays and operate in isolation, with the processing and machine learning capacity to work on the data in orbit. This is edge computing in its most daring, dangerous and scientifically important form. Although the ISS may seem like an extreme example, it is by no means the only one. The problem of having enough computing power in the right place is becoming fundamental for a growing number of organizations, affecting everything from manufacturing to utilities and cities.

The idea that the edge matters is based on a simple observation: the only way to maintain performance, manageability, and security in modern networks is to bring applications and services closer to the problem, rather than keeping them in a distant data center. While in traditional networks computing power resides in centralized data centers, under edge computing, processing and applications move to multiple locations close to users and to where data is generated. The data center still exists, but it becomes only one part of a much larger distributed system operating as a single entity.

The model seems simple enough, but it has a catch: moving processing power to the edge must be achieved without losing the centralized management and control that security and compliance depend on.

“Whatever organizations do, they want data and services to be closer to the customer or the problem at hand,” says Ian Hood, chief strategist at Red Hat. Red Hat Enterprise Linux and the Red Hat OpenShift Container Platform are used on the ISS to support small, highly portable cross-platform applications running on the onboard HPE Spaceborne Computer-2.

“It’s about creating a better service by processing data at the edge rather than waiting for it to be centralized in the data center or public cloud,” Hood continues. Edge is touted as the solution for service providers and enterprises, but he believes the concept has the greatest immediate impact in industrial applications.

“This sector has a lot of proprietary IoT and industrial automation at the edge, but it’s not very easy for them to manage. Now they are scaling the applications they got from equipment manufacturers such as ABB, Bosch or Siemens to run on a mainstream computing platform.”

Hood calls this the industrial “device edge”, an embodiment of edge computing in which large numbers of devices are connected directly to local computing resources rather than having to redirect traffic to remote data centers. In Red Hat’s OpenShift architecture, this is supported in three configurations depending on the computing power and resiliency needed:

– A three-node “compact” cluster comprising three servers that double as control plane and worker nodes. Designed for high availability at sites that may have intermittent or low-bandwidth connectivity.

– A single-node edge server: the same technology reduced to one server that can continue to operate even if connectivity fails.

– A remote worker topology comprising a control plane in a regional data center with worker nodes at edge sites. This is ideal for environments with stable connectivity; a three-node cluster can also serve as the control plane in this configuration.

The common thread running through all of this is that customers end up with a Kubernetes infrastructure that distributes application clusters to as many edge environments as they want.
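To make those trade-offs concrete, here is a minimal, hypothetical Python sketch of how a site’s constraints might map onto the three profiles above. The class, field names and decision rules are illustrative assumptions drawn from the descriptions in this article, not part of OpenShift itself.

```python
from dataclasses import dataclass

# Hypothetical site descriptor: the class and field names are illustrative,
# not part of OpenShift or RHEL.
@dataclass
class EdgeSite:
    name: str
    stable_uplink: bool   # reliable link back to a regional data center?
    needs_local_ha: bool  # must the site survive a single server failure?

def pick_topology(site: EdgeSite) -> str:
    """Map a site's constraints onto the three deployment profiles described above."""
    if site.stable_uplink:
        # Control plane stays in the regional data center; only workers sit on site.
        return "remote worker nodes"
    if site.needs_local_ha:
        # Three servers doubling as control plane and workers, tolerant of
        # intermittent or low-bandwidth connectivity.
        return "three-node compact cluster"
    # Smallest footprint: one server that keeps running if connectivity fails.
    return "single-node edge server"

if __name__ == "__main__":
    for site in (
        EdgeSite("factory-floor", stable_uplink=False, needs_local_ha=True),
        EdgeSite("roadside-cabinet", stable_uplink=False, needs_local_ha=False),
        EdgeSite("regional-depot", stable_uplink=True, needs_local_ha=False),
    ):
        print(f"{site.name}: {pick_topology(site)}")
```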


Beyond the Data Center

Hood says the challenge of edge computing starts with the devices themselves being exposed on multiple levels. Because they are deployed remotely, they are physically vulnerable: tampering or unauthorized access at the deployment site, for example, could lead to loss of control and/or downtime.

“Suppose the customer deploys edge computing in a public area where someone can access it. That means if someone walks away with it, the system has to shut down and wipe itself. These machines are not sitting in a secure data center.”

Until now, system manufacturers have rarely had to think about this dimension beyond the specialized realm of kiosks, POS terminals and ATMs. With edge computing and industrial applications, however, it suddenly becomes a major concern. If something goes wrong, the server is on its own.

Because these devices do their work out of sight in remote locations, it is also easy to lose track of their software state. Industrial operational technology teams must be able to verify that servers and devices are receiving the correct, signed system images and updates, while ensuring that communication between devices and the management center is fully encrypted.
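RHEL and OpenShift ship with their own signing and update mechanisms, but the underlying check is easy to illustrate. Below is a minimal sketch, assuming an update image whose SHA-256 digest has been signed with the publisher’s Ed25519 key and that the third-party cryptography package is available; the function name and the way keys are distributed are assumptions made purely for illustration.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def image_is_trusted(image_path: str, signature: bytes, publisher_key_bytes: bytes) -> bool:
    """Accept an update only if its SHA-256 digest carries a valid publisher signature."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        # Hash the image in chunks so large update files do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)

    publisher_key = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
    try:
        # verify() raises InvalidSignature if the digest was not signed by this key.
        publisher_key.verify(signature, digest.digest())
        return True
    except InvalidSignature:
        return False
```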


Other potential security risks associated with edge computing are more difficult to assess, because the potential vulnerability extends to every element of the system. You could call it the mental block of edge computing: administrators find themselves migrating from managing a single big issue to a myriad of smaller ones that they can’t always monitor.

“The risks start in the hardware platform itself. Then you have to look at the operating system and ask yourself whether it is properly secured. Finally, you have to make sure that the application code you use comes from a secure registry where it has been verified, or from a secure third party using the same process.”

The bigger concern is simply that the proliferation of devices makes it more likely that an edge device will be misconfigured or left unpatched, punching small holes in the network. An employee could configure containers with too many privileges, run them as root, or allow unrestricted communication between containers, for example.
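A guardrail against that kind of drift can be as simple as auditing manifests before they are admitted. The sketch below is a hypothetical check over a pod manifest already parsed into a Python dict (for example from YAML or the Kubernetes API); in a real OpenShift deployment this policing is handled by security context constraints and admission controls rather than hand-rolled scripts.

```python
def audit_pod_spec(pod: dict) -> list[str]:
    """Flag the container misconfigurations described above in a parsed pod manifest."""
    findings = []
    for container in pod.get("spec", {}).get("containers", []):
        name = container.get("name", "<unnamed>")
        sc = container.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{name}: runs privileged")
        if sc.get("runAsUser") == 0 or sc.get("runAsNonRoot") is False:
            findings.append(f"{name}: runs as root")
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{name}: privilege escalation not disabled")
    return findings

if __name__ == "__main__":
    risky_pod = {
        "spec": {
            "containers": [
                {"name": "sensor-gateway",
                 "securityContext": {"privileged": True, "runAsUser": 0}},
            ]
        }
    }
    for finding in audit_pod_spec(risky_pod):
        print(finding)
```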

“Today, most customers still rely on multiple management platforms and proprietary systems. This forces them to use multiple tools and automation to set up edge servers.”

Red Hat’s answer to this problem is the Ansible Automation Platform, which helps build repeatable processes across all environments, whether public cloud, data center, or edge devices. This unified approach benefits every aspect of server and edge device management, from operating system configuration and provisioning to patching, compliance routines and security policies. It’s hard to imagine how industrial edge computing could function without such a platform, but Hood says organizations today often take a DIY approach.
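As a rough illustration of what “repeatable” means here, the sketch below drives a dry-run compliance sweep from Python by shelling out to the standard ansible-playbook command; the playbook, inventory and group names are hypothetical placeholders.

```python
import subprocess

# Hypothetical file and group names; the repeatable pattern is the point.
INVENTORY = "edge_inventory.ini"
PLAYBOOK = "patch_and_harden.yml"
SITE_GROUPS = ["factory_edge", "depot_edge"]

def compliance_sweep(group: str) -> int:
    """Run the playbook in check mode against one group of edge hosts.

    --check reports what would change without touching the devices, so the
    same run doubles as a drift/compliance report.
    """
    cmd = ["ansible-playbook", PLAYBOOK, "-i", INVENTORY, "--limit", group, "--check"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    for group in SITE_GROUPS:
        print(f"{group}: exit code {compliance_sweep(group)}")
```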

“If they don’t use a tool like Ansible, they’ll go back to scripting, hands on keyboards, and multiple OS management systems. And different departments within an organization own different parts of that, for example the division between the IT side and the operational ones who deal with industrial systems.”

For Hood, migrating to an edge computing model is about choosing a single, consistent application development and deployment platform that ticks all the boxes, from the OS-managed software and firmware stack to the applications, communications and deployment systems built on top of it.

“The approach organizations should take, whether they use Red Hat OpenShift or not, is that infrastructure deployment should be a software-driven process that doesn’t require a person to configure it. If it’s not OpenShift, you’ll probably find it’s a proprietary solution to this problem.”


The IoT network of the Swiss Federal Railways

Another Red Hat implementation Hood points to involves a partnership with the Swiss Federal Railways (SBB), a transportation company rolling out a growing range of digital services for its 1.25 million daily passengers and its world-famous timetable, on which no train should ever be late. Connected components include in-vehicle technologies such as LED information displays, seat reservation technology, Wi-Fi hotspots, and CCTV and collision detection systems for safety monitoring.

This vast, complex network of devices includes several proprietary interfaces and management routines. Latency quickly became an issue, as did the manual management workload of dealing with numerous sensors and devices for a workforce already busy with trains, signaling and track.

Instead, SBB turned to Red Hat’s Ansible automation, which allowed the operator to centrally manage IoT devices and edge servers without having to send technicians to visit every train and edge server one at a time. Using Ansible, SBB was also able to solve the problem of exposing too many SSH keys and passwords to employees by centralizing these credentials for automated use. According to Hood, what SBB could not contemplate was reducing its management overhead at the cost of a more cumbersome and potentially less secure infrastructure.

For Hood, SBB demonstrates that it is possible for a company with a complex device estate to embrace edge computing without inadvertently creating a new layer of vulnerability for itself on top of everyday cybersecurity concerns. As Hood observes:

“Edge computing is just another place for attackers to go. If you leave the door open, eventually someone will walk through it.”

Learn more about Red Hat’s approach to edge computing and security here.

Sponsored by Red Hat.
