Chapter 73: Cyber Vulnerabilities

The United Nations (UN) has defined cyberspace as “The global system of systems of Internetted computers, communication infrastructure, online conferencing entities, databases, and information utilities generally known as the Net”.

According to the US Department of Defense (DoD), cyberspace is a global domain within the information environment, consisting of the interdependent networks of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.

A cyber vulnerability is a flaw in a computer system that can leave it open to attack.

Contemporary nuclear power plants rely extensively on a large and diverse array of computers for a host of tasks. Some computers may play a role in monitoring or controlling the operation of the reactor itself or of ancillary systems. Plant operating and technical support staff commonly use computer networks, and connections, some known and some not, may exist between these systems and plant control systems. If that hardware or software is modified or replaced, the reactor might be forced into an accident, and the emergency response systems may fail to prevent calamity.

For nuclear power plants, the digital computers and communication systems and networks are a strategic asset. Some key cyber security challenges in maintaining the safety and availability of nuclear plants are in the areas of interoperability, scalability, performance, usability, and manageability.

From a security perspective, many current security solutions lack the ability to adapt and respond proactively in real time to constantly evolving unintentional incidents and intentional threats such as hacking, scanning, denial-of-service (DoS) attacks, new exploits in applications, worms, and viruses, to name a few. Further, the lack of tight integration between security products such as firewalls and Intrusion Prevention Systems (IPS), in combination with disparate features, creates a challenge to the job of adequately addressing security incidents.

Security and IT administrators are also faced with the challenge of making sure that the products they deploy scale to support ever-increasing network traffic and a diverse user population, while at the same time maintaining fast, reliable, and secure access to applications and network resources. Often this results in a compromise, where adding extra security degrades performance of the service or vice versa. Performance, interoperability, and scalability are also an issue, especially when service is required under heavy traffic load such as during an off-normal event or plant scram.

Because existing solutions consist of a mix of different products and technologies that are not tightly integrated, administrators are faced with the challenge of understanding and managing multiple security products and management systems.  This challenge grows exponentially when one tries to identify the root cause of an incident, where reports and logs need to be viewed from multiple systems and several hundred devices that are spread over many locations.  This makes forensic analysis difficult at best, and leaves the network extremely vulnerable.


Computers are used extensively in nuclear power plants around the world.  The various applications include:

  • Take data from sensors in the field and display trends and ongoing system data, e.g. on a simplified system flow path diagram;
  • Provide alarms when sensors indicate that abnormal conditions have occurred, e.g. low water or oil pressures in supporting systems;
  • Take data from field sensors and calculate true nuclear power on a continual basis (referred to as a calorimetric). This takes into account factors such as the heat lost by discharging water from the steam generator blow-down system, and variation in steam pressure and its effect on steam flow rates;
  • Accumulate data and print it out automatically if the reactor shuts down (trips), in order to aid the operators in quickly determining the cause of the shutdown (called a post-trip report);
  • Track all work status (testing and maintenance) at the plant;
  • Track the status of all equipment isolated for maintenance or other reasons;
  • Provide equipment technical information for ~150,000 components;
  • Provide control of non-safety systems, and some limited safety-related applications, at the plant. Such control is similar to that used extensively in fossil power plants;
  • Calculate inputs to control systems, e.g. control rods, from various reactor protection parameters;
  • Predict off-site effects of radiological releases during emergency conditions, should these occur. Input includes local meteorological tower data;
  • Simulate the performance of plant systems on full-scope models of the control room for training and re-qualifying operations, engineering, and management personnel; and
  • Perform calculations used in nuclear safety analyses, e.g. probabilistic safety assessment and transient and accident analysis.
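The calorimetric mentioned in the list above is, at heart, a secondary-side heat balance: power delivered equals the energy leaving as steam minus the energy entering as feedwater, corrected for losses such as blow-down. The following is a minimal sketch only; the function, parameter names, and all numeric values are hypothetical, and a real plant calorimetric applies many more corrections.

```python
def calorimetric_power(feedwater_flow_kg_s, h_steam_kj_kg, h_feedwater_kj_kg,
                       blowdown_flow_kg_s, h_blowdown_kj_kg):
    """Estimate thermal power (MWt) from a simplified secondary-side heat balance.

    Hypothetical sketch: energy picked up by the steam flow, plus the energy
    carried away by steam generator blow-down, both measured relative to the
    incoming feedwater enthalpy.
    """
    q_steam = feedwater_flow_kg_s * (h_steam_kj_kg - h_feedwater_kj_kg)      # kJ/s = kW
    q_blowdown = blowdown_flow_kg_s * (h_blowdown_kj_kg - h_feedwater_kj_kg)  # kW
    return (q_steam + q_blowdown) / 1000.0  # kW -> MW

# Illustrative numbers only, loosely in the range of a large PWR:
power = calorimetric_power(
    feedwater_flow_kg_s=1800.0,  # total feedwater mass flow
    h_steam_kj_kg=2770.0,        # enthalpy of steam leaving the generators
    h_feedwater_kj_kg=990.0,     # enthalpy of incoming feedwater
    blowdown_flow_kg_s=10.0,     # blow-down mass flow
    h_blowdown_kj_kg=1200.0,     # enthalpy of blow-down water
)
print(f"{power:.0f} MWt")
```

Because the result feeds operator displays and control decisions, falsifying any of these sensor inputs would silently shift the computed power, which is one reason sensor-data integrity matters so much in later sections.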

Computers are not usually used in reactor protection applications to shut down the reactor, except as an alternative to other, usually analog, systems. Because many lines of program code are involved, the primary concern of regulators in each State is to ensure that such applications can never fail. A limited number of applications of this type have been authorized in US plants.


A primary source of cyber-attacks against control systems originates via the Wide Area Network (WAN), the Internet, and trusted third-party or remote connections. While internal threats are still significant and one of the top areas of concern for plant managers, increasing numbers of targeted threats are originating from external sources. This mirrors the current threat trend in traditional Information Technology (IT) systems.

Internal threats can come from a number of different sources, including attacks by disgruntled employees and contractors or accidental infection from a device accessing the network without the latest protection and unknowingly spreading a virus, worm, or other attack. However, user error and unintended consequences of routine actions actually represent the greatest risks, causing many cyber-related incidents in industrial environments. A local or remote user might access the wrong systems and make changes to them; IT personnel can perform a network penetration test that degrades performance or renders a system inoperable; or a user might download or send large files over the network and impact control traffic performance.

There is also a wide range of external threats to control systems, from accidental infection by a guest laptop to deliberate attacks. Today’s hackers are more often motivated by profit, with groups looking for opportunities for extortion or theft that provide a quick payoff. Such targeted intrusions are increasingly difficult to detect, which is a key reason to maintain complete visibility across the corporate network. These types of threats include:

2.1        Malicious Code (Malware):

Malware includes the broad range of software designed to infiltrate or damage computing systems without user knowledge or consent. The most well-known forms of malware include the following:

  • Viruses manipulate legitimate users into bypassing authentication and access control mechanisms in order to execute malicious code. Virus attacks are often untargeted and can spread rapidly between vulnerable systems and users. They damage systems and data, or decrease the availability of infected systems by consuming excessive processing power or network bandwidth;
  • A worm, or self-replicating program, uses the network to send copies of itself to other nodes without any involvement from a user. Worm infections are untargeted and often create availability problems for affected systems. They might also carry malicious code to launch a distributed attack from the infected hosts. There has been at least one case of a worm affecting a nuclear plant; and
  • A Trojan is a type of malware in which the malicious code is hidden behind functionality desired by the end user. Trojan programs circumvent confidentiality or control objectives and can be used to gain remote access to systems, gather sensitive information, or damage systems and data.

2.2       Denial-of-Service Attack:

DoS attacks have become notorious over the past few years, used by attackers to flood the network resources, such as critical servers or routers, of several major organizations with the goal of obstructing communication and decreasing the availability of critical systems. A similar attack can easily be mounted on a targeted control system, making it unusable for a critical period of time. An unintentional denial-of-service incident has occurred at at least one nuclear plant;
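One simple way to see the flooding mechanism, and how it can be flagged, is a per-source sliding-window packet counter. This is only a sketch, not a production detector; the class name, window size, and threshold are all assumptions.

```python
from collections import defaultdict, deque

class FloodDetector:
    """Flag sources whose packet rate exceeds a threshold within a time window."""

    def __init__(self, window_s=1.0, max_packets=100):
        self.window_s = window_s          # sliding window length in seconds
        self.max_packets = max_packets    # allowed packets per window
        self.history = defaultdict(deque) # source -> recent arrival timestamps

    def observe(self, source, timestamp):
        q = self.history[source]
        q.append(timestamp)
        # Discard arrivals that have fallen out of the window.
        while q and timestamp - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_packets  # True => possible flood from this source

det = FloodDetector(window_s=1.0, max_packets=100)
# A single "zombie" sending 500 packets within one second trips the detector:
alerts = [det.observe("10.0.0.7", t / 500) for t in range(500)]
print(any(alerts))  # True
```

The same counting idea scales poorly against a distributed attack, where each zombie individually stays under the threshold, which is why DDoS (discussed later in this chapter) is so much harder to filter.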

2.3       Rogue Devices:

In wireless networks, an unauthorized access point might be inserted into the control system. This can be done in a non-malicious manner, which inadvertently provides an unknown access point. It can also be done maliciously to provide false or misleading data to the controller, which can cause it to issue errant commands such as triggering a fail-safe device or changing operator screens to provide erroneous information;

2.4       Reconnaissance Attacks:

Reconnaissance attacks constitute the first stage of the attack life cycle: probing the target to gather information. This allows the later stages to be more focused and improves the odds of success in the attacker’s favor;

2.5       Eavesdropping Attacks:

The goal of an eavesdropper is to violate the confidentiality of communications by “sniffing” packets of data on the control network or by intercepting wireless transmissions. Advanced eavesdropping attacks, also known as “Man-in-the-Middle” (MITM) or path insertion attacks, are typically leveraged by a hacker as a follow-up to a network probe or protocol violation attack;

2.6       Collateral Damage:

This type of impact is typically unplanned or materializes as an unforeseen or unplanned side effect of techniques being used for the primary attack. An example is the impact that bulk scanning or probing traffic can have on link and bandwidth availability. Or, if a network is not properly configured, unintended traffic—such as large downloads, streaming video, or penetration tests—can consume excessive bandwidth and result in unacceptable levels of network “noise” and slowed performance. Jitter is a significant, and usually undesired, factor in the design of almost all communications links. Since field controllers are sensitive to jitter, network noise can be detrimental to performance;

2.7       Unauthorized Access Attacks:

These are attempts to access assets that the attacker is not privileged or authorized to use. A successful attack gives the attacker some form of limited or unlimited control over the system;

2.8       Unauthorized Use of Assets, Resources, or Information:

In this type of incident, an asset, service, or data is used by someone authorized to use that particular asset, but not in the manner being attempted.

The faster a threat can be recognized, the more quickly it can be dealt with. Containing attacks and intrusions once the hacker is inside is the key to network security. There are many “back doors” and potential weak links in industrial control system networks. Typically, these include misconfigured devices, undocumented connections, wireless networks without proper security configurations, and open, unguarded ports. A primary vector of concern is the compromise of data, which can alter the operation of field devices or mislead an operator into taking inappropriate action.

Perhaps the greatest threat of all is the lack of understanding within industrial organizations, in both operations and IT departments, of the seriousness of the problem. Even control system vendors still do not design their technologies with security in mind. In fact, many are instead including vulnerable applications and technologies, such as Microsoft IIS, Bluetooth wireless communications, and wireless modems, in their latest offerings.


Common attacks in cyberspace can be classified as follows:

  • Zero-day Attack: This takes advantage of a computer vulnerability for which no fix yet exists. It is named “zero day” because the attack occurs before the first day the vulnerability is publicly known;
  • Distributed Denial of Service (DDoS): DDoS attacks form a significant security threat, making networked systems unavailable by flooding them with useless traffic generated by a large number of compromised machines (“zombies”). A network of such machines under an attacker’s control is called a “botnet”;
  • Stuxnet Attack: Stuxnet is a large, complex piece of malware (employing zero-day exploits, a Windows rootkit, anti-virus evasion, and more) with many different components and functionalities, written primarily to target a particular industrial control system (ICS) or set of similar systems. Its goal was to reprogram the ICS by modifying code on programmable logic controllers (PLCs) so that they worked in the manner the attacker intended, and to hide those changes from the operator of the equipment; and
  • Advanced Persistent Threat (APT): This usually refers to a group, such as a foreign government, with both the capability and the intent to persistently and effectively target a specific entity. The term is commonly used for cyber threats, in particular Internet-enabled espionage, but applies equally to other threats such as traditional espionage or attack. Other recognized attack vectors include infected media, supply-chain compromise, and social engineering. Individuals, such as a lone hacker, are not usually referred to as an APT, as they rarely have the resources to be both advanced and persistent, even if they are intent on gaining access to, or attacking, a specific target.

Adversary capabilities have improved rapidly, often combining earlier attack techniques, while defensive capabilities consistently lag behind those of the attackers. The objective should be to keep the window of vulnerability as small as possible and to be proactive against new attacks.

The question is: can technical security controls block cyber-attacks?

According to Langner, readers familiar with cyber security and in some way associated with industrial control systems will have come across a plethora of cyber security solutions that allegedly protect critical infrastructure against cyber-attacks. In fact, it has become difficult to find solutions that do not claim to do the trick. Yet most of what is advertised is unsubstantiated marketing vapor. For instance:

  • Anti-virus software doesn’t help against a Stuxnet-like attack, for a simple reason: it works by identifying and blocking known malware listed in the AV solution’s signature database. There will be no signature for custom-built malware that doesn’t display any strange behavior on average computer systems. As a case in point, the first Stuxnet variant was effectively placed in front of the AV industry in 2007, but was not identified as malware until six years later, using the knowledge gained from analyzing later variants. Malware designed like this first version is practically indistinguishable from a legitimate application software package and thereby flies below the radar of anti-virus technology. Even the next version, with the rotor speed attack and loaded with zero-day exploits, travelled at least a year in the wild before being discovered by the anti-virus industry;
  • Network segregation by firewalls, data diodes, air gaps, and the like is a good thing per se, but not sufficient to solve the problem. With respect to recommending air gaps as a remedy, one cannot help but be stunned at such ignorance of one of the most basic lessons learned from Stuxnet. Stuxnet demonstrated how the air gaps of high-value targets can be jumped, namely by compromising the mobile computers of contractors who enjoy legitimate physical access to the target environment. Since such access is often achieved locally, by walking down to the respective control system cabinet, or benefits from proper authorization if performed via networks, filtering and blocking network traffic is insufficient to protect high-value targets;
  • The same must be said about intrusion detection and intrusion prevention systems. From a technical point of view, the intriguing idea of detecting sophisticated cyber-physical attacks in network traffic is completely unvalidated. In this respect, the US Department of Defense’s claim of defending the nation at network speed certainly does not extend to cyber-physical attacks. Defending against them cannot be done in milliseconds; it requires years of organizational and architectural changes in potential target environments; and
  • Application of security patches doesn’t necessarily do the trick either, at least when it comes to industrial control systems. While the operating-system vendor was quick to deliver security patches for the zero-day vulnerabilities exploited at the OS level, the same cannot be expected at the Industrial Control System (ICS) application level.

For example, the vendor of the ICS engineering software initially disputed any vulnerability in its software. Two years later, a vulnerability report was filed (CVE-2012-3015) and a patch was provided for one of the vulnerabilities that Stuxnet’s dropper had exploited, namely the ability to execute arbitrary code at admin privilege via a legitimate configuration functionality of the software package. Two years may be a little late for exploits that affect not just singular targets in hostile countries but thousands of targets at home. For other vulnerabilities that Stuxnet exploited, such as faking sensor values by overwriting the input process image, or hijacking a driver DLL in order to inject malicious code on controllers, no patch is yet available. In the ICS space, a culture of identifying and correcting security vulnerabilities, whether they are programming bugs, design flaws, or legitimate program features introduced for convenience, has yet to be adopted as best practice.
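Langner’s point about anti-virus software follows directly from how signature matching works. The sketch below uses entirely made-up “signatures”: a scanner can only flag files whose hash is already in its database, so custom-built malware with no published signature passes unflagged.

```python
import hashlib

# Hypothetical signature database: hashes of *known* malware samples.
# Real AV databases are far richer (heuristics, byte patterns), but the
# core limitation illustrated here is the same.
KNOWN_MALWARE_SHA256 = {
    hashlib.sha256(b"known-worm-payload").hexdigest(),
    hashlib.sha256(b"known-trojan-payload").hexdigest(),
}

def scan(file_bytes):
    """Return True if the file matches a known-malware signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_SHA256

print(scan(b"known-worm-payload"))        # True: signature is in the database
print(scan(b"custom-built-ics-malware"))  # False: no signature, so not detected
```

This is exactly the gap the first Stuxnet variant sat in for years: with no matching signature and no anomalous behavior on ordinary computers, signature-based scanning had nothing to match against.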


In principle, a plant employee acting alone might carry out a cyber-attack, either of his/her own volition or under duress. Fabricated hardware or software introduced into the plant might contain surreptitious instructions that activate under preset conditions once in use. Or an attempt may be made to hack into the protective systems, making it possible to take over the plant controls externally, whether from within the plant, from elsewhere in the State, or from virtually anywhere in the world.

Here are three cyber incidents which occurred at nuclear power plants:

  • Davis Besse Nuclear Power Plant, Ohio, United States:

The Slammer worm infected computer systems at the Davis-Besse nuclear power plant in January 2003. The worm travelled from a consultant’s network to the corporate network of First Energy Nuclear, the licensee for Davis-Besse, and then to the plant’s process control network. The traffic generated by the worm clogged the corporate and control networks. For four hours and fifty minutes, plant personnel could not access the Safety Parameter Display System (SPDS). Slammer did not affect analogue readouts, so plant operators could still get reliable data. Davis-Besse had a firewall protecting its corporate network from the wider internet, and its configuration would have prevented a Slammer infection. However, a consultant had created a connection behind the firewall to the consultancy’s office network. This allowed Slammer to bypass the firewall and infect First Energy’s corporate network. From there, it faced no obstacle on its way to the plant control network. In response, First Energy set up a firewall between the corporate network and the plant control network.

The Davis-Besse incident highlighted the fact that most nuclear power plants, by retrofitting their Supervisory Control and Data Acquisition (SCADA) systems for remote monitoring from their corporate network, had unknowingly connected their control networks to the internet.  At the time, the US Nuclear Regulatory Commission (NRC) did not permit remote operation of plant functions;

  • Browns Ferry Nuclear Power Plant, Alabama, United States:

The August 19, 2006, shutdown of Unit 3 at the Browns Ferry nuclear plant near Athens, Alabama, demonstrates that not just computers, but even critical reactor components, could be disrupted and disabled by a cyber-attack. Unit 3 was manually shut down after the failure of both reactor recirculation pumps and the condensate demineralizer controller. The condensate demineralizer used a Programmable Logic Controller (PLC); the recirculation pumps depend on Variable Frequency Drives (VFDs) to modulate motor speed. Both kinds of devices have embedded microprocessors that can communicate data over an Ethernet LAN. However, both are prone to failure in high-traffic environments. A device using Ethernet broadcasts data packets to every other device connected to the network; receiving devices must examine each packet to determine which ones are addressed to them and ignore those that are not.

It appears the Browns Ferry control network produced more traffic than the PLC and VFD controllers could handle;  it is also possible that the PLC malfunctioned and flooded the Ethernet with spurious traffic, disabling the VFD controllers; tests conducted after the incident were inconclusive.  The failure of these controllers was not the result of a cyber-attack.  However, it demonstrates the effect that one component can have on an entire process control system network and every device on that network; and
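The per-packet examination described above can be sketched in a few lines. The frame format here is heavily simplified and all addresses are invented; the point is only that every broadcast or spurious frame costs the receiving device work, whether or not the frame is addressed to it, which is how a traffic flood can starve a device like a PLC or VFD controller.

```python
MY_MAC = "00:1b:44:11:3a:b7"    # hypothetical address of this device's interface
BROADCAST = "ff:ff:ff:ff:ff:ff"  # Ethernet broadcast address: everyone must look

def handle_frame(frame, work_counter):
    """Decide whether a (simplified) frame concerns this device.

    Even frames we ultimately ignore must be received and examined, so the
    examination cost is paid for *every* frame on the segment.
    """
    work_counter[0] += 1  # cost incurred regardless of the frame's destination
    dst = frame["dst"]
    return dst == MY_MAC or dst == BROADCAST

work = [0]
frames = [{"dst": BROADCAST}] * 98 + [{"dst": MY_MAC}, {"dst": "aa:aa:aa:aa:aa:aa"}]
accepted = sum(handle_frame(f, work) for f in frames)
print(accepted, work[0])  # 99 frames accepted, but all 100 had to be examined
```

A flood of broadcast or malformed traffic therefore degrades every node on the segment at once, which is consistent with the simultaneous failure of two unrelated controllers at Browns Ferry.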

  • Hatch Nuclear Power Plant, Georgia, United States:

Due to the growing number of network connections between control systems and office computers, even seemingly simple actions can have unexpected results. On March 7, 2008, Unit 2 of the Hatch nuclear power plant near Baxley, Georgia, automatically shut down after an engineer applied a software update to a single computer on the plant’s business network. The computer was used to collect diagnostic data from the process control network; the update was designed to synchronize data on both networks. When the engineer rebooted the computer, the synchronization program reset the data on the control network. The control systems interpreted the reset as a sudden drop in the reactor’s water reservoirs and initiated an automatic shutdown. This innocent mistake demonstrates how malicious hackers could make simple changes to a business network that end up affecting a nuclear reactor, even with no intent to interfere with critical systems. It also demonstrates that the plant operators in this case did not fully understand the dependencies between network devices, which would make it difficult to identify and protect all the vulnerabilities in a process control system.

Nuclear power plants, like many kinds of industrial facility, involve the use of hazardous materials. But unlike most other facilities, nuclear power plants around the world are required to have detailed emergency plans to protect the public in the event of an accident that could result in the release of radioactivity.

In the US, emergency plans for nuclear power plants are regulated by the Nuclear Regulatory Commission and the Federal Emergency Management Agency, which supervise graded exercises of those plans. At the state level, for example, the Division of Homeland Security and Emergency Management is responsible for maintaining New Hampshire’s nuclear plant emergency plans; those plans are reviewed and updated annually.

Emergencies at nuclear power plants are classified at four levels, from least to most serious: Unusual Event, Alert, Site Area Emergency, and General Emergency:

  • During an unusual event, events are in process or have occurred which indicate a potential degradation of the level of safety of the plant. No releases of radioactive material requiring offsite response or monitoring are expected unless further degradation of safety systems occurs;
  • An alert indicates that events are in process or have occurred which involve an actual or potential substantial degradation of the level of safety of the plant. Any releases are expected to be limited to small fractions of the Environmental Protection Agency (EPA) Protective Action Guideline exposure levels;
  • During a site area emergency, events are in process or have occurred which involve actual or likely major failures of plant functions needed for protection of the public. Any releases are not expected to exceed EPA Protective Action Guideline exposure levels except near the plant site boundary; and
  • In a general emergency events are in process or have occurred which involve actual or imminent substantial reactor core degradation or melting with potential for loss of containment integrity. Releases can be reasonably expected to exceed EPA Protective Guideline exposure levels offsite for more than the immediate plant site area.
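The four classification levels above map naturally onto an ordered enumeration, which is roughly how plant or emergency-management software might encode them. The sketch below is purely illustrative: the helper function and the cutoff it encodes are assumptions, and the comments abbreviate the textual criteria from the list above.

```python
from enum import IntEnum

class EmergencyClass(IntEnum):
    """The four emergency classes, ordered from least to most serious."""
    UNUSUAL_EVENT = 1        # potential degradation of the plant safety level
    ALERT = 2                # actual or potential substantial degradation
    SITE_AREA_EMERGENCY = 3  # actual or likely major failures of safety functions
    GENERAL_EMERGENCY = 4    # actual or imminent core degradation or melting

def releases_may_approach_pag(level):
    """Hypothetical helper: per the criteria above, only the two most serious
    classes anticipate releases approaching or exceeding EPA Protective
    Action Guideline exposure levels at or beyond the site boundary."""
    return level >= EmergencyClass.SITE_AREA_EMERGENCY

print(releases_may_approach_pag(EmergencyClass.ALERT))              # False
print(releases_may_approach_pag(EmergencyClass.GENERAL_EMERGENCY))  # True
```

Using an ordered type means escalation logic can be written as simple comparisons rather than string matching, which reduces the chance of misclassifying a more serious event as a less serious one.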


Hacking in general, and attacks on “protected” computer systems in particular, are becoming increasingly common and more sophisticated. All of the concerns above demand robust, proactive countermeasures to prevent successful cyber-attacks; the cost of inadequate protection may be disastrous. While reported nuclear cyber-attack events are rare and so far not cataclysmic, the threat trajectory suggests that ignoring cyber security may place individual nuclear power plants at risk, some more seriously than others.

Moreover, in addition to the direct consequences of a successful attack, the axiom that ‘an accident in any nuclear power plant is an accident in all nuclear power plants’ would likely extend to a security event, including a cyber-attack. A successful cyber-attack on a nuclear reactor with substantial consequences would undermine global public confidence in the viability of nuclear power.

Some states are apparently establishing the ability to engage in such attacks, probing defensive barriers, exercising tests of cyber weapons or simply protecting their security by creating the ability to engage in cyber warfare in case the need arises.


  1. US Department of State – Cyber Security;
  2. Juniper – Nuclear Plant Control System Cyber Vulnerabilities and Recommendations;
  3. Computers;
  4. US Department of State – Cyber Security for Nuclear Power Plants;
  5. Nuclear Plant Control System Cyber Vulnerabilities and Recommendations;
  6. Nuclear Power Plants Cyber-Security Incidents (Extracts from Brent Kesler, “The Vulnerability of Nuclear Facilities to Cyber-attack”);
  7. INSTEP – Safety Parameters Display System (SPDS);
  8. Supervisory Control and Data Acquisition System for Nuclear Power Plants;
  9. Homeland Security and Emergency Management;
  10. Challenges of Cyber Security for Nuclear Power Plants; and
  11. Langner – To Kill a Centrifuge – A Technical Analysis.

Chapter 74