Imagine two universes. In the first, information systems are impervious to manipulation from the outside. No unauthorized person can change the instructions that systems run, and systems react to all input as they are supposed to (that is, consistent with the common understanding of designers and users). This is a world in which the potential effects of cyberwar are, at most, weak and largely irrelevant to the conduct of violent (i.e., kinetic) conflict. In the second universe, cyberwar is potentially consequential; it is where we live today.
In which universe is violent conflict – war, if you will – more likely? In recent years, two scholars who agree on little else, Thomas Rid and John Arquilla, have put forward the case that the second universe, today's, is less likely to be warlike than the first universe, where cyberwar is impossible or at least inconsequential. Cyberwar, in effect, is good for peace. Both scholars rest their argument on Stuxnet, a piece of malware that was responsible for the destruction of roughly a thousand centrifuges in Iran's uranium-enrichment facility, Natanz. Had the means of cyberwar not been available, they argue, there is a serious likelihood that Israel or even the United States would have tried to destroy Natanz using air raids. Such air raids would likely have produced casualties. Furthermore, the use of violence might have begotten violence in return – more likely terrorism (given Iran's modus operandi) than open war, but the latter could not have been ruled out. Thus, the existence of Stuxnet allowed the problem created by Iran's desire to enrich uranium (in order to make nuclear weapons) to be dealt with in a nonviolent way. To the extent that Stuxnet is archetypical, cyberwar is good for peace.
This paper argues that the case that cyberwar is good for peace is by no means so clear, something that can be demonstrated by a more rounded analysis of what cyberwar does and what the alternatives to cyberwar really are. To start, consider another attack, this one on the (supposed) nuclear facility being constructed in Syria with North Korean help. In 2007, Israeli jets destroyed this facility (Operation Orchard). According to Richard Clarke (and others), the air strike was aided by a cyberattack on Syria's air defense system which essentially erased radar images of incoming aircraft, allowing all the aircraft to make it back safely. Now suppose that a cyberattack really was employed (other accounts say that more conventional electronic warfare was used). Further suppose that, without the confidence that cyberwar would have disabled the radar, the raid's planners would have deemed it too costly in terms of lost aircraft and pilots and hence called it off. In the first universe, where the tools of cyberwar were unavailable, no raid would have taken place, no destruction would have ensued, and no casualties (Operation Orchard left one to two dozen dead) would have resulted. In that calculus, cyberwar capabilities led to more rather than less violence. Had Syria retaliated, a further cycle of violence might have commenced.
This example suggests that the picture is a little more complicated than the notion that cyberwar allows nonviolent means to replace violent means. A full analysis must ask (1) what choices are available with and without cyberwar, (2) what responses are likely to a cyberattack, and (3) what the threat of cyberwar means for the likelihood of conflict – that is, for stability. Such analyses also have to be sensitive to different contexts. What may hold for a conflict among comparably strong countries may not hold for a conflict among countries, one of which is clearly stronger than the other. Furthermore, the possibility of operational cyberwar (used against an opponent's military forces) may lead to different answers than the possibility of strategic cyberwar (used against an opponent's infrastructure and society, usually away from the battlefield).
But first: why does this question matter? Normally, comparing the world as it is to a counterfactual world in which something had not taken place is a completely academic exercise. Un-invention never takes place (even if people do walk away from some technologies – nuclear power comes to mind). Weapons control is oft pursued and only occasionally succeeds (chemical and biological weapons provide the best example, but such weapons have low military utility, anyway). Although people worry about cyberattacks, the fact that they are not heinous – and, indeed, have yet to hurt anyone directly – coupled with the technical difficulties of enforcing arms control on virtual instruments of war suggests that they are unlikely to be banned. Norms against certain uses of cyberwar may have better chances, but they may also be worth less in terms of assurance.
However, cyberweapons are nearly unique among weapons in that their effectiveness depends entirely on the vulnerability of the system being hacked. Hence, one can imagine a world without cyberwar (or at least without cyberwar with serious consequences) even if nothing has to be un-invented for such a world to exist. If systems were perfect, almost all forms of cyberwar would be impossible. Even though perfection is impossible in this mortal world, it is possible for systems to become more impervious to cyberwar given a combination of resources and correct choices (e.g., which systems are isolated by what means, how easy it is to insert rogue instructions into a system). A world in which the existence of cyberwar increases the predisposition to war is one that favors increasing attention to cybersecurity (notwithstanding the many other benefits of cybersecurity in terms of reduced crime). One in which the existence of cyberwar reduces the prospect of violent conflict is one in which such efforts might be relaxed without making the world a far worse place.
Finally, the standard caveats bear repetition. Cyberspace evolves rapidly; what is true today (e.g., the cost of generating a cyberattack with nontrivial effects) may not be true tomorrow. People are still learning what cyberwar can and cannot do. Military operations, whether virtual or kinetic, vary greatly in their parameters and nothing below can even attempt to cover more than a handful of hypothetical cases. With that out of the way, we continue.
As the juxtaposition of Stuxnet and Operation Orchard suggests, cyberwar may be a replacement for the use of physical force but it may also enable it. To (over)simplify, without the option of using cyberwar a country in a particular situation can choose between doing nothing and using force. With cyberwar an option, a country can choose among doing nothing, carrying out a cyberattack without the use of force, or using force with the possibility of an accompanying or prefatory cyberattack.
If the country would have otherwise done nothing in a particular situation, then adding the cyberattack option increases the likelihood of violence (something is never less than nothing). If the country concludes that an accompanying cyberattack makes the use of force more attractive, then the odds of generating violence certainly increase. Even if the country carries out a cyberattack unaccompanied by force, the risk of violence increases to the extent that the cyberattack sets off a chain of tit for tat that results in the use of force.
In running the probability distribution, one needs a sense of when to stop counting. Over the long run, the calculus might factor in the possibility that taking action against another country may reduce the overall violence. Stuxnet, in that regard, may have resulted in less violence to the extent that it retarded Iran's nuclear program, and a Middle East containing a nuclear-armed Iran would have been more violent. One might then argue that an air strike, which likely would have done a much more thorough job of stalling Iran's nuclear program than Stuxnet (which broke only ten percent of Iran's centrifuges) did, would have resulted in even less violence, in the long run, than Stuxnet did. By this logic, the option of a cyberattack, by forestalling an air strike, led to more violence. Others argue that nuclear weapons, by raising the price of mischief, enforce peace (an easier argument to make in retrospect for Cold War Europe, but one harder to make, so far, in the context of the current India-Pakistan dispute). But by this point, one is debating nuclear weapons, not what the potential for cyberattacks on nuclear production facilities might do. Tractability suggests stopping the count before such considerations come into play.
We now lay the choices out formally by positing five options. Two are associated with a world without cyberattack capabilities: do nothing, or do A (attack kinetically). Three are associated with a world with such capabilities: do nothing, do B (carry out a kinetic attack, with or without an accompanying cyberattack), or do C (carry out a cyberattack but no kinetic attack).
In a world without cyberattack capabilities, we define the value, to the deciding country, of doing nothing as zero. If A is preferable (to 0, that is, to doing nothing), the country attacks (kinetically); otherwise, it does nothing.
In a world with cyberattack capabilities, we also define the value of doing nothing as zero (this does not have to be the same zero as in a world with no cyberattack capabilities; the only thing that matters is whether B is preferred to 0 and whether C is preferred to 0). If B is preferred to both C and 0, then the country attacks (stated thus: BC0 or B0C, depending on whether C is or is not preferred to 0). If C is preferred to both B and 0, then the country carries out a cyberattack (CB0 or C0B). If neither B nor C is preferred to 0, then it does nothing (0BC or 0CB). This takes care of all six permutations.
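The decision rule just stated can be written as a few lines of Python. This is only a sketch of the paper's notation: a preference is represented as an ordering of the options, best first, and the labels 'B', 'C', and '0' are the paper's own.

```python
# Decision rule sketch: a preference ordering is a sequence of the
# options with the most-preferred option first; '0' is doing nothing.
def decision(order):
    """Return the action taken in a world with cyberattack capabilities."""
    # The country simply picks whichever of B, C, or 0 ranks highest.
    best = min(['B', 'C', '0'], key=order.index)
    return {'B': 'kinetic attack', 'C': 'cyberattack', '0': 'nothing'}[best]

print(decision(('C', 'B', '0', 'A')))  # cyberattack (the CB0A case)
print(decision(('0', 'B', 'A', 'C')))  # nothing (the 0BAC case)
```

Note that A never figures in the decision: in a world with cyberattack capabilities, a kinetic attack always comes with the cyber option, i.e., as B.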
Among A, B, C, and 0, there are 24 different permutations. However, we can reduce these to twelve by assuming that B is always preferable to A; in particular, whenever A is preferable to 0, B is preferable to 0. The presence of cyberattack capabilities, we argue, always enhances the value of an attack (compared to doing nothing), or at least does not detract from it. This might seem obvious were it not for the fact that in a world with cyberattack capabilities, the target may also have the ability to counter a kinetic attack with its own cyberattack – for instance, by disabling the targeting radar in the attack aircraft, thereby frustrating its ability to drop its bomb-load accurately. Nevertheless, it would seem that a cyberattack capability favors the offense, at least if the issue is something like a raid (rather than a full-fledged conflict). The attacker gets to choose the time and the equipment for the attack; for the defender to forestall or frustrate such an attack by using its own cyberattack may require that the attack work at any time and on every piece of attacking equipment. Furthermore, the attacker is more likely to use mobile assets; the defender is often defending fixed assets. Mobile assets are probably harder to attack through cyberspace because their network configurations are more variable and less predictable than those of fixed assets. Thus, the assumption that B is preferable to A seems reasonable even if not axiomatic.
So, we are left with 12 permutations (BAC0, BCA0, BA0C, BC0A, B0AC, B0CA, 0BAC, 0BCA, 0CBA, CBA0, CB0A, and C0BA). Consider each (separately or grouped) in turn.
- Any option in which doing nothing is preferable to an attack or a cyberattack results in nothing taking place, irrespective of whether cyberattack capabilities exist (3 permutations).
- Any option where B is superior to all alternatives and A is superior to 0 (BAC0, BA0C, BCA0) results in a kinetic attack taking place, irrespective of whether cyberattack capabilities exist (3 permutations).
- BC0A, B0AC, B0CA: In a world with cyberattack capabilities, an attack takes place; in a world without such capabilities, no attack takes place. Therefore cyberattack capabilities increase the odds of force (3 permutations).
- CB0A and C0BA: In a world with cyberattack capabilities, a cyberattack takes place; otherwise nothing takes place. Therefore cyberattack capabilities increase the odds of a cyberattack and thus somewhat increase the odds of force being used; how much depends on what happens after the cyberattack (2 permutations).
- CBA0: In a world with cyberattack capabilities, a cyberattack takes place; otherwise a kinetic attack takes place. Here, cyberattack capabilities reduce the likelihood of force being used (again, how much depends on what happens next) (1 permutation).
In sum, of the twelve permutations, in a world where cyberattack capabilities exist, one of the following holds:
- Nothing changes: 6 permutations.
- An attack takes place, where it otherwise would not have: 3 permutations.
- A cyberattack takes place, where otherwise nothing would have taken place: 2 permutations.
- A cyberattack takes place, where otherwise an attack would have taken place: 1 permutation.
Granted, we have not evaluated the relative odds of each permutation – CBA0 may be more likely than the combination of BC0A, B0AC, B0CA, CB0A, and C0BA. But on its face, the result does not confirm the premise that (if aftereffects are disregarded) cyberwar is good for peace.
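The tally above can be checked mechanically. The following Python sketch (again using only this paper's notation) enumerates all 24 orderings of A, B, C, and 0, drops the twelve in which A is preferred to B (per the assumption above), and classifies each by what happens with and without cyberattack capabilities:

```python
from itertools import permutations

def without_cyber(order):
    # Without cyberattack capabilities, only A versus 0 matters.
    return 'attack' if order.index('A') < order.index('0') else 'nothing'

def with_cyber(order):
    # With cyberattack capabilities, the country picks the best of B, C, 0.
    best = min(['B', 'C', '0'], key=order.index)
    return {'B': 'attack', 'C': 'cyberattack', '0': 'nothing'}[best]

tally = {}
for order in permutations('ABC0'):
    if order.index('B') > order.index('A'):
        continue  # drop orderings where A beats B; twelve remain
    pair = (without_cyber(order), with_cyber(order))
    tally[pair] = tally.get(pair, 0) + 1

for pair, n in sorted(tally.items()):
    print(pair, n)
```

Running this reproduces the counts in the text: nothing changes in six cases (three attack/attack, three nothing/nothing), an attack replaces nothing in three, a cyberattack replaces nothing in two, and a cyberattack replaces an attack in one.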
The calculus of cyberwar should also take the possibility of rogue actors into account.
For many forms of warfare, worries about rogue actors – individuals or groups that make war without authorization – are theoretical threats, suitable for Hollywood (e.g., Dr. Strangelove), but of little practical moment. War is a dangerous business, the risk of being caught is nontrivial, and isolated units (e.g., an unsupported fighter squadron) are generally ineffective against states.
Militias may be an exception, particularly those that prey on unarmed populations. The dangers are low, the risk of getting caught is modest, and they can be militarily effective operating in guerilla mode. The possibility of militias is a constant worry in states where policing is corrupt or ineffective; they can start fights and make it very difficult to conclude them.
Cyberspace may offer considerable scope for militias. The direct risks of combat to combatants are zero (in the usual case that physical proximity is not necessary). The risks of getting caught are small if such groups and their systems practice operational security. Finally, cyberattacks can do damage against countries that have not fully secured their own systems. In contrast to kinetic militias, cyber militias can have global effects, and, if the hackers are particularly skillful or lucky (e.g., by finding an exploitable vulnerability in a critical system), serious ones. Although states continue to have advantages over nonstate actors in employing hackers (and states with ample resources can usually outdo states without them), it does not take a particularly large team to generate effects, as long as the members of that team are sufficiently talented. Because the work of cyberwar is closely aligned with nations' intelligence communities (the vast majority of system penetrations by governments are meant to collect information, not bring systems down), its practitioners arise from a culture that prizes and usually practices secrecy. Intelligence agencies that pursue courses (seemingly) antithetical to declared state policies do occur: Pakistan's ISI is a case in point.
Attacks by rogue operators create a path to conflict that carries a far lower risk to the state than would be the case if the only option were kinetic warfare. Indeed, there are good reasons to believe that if, say, Russia wanted to convey its ire at one or another U.S. action, it could convince itself that the risks to its own well-being were low if it simply empowered its mafiya to carry out such attacks on the state's behalf, perhaps in return for winking at other mafiya activities taking place within Russia itself. The relationship between the state and hackers in China is still unclear even after the Mandiant report. Similar ambiguity exists with Iran. Perhaps attackers would believe that risks are low because attribution is difficult. The argument that Russia would be forced to assist the United States in catching the actual attackers vies with the observation that Russia denied Estonia's request for assistance after the 2007 attacks (and rebuffed the earlier attempt by the United States to trace the origins of the 1998 intrusion into DoD computers subsequently labeled Moonlight Maze). Conversely, a target country's attempts to trace responsibility for a cyberattack consequential enough to risk escalation to the kinetic level may be harder to turn aside. Faced with the choice between admitting that one of its own carried out the attack outside official command-and-control and brazening through a crisis, the attacking country may well double down, setting the stage for a confrontation.
Military capabilities may also affect crisis dynamics in ways that predispose countries to slide into or away from warfare. Even those disinclined to believe that cyberwars alone are likely to be consequential must admit that kinetic conflict can be consequential and that cyberattack capabilities might affect the latter’s onset or course. Of note is that cyberwar capabilities may increase the likelihood of a full-fledged kinetic conflict (as distinguished from a kinetic attack discussed above) either by presenting opportunities for attackers or by making defenders think that their opponents are creating such opportunities.
Cyberattacks might be used to cripple conventional capabilities at the outset of conflict, giving the attacker a decisive, albeit fleeting, opportunity to carry out a successful kinetic attack while its foe is blinded (by attacks on its ISR capabilities), confused (by attacks on its command-and-control), or immobilized (by attacks on its logistics and deployment systems). The last may be the least consequential (if forces are pre-equipped with a week's worth of supplies) but may also be the most accessible. If the U.S. military is a model, logistics systems are much more likely to be connected to the Internet, while the command and control of military units is more likely to sit on air-gapped networks; ISR systems, as befits their intelligence origins, are even more isolated.
The difficulty of attributing cyberattacks, and the near-impossibility of seeing a well-executed cyberattack coming, coupled with the short duration of their immediate effects, make a bolt-from-the-blue in cyberspace prefatory to kinetic war more insidious. First, attackers may convince themselves that a bolt-from-the-blue is relatively riskless. If such attacks shift the correlation of forces enough, the shooting starts because the prospects of victory for the country that carried out the cyberattack are that much brighter. If such attacks fail to change the odds of victory sufficiently, the attacker holds off on shooting, and the target may not necessarily respond as if war had started – or so the cyber attacker may reason. Such reasoning is much less plausible if the bolt from the blue were a kinetic attack, whose provenance would be much harder to deny. Second, the well-founded presumption that the effects of a cyberattack are likely to last only until the afflicted systems are restored to usability (hours to weeks?) means that the decision to exploit the opening has to be made more quickly than if the damage were permanent (as it might be if the bolt from the blue, for instance, disabled satellites in orbit). Strong confirmation biases (thinking fast) may induce countries to capitalize on what they hope was success, even when they might have held back given more time to consider (thinking slow) what a rush to war might produce.
Instability may also arise from the defender's reaction to a possible bolt from the blue. Granted, if the defender noticed that its military systems had been crippled by a cyberattack, then responding with alacrity would be appropriate. But the possibility of a bolt-from-the-blue suggests that in a crisis (as the victim of the cyberattack sees it) any cyberattack may have to be treated as prefatory to a kinetic attack. The target country may then turn up the gain on its warning-and-indications sensors to catch the slightest sign that its foes are on the march. Unfortunately, the higher the gain, the greater the likelihood of reading artifacts as though they were indicators – and so off to war.
Unfortunately, this scenario understates the problem. Both acts of cyberwar and cyber-espionage canonically start with the penetration of the target system to insert malware. The malware then calls out for instructions, which variously direct the infected machine to do something damaging (cyberwar) or to send back information of a particular type (cyber-espionage). Discovering the penetration, particularly if it can be attributed to a potential adversary, may convince the target country that it will soon face attack. It may then turn up its warning-and-indications sensors, with the same violent result.
Either way, the possibility of a cyber bolt-from-the-blue – coupled with the difficulty of ascertaining who carried it out, or even whether what look like preparations for one are in fact such preparations rather than apparitions – adds the possibility of instability to a crisis.
In all fairness, the wars toward which such cyberattacks might predispose countries would have to be those where the outcome of the first few days' fighting is particularly decisive: a quick, high-intensity conflict is ideal for such treatment. The prospects for success in long wars or low-intensity conflicts are scarcely affected by opening-day hijinks; logically, therefore, the possibility of cyberwar should have little effect on starting such conflicts.
The assumption that cyberwar is a cool war also rests on the presumption that what starts in cyberspace will stay in cyberspace; there will be no escalation into kinetic conflict. Clearly the chance of escalation that crosses domains is greater than zero, but for cyberwar to keep its cool status requires that the risks of escalation into kinetic conflict from a cyberattack be substantially less than the similar risks associated with a comparable kinetic attack.
The thin history we have of cyberattacks does not suggest that a cyberattack will necessarily be followed by much of anything at all. The 2007 Russian attacks on Estonia, which crippled public and major private web sites, were followed by Estonia's complaints and NATO's unwillingness to deem them an Article V attack (triggering collective self-defense measures), but they led to nothing violent or even close. If Georgia had reacted kinetically to the cyberattacks on it in 2008, it would have been difficult to distinguish such actions from the war Georgia was forced to fight following its invasion by Russian forces. The 2007 Israeli air strike on a purported nuclear facility in Syria may have been facilitated by an opening cyberattack on Syrian air defenses, but Syria did not respond at all to the cyberattack or to the raid itself. Iran did not react kinetically to Stuxnet, even if it created cyberwar cadres that may have been implicated in carrying out denial-of-service attacks on banks in the United States (whence, supposedly, Stuxnet came), as well as attacks that trashed computers in Saudi Arabia (specifically, Aramco) and Qatar (specifically, RasGas), neither of which could plausibly be accused of complicity in creating Stuxnet. Similarly, the United States carried out no kinetic attack in response to the aforementioned denial-of-service attacks on banks that its intelligence community ascribed to Iran.
To be fair, cyberattacks unaccompanied by the outbreak of war are easier to liken to a raid than to a war. In a raid, forces cross borders, wreak their mischief, and go home. In a war, they intend to stay permanently or to turn what they have taken (be it territory or the entire country) over to those they deem their allies. It is very difficult to conceive of a cyberattack that can change a head of state and even harder to conceive of one that can conquer all or even part of another country. In worst-case scenarios, a cyberattack can disrupt life and maybe even break some machines. But its effects do not persist unless the cost of eradicating them – for instance, by rebooting systems or replacing infected machines with uninfected ones – exceeds the cost of tolerating their presence. It is worth remembering that there is no forced entry in cyberspace. Almost all wars tend to be two-sided engagements because the attacked side has no option but to fight or surrender. In a raid, there is a third option: offer, at most, some resistance but do not pursue the attacker for fear of worse. Thus, not all raids lead to counter-raids. The aforementioned 2007 Israeli raid on Syria did not. The many U.S. drone strikes have not, so far. China invaded Vietnam in 1979, wreaked damage, caused casualties, and departed having, in its mind, taught Vietnam a lesson. Vietnam did not return the favor by invading China. Neither did India in 1962 under similar circumstances. Granted, some nations do respond. Arabs and Israelis traded raids in the decade or so after Israel declared independence (1948); Palestinians and Israelis have traded attacks over the last three decades as well. Both Koreas sent raiding parties across the 38th parallel in the years prior to North Korea's 1950 invasion. Still, the history of raids escalating into open conflict (as distinguished from raids preceding open conflict, as in the Korean case) is thin.
Two other factors – the difficulty of attribution and the difficulty of disarming the attacker – are likely to reduce the pressure to retaliate, much less escalate, in response to a cyberattack. Difficulties of attribution are likely to have two related effects. The first is that the target may not be certain about who did it – or at least not certain of its ability to convince third parties, such as other countries, who did it – to validate a response. The second is that if it takes too much time to analyze the attack to the point where the target can determine (and make the case about) who did it with the requisite confidence, the political pressure for vengeance may have cooled and the politico-military situation that warranted retaliation may have changed (e.g., yesterday's foe might be today's partner).
The impetus to respond can also be reduced if the public has little idea about the identity of the attacker or even the fact of the attack (e.g., if the failure to function is not obvious to the outside). Until the New York Times reported on Stuxnet, the public did not know that Iran had been attacked (it is not clear whether anyone in Iran actually understood that they were being attacked before it was reported). If no one knows that two parties are trading blows in the dark, there is much less need to appear strong as a way of establishing third-party deterrence.
The difficulty of disarming the other side's cyberwar capabilities removes another reason for responding to a cyberattack. A kinetic response to a kinetic attack can be justified not only as a way to reinforce deterrence but also as a way to reduce the attacker's ability to carry out further attacks; it does so by killing opposing forces and destroying military equipment, ancillary supplies, and infrastructure, especially staging areas. A cyber response can be justified only in terms of deterrence, because it is very difficult for a cyberattack to permanently or even temporarily damage the other side's ability to carry out cyberattacks, which require little more than hackers, information, computing equipment, software, and network connections. Granted, the target country may conclude that it can win some relief from cyberattack by carrying out a kinetic attack on the attacker's cyberwar corps. Such actions cannot be ruled out – but suffice it to say that the tools of a cyberattack cannot be identified from afar in the same way that the tools of a kinetic attack can be. Alternatively, the target can convince itself that the only way to rid itself of the cyberattack menace is to change the regime that governs the attacking country. If the sole aim of such logic is to minimize the likelihood of future damage to the target country, it can be convincing only by substantially underestimating the cost and risk of war or substantially overestimating the inconvenience of adopting other measures to improve cybersecurity.
Finally, and in lieu of regime change, the escalation path from a cyberattack to a kinetic response also crosses a threshold that does not come up when the original provocation and the response are both kinetic. It is unclear whether this threshold is more like a speed bump or a yawning abyss, but it is clearly present. It should therefore seem obvious that a cyberattack is less likely to result in a kinetic response than an equivalent kinetic attack would be. This, however, raises the question of what constitutes equivalence. Assessing kinetic damage when it is damage to you is a straightforward exercise. Assessing the damage from a cyberattack that leads to the widespread corruption of information systems requires knowing what systems have, in fact, been corrupted (something that, ironically, the attacker may have a better handle on). A target country that has been spooked by a cyberattack into imagining that the real damage is a multiple of the visible damage may well overreact (at least initially, until it realizes over time which of its systems are or are not behaving as if they had been corrupted).
In sum, although the risks of violent escalation following a cyberattack are nonzero, the odds are against it, in isolation and particularly in comparison to a kinetic attack of similar magnitude.
New ways of carrying out conflict would, intuitively, seem to increase the likelihood of conflict. They create new ways to fight without necessarily lessening existing ways. We can easily imagine two countries carrying out cyberattacks on one another, when they would have had no such option were cyberwar not a possibility. To the extent that the reverberations of such a conflict escape beyond cyberspace they would seem to increase the likelihood of violence. But that rule does not always apply. Nuclear weapons, for instance, were not only not used during the Cold War, they are credited with having reduced the odds of conventional conflict in Europe. And cyberattacks, if they can substitute for kinetic attacks, may also reduce the odds of violence.
Do they? It is too soon to tell for sure, but logic suggests otherwise. First, there are more ways in which countries possessed of cyberattack capabilities would use them, or use cyberattack-enhanced conventional operations, than there are ways in which cyberattacks would substitute for kinetic attacks. Second, cyberattacks may be used by rogue elements operating against distant countries in ways harder to imagine with conventional warfare capabilities. Third, cyberattack capabilities may exacerbate crises by creating the possibility of a disabling strike, or because the preparations for such a strike are hard to distinguish from cyber-espionage. The saving grace is that the escalation potential – particularly of a kinetic response following a cyberattack – while nonzero, is suppressed by many factors.
- John Arquilla, "Cool War: Could the Age of Cyberwarfare Lead Us to a Brighter Future?" www.foreignpolicy.com/articles/2012/06/15/cool_war.
- Thomas Rid, "Cyberwar and Peace: Hacking Can Reduce Real-World Violence," Foreign Affairs, November-December 2013. Rid also makes a broader point about related cyberspace capabilities in his book Cyber War Will Not Take Place (Oxford, 2013): cyber-espionage can substitute for using human spies, the latter of whom can be caught and harmed, and protest in cyberspace is less likely to lead to violence than protest in the streets.
- David Sanger and Mark Mazzetti, "Israel Struck Syrian Nuclear Project, Analysts Say," http://www.nytimes.com/2007/10/14/washington/14weapons.html.
- Richard Clarke and Robert Knake, Cyber War (New York: HarperCollins, 2010); this description is found in the first few pages of the book.
- See David Makovsky, "The Silent Strike: How Israel Bombed a Syrian Nuclear Installation and Kept It Secret," New Yorker, September 17, 2012.
- Unless the cyberattack has effects that go well beyond the use of force, it seems reasonable to assume that for purposes of moral calculus or escalation forecasts, the use of force is primary and the cyber accompaniment is secondary.
- E.g., BAC0 means that B is preferred to A, A is preferred to C, and C is preferred to 0.
- For instance, see Declan Walsh, "Whose Side Is Pakistan's ISI Really On?" www.theguardian.com/world/2011/may/12/isi-bin-laden-death-pakistan-alqaida.
- Elinor Abreu, "Cyberattack Reveals Cracks in U.S. Defense," http://www.pcworld.com/article/49563/article.html.
- Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).
- In 1995, Russia almost mistook the launch of a Norwegian sounding rocket for a NATO nuclear attack; see Geoffrey Forden, "False Alarms in the Nuclear Age," www.pbs.org/wgbh/nova/military/nuclear-false-alarms.html (November 2001).
- It is unclear whether the attacks were instigated by the Russian state, by Russian citizens, or by ethnic Russians in Estonia (or some combination thereof). See, for instance, Stephen Blank, "Web War I: Is Europe's First Information War a New Kind of War?" Comparative Strategy 27:3 (2008), 227-247. See also Ian Traynor, "Russia Accused of Unleashing Cyberwar to Disable Estonia," http://www.theguardian.com/world/2007/may/17/topstories3.russia.
- Nicole Perlroth and Quentin Hardy, "Bank Hacking Was the Work of Iranians, Officials Say," http://www.nytimes.com/2013/01/09/technology/online-banking-attacks-were-work-of-iran-us-officials-say.html.
- Nicole Perlroth, "In Cyberattack on Saudi Firm, U.S. Sees Iran Firing Back," http://www.nytimes.com/2012/10/24/business/global/cyberattack-on-saudi-oil-firm-disquiets-us.html.
- BBC News, "Computer Virus Hits Second Energy Firm," http://www.bbc.co.uk/news/technology-19434920, August 31, 2012.
- Difficult does not mean impossible. Conceivably, the ability to carry out a cyberattack may depend on information that is stored in only one location and is wiped out – but it begs amazement that anyone sophisticated enough to penetrate today's increasingly well-protected systems would be insufficiently sophisticated to back up their information.