Enough Law of Horses and Elephants Debated… Let’s Discuss the Cyber Law Seriously


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 5, May-June 2017

Sandeep Mittal, IPS
LNJN National Institute of Criminology & Forensic Science
Ministry of Home Affairs, New Delhi, India
Prof. Priyanka Sharma
Professor & Head
Information Technology & Telecommunication,
Raksha Shakti University, Ahmedabad, India


Abstract: The unique characteristics of cyberspace, such as anonymity in space and time, the absence of geographical borders, the capability to spring surprises with rapidity, and the potential to compromise assets in both the virtual and the real world, have attracted criminal minds to commit crimes in cyberspace. The law of crimes in the physical world faces challenges in its application to crimes in cyberspace due to issues of sovereignty, jurisdiction, transnational investigation and extraterritorial evidence. In this paper, an attempt has been made to apply the Routine Activity Theory (RAT) of crime in the physical world to the crime scene of cyberspace. A model for crime in cyberspace has been developed, and it has been argued that the criminal law of the physical world is inadequate in its application to crimes in the virtual world. To handle crime in cyberspace, there is a need to address the issues of ‘applicable laws’ and ‘conflicting jurisdiction’ by regulating the architecture of the Internet through special laws of cyberspace. A case has been put forward for an International Convention on Cybercrime, with the Council of Europe Convention on Cybercrime as a yardstick.

Keywords: Cybercrime; Cyber Law; Cyberspace; Routine Activity Theory (RAT); Cyber-criminology; EU Convention on Cybercrime; Law of Horse


The ‘Internet’ has today become an essential part of our lives and has revolutionised the way communication and trade take place, far beyond the ambit of national and international borders. It has, however, also allowed unscrupulous criminals to exploit the Internet for committing numerous cybercrimes pertaining to pornography, gambling, lottery, financial frauds, identity theft, drug trafficking, and data theft, among others [1]. Cyberspace is under both perceived and real threat from various state and non-state actors [2] [3] [4]. The incidence of cyber-attacks on information technology assets straddles a thin line between cybercrime and cyber war, both of which have devastating outcomes in the physical world [5] [6]. The scenario is further complicated by the very nature of cyberspace, manifested in its anonymity in both space and time, in asymmetric results that are disproportionate to the resources deployed, and in the fact that the absence of international borders in cyberspace makes it impossible to attribute the crime to a tangible source [7]. In the context of these characteristics of cyberspace, ‘the transnational dimension of cybercrime offence arises where an element or substantial effect of the offence or where part of the modus operandi of the offence is in another territory’, bringing forth the issues of ‘sovereignty, jurisdiction, transnational investigations and extraterritorial evidence’, thus necessitating international cooperation [8]. The evolution of cybercrimes from simple acts perpetrated by immature youngsters to complex cyber-attack vectors deploying advanced technology has necessitated the development of a distinct branch of law, the Law of Cyberspace.
However, the question of whether ‘the law of cyberspace’ can evolve into an independent field of study or would remain merely an extension of the criminal laws of the physical world into the virtual world has become the subject of an interesting debate among legal and social science scholars. The scope of this essay is to critically analyse and compare traditional crimes with cybercrimes to assess whether a new set of laws is required for tackling crimes in cyberspace.


In his poem ‘The Blind Men and the Elephant’, John Godfrey Saxe describes the dilemma of six blind men trying to describe an elephant, which “in (this) sense represents reality, and each of the worthy blind sages represents a different approach to understanding this reality. In all objectivity, and in line with the poem of John Godfrey Saxe, all the sages (blind men) have correctly described their piece of reality, but fail by arguing that their reality is the only truth.” [9] To quote,

“And so these men of Indostan,
Disputed loud and long,
Each in his own opinion,
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!”[10]

In the context of this article, cyberspace can be compared with the elephant, which is understood and described differently by different stakeholders in the realms of sociology, criminology, law, technology, and commerce, among other disciplines. However, each of these stakeholders largely ignores the perspectives of the others, while also understating or overstating the complexity inherent in the physical and virtual processes manifested through the interplay of ‘technology with technology’ and ‘technology with humans’ in virtual space, a space that is not constrained by the barriers of geography, culture, ethnicity and state sovereignty, but still has manifestations in the physical world. A few legal scholars have also explored the concept of the cyber elephant for determining the principles needed to regulate cyberspace [11].

In 1996, Judge Frank Easterbrook delivered a lecture [12] at the University of Chicago in which he discussed his ideas on ‘property in cyberspace’. He explained that coalescing two fields, without knowing much about either, in the name of ‘cross-sterilisation of ideas’ puts [lawyers] at the ‘risk of multi-disciplinary dilettantism’. He argued that although there are a large number of cases relating to various aspects of dealing with horses, such as the sale of horses, people being kicked by horses, the theft of horses, the racing of horses or the medical care of horses, this alone cannot be the reason for designing a course on “The Law of Horses”, as that would signify shallow efforts towards understanding the unifying principles of such a law [13]. This led to the current debate on the need for a separate law of cyberspace [14]. However, scholars have strongly challenged the position taken by Judge Easterbrook [15] [16] [17].


Acquiring a deep understanding of the theories of traditional crime in the physical world and their application to crimes in cyberspace would help us in identifying the factors that might govern the regulation of cyberspace. The basic components of acts of crime in the real world and how they intrinsically differ from crimes in cyberspace have been discussed and summarised in Table 1 [18]. Brenner concludes that “cybercrime differs in several fundamental respects from real-world crime and the traditional model is not an effective means of dealing with cybercrimes” [19] and that the “matrices for the real world crime do not apply to cybercrime, as it differs in the methods that are used in its commission and in the nature and extent of the harms it produces” [20]. Interestingly, Brenner had earlier adopted a more conservative stand on the law applying to cybercrime [21].
Theories of criminology have been applied to cyberspace to explore its interaction with the human dimension, as perceived by criminologists (potential dilettantes) [23] [24]. The Routine Activity Theory (RAT) relating to crime in the real world has been studied by scholars to analyse whether it can be transposed to cybercrime [25]. RAT assumes that a minimum of three factors is required for a crime: an ‘opportunity’ in the form of a suitable target (victim), a ‘motivated offender’ with criminal inclination, and the ‘absence of a capable guardian’ (a law enforcement agency, the neighbourhood, etc.). The lack of any one of these factors would prevent the occurrence of the crime [26] [27]. The different controls in traditional crimes and cybercrimes, seen in the context of RAT, are depicted in Figure 1 [28] [29] [30].

The three constituents of RAT, viz. the Victim, the Offender and the Guardian, are represented by the three vertices of the largest triangle. Each of these three controls is further dependent on sub-factors, which, in turn, are represented as three triangles (for each of these sub-factors, a low value is assigned at the centre and a high value at the vertex) placed at each of the vertices of the main triangle. The distinction between traditional crime (red) and cybercrime (blue), arising from the complex interplay of multiple factors, is obvious. Finally, the blue triangle in the centre characterises cybercrime. The basic tenets of RAT thus fit in well with the paradigm of cybercrimes.
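The core logic of RAT described above, that an offence requires all three conditions to hold simultaneously, can be expressed as a minimal sketch. This is purely illustrative; the class and function names are our own shorthand, not part of the criminological literature:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    suitable_target: bool      # RAT: an 'opportunity' in the form of a victim
    motivated_offender: bool   # RAT: an offender with criminal inclination
    capable_guardian: bool     # RAT: e.g. law enforcement, the neighbourhood

def crime_possible(s: Situation) -> bool:
    # A crime requires all three RAT conditions at once; removing any
    # one of them (e.g. introducing a capable guardian) prevents it.
    return s.suitable_target and s.motivated_offender and not s.capable_guardian
```

The conjunction makes the theory's central claim explicit: intervention on any single factor, most practicably the guardian, is sufficient to prevent the offence.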

Table 1: Traditional Crimes versus Cybercrimes [22]

1. Proximity—Traditional crime: the perpetrator and the victim are physically proximate at the time the crime is committed. Cybercrime: no physical proximity is required between the offender and the victim.
2. Scale—Traditional crime: the crime is a ‘one-to-one’ event involving the perpetrator(s) and victim(s). Cybercrime: a perpetrator can automate the process of victimisation and commit thousands of cybercrimes at high speed at the same time.
3. Physical constraints—Traditional crime: the commission of the crime is subject to the ‘physical constraints’ governing all activities in the physical world. Cybercrime: real-world constraints do not affect perpetrators, as cybercrimes can be committed anonymously, at lightning speed, and across transnational borders.
4. Patterns—Traditional crime: the demographic contours and geographical patterns of the incidence of crime are identifiable. Cybercrime: patterns and contours are difficult to identify due to the lack of a uniform definition of cybercrime, the absence of laws, rapidly evolving technologies, the anonymity the perpetrator enjoys in space and time, and under-reporting driven by the reputational risk that disclosure poses to victims.

It has been argued that the routine activity approach shows both significant continuities and discontinuities in the configuration of terrestrial and virtual crimes. “While motivated offenders are likely to be almost homogeneous in both environments, the construction of suitable targets is complex, with similarity on value scale but significantly different in respect of inertia, visibility and accessibility.” [31] The concept of the ‘capable guardian’ fits in well in both settings, but the degree of fitness varies. Moreover, the spatio-temporal environment of routine activities is organised in the real world but organically disorganised in the virtual world [32]. These features of cyberspace thus make it a domain distinct from the real world [33], resulting in a noticeably low level of reporting of cybercrimes as compared to traditional crimes, as depicted in Figure 2 [34].

Figure 1: RAT and Interplay of Different Controls in Traditional Versus Cyber Crimes


Figure 2: A Comparison of Traditional Property Crimes versus Cybercrimes over a Period of Five Years in India
(Source of Statistics: Crime in India Statistics, NCRB)


Thus, the various factors that incite an individual to commit a cybercrime include the lack of deterrents, increased anonymity, and repressed desires to offend in the real world [35]. While the issue of repressed desires can be handled in traditional ways, the other two issues need to be handled through regulation of both the law and technology, or one of the two facilitating regulation of the other. The absence of any perimeter in cyberspace also makes it easily permeable, thereby making it difficult to assign an appropriate capable guardian for overseeing activities in cyberspace [36].

Some economists have averred that people are actively involved in “transforming their relationships into social capital and their experiences into human capital (conventional or criminal)” and that these economic considerations are more compelling than the criminologist’s simple theory that crime occurs in response to ‘associations’ and ‘events’ [37]. In fact, altering the criminal’s economic choice pattern may also help alter his behaviour [38] [39]. The model of cybercrime portrayed in Figure 1 does not contradict this contention.


After analysing and understanding the various factors that contribute to the commission of a crime in cyberspace, it may be suggested that any law enacted to regulate cyberspace would have to address the following three unique features of cyberspace [40]:

(a) As ‘computer-assisted’ low-cost efforts produce asymmetric results disproportionate to the resources deployed, the law should develop mechanisms for increasing the cost entailed in committing the crime and decreasing its probability of success. For example, there should be a thorough investigation of crimes wherein the victims had implemented security measures to make their systems foolproof and had exercised due diligence, whereas an enhanced-sentencing regime should be employed where dual-use technology, such as encryption or anonymity techniques, has been used to commit the crime.

(b) There is a need to add third parties (such as Internet Service Providers or ISPs) to the traditional ‘offender-victim’ scenario of the crime. The law could consider imposing responsibilities on these third parties though it may be difficult to implement in view of the costs and liabilities implied in such actions. For example, in the United States, the Digital Millennium Copyright Act (DMCA) specifies the liability of ‘online-intermediaries’ in case of intellectual property right violations but no liability of ‘online-intermediaries’ is provided for defamation under The Communications Decency Act (CDA).

(c) The invisibility of action in cyberspace and the anonymity of the offender limit the capability of the guardian to regulate. It is possible for the law to address this issue. For example, the law may make the implementation of IPv6 mandatory for the more specific attribution of acts in cyberspace, or it may mandate a change in the Internet architecture to include controls that would help in the identification of perpetrators. As most of the Internet architecture is designed, maintained, controlled and governed by private bodies, the law would have to factor in the responsibilities and liabilities of these private stakeholders through either state regulation or self-regulation. Another example would be to make the use of digital signatures (using PKI) mandatory for communication in cyberspace, which would not only prevent the occurrence of many crimes but also assist in the detection of crimes that are perpetrated despite the imposition of stringent checks.
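The attribution benefit of a mandatory signing regime can be illustrated with a minimal sketch. Python's standard-library `hmac` module stands in here for a full PKI signature scheme (which would use asymmetric keys and certificates); the key and messages are hypothetical, and the point is only that a verifiable tag binds an act to a registered identity:

```python
# Illustrative sketch only, not the authors' proposal: a mandatory
# signing regime lets a 'capable guardian' verify who performed an act
# in cyberspace. Stdlib hmac stands in for an asymmetric PKI scheme.
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Produce a tag binding the message to the holder of the key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Verify the tag; an altered or forged message fails the check."""
    return hmac.compare_digest(sign(key, message), tag)

key = b"identity-key-registered-with-a-guardian"  # hypothetical key
message = b"an act in cyberspace"
tag = sign(key, message)
```

Under such a regime, an unverifiable act is itself a signal for scrutiny, which is precisely the attribution capability the guardian currently lacks.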

Therefore, technology-intensive cybercrimes compel us to revisit the role and limitations of criminal law, just as criminal law forces us to reinvent the role and limitations of technology [41]. There is, in effect, a symbiotic relationship between the two.

The adage, “On the Internet, nobody knows that you’re a dog” [42] is as true today as it has been throughout the history of the Internet, but the problem plaguing law enforcement agencies today is that, “on the Internet, nobody knows where the dog is” [43]. This is because the functionality of the Internet and its architecture are technologically indifferent to geographical location [44], leaving no scope for coherence between real space and cyberspace, the latter being characterised by ‘geographical indeterminacy’ [45]. This gives rise to the legal issue of ‘appropriate jurisdiction’ or even ‘conflicting jurisdiction’ for cybercrimes. Criminal law is territorial in its applicability, and as territory itself is indeterminate in cyberspace, the applicable law and the appropriate jurisdiction would need to be determined in accordance with the principles of private international law, as is being done in the resolution of e-commerce disputes. But do the principles of civil liability transpose well into the realm of criminal liability? Although this is procedurally possible, the answer would still be substantively ‘no’, particularly when the definition of cybercrime itself may not be known in many jurisdictions. These legal issues need to be addressed for the detection, investigation, prosecution and conviction of criminals in cyberspace. International cooperation is imperative in order to find where the ‘dog’ is, as it involves issues of sovereignty, jurisdiction, transnational investigations and the examination of extraterritorial evidence.


Lawrence Lessig, in his theoretical model of cyberspace regulation [46], argued that behaviour is regulated by four constraints, viz., laws, social norms, markets, and nature [47]. The law, however, regulates behaviour indirectly by directly influencing the other three constraints. Applying this concept to cyberspace, Lessig postulated that the equivalent of ‘nature’ in cyberspace is ‘code’ [48], the latter being a more pervasive and effective constraint there. Code is also more susceptible than nature to being changed by law. Therefore, both ‘code’ and ‘law’ have the potential to regulate behaviour in cyberspace [49]. It has been argued that regulation in cyberspace would be more efficient and effective if the law regulated code rather than individual behaviour [50].

The ‘code’ expounded by Lessig was meant to include merely the software. With the advent of advanced technology in cyberspace, however, it is obvious that code must include not only the software but also the concomitant hardware, Internet protocols, standards, biometrics, and privately controlled governance structures. All these components collectively contribute to the character and peculiarities of the Internet, making it the way it is. The code could then safely be given a new name, viz., ‘cyberspace architecture’ [51], with every component of this architecture having the potential of being regulated by law.

However, as pointed out earlier, even where national governments have enacted some type of law pertaining to cybercrime, inconsistencies and disharmony remain in its application in transnational environments, as criminal law is territorial. This necessitates international cooperation, whether informal or formal. Evidence gathered through the former is not admissible in courts, while evidence gathered through the latter is delayed by long-drawn procedures, resulting in the escape of the ‘dog’. The solution could thus lie in the creation of an ‘International Framework on Cybercrime’ to address the various legal issues relating to cyberspace.

The Council of Europe Convention on Cybercrime (the Convention) [52] is the first comprehensive framework on cybercrime which puts forth ‘instruments to improve international cooperation’ [53] and ‘duly takes into account the specific requirements of the fight against cybercrime’ [54]. The Convention has the potential of becoming an International Cyber Law, much as Private International Law has evolved over a period of time, but it would have to be used in harmony with the substantive criminal law of the territory. The complex interaction between the two underscores the necessity for the enactment of a separate set of laws to handle cybercrime.


Cyberspace is increasingly becoming a favourite domain for criminals, not only for committing crimes but also for maintaining secret global criminal networks. This is because the organic nature of cyberspace is manifested in anonymity in space and time, immediacy of effects, non-attribution of action, and the absence of any international borders. Due to this unique nature of cyberspace, it is difficult to apply the laws of criminal liability for traditional crimes to cybercrimes. An examination of the traditional theories reveals that cybercrime is fundamentally different from crime in the real world, and the traditional models are not effective in dealing with it. Nevertheless, the dynamics of cybercrime were explained by transposing the factors operating in Routine Activity Theory (RAT) to cyberspace. It was demonstrated that the higher levels of anonymity, confidence and technological skill enjoyed by the offender motivate him to choose and target a victim who has been rendered vulnerable by the prevalent low levels of security, trust and crime-reporting emanating from poorly defined laws, poor technical skills, and a deficit of trust in the law enforcement machinery. The detection, investigation, prosecution, and successful conviction of the perpetrator of a cybercrime require the law to address the specific features of crime in virtual space. The anonymity and invisibility of action in cyberspace and its ‘geographic indeterminacy’ give rise to the legal issues of ‘applicable laws’ and ‘conflicting jurisdiction’. The architecture of the Internet needs to be governed by law, which has the potential to improve the behaviour of criminals in cyberspace. This would also entail international cooperation to address the issues of sovereignty, jurisdiction, transnational investigations, and extraterritorial evidence. It is suggested that the Council of Europe Convention on Cybercrime could be a yardstick for initiating measures in this direction.
However, none of this obviates the need for a separate set of laws for handling cybercrimes and providing legal remedies against them.


[1] Sandeep Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’ (2012) 33 Indian Journal of Criminology and Criminalistics 86
[2] Marco Gercke, Europe’s legal approaches to cybercrime (Springer 2009)
[3] Marco Gercke, ‘Understanding cybercrime: a guide for developing countries’ (2011) 89 International Telecommunication Union (Draft) 93
[4] David L Speer, ‘Redefining borders: The challenges of cybercrime’ (2000) 34 Crime, law and social change 259
[5] Sandeep Mittal, ‘Perspectives in Cyber Security, the future of cyber malware’ (2013) 41 The Indian Journal of Criminology 18
[6] Sandeep Mittal, ‘The Issues in Cyber- Defense and Cyber Forensics of the SCADA Systems’ (2015) 62 Indian Police Journal 29
[7] Sandeep Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’
[8] Open-ended Intergovernmental Expert Group on Cybercrime, Comprehensive Study on Cybercrime (2013)
[9] (Accessed on 13/04/2017)
[10] (Accessed on 13/04/2017)
[11] Martina Gillen, ‘Lawyers and cyberspace: Seeing the elephant’ (2012) 9 ScriptED 130
[12] Frank H Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) U Chi Legal F 207
[13] Ibid at 207, para 3
[14] Joseph H Sommer, ‘Against cyberlaw’ (2000) Berkeley Technology Law Journal 1145
[15] Lawrence Lessig, ‘The law of the horse: What cyberlaw might teach’ (1999) 113 Harvard law review 501
[16] Andrew Murray, ‘Looking back at the law of the horse: Why cyberlaw and the rule of law are important’ (2013) 10 SCRIPTed 310
[18] Susan W Brenner, ‘Toward a criminal law for cyberspace: A new model of law enforcement’ (2004) 30 Rutgers Computer & Tech LJ 1
[19] Ibid at page 104
[20] Susan W Brenner, ‘Cybercrime Metrics: Old Wine, New Bottles?’ (2004) 9 Va JL & Tech 13
[21] Susan W Brenner, ‘Is There Such a Thing as “Virtual Crime”?’ (2001)
[22] Brenner, ‘Toward a criminal law for cyberspace: A new model of law enforcement’
[23] Miltiadis Kandias and others, An insider threat prediction model (Springer 2010)
[24] Sandeep Mittal, ‘Understanding the Human Dimension of Cyber Security’ (2015) 34 Indian Journal of Criminology and Criminalistics 141
[25] Majid Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’ (2005) 2 European Journal of Criminology 407
[26] Ibid
[27] Lawrence E Cohen and Marcus Felson, ‘Social change and crime rate trends: A routine activity approach’ (1979) American sociological review 588
[28] Nir Kshetri, ‘The simple economics of cybercrimes’ (2006) 4 IEEE Security & Privacy 33
[29] Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’
[30] Majid Yar, Cybercrime and society (Sage 2013)
[31] Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’ at page 424
[32] Ibid
[33] Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’
[34] Statistics Source: Crime in India Statistics, NCRB, Ministry of Home Affairs, Government of India, New Delhi.
[35] Karuppannan Jaishankar, ‘Establishing a theory of cyber crimes’ (2007) 1 International Journal of Cyber Criminology 7
[36] Susan W Brenner, ‘Toward a criminal law for cyberspace: Product liability and other issues’ (2004) 5 Pitt J Tech L & Pol’y i
[37] Bill McCarthy, ‘New economics of sociological criminology’ (2002) 28 Annual Review of Sociology 417
[38] JR Probasco and William L Davis, ‘A human capital perspective on criminal careers’ (1995) 11 Journal of Applied Business Research 58
[39] Kshetri, ‘The simple economics of cybercrimes’
[40] Neal Kumar Katyal, ‘Criminal law in cyberspace’ (2001) 149 University of Pennsylvania Law Review 1003
[41] Ibid
[43] Alexandre López Borrull and Charles Oppenheim, ‘Legal aspects of the Web’ (2004) 38 Annual review of information science and technology 483
[44] Though every computer or smart device has a machine address, which can be easily spoofed, we are talking here specifically about geographical location. Remote access, incognito logins, encrypted communication platforms, anonymous remailers and the availability of ‘cached’ copies of frequently accessed Internet resources further complicate matters and make it impossible to attribute actions in cyberspace.
[45] Dan L Burk, ‘Jurisdiction in a World without Borders’ (1997) 1 Va JL & Tech 1
[46] Lessig, ‘The law of the horse: What cyberlaw might teach’
[47] In real space nature is represented by architecture.
[48] This includes the software that makes the Internet behave as it does.
[49] Graham Greenleaf, ‘An endnote on regulating cyberspace: architecture vs law?’ (1998)
[50] Lessig, ‘The law of the horse: What cyberlaw might teach’
[51] Greenleaf, ‘An endnote on regulating cyberspace: architecture vs law?’
[52] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017]
[53] Ibid. Articles 23-35
[54] Ibid. Preamble


A Review of International Legal Framework to Combat Cybercrime


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 5, May-June 2017

Sandeep Mittal, IPS
LNJN National Institute of Criminology & Forensic Science
Ministry of Home Affairs, New Delhi, India
Prof. Priyanka Sharma
Professor & Head
Information Technology & Telecommunication,
Raksha Shakti University, Ahmedabad, India


Abstract: Cyberspace is under perceived and real threat from various state and non-state actors. This scenario is further complicated by the distinct characteristics of cyberspace, manifested in its anonymity in space and time, geographical indeterminacy, and the non-attribution of acts to a tangible source. The transnational dimension of cybercrime brings forth the issues of sovereignty, jurisdiction, transnational investigation and extraterritorial evidence, necessitating international cooperation. This requires an international convention on cybercrime, which is missing to date. The Council of Europe Convention on Cybercrime is the lone instrument available. Though it is a regional instrument, non-member States like the US, Australia, Canada, Israel and Japan have also signed and ratified it, and it remains the most important and widely accepted international instrument in the global fight to combat cybercrime. In this paper, the authors argue that the Council of Europe Convention on Cybercrime should be the baseline for framing an International Convention on Cybercrime.

Keywords: Cybercrime, International Convention on Cybercrime, Cyber Law, Cyber Criminology, International Cooperation on Cybercrime, Internet Governance, Transnational Crimes.


Information societies are highly dependent on the availability of information technology, which in turn is proportional to the security of cyberspace [1] [2]. The availability of information technology is under continuous real and perceived threat from various state and non-state actors [3]. A cyber-attack on the availability of information technology sits on a thin line between being classified as cybercrime or as cyber war, with devastating effects in the physical world. The discovery of ‘cyber-attack vectors’ like Stuxnet, Duqu, Flame, Careto and Heartbleed in the recent past demonstrates the vulnerability of the confidentiality, integrity and availability of information technology resources [4] [5]. The scenario is further complicated by the very nature of cyberspace, manifested in anonymity in space and time, rapidity of action resulting in asymmetric results disproportionate to the resources deployed, non-attribution of actions, and the absence of international borders [6]. By virtue of these features, ‘the transnational dimension of cybercrime offence arises where an element or substantial effect of the offence or where part of the modus operandi of the offence is in another territory’, bringing forth the issues of ‘sovereignty, jurisdiction, transnational investigations and extraterritorial evidence’, thus necessitating international cooperation [7]. In this essay, international efforts and their efficacy in combating cybercrime are analysed.


Although several bilateral and multilateral efforts have been made to combat cybercrime, the European Union remains at the forefront in creating a framework on cybercrime [8] [9] [10] [11]. Going beyond the European Union by inviting even non-member States, and incorporating substantive criminal law provisions and procedural instruments, the Council of Europe Convention on Cybercrime (the Convention) [12] puts forth ‘instruments to improve international cooperation’ [13]. The Convention makes clear its belief ‘that an effective fight against cybercrime requires increased, rapid and well-functioning international cooperation in criminal matters’ [14]. As of December 2016, 52 States had ratified the Convention and 4 States had signed but not ratified it. As of July 2016, the non-member States of the Council of Europe that have ratified the treaty are Australia, Canada, the Dominican Republic, Israel, Japan, Mauritius, Panama, Sri Lanka and the US. The Convention is today the most important and widely accepted international instrument in the global fight to combat cybercrime [15] [16] [17]; the scope of discussion in this essay is therefore limited to the Convention.

The Convention seeks to harmonise substantive criminal law by defining ‘offences against the confidentiality, integrity and availability of computer data and systems’ [18], ‘computer-related offences’ [19], ‘content-related offences’ [20], ‘offences related to infringement of copyright and related rights’ [21] and ‘ancillary liability and sanctions’ [22]. The Convention also seeks to harmonise procedural law by providing the scope, conditions and safeguards of procedures [23]; the expedited preservation of stored computer data, traffic data and partial disclosure of traffic data [24]; the search and seizure of stored computer data [25]; and the collection of real-time data [26]. The jurisdiction over the offences established by the Convention is also sought to be harmonised [27]. The strength of the Convention, however, lies in the detail in which the general and specific principles relating to international co-operation, including extradition and mutual assistance, are enumerated [28]. To sum up, the Convention intends to provide ‘a swift and efficient system of international cooperation, which duly takes into account the specific requirements of the fight against cybercrime’ [29]. A few scholars [30] have nevertheless raised doubts about the effectiveness of the Convention in improving international co-operation and enabling law enforcement agencies to fight cybercrime, terming it a merely symbolic instrument. Even so, the Convention ‘is an important step in the right direction’ [31] and remains ‘the most significant treaty to address cybercrimes’ [32].


A number of contentious legal and procedural issues generally arise while investigating cybercrimes with a transnational dimension, acting as impediments to the very process of investigation [33] [34] [35]. Cyberspace has evolved exponentially since the Convention was drafted. The deployment of ‘military-grade precision vectors’ and advanced persistent threats (APTs) to attack infrastructure in the virtual and real worlds is the order of the day. The internet of things is beginning to become a botnet of things. Nation-states have also realised that cyberspace has almost become the fifth domain of war [36]. In view of this escalated scenario, while formal channels like extradition and mutual assistance are delayed to the extent of killing the investigation, informal requests between law enforcement agencies (LEAs) are viewed with suspicion.

The Convention only seeks to harmonise domestic law, and many nation-states have no cybercrime legislation at all. This, combined with the heterogeneity in skills, capacity, technology access and sub-culture among LEAs, cybercriminals and victims, forms a ‘vicious circle of cybercrime’ [37]. The role of consent, with its cognitive and cultural limitations, in accessing stored computer data under Article 32 of the Convention is not well defined and is therefore open to judicial interpretation, making the provision an instrument of international non-cooperation rather than cooperation. Moreover, EU primary law, viz. the Charter of Fundamental Rights (CFR) of the European Union of 2000 [38], the Treaty on European Union [39] and the jurisprudence of the CJEU [40], now recognises data protection as a fundamental right, and the shield of human rights is very effectively used to prevent international co-operation. The domestic laws of some nation-states, e.g., Section 230 of the CDA [41] in the US, have grown into judicial oaks that hamper international co-operation in cybercrime investigations, as that provision grants blanket immunity to search engines like Google.

The very nature of the internet-governance structure, tilted heavily toward private players, leaves very little in the hands of States. Efforts to strengthen international co-operation against cybercrime, including the Convention, have failed to tap this private element of governance, mainly due to the conflict between private and public interests.


As cyberspace rapidly evolves with the advent of new technologies, cybercrime is assuming new dimensions in space and time, impeding its investigation in ways never before contemplated. The law and the capacity building of LEAs are unable to keep pace with these developments. While cyberspace has no borders for cybercriminals, law enforcement agencies must respect the sovereignty of other nations. The national disparities in law, legal systems and the capacity to combat cybercrime are so wide that international co-operation remains the only hope. The Convention on Cybercrime is, though symbolic, a great effort to identify issues and provide solutions to the existing legal and procedural gaps in fighting cybercrime. As laws alone have always been, and will remain, inadequate for enforcement, only a concerted effort at international co-operation can make cybercrime a very high-cost and high-risk proposition. The UN has recently woken up to the situation [42] and would do well to take the Convention on Cybercrime as the baseline for framing an International Convention on Cybercrime.


[1] M. Gercke, “Europe’s legal approaches to cybercrime,” in ERA forum, 2009, pp. 409-420.
[2] M. Gercke, “Understanding cybercrime: a guide for developing countries,” International Telecommunication Union (Draft), vol. 89, p. 93, 2011.
[3] D. L. Speer, “Redefining borders: The challenges of cybercrime,” Crime, law and social change, vol. 34, pp. 259-273, 2000.
[4] S. Mittal, “Perspectives in Cyber Security, the future of cyber malware,” The Indian Journal of Criminology, vol. 41, p. 18, 2013.
[5] S. Mittal, “The Issues in Cyber- Defense and Cyber Forensics of the SCADA Systems,” Indian Police Journal, vol. 62, pp. 29- 41, 2015.
[6] S. Mittal, “A Strategic Road-map for Prevention of Drug Trafficking through Internet,” Indian Journal of Criminology and Criminalistics, vol. 33, pp. 86- 95, 2012.
[7] Open-ended Intergovernmental Expert Group on Cybercrime, “Comprehensive Study on Cybercrime,” UNODC, 2013.
[8] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a Safer Information Society by Improving the Security of Information Infrastructures and Combating Computer-related Crime,” ed, 2001.
[9] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a safer information society by improving the security of information infrastructures and combating computer-related crime [COM(2000) 890 final – not published in the Official Journal].”
[10] “Council Framework Decision 2005/222/JHA of 24 February 2005 on attacks against information systems,” vol. OJ L 69, 16.3.2005, p. 67–71, ed.
[11] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017].
[12] ibid.
[13] ibid., Articles 23–35.
[14] ibid., Preamble.
[15] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a safer information society by improving the security of information infrastructures and combating computer-related crime [COM(2000) 890 final – not published in the Official Journal].”
[16] Open-ended Intergovernmental Expert Group on Cybercrime, “Comprehensive Study on Cybercrime,” UNODC, 2013.
[17] “United Nations, UN General Assembly Resolution 55/63: Combating the Criminal Misuse of Information Technologies (Jan. 22, 2001),” ed.
[18] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017], Articles 2–6.
[19] ibid., Articles 7, 8.


Risks and Opportunities provided by the Cyber- Domain and Policy- Needs to address the Cyber- Defense


International Research Journal On Police Science, ISSN 2454-597X, Volume 2, Issue 1&2, December 2016

Sandeep Mittal, I.P.S.*


The term ‘Cyber Domain’ has been used widely by various experts, sometimes interchangeably with ‘Cyber Space’, to mean “the global domain within the information environment that encompasses the interdependent networks of information technology infrastructures, including the internet and telecommunication networks” (Di Camillo & Miranda, 2011). Today it has become the fifth domain of warfare after land, sea, air and space, and arriving at a common definition of the cyber domain remains a challenge; for the purpose of this essay, the definition given above will suffice. Any entity, whether a nation state or an enterprise, that operates in the cyber domain needs to maintain the confidentiality, integrity and availability of its deployed resources. The dynamics of the cyber domain are complex and complicated in time and space. Humans, machines, things and their interactions evolve continuously, posing risks and opportunities in the cyber domain: one actor’s risk becomes another’s opportunity. In this essay, the ‘risks presented by’ and ‘opportunities available in’ the cyber domain are identified, discussed and analysed to arrive at the key strategic policy elements needed to defend the cyber domain.

Risks and Opportunities in Cyber Domain

‘Very low-cost efforts’ yielding asymmetric results, coupled with anonymity in space and time, make the cyber domain attractive (Cyber Security Strategy of UK, 2009) to various actors with malicious objectives. This faceless and borderless domain is highly dynamic, springs surprises with rapidity, and has the potential to cause damage (real and virtual) disproportionate to the resources deployed. Let us look at various realms in terms of the risks associated with them.

  1. The information-system platforms and the equipment supporting the cyber ecosystem are susceptible to conventional physical attacks. Electronic equipment can be destroyed by generating high-energy radio frequencies and electromagnetic pulses.
  2. Services in cyberspace may be disrupted by direct attack, e.g., denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. This is the most common attack and has the potential to paralyse lines of communication, bring down banking services and sabotage military operations. Over the years it has been deployed successfully not only by novice script kiddies but also by sophisticated state-sponsored agencies. Botnets working round the clock have become a serious challenge.
  3. Sensitive data (in storage and on the move) may be accessed, stolen or manipulated to produce the desired effect immediately or at a later date. The technology and deployment methodology are evolving with time, and simple malware tools have been replaced by complex, intelligent and well-crafted attacks generally known as Advanced Persistent Threats (APTs). The stealth, patience and dedicated consistency of APTs can bypass the best firewalls (including next-generation firewalls) and intrusion detection and prevention systems to exploit zero-day vulnerabilities (FireEye White Paper, 2014).
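As a toy illustration of the flooding attacks in item 2, the sketch below flags source addresses whose request count in a monitoring window exceeds a threshold, which is the naive principle behind simple DoS detectors. All addresses, window contents and the threshold are hypothetical; note how a distributed attack (many sources, few requests each) would slip under exactly this kind of per-source threshold, which is why round-the-clock botnets are so hard to counter.

```python
from collections import Counter

def flag_flooders(requests, threshold):
    """Return source addresses seen more than `threshold` times in one window."""
    counts = Counter(requests)  # requests per source address
    return sorted(ip for ip, n in counts.items() if n > threshold)

# Hypothetical window: one noisy source, two quiet ones.
window = ["10.0.0.1"] * 500 + ["10.0.0.2"] * 3 + ["10.0.0.3"] * 2
print(flag_flooders(window, threshold=100))  # ['10.0.0.1']
```

A botnet distributing the same 500 requests across 500 addresses would produce no flags at all under this rule, illustrating why DDoS defence needs aggregate rather than per-source measures.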

The risks associated with the realms discussed above may manifest themselves in various dimensions of society: civic infrastructural breakdown (e.g., failure of electric power grids, disruption of fuel pipelines, disruption of the water supply chain), economic disruption (e.g., disruption of banking services, business continuity and maintenance-related costs), social and behavioural effects (e.g., gambling, spamming, pornography, drug supply, propagation of extremist ideology) and, last but not least, hacking and intrusion into privacy, compromising a nation’s morale through the use of social media, leading to civic unrest and hampered diplomatic relations (e.g., WikiLeaks), and thus finally setting the stage for cyber warfare. Eventually, the cyber domain becomes the ‘means’ to a most serious ‘end’, that is, cyber warfare (Cornish et al., 2009). The ‘research tool of yesteryear’ has evolved into a powerful medium of mass communication. The Chatham House report ‘Cyberspace and the National Security of the United Kingdom’ (2009) introduced the concept of cyber threat domains.

Let us look at the challenges and opportunities in cyber security in terms of the four ‘cyber threat domains’ (Cornish et al., 2009).

  1. State-sponsored cyber-attacks: The complete dependence of a nation’s economy and critical infrastructure on information systems presents an opportunity for nation states to deploy cyber tools to gain information dominance in the cyber domain, transmitting information while denying or restricting it to the enemy state, as well as collecting tactical information. Going further, crippling a nation by paralysing its critical infrastructure through stealthy, well-crafted tools exploiting zero-day vulnerabilities is a matter of hours, not days. Cyber-attacks that raise furnace temperatures in nuclear power plants or increase the flow speed of liquids in fuel pipelines may serve as weapons of mass destruction.
  2. The concepts of war manoeuvring have been compared with cyber manoeuvre (Applegate, 2012), where it is realised that blatantly hostile acts in cyberspace are characterised by rapidity, anonymity and difficulty of attribution, and are dispersed in space and time. Even the territory of an enemy or one of its allies can be used to achieve the desired asymmetric results.

  3. Cyber-terrorism/extremism: There is no medium more powerful and anonymous than cyberspace, where asymmetric results can be achieved with ease by deploying minimal resources. The internet is an anarchic playground, an ungoverned space, which extremists can exploit for communication and information sharing, designing strategies, training their members, procuring resources, infiltrating state assets and forming alliances with organisations having common objectives but different motivations. The use of social media by political extremists to propagate their ideology and take on the government machinery may spearhead insurgency by exploiting public sentiment.
  4. Serious and organised criminal groups are exploiting cyberspace not only to maintain their criminal networks but also for money laundering, drug trafficking, extortion, credit-card fraud, industrial espionage, etc. “In the cyber space, physical strength is insignificant […]; strength is in software, not in numbers of individuals” (Brenner, 2002). Tackling cyber-criminality poses a great challenge to law enforcement agencies. The need for operational-level coordination with international LEAs cannot be overstated, as existing mechanisms such as MLATs have not given the desired results. The thrust of LEAs is on the acquisition of hardware and software, while the training of human resources is lacking.
  5. Lower-level individual attacks are acts of individuals and may give results disproportionate to the skills deployed. These attacks may not be technologically advanced, but they have the capability to create panic and day-to-day disruption. Sometimes fools pose great questions. The free availability of numerous hacking and penetration-testing tools on the internet helps script kiddies venture into the world of hacking.

Thus it is amply clear from the foregoing that the cyber domain presents unimaginable opportunities spread over space and time, with rapidity, anonymity and almost no investment.

Policies to Address Cyber Defense

Any policy for cyber defense has to be multipronged, tiered and dynamic. There are many approaches to framing strategic policy: one is the systematic approach, while another keeps national security as the central theme and weaves the other defenses around it. What should the strategy for a secure information society be? For the purpose of this essay we may define security as “the ability of a network or an information system to resist, at a given level of confidence, accidental events or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted data and the related services offered by or accessible via these networks and systems” (Commission of the European Communities, 2006). Though this is a network-and-system-centric definition, the author feels that if the strategic policy takes care of this approach, the other considerations will fall into line. The approach should not resemble that of ‘the elephant and the five blind men’; rather, it should be an integrative approach addressing the various risks, issues and opportunities in the cyber domain. We will try to build up the key elements a strategic policy should address to defend the cyber domain. “The integrated application of cyberspace capabilities and processes to synchronize in real-time, ability to detect, analyze and mitigate threats and vulnerabilities, and outmaneuver adversaries, in order to defend designated networks is part of cyber defense strategy and includes proactive network operations, defensive counter cyber operations and defensive countermeasures” (U.S. Department of Defense, 2010). As policy should be general and broad, it is beyond the scope of this essay to discuss procedures, the details of technologies and processes, and the mechanisms to deploy them.
We will focus instead on the key elements a security policy should incorporate to achieve the objective of defending the cyber domain. It should reflect the ground realities of the scenario in which the policy would be applied. In a lighter vein, three cartoons conceptualized by the author, based on three real incidents in India, are incorporated.

The author has perused the summaries of the national cyber security strategies of nineteen countries (Luiijf, Besseling & de Graaf, 2013) and, based on them, has tried to identify the key elements of a strategic policy to defend the cyber domain.

  1. Legislation/Legal Framework:

    The cyber domain has no boundary. The various stakeholders and players may be spread around the globe irrespective of national jurisdictions. Hence, a law that is progressive and aligned with international conventions on cybercrime and with the laws of other nation states is a basic requirement for defending the cyber domain. Additionally, the judiciary needs to be sensitised to various aspects of cyber law for better appreciation when dealing with such cases.

  2. Mandating the Security Standards:

    Mandating minimal security standards in information security is like preparing the ground before the seeds are sown. Security assurance measures for products (ISO/IEC 15408), security assurance measures for the development process (ISO/IEC 21827), measures for security management (ISO/IEC 27001), etc., should be implemented with zero tolerance for non-compliance. Personnel expertise and knowledge should be mandated through professional certifications.

  3. Secure Protocols, Software and Products:

    At present there is no system in place for cyber-supply-chain security ratings. This is a big loophole: hardware and software have to be changed frequently and risk being compromised, putting cyber security at stake. Such software and hardware become gateways for attacks in the cyber domain.

  4. Active-Dynamic Security Measures for Prevention, Detection and Response Capabilities:

    The technology of malware and the methodology of its deployment in the cyber domain have radically evolved over the years. “The attacks are advanced, targeted, stealthy and persistent and cut across multiple threat vectors [web, email, file shares, and mobile devices] and unfold in multiple stages, with calculated steps to get in, signal back out of the compromised network, and get the valuables out” (FireEye White Paper, 2013). While firewalls, next-generation firewalls, intrusion prevention systems, etc., are important security defenses, they cannot stop dynamic attacks that exploit zero-day vulnerabilities. Hence, integrated platforms capable of identifying and blocking these sophisticated attacks are needed to safeguard critical and sensitive assets. Attack-attribution analysis should be deployed to identify the attackers (Lewis, 2014). The zero-trust model of information security also helps reduce attacks from digitally signed malware (IBM Forrester Research Paper, 2013).

  5. Threat and vulnerability Analysis:

    A detailed threat and vulnerability analysis of resources should be maintained and updated periodically. At a minimum, a broad 3×3 matrix as per the NIST FIPS 199 standard is suggested. A risk-profile dashboard should be kept ready. Critical assets need to be clearly identified and SOPs for their protection put in place.

  6. Continuity and Contingency Plans should be prepared and kept ready. Many nations are deploying in-house ‘government off-the-shelf’ (GOTS) technology for sensitive defense and critical-infrastructure systems. Attacks are inevitable, but if services are maintained, the confidence and trust of the stakeholders are vindicated. Governments should also work towards a mechanism of cyber liability and cyber insurance, which at present is generally lacking.

  7. Information Sharing: In most countries there is a mechanism to share information on security breaches and related developments through Computer Emergency Response Teams (CERTs), and these national CERTs also interact with each other at the international level. However, the author’s personal experience shows that many enterprises do not share information on breaches in order to protect their corporate image; sometimes security breaches may not become known for months. There is an urgent need for a mechanism making the reporting of security breaches mandatory, with penalties for non-compliance.

  8. Awareness, Education and Training: Practice makes perfect. Continuous awareness and education campaigns on dos and don’ts have to be run repeatedly for the various stakeholders, and training workshops for the workforce should be organized. We should always remember that human behaviour is the greatest risk to security, and this risk can be minimized only by education and training.

  9. Reforms in School and Collegiate Education: If cyber security is included as a subject in school and college curricula, a ready cyber workforce will be available for deployment across various sectors. Online training courses in cyber security should be designed, with incentives offered to workers who attend and successfully complete them.

  10. International Collaboration: The cyber domain has no boundaries. An attacker sitting in one country, using the systems and resources of a second country, may compromise a sensitive database in a third country. Without international collaboration, whatever strategy we design is bound to fail. Although there is a regional convention on cybercrime, there is unfortunately no such convention on cyber security [the Council of Europe (Budapest) Convention on Cybercrime, 2004]. Comprehensive international cooperation is needed to sort out issues regarding jurisdiction, mutual assistance, extradition, the 24/7 network, etc. (Clough, 2013). However, the author’s personal experience is that international cooperation needs to be galvanised, as it is presently almost ineffective at the operational level.
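The FIPS 199 categorization and the broad 3×3 matrix suggested in element 5 above can be sketched as follows. This is a minimal illustration, not an official NIST tool: the asset names, likelihood ratings and the cut-offs in `risk_cell` are hypothetical assumptions; only the three impact levels and the ‘high-water mark’ rule come from FIPS 199.

```python
# Minimal sketch of a FIPS 199-style security categorization plus a
# hypothetical 3x3 (likelihood x impact) matrix for a risk-profile dashboard.
# Asset names, likelihoods and the risk_cell cut-offs are illustrative only.

LEVELS = ["LOW", "MODERATE", "HIGH"]  # FIPS 199 potential impact levels

def security_category(confidentiality, integrity, availability):
    """Overall category is the 'high-water mark' of the three objectives."""
    rank = max(LEVELS.index(confidentiality),
               LEVELS.index(integrity),
               LEVELS.index(availability))
    return LEVELS[rank]

def risk_cell(likelihood, impact):
    """Place a threat in a broad 3x3 likelihood x impact matrix (assumed cut-offs)."""
    score = (LEVELS.index(likelihood) + 1) * (LEVELS.index(impact) + 1)
    if score >= 6:
        return "HIGH"
    if score >= 3:
        return "MODERATE"
    return "LOW"

# Hypothetical assets: (name, C, I, A, threat likelihood)
assets = [
    ("Power-grid SCADA server", "HIGH", "HIGH", "HIGH", "MODERATE"),
    ("Public web portal", "LOW", "MODERATE", "MODERATE", "HIGH"),
]

for name, c, i, a, likelihood in assets:
    impact = security_category(c, i, a)
    print(f"{name}: category={impact}, risk={risk_cell(likelihood, impact)}")
```

Such a table, recomputed as threats and assets change, is one way the risk-profile dashboard in element 5 could be kept ready.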

However, to achieve the desired objectives, the strategies need to be implemented through the acquisition and effective allocation of sufficient resources under accountable responsibilities (Ward & Peppard, 2002). But even if all this is done, things will not turn out exactly as desired (Johnson & Scholes, 2002), as demonstrated in the following figure. Therefore, a strategic management process that can adapt to changing scenarios during the implementation of the original strategy is not a substitute for the original strategy but a way of making it work.


The cyber domain, by virtue of its unique characteristics of anonymity, availability and manoeuvrability in space and time, its lack of international borders, and its capacity to give asymmetric results hugely disproportionate to the resources deployed, offers tremendous risks and opportunities for various stakeholders. It is rapidly expanding its scope from an internet of human beings and machines to an internet of things. It has the potential to disrupt a nation’s economy, polity, and civic and military infrastructure and, last but not least, may lead to cyber warfare. Any policy and strategy to defend the cyber domain should be dynamic enough to adjust to the rapidly changing nature of attacks and technology. Futuristic scenarios like a ‘botnet of things’ have the potential to disrupt the normal life of humans. The strategic policy explained in this essay, if implemented, should take care of the various aspects of defending the cyber domain. However, as the attacks, technologies and attackers evolve, the policy should evolve with the same rapidity. The ‘unknown unknowns’ of the cyber domain are yet to be seen by the world.

Note: The views expressed in this paper are the author’s and do not necessarily reflect the views of the organizations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study. The author is also thankful to his student Ms. Avinash Kaur at NICFS, who skillfully converted the situations depicted by the author into the cartoons included in this paper.


Applegate, S. 2012, ‘The Principle of Maneuver in Cyber Operations’, accessed on 14/03/2014.

Brenner, S.W. 2002, ‘Organized Cybercrime? How Cyberspace May Affect the Structure of Criminal Relationships’, North Carolina Journal of Law & Technology, vol. 4, no. 1, p. 24.

Clough, J. 2013, ‘The Budapest Convention on Cybercrime: Is Harmonisation Achievable in a Digital World?’, presentation, 2nd International Serious and Organised Crime Conference, Monash University, Brisbane, 29-30 July 2013, accessed on 13/03/2014.

Cornish, P., Livingstone, D., Clemente, D. & Yorke, C. 2009, Cyber Security and the UK’s Critical National Infrastructure, A Chatham House Report, United Kingdom, accessed on 13/03/2014.

Cornish, P., Hughes, R. & Livingstone, D. 2009, Cyberspace and the National Security of the United Kingdom: Threats and Responses, A Chatham House Report, United Kingdom, accessed on 14/03/2014.

Cornish, P., Livingstone, D., Clemente, D. & Yorke, C. 2010, On Cyber Warfare, A Chatham House Report, United Kingdom, accessed on 11/03/2014.

Di Camillo, F. & Miranda, V. 2011, Ambiguous Definitions in the Cyber Domain: Costs, Risks and the Way Forward, Istituto Affari Internazionali, Rome.

FireEye White Paper 2014, Advanced Attacks Require Federal Agencies to Reimagine IT Security, online, accessed on 11/03/2014.

FireEye White Paper 2013, Thinking Locally, Targeted Globally: New Security Challenges for State and Local Governments, accessed on 11/03/2014.

IBM 2013, Supporting the Zero Trust Model of Information Security: The Important Role of Today’s Intrusion Prevention Systems, IBM Forrester Research Paper, online, accessed on 13/03/2014.

Luiijf, E., Besseling, K. & de Graaf, P. 2013, ‘Nineteen national cyber security strategies’, International Journal of Critical Infrastructures, vol. 9, no. 1/2, pp. 3–31.

NIST, Managing Information Security Risk: Organization, Mission, and Information System View, NIST Special Publication 800-39, USA.

NIST, Guide for Applying the Risk Management Framework to Federal Information Systems, NIST Special Publication 800-37, USA.

NIST, Recommended Security Controls for Federal Information Systems and Organizations, NIST Special Publication 800-53, USA.

NIST, Standards for Security Categorization of Federal Information and Information Systems, FIPS Publication 199, USA.

Purser, S. 2004, A Practical Guide to Managing Information Security, Artech House, Boston, Mass.; London.

Stevens, T. 2010, ‘US Cyber Command achieves “full operational capability”, international cyberbullies be warned’, 5 November 2010, accessed 11/03/2014.

The Joint Chiefs of Staff 2010, Memorandum for Chiefs of the Military Services, US Department of Defense, Washington D.C.

UK Cabinet Office 2010, Securing Britain in an Age of Uncertainty: The Strategic Defence and Security Review, Cm 7948, The Stationery Office, London, p. 47, accessed 11/03/2014.

UK Cabinet Office 2009, Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space, Cm 7642, The Stationery Office, London, p. 12.

Reputational Risk, Main Risk Associated with Online Social Media


IJCC, Volume XXXIV, No. 2, July–Dec. 2015, ISSN 0970-4345

Sandeep Mittal, I.P.S.,*


The Indian Journal of Criminology & Criminalistics


Social media is undoubtedly a revolution in the business arena, giving organizations the power to connect with their consumers directly. However, as the saying goes, nothing comes without a cost, and there is a cost involved here as well. This article examines the risks and issues related to social media at a time when the world is emerging as a single market. Social networking and online communication are no longer just a fashion but an essential feature of organizations in every industry. Unfortunately, inappropriate use of this media has resulted in increasing risks to organizational reputation, threatening long-run survival and necessitating the management of these reputational risks.

This article attempts to explore the various risks associated with social media. Its main aim is to focus in particular on reputational risks and to evaluate their intensity from the perspectives of the public relations and security staff of an organization. The article first explains the concept of social media, then identifies various social media risks and analyses reputational risk from the perspectives of public relations and organizational security staff. Based on this analysis, it provides various recommendations to help contemporary organizations overcome such risks and thus enhance their effectiveness and efficiency to gain competitive advantage in the long run.

Keywords: Reputational Risk, Online Social Media, OSM Security, OSM Risk, Organizational Reputation, Cyber Security, Information Assurance, Cyber Defence, Online Communication.


With changing times, the concept of socializing has been transforming; globalization and digitalization are to a large extent responsible for this. With the internet, it is possible to stay connected with people located in various regions of the world, and one such medium of socializing is social media. Today, online social media services are among the most vibrant tools adopted not only by individuals but also by corporate and government organizations (Picazo-Vela et al., 2012). Corporates, in fact, have been adopting social media extensively, as it is one of the cheapest ways of communicating with the masses. The importance of social media can be understood from the fact that at present there are more than 100 million highly active blogs connecting people from across the world (Kietzmann et al., 2010). Further, there has been a surge in membership of websites like Facebook and Twitter, with over 800 million active Facebook users in 2012 and 300 million Twitter users (Picazo-Vela et al., 2012). In spite of being a very powerful mode of communication, social media is subject to a large number of risks.

Organizations do not operate in a vacuum; thus, managing reputation is crucial for them, as it affects their markets as well as the overall environment. Organizational reputation impacts not only existing relations but also future courses of action (McDonnell and King, 2013). In this article, an attempt is made to understand the various reputational risks associated with social media that affect an organization’s working, and to suggest some ways to overcome them.

Concept of Social Media

The foundations of social media were laid by the emergence of Web 2.0 (Kaplan and Haenlein, 2010). It is thanks to this technological development that social media is accessed at such a wide scale and is available on devices like cell phones and tablets in addition to personal computers and laptops. Social media is gaining importance in the corporate world as decision makers and consultants explore its various aspects to exploit its potential optimally (Kaplan and Haenlein, 2010). Social media is an online communication system through which information is generated, initiated, distributed and utilized by a set of consumers who aim to inform themselves about various aspects of a product, service, brand, problem or persona (Mangold and Faulds, 2009). It is also known as consumer-generated media. In simple terms, it can be described as a platform to create and sustain relationships through an internet-based interactive platform.

Social media can be categorized into collaborative projects, blogs, content communities, social networking sites, virtual game worlds, and virtual social worlds (Kaplan and Haenlein, 2010). Examples of the various communication systems under social media are provided in Table 1 for ready reference.

Organizations have realized the importance of social media and have been using it along with other integrated marketing communication tools to converse with target audiences effectively and efficiently (Michaelidou et al., 2011). This is mainly because modern consumers are shifting from traditional promotional sources to these newer ones. Social media has a very strong hold on, and influences, consumer behavior to a large extent. Among these platforms, Twitter has emerged as one of the most powerful social media tools: approximately 145 million users communicate by sending around 90 million 'tweets' per day, of 140 characters or less (Kietzmann et al., 2010). Another example is YouTube, on which videos can go viral within seconds and attract more than 9.5 million views for a single video (Kietzmann et al., 2010).

Table 1: Examples of Social Media Types

Social networking websites: MySpace, Facebook, Faceparty, Twitter
Innovative sharing websites: video sharing (YouTube), music sharing, photo sharing (Flickr), content sharing, general intellectual property sharing (Creative Commons)
User-sponsored blogs: The Unofficial Apple Weblog
Company-sponsored websites/blogs: P&G's Vocalpoint
Company-sponsored cause/help sites: Dove's Campaign for Real Beauty
Invitation-only social networks
Business networking sites: LinkedIn
Collaborative websites: Wikipedia
Virtual worlds: Second Life
Commerce communities: eBay, Craig's List, iStockphoto
Podcasts: For Immediate Release: The Hobson and Holtz Report
News delivery sites: Current TV
Educational materials sharing: MIT OpenCourseWare, MERLOT
Open Source Software communities: Mozilla's
Social bookmarking sites (which permit users to recommend online news stories, music and videos): Digg, Newsvine, Mixx it, Reddit

Source: Mangold and Faulds, 2009.


Risks Associated with Social Media

Before discussing the various risks associated with social media, it is essential to understand the risks faced by an organization while using the internet in general. These are depicted in Figure 1.

Figure 1: Internet Related Risks for Organizations
Source: Lichtenstein and Swatman, 1997

In Figure 1, 'other internet participants' refers to other members of the internet community. These risks are quite general, and some are experienced by organizations even when they are not connected to the internet, such as the risks associated with corrupted software (Lichtenstein and Swatman, 1997).

The horizon of risks has expanded considerably, with matters becoming more critical and complicated as the popularity and usage of social media have grown (Armstrong, 2012). Organizations are challenged with new and unique risks which need to be addressed proactively. These risks threaten the effectiveness of this medium, and organizations thus fail to reap its benefits completely. It is due to such risks that many organizations have either limited their use of social media or avoid it altogether. Such risks range from data leakage and legal complications to risks associated with reputation (Everett, 2010).

These risks can be categorized under two heads: user-related and security-related issues (Chi, 2011). User-related risks include inadequate authentication controls, phishing, information leakage, and threats to information integrity (Chi, 2011). The security-related risks include Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), injection flaws, and insufficient anti-automation (Chi, 2011).
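As an illustration of one of the security-related risks listed above, the following is a minimal sketch (not drawn from Chi, 2011) of the standard mitigation for stored XSS: escaping user-supplied content before it is rendered into a page. The function name and payload are hypothetical.

```python
import html

def render_comment(comment: str) -> str:
    # Hypothetical helper: embed a user-supplied comment in a page.
    # html.escape() converts <, > and & into HTML entities, so a script
    # payload is displayed as inert text instead of being executed by
    # visitors' browsers (the classic stored-XSS mitigation).
    return "<p>" + html.escape(comment) + "</p>"

# A malicious "comment" that would run in every visitor's browser
# if it were embedded into the page unescaped:
payload = '<script>steal(document.cookie)</script>'
print(render_comment(payload))
```

The same principle (never interpolate untrusted input directly into markup, queries or commands) also underlies the defence against the injection flaws mentioned above.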

Of all the risks related to social media, an organization is mainly threatened by risks related to information confidentiality, organizational reputation and legal conformity (Thompson, 2013). Issues related to information confidentiality emerge mainly because information is shared digitally over social media; there is thus a chance of such information being hacked or shared unintentionally, raising privacy risks and affecting information integrity.

Legal issues while using social media are bound to arise, mainly because this medium has a global reach and is therefore affected by the rules and regulations of many jurisdictions. It is challenging for an organization to understand the varied legal obligations of different countries and then determine a universally acceptable legal protocol. Risks related to organizational reputation are discussed in detail in the next section.

Reputational Risk

The reputation of an individual or organization reflects their reliability and integrity, so managing and securing reputation is highly critical. As organizations resort to social media extensively, they are bound to experience reputational risks that affect their goodwill negatively. Reputational risks arise from the fact that organizations share extensive information with customers and browsers (Woodruff, 2014), and in many circumstances this information is misused, damaging organizational reputation. The damaging effects of reputational harm include a negative impact on goodwill in the real world, restricted development of social contacts and contracts, and a detrimental impact on attracting potential customers (Woodruff, 2014). In one research study, 74 per cent of employees acknowledged how easily reputational damage can be caused to organizations through social media (Davison et al., 2011). It is for this reason that organizations scrutinize their employees' use of social networking sites to a large extent.

Public Relations

Public relations reflect an organization's relations with its various stakeholders. Organizations use the social media platform to interact with their stakeholders and thus develop a strong and positive public image. In fact, social media, organizations and stakeholders interact together within the dynamic business world (Aula, 2010). These interactions are shaped by the organization's public relations objectives and the extent of social media usage for developing organizational reputation. But developing and sustaining positive public relations is not easy, as they are hampered to a large extent by reputational risks. An organization's very identity is at stake, as it can be plagiarized and used without authorization (Weir et al., 2011).

Reputational risks relate to organizational credibility and result from security risks like identity theft and profiling risks. These risks challenge organizational reputation by questioning its compliance with societal rules and regulations (McDonnell and King, 2013). Organizations largely fail to integrate social media with organizational and stakeholder objectives, resulting in ineffective reputation management.

Social media has made organizations global, due to which even minor incidents get highlighted internationally. Local issues gain international attention, resulting in a negative reputation for the organization globally. Further, with social media being so active, organizations cannot escape the clutches of negative publicity (Kotler, 2011). One example of a failure of reputation management that earned negative fame across the world is Nestle. In 2010, Greenpeace uploaded a video on YouTube against Nestle's KitKat (Berthon et al., 2012). The video went viral and resulted in negative publicity for the organization. Though the campaign was aimed mainly at consumers in Malaysia and Indonesia in connection with conserving rainforests, it was noticed by the world at large.

Another risk faced by organizations is the creation of a public image through standardized marketing programs. Stakeholders in different countries use different social media platforms, which makes it essential for organizations to clearly analyze and understand their usage requirements and patterns. This is where most organizations fail, and they are thus unable to use social media appropriately.

Figure 2 depicts the usage of different social media platforms in different countries, as per statistics from 2011 (Berthon et al., 2012).

Figure 2: Relative Frequency of Search Terms from Google Insights: Social Media by Country

Source: Berthon et al, 2012

Organizational Security Staff

Organizational employees are indispensable for success, but they can also be a threat to the organization, mainly because they have access to the organization's confidential and important information, which they can leak to outsiders. With social media's growing popularity, the line between personal and professional conversations on the web has become blurred. Even when this information is kept secure, employees can evade such systems through illegal measures. Research has shown that in the USA alone approximately 83 per cent of staff use organizational resources to access their social media accounts (Zyl, 2009). Besides exchanging personal messages over social media using these resources, 30 per cent of employees in the USA and 42 per cent in the UK also exchanged information related to their work and organization (Zyl, 2009). This depicts the intensity of the security risks related to social media. The organizational security staff thus has to be on its toes to ensure that such information is well secured and not used inappropriately.

In 2002, an employee of an international financial services organization in the USA infiltrated the organization's digital security systems and used a 'logic bomb' to delete approximately 10 billion files from 1,300 of the organization's servers. This resulted in a financial loss of around $3 million, and the organization also suffered negative publicity. This depicts a failure of organizational security staff to combat risks. Such issues have become very common in the social networking world. Employees have the freedom to post nasty or insecure comments and links that harm organizational reputation and finances and create security-related risks (Randazzo, 2005).

Social media makes social engineering attacks possible because hackers, spammers and virus creators gain easy access to large amounts of information. They can easily misuse it by creating fake profiles, stealing identities and collecting details such as job titles, phone numbers and e-mail addresses. They can also corrupt systems using malware that ultimately threatens organizational data. Data infiltration and loss ultimately impact organizational reputation negatively, as the leaked data are used for unauthorized and illegal activities.


Organizations that are either unaware of these risks or unable to defend themselves can at times face dire consequences. Organizations are aware of the gains they would derive from using social media networking and thus take such risks readily. These risks cannot be avoided completely; organizations need to work out measures through which they can manage them and mitigate their negative influences.

In order to overcome issues related to privacy, which ultimately hamper one's reputation, organizations should take proactive measures before using social media. During the sign-up phase or creation of social networking profiles, specific concerns related to privacy and confidentiality should be resolved and proper regulations designed (Fogel and Nehmad, 2009). These rules and regulations should be communicated very clearly to organizational employees so that they have complete information regarding social media dos and don'ts. Further, the organization should not only design strict punishments but also enforce them against those who break such rules (Hutchings, 2012).

One way to overcome reputational risks related to social media is to appoint an efficient social media manager. These managers are specialists responsible for determining the social media protocol covering the organization's confidential information, contemporary issues and prospective plans (Bottles and Sherlock, 2011). The social media manager should feel responsible towards the organization and its various stakeholders, and thus engage with them sincerely and empathetically (Brammer and Pavelin, 2006). The manager should also have a vigilant eye and an analytical attitude, in order to identify the facts, figures and events that can impact organizational reputation and take corrective action. As security staff play a crucial role in determining organizational security standards, organizations should be very specific in recruiting and selecting them. Besides, there should be a greater emphasis on developing culture, values and ethics within the organization.

Organizations should also understand that the management of reputational risks requires a collaborative and innovative approach. The organization needs to develop a social media involvement protocol by consulting and taking advice from different sources such as legal experts, marketing experts, international business experts, media experts and other stakeholders (Montalvo, 2011). The organization should also be innovative in selecting and distributing content through social media so that it can deal with issues responsibly.


Organizations today prefer to use social media over traditional media (Hutchings, 2012), mainly because of its various benefits, but they cannot overlook the associated risks. It takes ages for an organization to develop a positive reputation, so careful measures need to be taken to maintain and sustain it. Organizations cannot exercise complete control over social media, but they can take restrictive measures to ensure that reputational risks are minimized and their ill effects combated.

This article identified that the major reputational risks related to social media for organizations arise from data leakage, identity theft, profiling risks, inappropriate choice of public relations strategy, inability to control external environmental factors, inappropriate information management and security policy, and failure to employ efficient and effective security staff. In order to overcome such issues, organizations need to appoint social media managers and hire employees skilled in social media management. Further, they should take a collaborative and creative approach and design a social media protocol to mitigate such risks.

To conclude, organizations need to be proactive and keep a vigilant eye on environmental factors in order to secure themselves and benefit from online social media.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


A. Kaplan and M. Haenlein, "Users of the world, unite! The challenges and opportunities of Social Media," Business Horizons, vol. 53, iss. 1, 2010, pp. 59-68.

A. Woodruff, "Necessary, unpleasant, and disempowering: reputation management in the internet age," in Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, 2014, pp. 149-158.

A. Zyl, "The impact of Social Networking 2.0 on organisations," Electronic Library, vol. 27, iss. 6, 2009, pp. 906-918.

C. Everett, "Social media: opportunity or risk?" Computer Fraud & Security, vol. 2010, iss. 6, 2010, pp. 8-10.

C. Hutchings, "Commercial Use of Facebook and Twitter: Risks and Rewards," Computer Fraud & Security, vol. 2012, iss. 6, 2012, pp. 19-20.

G. Weir, F. Toolan and D. Smeed, "The threats of social networking: Old wine in new bottles?" Information Security Technical Report, vol. 16, 2011, pp. 38-43.

H. Davison, C. Maraist and M. Bing, "Friend or Foe? The Promise and Pitfalls of Using Social Networking Sites for HR Decisions," Journal of Business Psychology, vol. 26, 2011, pp. 153-159.

I. Ahmed, Fascinating #SocialMedia Stats 2015: Facebook, Twitter, Pinterest, Google+, 2015.

J. Fogel and E. Nehmad, "Internet social network communities: Risk taking, trust, and privacy concerns," Computers in Human Behavior, vol. 25, 2009, pp. 153-160.

J. Kietzmann, K. Hermkens, I. McCarthy and B. Silvestre, "Social media? Get serious! Understanding the functional building blocks of social media," Business Horizons, vol. 54, iss. 3, 2011, pp. 241-251.

K. Bottles and T. Sherlock, "Who should manage your social media strategy," Physician Executive, vol. 37, iss. 2, 2011, pp. 68-72.

K. Armstrong, "Managing your Online Reputation: Issues of Ethics, Trust and Privacy in a Wired, 'No Place to Hide' World," World Academy of Science, Engineering and Technology, vol. 6, 2012, pp. 716-721.

M. Chi, Security Policy and Social Media Use, The SANS Institute, 2011.

M. Langheinrich and G. Karjoth, "Social networking and the risk to companies and institutions," Information Security Technical Report, vol. 15, 2010, pp. 51-56.

M. McDonnell and B. King, "Keeping up Appearances: Reputational Threat and Impression Management after Social Movement Boycotts," Administrative Science Quarterly, vol. 58, iss. 3, 2013, pp. 387-419.

M. Randazzo, M. Keeney, E. Kowalski, D. Cappelli and A. Moore, Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector, No. CMU/SEI-2004-TR-021, Carnegie Mellon University Software Engineering Institute, Pittsburgh, PA, 2005.

N. Michaelidou, N. Siamagka and G. Christodoulides, "Usage, barriers and measurement of social media marketing: An exploratory investigation of small and medium B2B brands," Industrial Marketing Management, vol. 40, iss. 7, 2011, pp. 1153-1159.

P. Aula, "Social media, reputation risk and ambient publicity management," Strategy & Leadership, vol. 38, iss. 6, 2010, pp. 43-49.

P. Berthon, L. Pitt, K. Plangger and D. Shapiro, "Marketing meets Web 2.0, social media, and creative consumers: Implications for international marketing strategy," Business Horizons, vol. 55, iss. 3, 2012, pp. 261-271.

P. Kotler, "Reinventing marketing to manage the environmental imperative," Journal of Marketing, vol. 75, iss. 4, 2011, pp. 132-135.

R. Montalvo, "Social Media Management," International Journal of Management & Information Systems, vol. 15, no. 3, 2011, pp. 91-96.

S. Brammer and S. Pavelin, "Corporate reputation and social performance: The importance of fit," Journal of Management Studies, vol. 43, iss. 3, 2006, pp. 435-455.

S. Picazo-Vela, I. Gutierrez-Martinez and L. Luna-Reyes, "Understanding risks, benefits, and strategic alternatives of social media applications in the public sector," Government Information Quarterly, vol. 29, 2012, pp. 504-511.

T. Thompson, J. Hertzberg and M. Sullivan, Social Media Risks and Rewards, Financial Executive Research Foundation, 2013.

W. Mangold and D. Faulds, "Social media: The new hybrid element of the promotion mix," Business Horizons, vol. 52, iss. 4, 2009, pp. 357-365.

Role of Perception, Collaboration and Shared Responsibility among various Stake-holders in Critical Infrastructure Risk Management

Sandeep Mittal, I.P.S.,*


Indian Journal of Criminology, Volume 42 (1) & (2) January & July 2014


The well-being of a nation depends on its critical infrastructure and on how secure and resilient that infrastructure is in sustaining services to citizens and maintaining normal life and activity. In today's world, critical infrastructure is so widely distributed in time and space that the entire process of establishing, maintaining, securing and making it resilient involves a number of stakeholders: governments at the federal, state and local levels; specialised technical organizations in the public and private sectors; private vendors; security agencies; and, last but not least, the citizens or society at large. Each of them has to play a role in close collaboration with the other stakeholders. Moreover, critical infrastructure is increasingly dependent on cross-sectoral processes governed by technology and humans. All of these closely interact with each other (humans with humans, humans with technologies), and this interactive process is highly complex, complicated and biased by cultural values, judgements and perceptions, which are in turn dynamic in space and time (Ramamurthy, 2012). In this essay, we examine how security and resilience in critical infrastructure can be built through a collaborative approach and by neutralising cultural perceptions.

The Collaborative Approach to Critical Infrastructure and Cultural Perception

Let us look at a case study of a disturbing incident concerning a critical infrastructure facility in the southernmost state of India, Tamil Nadu, viz. the public agitation against the Nuclear Power Project at Koodankulam. The Nuclear Power Corporation of India, under the Department of Atomic Energy, Government of India, was at an advanced stage of commissioning two 1,000 MW nuclear power reactors in a coastal village of Tamil Nadu, at a cost of about GBP 1,600 million.


Recently, one of the reactors started producing electricity at a commercial scale. These reactors have been under planning and construction for more than a decade, but there have been repeated public uproars regarding the safety of these nuclear power plants, more vigorously after the Fukushima nuclear disaster in Japan. Of late, the 'safety concerns' regarding the commissioning of this critical infrastructure project have themselves taken the shape of a 'security threat' to the project: owing to the disturbed perception dynamics of various stakeholders, the local community went berserk, posing a serious threat to the critical infrastructure, including long disruptions to critical operations and necessitating intervention by police authorities to defuse the situation as per the rule of law. The Tamil Nadu Police did an extremely trying job of restoring peace and public order in an outstandingly professional manner while maintaining, at the same time, utmost restraint, patience and respect for the human rights of the agitators.

This is when the 'safety fear' became a 'security threat' to the critical infrastructure itself. The following narrative, based on information gathered from various sources, explains the scenario.

“On September 11, 2011 the protestors began an indefinite fast. Efforts were made by the police and administration to settle the issue peacefully. A group of senior Ministers (15th September, 2011), the Hon’ble Union Minister (20th September, 2011), the Hon’ble Chief Minister of Tamil Nadu (22nd September, 2011) and the Hon’ble Prime Minister of India (7th October, 2011) met with representatives of the protestors. On 22nd September, 2011 a resolution was passed by the State Cabinet to halt work at KKNPP till the fears of the people were allayed. On 13th October, 2011, during the local body election campaign, the protestors laid siege to KKNPP and blocked all roads. The protestors later withdrew on 16th October, 2011 and the local body elections were conducted peacefully. A Central Committee conducted several rounds of discussions with representatives of the protestors and concluded that the plant was safe. A State Committee also examined the safety aspects and concluded that the nuclear plant was safe. On 18th March, 2012 work was fully resumed at KKNPP with police security. The declaration of the prohibitory order under Sec. 144 Cr.P.C. was challenged in 3 public interest litigations, Writ Petitions No. 7520, 7633 and 7634/12, before the Division Bench of the Hon’ble High Court of Judicature at Madras, wherein an order was passed on 26th March, 2012 by the Hon’ble High Court upholding the prohibitory order, reproduced in part as follows,

“……In view of the above, we hold that the impugned order is only a regulation and not a prohibition altogether for avoiding breach of peace. Therefore, we are not inclined to interfere with the impugned prohibitory order, passed by the second respondent. However, it is made clear that the District Administration shall ensure uninterrupted supply of essential commodities like milk, water and electricity etc., and bus facilities and take all steps against the persons indulging in activities like digging of the roads, blocking the roads with boulders, pillars etc., by taking action in accordance with law. It is needless state that any persons aggrieved by the impugned prohibitory order would be at liberty to avail the remedy available under section 141 (5) of Cr.P.C, if so advised…..”

While disposing of a batch of Writ Petitions in W.P. Nos. 24770 and 22771 of 2011, 8262 and 13987 of 2012 and W.P.(MD) Nos. 14054 and 14172 of 2011, 1823 and 2485 of 2012, by order dated 31st August, 2012 the Hon’ble High Court at Madras made the following observation, reproduced in part,

“……By taking note of the overall situation explained in detail, we are of the view that the KKNPP in respect of Units 1 and 2 do not suffer from any infirmities either for want of any clearance from any of the authorities, including the MOEF, AERB, TNPCB, and the Department of Atomic Energy, and there is absolutely no impediment for the NPCIL to proceed. ……..”

Even after the Hon’ble High Court order, the protestors decided to lay siege to the nuclear plant. There was a law and order flare-up on September 10, 2012 when the protestors forcefully and violently tried to proceed towards the plant. The police dispersed the unlawful assembly observing utmost restraint and with the use of minimum force. The police remained highly disciplined, professional and tactful and quickly de-escalated the tense situation. The Hon’ble High Court of Judicature at Madras considered the law and order incidents at Koodankulam in W.P. (MD) No. 12093 of 2012 and W.P. (MD) No. 12091 of 2012 and observed as follows,

“…..Certainly each citizen has got every right to raise his objection to any public issue. But there is a method to raise objection and there is a manner in which that has to be raised, but not as adopted by these agitators by attacking the police, etc. The stand of the petitioners is that they raised their objection peacefully. We are unable to understand whether causing damage to public properties, threatening the people to close down their business establishments and causing damage to vehicles and forcing the general public to yield to their view can be said conducting the agitation in a peaceful manner. Having disobeyed the prohibitory orders and totally ignoring the Division Bench Judgement of this court and taking the law into their own hands and even after the requisition made and notice given to them for the dispersal of the unlawful assembly, if the agitators can continue with this in their own way, causing all irregular and illegal activities, this Court is not able to understand how they are entitled to ask for an enquiry and how this relief can be given, when a citizen himself has taken the law into his own hands?…..”

From the very beginning, the police and district administration realised that this was a very emotive public issue with widespread ramifications. All efforts were made to resolve the issue peacefully through negotiations and restraint. Effective intelligence gathering and tactful handling were key to defeating the evil designs of a few people with vested interests who were actively misguiding innocent villagers, women and children, and who were trying to give a communal and extremist colour to the agitation with intent to destabilise the law and order situation in the entire coastal belt of the state. Further, an interesting insight into law-makers' perception of the radiation hazards related to such critical infrastructure projects is revealed in the following unstarred question (Rajya Sabha Unstarred Question No. 485) asked on the floor of the Upper House of the Parliament of India,

“Will the PRIME MINISTER be pleased to state:

(a) in what manner India look at the reported first radiation linked cancer case due to Fukushima disaster;
(b) whether a few similar cases of cancer are still awaiting confirmation of a link to accident; and
(c) in view of above, whether the Ministry reconsider its decision about nuclear energy, if not, the reasons therefor?”

The answer given was as follows,


“(a) According to Reuters news dated October 20, 2015 the dose received by the deceased worker is 19.8 millisievert (mSv) of which 15.7 mSv was received between October 2012 to December 2013 during post Fukushima clean-up operations. World over occupation workers involved in radiation jobs are governed by the International Commission on Radiological Protection (ICRP) recommendations for dose limits by regulatory bodies. The dose limit for an occupational worker is 20 mSv/year averaged over a period of 5 years and in a year, the limit is 30mSv as per the guidelines of Atomic Energy Regulatory Board (AERB). The dose received by the worker in the present case is within the safe limit stipulated by the respective regulatory body. Although radiation is considered as a possible cause of cancer, according to literature survey, and based on the experience, the cancer cannot be conclusively attributed to radiation at this low dose. The dose received by the worker is well within the safe limit being practised world over.

(b) These cases of exposure are not directly resulting from release of radioactivity to environment from Fukushima disaster, but they are from the planned exposure situation during post clean-up operations at Fukushima. At low doses (within the safe limit), it cannot attribute radiation as the only cause of cancer if detected only in few individuals. The scanning of large number of population anywhere in the world can find cases of cancer like leukemia, lung cancer, thyroid cancer etc., even if they are not exposed to radiation. There is no scientific evidence of confirmed cancer incidences for exposure to less than 100 mSv and the exposure reported from Fukushima is much below this dose.

(c) Indian nuclear power programme believes in protection of the worker, public and their environment from potential radiation hazards, while at the same time making it possible for advancing the nation to enjoy all the benefits resulting from use of nuclear energy. There is no reason to reconsider the decision of going ahead with nuclear energy programme in India. Fukushima accident was caused by an unexpected severe tsunami followed by a massive earthquake. Such major nuclear accident is not anticipated in any of the Indian Nuclear Power Plants due to their location as well as engineering design and operating condition. The Indian Nuclear Power programme follows stringent guidelines on safety at all stages such as siting, design, construction and operation of nuclear power plant and strict regulatory control and compliance. The safety of the workers and public is ensured during normal operation as well as under off-normal conditions. Hence, the Government does not see any detrimental impact on worker and public due to nuclear energy programme.”
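The dose limits quoted in the reply can be expressed as a simple compliance check. The sketch below is illustrative only (the function name and structure are the author's, not an official AERB tool); it encodes the two limits as stated: 20 mSv/year averaged over five years, and a 30 mSv cap in any single year.

```python
def dose_compliant(annual_doses_msv):
    """Check occupational doses against the limits quoted in the reply:
    20 mSv/year averaged over 5 years, and no more than 30 mSv in any
    single year (AERB guidelines as cited in the parliamentary answer)."""
    window = annual_doses_msv[-5:]                         # most recent 5-year block
    avg_ok = sum(window) / len(window) <= 20.0             # 20 mSv/yr averaged over 5 yr
    yearly_ok = all(d <= 30.0 for d in annual_doses_msv)   # 30 mSv cap in any one year
    return avg_ok and yearly_ok

# The worker in the reply received 19.8 mSv in total, within both limits.
print(dose_compliant([19.8]))  # True
```

On these stated limits, the 19.8 mSv received by the deceased worker falls within both thresholds, which is the basis of the reply's conclusion.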

A perusal of the above case study reveals the varying perceptions of the law enforcer himself, the politicians, the courts and civil society on the same issue. These perception-related fault-lines in the risk management of critical infrastructure arise mainly from the differing perceptions of governments, politicians, the public and other actors. The case study amply demonstrates that security and resilience in critical infrastructure require a partnership between federal and state governments, local, tribal and territorial entities, and the public and private owners and operators of critical infrastructure, proving Douglas and Wildavsky right when they observed that we are dealing with ‘known-unknowns’ and ‘unknown-unknowns’, and that no one person can be aware of all the risks and therefore calculate them (Douglas and Wildavsky, 1982). Before collaborating, however, each party needs to understand the risk environment affecting critical infrastructure, which is complex, complicated and uncertain, as the threats and vulnerabilities have evolved over time. The evolving threats to critical infrastructure include climatic conditions, technical failures, accidents, acts of terrorism and, last but not least, cyber threats (US Department of Homeland Security, 2013).

The Web of Complexity

The increasing interdependence across sectors and the reliance on Information and Communication Technology (ICT) have heightened the potential vulnerabilities of critical infrastructure. The interdependency and inter-connectivity of the operating environment necessitate collaboration in both planning and action to shape the security and resilience environment of critical infrastructure. The growing use of cloud computing, mobile devices and wireless connectivity has dramatically changed the operational aspect of critical infrastructure, and the use of commercial off-the-shelf (COTS) products has exposed systems to greater risk. The perceptions of the various stakeholders using COTS products differ in time and space, and this affects the operational environment. Critical infrastructure assets are distributed in space and time in respects such as the location of physical assets versus the location of services, and the ownership of an asset versus its use. This necessitates partnership across sectors, across jurisdictions and across national borders to build security and resilience into the operating environment. However, cultural perceptions in space and time affect the mechanisms of collaboration and, in turn, the resilience mechanism.

The partnership structure of collaboration between the private sector (owners, vendors, associates, partners, etc.) and its government counterparts is the primary mechanism for building security and resilience in critical infrastructure. Concerns and perceptions regarding data privacy and protection have undergone a turn-around after the Snowden episode, and the government sector has become reluctant to trust private partners. Yet the nature, complexity and interdependency of critical infrastructure operations and the risk environment do not allow any single entity to manage risk on its own (US Department of Homeland Security, 2013).

In today’s well-connected world, where critical infrastructure is geographically distributed over large areas, with field units and master stations communicating over distances of 100 km or more, the security of systems integrating information technology and operations technology, such as SCADA, and their communication with the Security Operations Centre (SOC) assumes significance. A number of frameworks have been designed to assess and mitigate the privacy impact of an information system, process or programme; the Fair Information Practice Principles of the US Department of Homeland Security (2008) constitute one such framework. The application of these principles, however, is influenced by the individual perceptions of the various stakeholders at the national, regional, local and owner/operator levels when planning for critical infrastructure security and resilience. The principles are enumerated below (modified after US Department of Homeland Security, 2013):

  1. Identification and management of risk in such a coordinated and comprehensive manner across the Critical Infrastructure Components (CIC) so as to ensure optimum allocation of security and resilience resources.
  2. Understanding and addressing risks from cross-sector dependencies and interdependencies is essential to build security and resilience.
  3. Sharing information across CIC is imperative to comprehensively address critical infrastructure security and resilience in an increasingly interconnected environment.
  4. Partnership approach to address varying perceptions of various stakeholders of CIC.
  5. Regional and state/local partnerships are crucial to handling perspectives that are often misplaced owing to lack of information or to misinformation campaigns.
  6. International collaboration through mutual assistance agreements is needed, though the perceptions of national governments may vary and such mechanisms are generally perceived as not very effective. The recent judgement of a US court directing Microsoft to hand over information held in Dublin proves the point (PCWorld, 2014).
  7. Security and resilience should be built in during the design of assets, systems and networks. This looks simple but is very difficult to implement, as the perceptions of the owners, developers, project managers and security professionals involved in the product development lifecycle vary, and security is generally, if not always, ignored. This is the best example of the ‘perception fault-lines’.

Designing a Critical Infrastructure Risk Management Framework

To overcome complexity, biases and perception fault-lines, it would be advisable to have a Critical Infrastructure Risk Management Framework (CIRMF). The US Department of Homeland Security (2013) proposed such a framework to strengthen collaborative efforts in building security and resilience, and to help critical infrastructure components (CIC) manage risk through informed decisions. “While individual risk entities are responsible in managing risk to their organisations, collaborating partners improve understandings of threats, vulnerabilities and each-other’s perception” (USDHS, 2013).

The various security elements of critical infrastructure, spanning the physical, cyber and human domains, should be explicitly identified and integrated at each step of the process. A model of cyber security as influenced by humans has been proposed, in which it is demonstrated that, by incorporating various measures, user behaviour in information system security can be improved by strengthening the factors that have a positive impact and reducing the factors that have a negative impact on information system security.

“One must keep in mind that risk analysis of critical infrastructure is dependent upon interactions of ‘human with human’ and ‘human and technology’, which are indeed highly complex cognitive processes. Humans by nature are not rational, and the risk analysis done by them is not value-free and depends on judgements having intuitive biases of risk perception. Risk analysis, therefore, should take into account the understanding of public concerns in the context of cultural meanings and value judgements. The culture here is not merely social; rather it includes historical, political, national and organizational dimensions and the individual’s own personal experience. Therefore, some scholars have emphasised that politics (in which they include perception, values, culture, etc.) is an important dimension in risk analysis”, as shown in the following figure (Slovic & Weber, 2002).

Figure 1. Relationship between Risk Assessment, Politics & Management (after Slovic & Weber, 2002)

Risk analysis is like an artefact, an object of human workmanship, encapsulating the practitioner’s values and knowledge and revealing the nature of the culture in which the artefact was produced. Both are valuable. As Adams said, “It’s anxious work. In undertaking it, the modern risk manager should strive to avoid behaving like the drunk who looks for his keys not where he dropped them, but under the lamp-post because that is where it is light” (Adams, 2007).


Critical infrastructure security and resilience is a complex, complicated and uncertain process of trying to reduce and manage risks that are ‘known’, ‘known-unknown’ and ‘unknown-unknown’. Social, political and economic cultures; the inter-disciplinary biases of academicians and practitioners (e.g., scientist versus social scientist, or academician versus practitioner); organisational sub-cultures within and across sectors; and the cognitive decision-making processes of human beings (who are biased by their subconscious intuitions, personal experiences and emotions) all play an important role in the collaborative dynamics of the critical infrastructure security and resilience building matrix. Overcoming these undesirable attributes is easier said than done. However, systematic, continuous, repeated and consistent efforts to build a collaborative approach to minimising these so-called limitations would go a long way towards achieving better and more widely acceptable results. Ultimately, it is society that has to decide its own fate.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


Adams, J. 2007, Complexity & Uncertainty in a Risk Averse Society, Omega Conference, London (online).

Boin, A. & McConnell, A. 2007, “Preparing for critical infrastructure breakdowns: the limits of crisis management and the need for resilience”, Journal of Contingencies and Crisis Management, vol. 15, no. 1, pp. 50-59.

Mittal, S. 2016, “Understanding the Human Dimension of Cyber Security”, The Indian Journal of Criminology & Criminalistics, vol. XXXIV, no. 1, pp. 141-152.

Douglas, M. & Wildavsky, A. 1982, Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers, first edn, University of California Press, Berkeley.

Masuda, J.R. & Garvin, T. 2006, “Place, culture, and the social amplification of risk”, Risk Analysis, vol. 26, no. 2, pp. 437-454.

PCWorld 2014, “Search Warrants Extend to Emails Stored Overseas, US Judge Rules in Microsoft Case”, PCWorld, online edn.

Ramamurthy, V.S. 2012, Perception and Acceptance of Public Risks, Science and Society Lecture Series, Indian National Science Academy, New Delhi (accessed online).

Slovic, P. & Weber, E.U. 2002, “Perception of Risk Posed by Extreme Events”, Risk Management Strategies in an Uncertain World, Palisades, New York, April.

US Department of Homeland Security 2013, NIPP 2013: Partnering for Critical Infrastructure Security and Resilience, USDHS, Washington, DC.

US Department of Homeland Security 2008, Privacy Policy Guidance Memorandum Number 2008-01, USDHS, Washington, DC.

US Federal Emergency Management Agency 2013, Comprehensive Preparedness Guide 201: Threat and Hazard Identification and Risk Assessment Guide, US Department of Homeland Security, Washington, DC.

Rajya Sabha Unstarred Question No. 485, Radiation Linked Cancer Cases, answered on 03.12.2015.

*Sandeep Mittal, I.P.S., Deputy Inspector General of Police, LNJN National Institute of Criminology and Forensic Science, Ministry of Home Affairs, Government of India, Sector-3, Rohini, Delhi -110085, Office No. 011-27521104, Fax No.011-27511571

Understanding the Human Dimension of Cyber Security


Indian Journal of Criminology & Criminalistics (ISSN 0970-4345), Vol. 34, No. 1, Jan-June 2015, pp. 141-152

Sandeep Mittal, I.P.S.,*



It is globally realized that humans are the weakest link in cyber security, to the extent that the dictum ‘users are the enemy’ has been debated for about two decades in an effort to understand the behaviour of users dealing with cyber security issues. Attempts have been made to explain user behaviour through various theories in criminology, so as to understand the motives and opportunities available to the user while he or she interacts with a computer system. In this article, the available literature on the interaction of users with computer systems is analysed, and an integrated model of user behaviour in information system security is proposed by the author. This integrated model could be used to devise a strategy for improving user behaviour by strengthening the factors that have a positive impact, and reducing the factors that have a negative impact, on information system security.


Most system security organizations work on the premise that the human factor is the weakest link in the security of computer systems, yet not much research has hitherto been undertaken to explore the scientific basis of this presumption. The interaction between computers and humans is not a simple mechanism but a complex interplay of social, psychological, technical and environmental factors operating in a continuum of organizational externality and internality.1 This article examines various aspects of the interaction between humans and computers, with particular reference to ‘users’; the taxonomy adopted for understanding who is actually a user is based on the available literature. It is also imperative to explore the following questions: Why do users behave the way they do? Is there a psychological basis for the specific behaviour of users during human-computer interaction, and if so, how does it affect the security of the computer system? Various hypotheses and suggestions offered by different experts are thus reviewed in order to identify ways to improve both user behaviour and the overall security of computer systems. The debate on this issue was initiated by an article entitled ‘Users Are Not the Enemy’,2 in which the authors studied the behaviour and perceptions of users relating to password systems, and challenged the conclusion drawn in a previous work3 (DeAlvare, 1988, quoted in Adams and Sasse, 1999) that many password users do not comply with password security rules because ‘users are inherently careless and therefore insecure’.

Adams and Sasse (1999) concluded that users who possess a large number of passwords cannot memorise all of them, which compromises password security; that users are generally not aware of the concept of a secure password; and that they have insufficient information about security issues. The earlier perceptions of security managers were thus challenged, and users were no longer seen as the ‘enemy’. Since then, a number of studies have been undertaken by researchers adopting one of these two positions, viz., ‘the user is the enemy’ or ‘the user is not the enemy’. In this article, we examine the various hypotheses before taking either position.

Taxonomy of Users’ Behaviours

It has been found that the effectiveness of technology is affected by the behaviour of the human agents, or users, who access, administer and maintain information system resources.4 These users may be physically or virtually situated inside or outside the organisation, bringing into interplay a range of environmental factors that influence their behaviour. Most organizations tend to be more concerned with threats from external users, even though surveys conducted by professional bodies indicate that three-quarters of security breaches in computer systems originate from within the user fraternity.5 It is therefore necessary to foster a systematic understanding of the behaviour of users and how it affects information security. In this context, researchers have developed a taxonomy of the behaviour of information security end-users.6 This taxonomy of security behaviour, comprising six elements (as depicted in Figure 1), rests on two factors, viz., intentionality and technical expertise. The intentionality dimension indicates whether a particular behaviour was intentionally malicious or beneficial, or whether there was no intent at all; the technical expertise dimension takes into consideration the degree of technological knowledge and skill required for the performance of a particular behaviour.

Source: Adapted from Stanton,et al., 2005





The taxonomy of end-user behaviour, as delineated in Figure 1, helps in classifying the raw data on users’ behaviours and also in selecting the paths that could be followed for improving the information security behaviour of a particular user within an organization.
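The two-factor taxonomy of Figure 1 can be sketched as a simple lookup keyed on intentionality and expertise. The six category labels below follow the taxonomy commonly attributed to Stanton et al. (2005); treat the exact labels and the function as illustrative approximations of the cited figure, not a reproduction of it.

```python
# Six end-user security behaviours keyed by (intentionality, expertise),
# after Stanton et al. (2005); labels approximate the cited taxonomy.
TAXONOMY = {
    ("malicious",  "high"): "intentional destruction",
    ("malicious",  "low"):  "detrimental misuse",
    ("neutral",    "high"): "dangerous tinkering",
    ("neutral",    "low"):  "naive mistakes",
    ("benevolent", "high"): "aware assurance",
    ("benevolent", "low"):  "basic hygiene",
}

def classify(intentionality, expertise):
    """Map an observed user behaviour onto a taxonomy cell."""
    return TAXONOMY[(intentionality, expertise)]

print(classify("neutral", "low"))  # naive mistakes
```

Such a lookup is one way an organization could tag raw incident data by taxonomy cell before deciding which improvement path (training, deterrence, incentives) applies.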


Exploring What the Users Do

A fundamental postulate is that users’ behaviour is guided by the risk they perceive to be associated with their interaction with the information system in everyday situations. However, research has revealed that users normally fail to take optimal or reasoned decisions about risks to the security of information systems. The decision-making process of users exhibits the following predictable characteristics, and understanding them would be of great use in positively influencing users’ decision-making7:

  1. Users often do not consider themselves to be at risk. In fact, as users increase the security measures for their computer systems, they start indulging in more risky behaviours.
  2. Although users are not, by and large, imbecile or obtuse in their thinking, they lack both the motivation and the capacity to devote full attention to information processing, especially since they resort to multi-tasking, which prevents them from concentrating fully on a single task at a time.
  3. The concept of safety per se is unlikely to be a persuasive element in determining human behaviour, especially because the argument that safety prevents something bad from happening is a rather abstract one, and consequently, human beings do not perceive adherence to safety norms as a gain or a beneficial exercise.
  4. It has been observed, that adherence to safety and security norms does not always produce instant results. In fact, the results often come weeks or months later, if at all, which prevents human beings from immediately comprehending the positive outcomes of their actions, thereby making them complacent. The same delay in perception of outcomes is also evident in the case of negative actions. Thus, human beings realize the impact of their actions only when the results can be seen instantaneously, as in the case of disasters.
  5. Research on the association between the concepts of risk, losses and gains indicates that ‘people are more likely to avoid risk when alternatives are presented as gains and take risks when alternatives are presented as losses. When evaluating a security decision, the negative consequences are potentially greater, but the probability is generally less and unknown. When there is a potential loss in a poor security decision as compared to the guaranteed loss of making a pro-security decision, the user may be inclined to take the risk’.8 This study, therefore, shows a strong likelihood of users gambling to offset a potential loss rather than accepting a guaranteed loss in toto. This observation is depicted in Figure 2 (West, 2008, adapted from Tversky and Kahneman, 1986).9
Figure 2: Losses carry more value as compared to gains when both are perceived as equal. For non- zero values, if the value of loss (X) = value of gain (Y), then motivation of loss (A)>motivation of gain (B) (West, 2008, adapted from Tversky and Kahneman, 1986).
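The loss-gain asymmetry in Figure 2 is commonly modelled with the prospect-theory value function. The sketch below uses the parameter estimates often attributed to Tversky and Kahneman (alpha = beta = 0.88, loss-aversion lambda = 2.25); these numbers are illustrative and are not drawn from the article itself.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: losses are weighted more
    heavily than objectively equal gains (loss aversion, lambda > 1)."""
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * ((-x) ** beta)     # convex and steeper for losses

# Motivation of a loss (A) exceeds motivation of an equal gain (B):
gain, loss = value(100), value(-100)
print(abs(loss) > gain)  # True
```

This is exactly the relation the caption states: for equal magnitudes, the motivational weight of the loss (A) exceeds that of the gain (B).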

The author was tempted to undertake a detailed literature survey on the influence of human factors on the security of information systems in order to gain an insight into the entire scenario. In view of the limited scope of the present article, however, the author restricts himself to presenting only a summary of the important available literature on users’ behaviour vis-à-vis information system security (Table 1), leaving it to readers to probe the matter further.

Table 1: Summary of Research on Users’ Behaviour and Information System Security

1. Users’ Behaviour
   a) There is a relation between end-user security behaviour and a combination of situational factors. (Stanton et al., 2004)
   b) The factors believed to influence security-related behaviour include users’ perceptions of their own susceptibility and efficacy, and the possible benefits they are likely to derive from security. (Ng, Kankanhalli and Xu, 2009)
   c) It is extremely difficult to audit employee behaviour and the reasons for it, as individuals react differently in each situation depending upon organizational culture. (Vroom and von Solms, 2004)

2. Familiarity with information security aspects
   a) Shared knowledge about information security is important, as it contributes towards bringing about a change in individual behaviour and eventually in an organization’s behaviour. (Vroom and von Solms, 2004)
   b) Three factors have been identified as barriers to information security awareness: general security awareness, users’ computer skills, and organizational budgets. (Shaw et al., 2009)

3. Awareness
   Three levels of security awareness have been identified among users: perception of potential security risks; comprehension, i.e., the know-how to perceive and interpret risks; and projection, i.e., the user’s ability to predict future situational events. (Shaw et al., 2009)

4. Organizational Environment
   In a positive work environment, users understand their role in the complex information security system, which helps them improve their behaviour. An organization with a positive climate may influence the behaviour and commitment of users. (Shaw et al., 2009)

5. Work Conditions
   Unsatisfactory and negative work conditions can contribute negatively to work. Tiredness and fatigue may also lead users to fail to follow policies and procedures, thereby resulting in their disregarding information security. (Kelloway et al., 2010)


Unfolding Criminology Theories to Understand Users’ Behaviour

The theoretical foundation for several research models designed for studying users’ behavior has been provided by criminology theories. These theories have been categorized according to their focal concepts and aims, as enumerated in Table 2.10 As pointed out in the last column of Table 2, a number of researchers have tried to apply these criminology theories in isolation or in combination with each other to the information security system. These theories explain the behaviour of users as perceived by criminologists, most of whom have deep foundations in psychology.

Table 2: Criminology Theories, Concepts and Principles in Information Security (IS) Literature (after Theoharidou et al., 2005)

General Deterrence Theory (GDT) (Blumstein, 1978, 1986). Basic principle: a person commits a crime if the expected benefits outweigh the cost of sanctions. Related IS security research: (Goodhue and Straub, 1991); (Straub and Welke, 1998).

Social Bond Theory (Hirschi, 1969). Basic principle: a person commits a crime if the social bonds of attachment, involvement and belief are weak. Related IS security research: (Lee and Lee, 2002); (Agnew, 1995); (Hollinger, 1986); (Lee et al., 2003).

Social Learning Theory (Sutherland, 1924, quoted in Akers, 2011). Focal concept: motive. Basic principle: a person commits a crime if he or she associates with delinquent peers, who transmit delinquent ideas, reinforce delinquency, and function as delinquent role models. Related IS security research: (Lee and Lee, 2002); (Skinner and Fream, 1997); (Hollinger, 1993).

Theory of Planned Behavior (TPB) (Ajzen and Fishbein, 2000). Basic principle: a person’s intention towards crime is a key factor in predicting his or her behavior; intentions are shaped by attitude, subjective norms and perceived behavioural control. Related IS security research: (Lee and Lee, 2002); (Leach, 2003).

Situational Crime Prevention (SCP). Focal concept: opportunity. Basic principle: a crime occurs when there is both motive and opportunity; crime is reduced when no opportunities exist. Related IS security research: (Willison, 2000).
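The first row of Table 2, General Deterrence Theory, reduces to an expected-utility comparison. The toy sketch below uses invented, illustrative numbers rather than anything from the cited studies; it simply encodes the rule that a rational offender acts only when the expected benefit outweighs the expected sanction.

```python
def offends(expected_benefit, sanction_cost, p_detection):
    """General Deterrence Theory as a toy decision rule: an offender
    acts only if the expected benefit exceeds the expected sanction
    (cost of sanction weighted by the probability of detection)."""
    return expected_benefit > sanction_cost * p_detection

# Raising the certainty of detection deters even when the sanction is unchanged:
print(offends(100, 1000, 0.05))  # True  (expected sanction = 50)
print(offends(100, 1000, 0.50))  # False (expected sanction = 500)
```

The same comparison underlies the later observation that security-related behavioural intentions improve when detection is certain.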


Models of User Behaviour

Researchers have used theories from general criminology, together with the literature on interaction between humans and technology in information security systems, to develop theoretical and research models for understanding users’ behaviour. Figure 3 depicts an integrated model of this behaviour derived and designed by the present author from two research studies.11

Figure 3: An integrated model for user behavior in information system security, developed by integrating the models proposed by Luchiano et al., 2010 and Herath and Rao, 2009.

The findings of these studies can be summarized as follows12:

  1. A constructive organizational environment has a positive impact on the responsible behaviour of users towards information security.
  2. Stressful work conditions would negatively impact the responsible behaviour of users towards information security.
  3. The adoption of responsible behaviour by users in terms of adhering to information security policies and procedures would negatively impact the vulnerabilities of users to information security breaches.
  4. Familiarity with information security policies and procedures among users would:

    a) Positively impact their responsible behaviour towards information security;

    b) Negatively impact their vulnerability to information security breaches; and

    c) Positively impact their awareness of potential information security threats.

  5. Awareness of potential information security threats among users would:

    a) Positively impact their responsible behaviour towards information security; and

    b) Negatively impact their vulnerability to information security breaches.

  6. Some of the key elements that play a vital role in users’ behaviour include gender, work experience, age, and educational qualifications.
  7. The intentions of users to follow security policies are determined by both internal and external motivating factors.
  8. The security behaviour of users is positively affected by both standard prescriptive beliefs as well as peer influences.
  9. The security-related behavioural intentions of users are positively impacted if detection is certain.
  10. The security-related behavioural intentions of users are negatively impacted if the prospective penalty for neglecting security is expected to be severe.
  11. The perceptions of users regarding compliance by others with security behaviour also play an important role in determining their own behaviour towards security.
  12. The vulnerability of users to breaches in information security is inversely related to their compliance with security procedures. This implies that the stronger the users’ intention to adhere to secure behaviour, the lower their vulnerability to security failures.
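The signed relationships in the first five findings can be summarised as a toy linear sketch. The weights below are invented for illustration only; they are not estimates from Luchiano et al. (2010) or Herath and Rao (2009), and the function merely encodes which factors push responsible behaviour up and which push it down.

```python
def responsible_behaviour_score(environment, stress, familiarity, awareness):
    """Toy linear combination of the model's signed effects (inputs in [0, 1]).
    Positive drivers: constructive environment, familiarity, awareness.
    Negative driver: stressful work conditions. Weights are illustrative."""
    return (0.3 * environment      # finding 1: positive impact
            - 0.3 * stress         # finding 2: negative impact
            + 0.2 * familiarity    # finding 4a: positive impact
            + 0.2 * awareness)     # finding 5a: positive impact

# A trained, aware user in a supportive, low-stress environment scores higher:
good = responsible_behaviour_score(0.9, 0.1, 0.8, 0.8)
poor = responsible_behaviour_score(0.2, 0.9, 0.1, 0.1)
print(good > poor)  # True
```

Per finding 3, a higher score of this kind would in turn correspond to lower vulnerability to information security breaches.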

While the element of technology remains constant during human-computer interaction, the human element remains highly dynamic, mainly owing to the complexity of human behavior. Suggestions on the implications of human behavioural science for improving cyber security are as follows13:

  1. The ‘Identifiable Victim Effect’ (the tendency of an individual to offer greater help when an identifiable person is observed in hardship, as compared to a vaguely defined group in the same need) may lead a user to choose a stronger security system when the possible negative outcomes are real and personal rather than abstract.14
  2. The ‘Elaboration Likelihood Model’ describes how human attitudes form and persist. There are two main routes to attitude change, viz., the central route (the logical, conscious and thoughtful route, resulting in a permanent change in attitude) and the peripheral route (whereby people do not pay attention to persuasive arguments but are instead influenced by superficial characteristics, so that the change in their attitude is temporary). Efforts should thus be made to motivate users to take the central route while receiving cyber security training and education. Fear can also be used to compel users to pay attention to security, but this is effective only when the fear level is moderate and a solution to the fear-inducing situation is simultaneously offered; inducing strong fear leads instead to ‘fight or flight’ reactions from users.15
  3. Cognitive Dissonance (a feeling of discomfort due to conflicting thoughts) acts as a powerful motivator, making people react in one of the following three ways:

    a) Changing their behaviour;

    b) Justifying their behaviour through a rejection of any conflicting attitude; or

    c) Adding new attitudes to justify their behaviour.

  4. Cognitive dissonance is hence used to persuade users to change their attitude towards cyber security and then eventually adopt a behaviour that motivates them to choose better security.16
  5. Social Cognitive Theory stipulates that learning among people is based on two key elements—by watching others, or through the effect of their own personality. Thus, by incorporating the demographic elements of age, gender and ethnicity, one could initiate a cyber awareness campaign that would help reduce cyber risk by enabling the users to identify with their recognisable peers and thereby imitate the secure behaviour of the latter.17
  6. Status Quo Bias (the tendency of a person not to change an established behaviour without being offered a compelling incentive to do so) necessitates the introduction of strong incentives for users to change their cyber behaviour. This can be exploited positively by information system designers.18
  7. The Prospect Theory helps us shape user choices about cyber security by framing them as gains rather than losses.19
  8. Another factor to be considered is Optimism Bias, which leads users to underestimate the security risk, thereby making them perceive that they are immune to cyber-attacks. In order to enable users to overcome this attitude, the security system could be designed to incorporate the real experiences of users for effectively conveying the impact of the risk.20
  9. Control Bias, or the belief among users that they have a strong control over or capacity to determine outcomes, hinders people from following security measures. This bias should be kept in mind while designing systems and training programmes for users.21
  10. Confirmation Bias (looking for evidence to confirm an existing position) closes the users’ minds to new ideas. In order to overcome this bias, the system must provide evidence to change their current beliefs (for example, regular security digests may be e-mailed to them).22
  11. While trying to improve the cyber behaviour of users, the Endowment Effect, wherein people place a higher value on the objects they own as compared to the objects they do not own, could be used. Users may thus be persuaded to pay more for security when it allows them to safely keep something that they already have (for example, the privacy of data).23
It is amply clear from the foregoing discussion that human–computer interaction is not a simple process but is instead a complex and dynamic mechanism, characterised by the interplay of a large number of technological, human and environmental factors in space and time. Being human, users do not have the biological capacity to handle these numerous factors simultaneously in space and time, which is why they behave the way they do, unintentionally or accidentally (and sometimes maliciously) compromising information system security. In this way, users themselves become the enemy of information security, and are therefore categorised as the weakest link in the information security chain.


The most important and dynamic aspect of the interaction between humans and computers is the behaviour of the user, which varies in space and time. It is also influenced by psychological, intrinsic and extrinsic factors, which in turn are governed by peer behaviour, normative beliefs, and social pressures, among other things. Therefore, the behaviour of the user is not solely dependent on the user himself; we could say that he might have little control over his own behaviour while interacting with the security of information systems. The integrated model discussed in this article may thus be used to devise a strategy for improving users’ behaviour by strengthening the factors that have a positive impact and reducing or even eliminating the factors that have a negative impact on information system security. However, this is a complex task and should not be considered as simple as, for instance, selling a non-durable consumer item like soap!


1 E.M. Luciano, M.A. Mahmood and A.C.G. Maçada, ‘The Influence of Human Factors on Vulnerability to Information Security Breaches’, Proceedings of the Sixteenth Americas Conference on Information Systems, Lima, Peru, August 2010, p. 12. Accessed on 29 June 2014.

2 A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999, pp. 40-46.

3A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999.

4 C. Vroom and R. Von Solms, ‘Towards Information Security Behavioural Compliance’, Computers & Security, vol. 23, no. 3, 2004, pp. 191-98. Accessed on 2 July 2014.

5 J.M. Stanton et al., ‘Analysis of End User Security Behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp. 124-33.

6 J.M. Stanton et al., ‘Analysis of End User Security Behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp. 124-33.

7 R. West, ‘The psychology of security’, Communications of the ACM, vol. 51, no. 4, 2008, pp. 34-40.

8 R. West, ‘The Psychology of Security’, Communications of the ACM, vol. 51, no. 4, 2008; R. West et al., ‘The Weakest Link: A Psychological Perspective on Why’, Social and Human Elements of Information Security: Emerging Trends, 2009.

9 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278. Accessed on 29 June 2014.

10M. Theoharidou et al., ‘The insider threat to information systems and the effectiveness of ISO17799’, Computers & Security, vol. 24, no. 6, 2005, pp. 472-84.

11 D.L. Goodhue and D.W. Straub, ‘Security Concerns of System Users: A Study of Perceptions of the Adequacy of Security’, Information & Management, vol. 20, no. 1, pp. 13-27; T. Herath and H.R. Rao, ‘Protection Motivation and Deterrence: A Framework for Security Policy Compliance in Organisations’, European Journal of Information Systems, vol. 18, no. 2, 2009, pp. 106-25.

12 Ibid.

13 S.L. Pfleeger and D.D. Caputo, ‘Leveraging Behavioral Science to Mitigate Cyber Security Risk’, Computers & Security, vol. 31, no. 4, 2012, pp. 597-611. Accessed on 1 July 2014.

14 K. Jenni and G. Loewenstein, ‘Explaining the Identifiable Victim Effect’, Journal of Risk and Uncertainty, vol. 14, no. 3, 1997, pp. 235-57. Accessed on 1 July 2014.

15 R.E. Petty and J.T. Cacioppo, ‘The Elaboration Likelihood Model of Persuasion’. Accessed on 1 July 2014.


17 A. Bandura, ‘Human Agency in Social Cognitive Theory’, American Psychologist, vol. 44, no. 9, 1989, p. 1175.

18 W. Samuelson and R. Zeckhauser, ‘Status Quo Bias in Decision Making’. Accessed on 1 July 2014.

19 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278. Accessed on 29 June 2014.

20 D. Dunning, C. Heath and J.M. Suls, ‘Flawed Self-Assessment’. Accessed on 1 July 2014.

21 J. Baron and J.C. Hershey, ‘Outcome Bias in Decision Evaluation’, Journal of Personality and Social Psychology, vol. 54, no. 4, 1988, p. 569. Accessed on 1 July 2014.

22 M. Lewicka, ‘Confirmation Bias’, Personal Control in Action, Springer, 1998, pp. 233-58. Abstract accessed on 1 July 2014.

23 R. Thaler, ‘The Psychology of Choice and the Assumptions of Economics’, Laboratory Experimentation in Economics, p. 99. Accessed on 1 July 2014.

Perspectives in Cyber Security, the future of cyber malware


Published in The Indian Journal of Criminology (ISSN 0974-7249), Vol. 41 (1) & (2), Jan. & July 2013, pp. 210-227

Sandeep Mittal, I.P.S.,*



The term ‘malware’ has become a fashionable word to throw around nowadays. However, it should not be thought of as something necessarily sophisticated. In this paper, we give a brief definition and description of the term ‘malware’ and the related concepts, including the evolutionary and historical timeline. The future of malware is then dealt with from four perspectives, which may depend upon one another at least at some point in space and time. The first is ‘malware design’, as malware authors are using increasingly complex designs that have taken malware to the scale of a ‘war-grade weapon’ in the recent past. The second is the ‘deployment’ of malware, which depends on the intention and motivation of the attacker. The third is the ‘terrain’ of the cyber domain where the malware operates or is deployed. The fourth is the ‘technologies’ used to detect malware; as malware becomes multi-platform and complex, these technologies have to keep pace with its evolution. However, it is made clear at the outset that this paper deals only with the basics of the issues raised, and technical details have been kept to the minimum, being beyond the scope of the present work.

The Malware Understood

‘Malware’ is a unitary term for the different types of software code called ‘virus’, ‘Trojan horse’ and ‘worm’ at different stages of its evolution. It could be as simple in design as a virus, or extremely complex, as some of the worms discovered recently are. It would be useful to understand these terms clearly before we venture into the subject of malware. A ‘virus’ is a self-replicating program whose only purpose is to propagate itself by modifying another program to include itself, through an act of the user of the system in which it exists (modified after Skardhamar, 1996). The Trojan horse (named after the wooden horse the ancient Greek army used to conquer the city of Troy) is a simple program that purports to do one thing but actually does something else entirely, often very destructive. A Trojan’s spreading potential is not very big, as once they are run, they cease to be Trojans; but its simplicity can be extremely deceptive in terms of damage. “A ‘worm’ is a type of non-parasitic code (unlike a virus) that purposely replicates a possibly evolved copy of itself by exploiting security vulnerabilities on systems. The vulnerability that a worm exploits need not be exclusively software faults. It may exploit configuration errors or operator errors. Unlike viruses, worms do not replicate by attaching themselves to a host executable or by modifying the system environment to execute the malicious code” (Symantec, 2014). In the present scenario, malicious researchers are concentrating on worms, and the term ‘worm’ has become synonymous with ‘malware’; the two are sometimes used interchangeably in this paper. A more crisp and modern definition of a worm is “an independently replicating and autonomous infection agent, capable of seeking out new host systems and infecting them via a network” (Nazario, 2004).
As most of the malware encountered in the recent past belongs to the category of worms, let us take a closer look at the basic components of worms. A worm must have at least one of the following five components, the attack component being the minimum set of one (Nazario, 2004):

  1.  Reconnaissance Component hunts down other network nodes to infect. This component is responsible for identifying hosts on the network that are capable of being compromised by the worm’s known methods.
  2.  Attack Component launches an attack against the target. The attacks can be the age-old buffer or heap overflows, string-formatting attacks, Unicode misinterpretations and misconfigurations.
  3.  Communication Component gives the worm the interface to send messages between nodes or to some other central location.
  4.  Command Component provides the interface for the worm node to issue and act on commands.
  5.  Intelligence Component provides the intelligence required to contact the various worm nodes.

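The five-component taxonomy can be made concrete with a small data model. The following Python sketch is purely illustrative (the class and field names are ours, not Nazario’s); it encodes the rule from the text that a functional worm carries, at minimum, the attack component.

```python
from dataclasses import dataclass

@dataclass
class WormProfile:
    """Hypothetical model of Nazario's five worm components as flags."""
    reconnaissance: bool = False  # hunts for reachable, vulnerable hosts
    attack: bool = False          # exploit launched against a target
    communication: bool = False   # message passing between worm nodes
    command: bool = False         # interface to issue and act on commands
    intelligence: bool = False    # knowledge of the other worm nodes

    def is_functional(self) -> bool:
        # "the attack component being the minimum set of one"
        return self.attack

# Example: a classic scanning worm would typically carry only the
# reconnaissance and attack components.
scanner = WormProfile(reconnaissance=True, attack=True)
print(scanner.is_functional())  # True
```

Richer worms from the discussion above (those with command and intelligence components) simply set more of the flags; the validity rule stays the same.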
An assembly of the components of a worm is depicted in the following figure (Nazario, 2004).


Many of the characteristics of a worm can be used to defeat it, for example, its predictable behaviour and characteristic signatures, in contrast to manual attacks, where tactics change now and then. However, worms continue to constitute the majority of malware due to the ease of continuous and exponential propagation, the capacity to penetrate even difficult networks, persistence in infecting systems despite patching and sanitisation, and broad coverage of networks in space and time.

Hence, in view of the foregoing discussion, future malware will continue to be worm-based.

The History and Evolution of Malware

The future of malware cannot be predicted unless we examine the history of malware to understand its evolution over time.

The historical timeline is depicted in the following table (Lavasoft, 2013) in a generalist manner:

HISTORY OF MALWARE (modified after Lavasoft, 2013)

S.No. Year Name of Malware Details of malware
1. 1971 Creeper First ever computer virus; appeared on the ARPANET.
2. 1981 Elk Cloner First known microcomputer virus; attached itself to the Apple DOS 3.3 operating system and spread by floppy disk.
3. 1986 Brain First computer virus for MS-DOS; infected the boot sector of storage media formatted with the FAT file system. Written to demonstrate the insecurity of computers.
4. 1987 Stoned A boot-sector computer virus.
5. 1988 Morris Worm Infected around 6,000 university, military and NASA computers. Morris, a researcher, introduced the worm by accident and was the first person to be convicted for such a crime.
6. 1995 Concept First macro virus; hid itself in a Word document and spread by integrating itself into more files each time the host program was run.
7. 1999 Happy99, Melissa, Kak Advanced malware that spread quickly through Microsoft environments.
8. 2000 ILOVEYOU Computer worm that attacked millions of Windows PCs through email messages. An estimated $15 billion was spent to clean up the mess.
9. 2001 Code Red Worm that attacked computers running the Microsoft IIS server. It chose its targets pseudo-randomly, on the same or different subnets as the infected machines, in accordance with a fixed probability distribution.
10. 2001 Nimda Computer worm and file infector; utilised several propagation techniques and thus became the most widespread worm within 22 minutes.
11. 2003 SQL Slammer Computer worm that caused DoS on Internet hosts.
12. 2004 Cabir First mobile-phone virus; attacked the Symbian OS and spread through Bluetooth.
13. 2007 Storm Botnet A remote-controlled botnet linked by the Storm worm; spread through email and infected 50 million computers.
14. 2009 Koobface Multi-platform worm that attacked users of popular social-networking websites; designed to infect Windows, Mac OS and Linux platforms.
15. 2010 Geinimi First Android malware displaying botnet capability.


An era of weaponization of software code was heralded in 2010 with the discovery of the ‘Stuxnet’ malware, followed by ‘DuQu’ and ‘Flame’, which are distinctively different in stealth, design and complexity, and were deployed for fully targeted attacks. “Stuxnet targeted the Iranian nuclear facility at Natanz. Stuxnet used four zero-day vulnerabilities and employed Siemens’ default passwords to access the Windows OS running the WinCC and PCS 7 programs. It would hunt down frequency-converter drives made by Fararo Paya in Iran and Vacon in Finland. These drives were used to power the centrifuges used in the concentration of the uranium-235 isotope. Stuxnet altered the frequency of the electrical current to the drives, causing them to switch between high and low speeds for which they were not designed. This switching caused the centrifuges to fail at a higher than normal rate” (Farwell & Rohozinski, 2011). In 2011, another worm, ‘DuQu’, which contained components almost identical to Stuxnet, was discovered. However, DuQu was not self-replicating and was devoid of a payload. It seemed to be designed to conduct reconnaissance on an unknown industrial control system (Zetter, 2011). ‘Flame’ was another Stuxnet-type malware, designed primarily to spy on infected computers, and was detected on the computers of the Iranian Oil Ministry (Zetter, 2012).

Thus, it is seen from the foregoing discussion that malware has evolved over a period of time from simplistic experimental code to highly complex and complicated code capable of Internet-wide devastation.

The Future of Malware Design

The ‘Samhain Project’ (Zalewski, 2000), intended to design an intelligent malware, listed seven requirements and guidelines for an intelligent worm:

  1. Portability across hardware architectures and operating systems, to achieve the largest possible dispersal.
  2. Invisibility from detection.
  3. Independence from manual intervention. The worm must not only spread automatically but must also be adaptable to its network.
  4. The worm should be able to learn new techniques. Its ‘database of exploits’ should be self-updating.
  5. The integrity of the worm must be preserved. The worm’s executable instances should avoid analysis by outsiders.
  6. Avoidance of static signatures. By using polymorphism, the malware can evade detection methods that rely on signature-based analysis.
  7. Overall worm-net usability. The network created by the worms should be capable of being focused to achieve a specific task.

The researchers (Zalewski, 2000) discussed various options for implementing and assembling the ‘Samhain’ worm to form the worm system, the details of which are beyond the scope of this essay. However, it would be pertinent to mention the flaws in the Samhain worm architecture which can fail the worm network.

Firstly, the ability to update the database of known attack methods requires a distribution system, which would be either central or hierarchical; an attack at this point may disrupt the growth and capabilities of the worm. Secondly, the mechanism used to prevent repeated worm installation on the same host is a serious flaw. The worm executable, during its initialisation, looks for other instances of itself. An attack on the worm system would involve forging this signal to prevent the installation of the worm executable; the worm is then not installed on the host, and its growth is stopped at this point.

In an earlier part of this paper, we identified five components of a functional worm. However, there are several problems with the design and implementation of current worms (Nazario et al., 2001): the signatures of remote attacks and reconnaissance traffic can be used to identify the source nodes; as the traffic associated with a worm grows exponentially, the lifespan of the worm is reduced, the traffic growth raising the worm’s profile and thus leading to detection; there is no direction of spread, making directed attacks against a specific target a matter of chance; and the use of a central database of affected hosts makes the worm susceptible to exploitation (Nazario et al., 2001). Nazario and his associates further used these components, and the problems associated with their implementation, to propose various adaptations as considerations for future worms.

  1.  Instead of actively scanning targets for exploitation, the worm could simply observe network traffic to discover the hosts, remote operating systems and applications in use, and then launch an attack.
  2.  Instead of a central topology, use ‘guerilla’ and ‘directed tree’ topologies to achieve specificity of target attack.
  3.  Instead of a central communication topology, use a system where each node stores messages and forwards them to the appropriate node one hop away, cutting down the traffic generated.
  4.  Instead of encrypted communication methods, use steganography, e.g., hiding data in media files.
  5.  Attack new targets, e.g., appliances with embedded technologies.
  6.  Instead of static signatures, use polymorphic payloads. Using modular worm behaviour, where a single basic component is skipped in the design, may give the worm added evasion capability.
  7.  Design the worm to support dynamic updates to the system.

Many of these adaptations have been observed in the Stuxnet, DuQu and Flame malware. Many are yet to be seen or discovered by the world.

The Future of Malware Deployment

The deployment of malware by an attacker depends upon the intention and motivation of the attacker, which in turn define the sophistication of the attack and the typical target groups, as summarised in the following figure (Zoller, 2011):

figure b

Zoller further classified attacks based on the attacker deploying them: opportunists, targeting opportunists, professionals and state-funded actors. The script kiddies would continue to use their unsophisticated attacks in the ‘mass-malware market’. The exploits of targeting opportunists and professionals have resulted in the emergence of a ‘commercial vulnerability market’. However, the cause for worry is future malware like Stuxnet, Flame and DuQu, which are considered acts of nation-states. Take a look at the latest malware to join the list, ‘Mask’ or ‘Careto’, discovered recently (Kaspersky, 2014). The Mask is learnt to have targeted, so far, 380 unique victims, e.g., government and diplomatic institutions, the energy, oil and gas sectors, research institutes, private commercial establishments and activists, spread over 31 countries, and is learnt to have been conducting active cyber espionage since 2007. The Mask is a special malware in view of the complexity of the tool set used by the attackers. This includes an extremely sophisticated malware, a rootkit, a bootkit, 32- and 64-bit Windows versions, Mac OS X and Linux versions, and possibly versions for Android and iPhone/iPad (Apple iOS). When active on a victim system, the Mask can intercept network traffic, keystrokes, Skype conversations and PGP keys, analyse Wi-Fi traffic, fetch all information from Nokia devices, capture screens and monitor all file operations. The malware collects a large set of data from infected systems, e.g., encryption keys, VPN configurations, SSH keys, etc. The time, money and expertise required to design and deploy such an extremely sophisticated malware leave no doubt that it is the handiwork of some nation-state.

The complete dependence of a nation’s economy and critical infrastructure on cyberspace presents an opportunity to nation-states to deploy malware to gain information dominance in the cyber domain, transmitting information while denying or restricting it to the enemy state. Further, the critical infrastructure of a country can be crippled through the deployment of stealthy and well-crafted tools exploiting zero-day vulnerabilities in a matter of hours, if not minutes (Mittal, 2014). The concept of war manoeuvring has been compared with cyber manoeuvre (Applegate, 2012), where it is realised that blatantly hostile acts in cyberspace are characterised by rapidity, anonymity and difficulty in attribution, and are dispersed disproportionately in space and time. Even the territory of the enemy, or of one of his allies or adversaries, can be used to deploy such malware attacks.

The Future of Malware Terrain

The author has a strong feeling that the future of malware is not so much about the design and sophistication in the engineering of malware as about how and where the potential victim would be attacked, thus making the terrain of malware deployment a key factor in future attacks. Low-level attacks would continue to exploit small and old vulnerabilities to their advantage. The social networking sites would be the most sought-after terrain, in the foreseeable future, for the deployment of malware (Athanasopoulos, 2008; Luo, 2009; Felt, 2011; Abraham, 2010; Irani, 2011). Recently, a malware was deployed to target the top executives of a major corporation through their spouses. The presumption was that at least a few non-tech-savvy spouses would be using a poorly secured home PC sharing the connection, and this would provide the backdoor needed to compromise the executives’ computers and gain access to the systems of the target companies (Vance, 2011). Platform-agnostic, web-based malware represents a new frontier. As developers re-engineer websites and applications to work on a variety of devices, malware would target the commonalities, like HTML, XML, JPEGs, etc., that run on any device. The pace with which smartphones are becoming e-wallets, tools of m-commerce and repositories of flight boarding passes and rail tickets would soon make the smartphone a favourable terrain for the deployment of malware. But the worst is yet to be discussed. Consider the number of embedded devices available all around us: microwaves, refrigerators, washing machines, Internet cameras, automated heating and cooling systems, cars, routers, environment monitors, animal/cattle tags and so on. Soon, connected devices would be part of our lives, and thus comes the concept of the ‘Internet of Things’, subsequently the ‘Internet of Everything’ and finally the malicious ‘Botnet of Things’.
Having chips embedded in our appliances makes our lives simple, but imagine what would happen when the number of Internet-connected devices reaches 50 billion by the year 2020 (Kumar, 2014). The main problem with these things is that, unlike computers, security patches are not applied to them. Embedded-device security is a matter of grave concern (John & Thompson, 2012; Stantucci, 2011). I have never seen a company or a user applying security patches to printers, modems, routers, ovens, cameras, etc., as it requires extra time and money. Most embedded chips are old versions, manufactured even two to three years before the device itself, and are therefore susceptible to malware attacks even by script kiddies. The ‘Internet of Things’ would be the favourite terrain for the deployment of malware in future (Stammberger, 2009). As a number of such nano and micro devices are likely to be implanted in the human body in future, malware could be deployed even to commit murder, which at present is committed through conventional means. These ‘Implantable Medical Devices (IMDs)’ often work on software-defined radios so that they can operate on multiple frequencies and use various processors (see figure below; Leavit, 2010).

figure c

Mostly, these devices have no direct connectivity with the Internet but may be connected to a bedside monitor, which in turn may be connected to the Internet, thus enabling hackers to deploy malware to exploit the communication channel between the device and the external control units. Adding encryption capabilities to IMDs would add complexity and require more battery life and computing power to handle the algorithms (Leavit, 2010). It would be a great challenge in future to build defences against such vulnerabilities by designing zero-power defence mechanisms for IMDs (Ransford, 2014).

The Future of Malware Detection

Based on the discussion in earlier parts of this paper regarding the components of worms and the future considerations for worms, we now try to understand the methods of detecting worms. The aim of our detection strategies is to detect almost any type of worm with little effort, for which one needs to focus on the common features of worms. The three methods of worm detection are traffic analysis, the use of honey pots and dark-network monitors, and the employment of signature-based detection systems; together they form the core of the strategies for detecting both hackers and worms. It is to be kept in mind that no single method works for all worms; however, a combination of methods would produce near-complete detection. We briefly discuss the three methods of detection in the following part of this essay (modified after Nazario, 2004).

  1.  Traffic Analysis – This is the analysis of a network’s communications and their inherent patterns. One needs to monitor mainly three major features to detect worms, viz., the volume of traffic at a network connection point like a router or firewall, the number and type of scans occurring (as most worms use active scans to identify new targets of attack), and changes in host traffic patterns when a host is compromised. This method is a relatively simple yet powerful tool for worm detection. It uses the general properties seen in most worms, like active reconnaissance and exponential growth. Even worms using a variety of dynamic methods or polymorphic vectors can be detected, in contrast to signature-detection methods. However, this method may have difficulty in detecting slow worms and worms using passive mechanisms for identifying and attacking targets. These weaknesses would nevertheless not prevent the use of traffic analysis in worm detection in the foreseeable future. Furthermore, the data generated by this analysis may also be useful for finding other network anomalies.
  2.  Honey Pots and Dark (Black-Hole) Network Monitors – A honey pot could be understood as a functional system that responds to malicious probes in a manner that elicits the desired response from the attacker. It could be designed using an entire system, a single service, or even a virtual host. Dark-network monitoring watches unused network segments for malicious traffic; these could be local, unused subnets or globally unused networks. Together, these tools can be used in the analysis of worms. However, placing honey pots on a production network, or using a black-hole monitor on a network where routine traffic is routed as a destination, would introduce a large vulnerability and could be counterproductive. The details of honey-pot and black-hole-monitor setup and functionality are beyond the scope of this discussion. It would suffice to say at this point that black-hole monitors are a more effective means of monitoring worm behaviour due to their promiscuous nature, and can capture a wealth of data from a significant portion of the Internet. Honey pots, in contrast, are best used at a time of high worm activity, when a copy of the worm’s executable is needed. A honey pot is then quickly crafted and exposed to the network; upon compromise, a set of worm binaries is obtained for study (Honeynet Project, 2002).
  3.  Signature-Based Detection – A dictionary of known fingerprints is run across a set of input. The dictionary typically contains a list of known bad signatures, like malicious network payloads or the file contents of a worm executable. The three types of signature analysis in worm detection are network payload signatures, log-file analysis and file signatures. The most important weakness of signature-based detection methods is that they are reactionary and rarely detect a new worm; they can be used to detect only known worms, and cannot detect polymorphic and dynamically updatable worms.

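The first and third of these methods lend themselves to simple sketches. The Python fragment below is purely illustrative (the signature bytes and the fan-out threshold are invented for the example, not drawn from any real IDS): traffic analysis is caricatured as flagging sources that contact unusually many distinct hosts in an observation window, and signature-based detection as a dictionary lookup over payload bytes.

```python
from collections import defaultdict

# Invented example values, not real IDS data.
KNOWN_BAD_SIGNATURES = {b"\x90\x90\x90\x90\xeb": "nop-sled fragment"}
SCAN_FANOUT_THRESHOLD = 100  # distinct destinations per source per window

def detect_scanners(flows):
    """Traffic analysis: flag sources contacting unusually many hosts.

    `flows` is an iterable of (source, destination) pairs observed at a
    network connection point during one monitoring window.
    """
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    return {src for src, dsts in fanout.items()
            if len(dsts) >= SCAN_FANOUT_THRESHOLD}

def match_signatures(payload: bytes):
    """Signature-based detection: dictionary of known-bad byte patterns."""
    return [name for sig, name in KNOWN_BAD_SIGNATURES.items()
            if sig in payload]
```

As the text notes, neither check suffices alone: the fan-out heuristic misses slow or passive worms, and the signature dictionary misses anything not already catalogued, which is why a combination of methods is advocated.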
A mix of all three technologies discussed would form a robust system to detect these worms. A detailed view of such system is well documented by NIST (Scarfone & Mell, 2007).

What is the direction of future research in this field? Of late, researchers have shown keen interest in applying the principles of biological immune systems to computer systems, since both have to maintain their stability in an ever-changing environment. Numerous desirable features of the Biological Immune System (BIS), viz., diversity, self-tolerance, immune memory, distributed computation, self-learning, self-organisation, self-adaptation and robustness, have inspired BIS-based Artificial Immune Systems (AIS) for information security (Jin, 2013). This is based on the ‘danger model’ presented by various researchers (Aickelin & Cayzer, 2002; Matzinger, 2002). According to this model, the adaptive immune system is not able to distinguish self from non-self; rather, an immune response is triggered when danger signals are generated by damaged cells. The cells of the adaptive immune system are incapable of attacking their host. As the immune response of the danger model is a reaction to a stimulus considered harmful by the body, and not a reaction to non-self, foreign cells and immune cells are allowed to coexist in the danger model.

The following figure illustrates the main principle of the danger model, and its comparison with an information system is shown in the accompanying table (Jin, 2013).

figure d

“The cells undergoing distress or unnatural death transmit an alarm signal to Antigen Presenting Cells (APCs), thus stimulating the APCs, which in turn stimulate the adaptive immune system’s B- and T-cells into action in accordance with signals 1 and 2. Signal 1 is the binding of an immune cell to an antigenic pattern presented by an APC, and signal 2 is either a help signal to activate a B-cell or a co-stimulation signal given by an APC to activate T-cells” (Jin, 2013). Attempts have been made by various researchers to apply this danger model to data processing, worm response and detection, computer-network intrusion detection, security monitoring and so on. Multidisciplinary research is required to build a robust and self-healing system of malware detection and defence in the foreseeable future.
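The danger-model trigger can be caricatured in a few lines of code. The sketch below is our own illustration, not Jin’s model: the field names, the crash-rate proxy for “damage” and the threshold are all assumptions. The point it makes is the one in the text: an unknown (“non-self”) process alone does not trigger a response; it must co-occur with a danger signal from observed damage, so foreign but harmless code is allowed to coexist with the host.

```python
# Hypothetical danger-model-inspired trigger. A process is described by a
# dict such as {"known": False, "crashes": 3}; all values are invented.

def danger_signal(process) -> float:
    """Signal in [0, 1] proportional to observed damage (crash rate here)."""
    return min(1.0, process.get("crashes", 0) / 10.0)

def immune_response(process, threshold=0.5) -> bool:
    """Respond only when non-self co-occurs with a strong danger signal."""
    foreign = not process.get("known", False)
    return foreign and danger_signal(process) >= threshold

# An unknown but quiet process coexists peacefully; an unknown process
# emitting damage signals triggers the response.
print(immune_response({"known": False, "crashes": 0}))  # False
print(immune_response({"known": False, "crashes": 9}))  # True
```

Contrast this with a pure self/non-self discriminator, which would return True for any unknown process; the danger model’s selectivity is what makes it attractive for reducing false positives in intrusion detection.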


Malware designs have become extremely complex, evolving over time from innocent 'internet joy-rides' to military-grade 'precision cyber-weapons'. While script-kiddies will continue to exploit even old vulnerabilities spread across multiple platforms, nation-states now regard the cyber domain as the fifth domain of war and will continue to deploy dangerous weaponised malware to inflict harm in the physical world. The 'things' of the 'Internet of Things' will act as a 'watering hole' for attackers, who will deploy malware through insecure, simple embedded chips to enter relatively secure computer systems. 'Malware as a Service' (MaaS) will soon become a reality. Despite all efforts, it seems that malware is here to stay and will continue to be used by the hacker, the curious mind and the warrior of the information age.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organisations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


  1. *Abraham, S. and I. Chengalur-Smith. n.d. “An Overview of Social Engineering Malware: Trends, Tactics, and Implications.” Technology in Society 32(3):183–93.
  2. Applegate, S., C. Cossack, R. Ottis, and K. Ziolkowski. n.d. “The Principle of Maneuver in Cyber Operations.” Retrieved March 2015.
  3. Athanasopoulos, E. et al. 2008. “Antisocial Networks: Turning a Social Network into a Botnet”. Information Security. Springer.
  4. *Davis, M. 2010. Hacking exposed malware & rootkits : Malware & rootkits security secrets & solutions. New York: McGraw Hill.
  5. Farwell, J., & Rohozinski, R. 2011. “Stuxnet and the Future of Cyber War”. Survival, 53(1), 23-40. Retrieved April 2, 2014.
  6. Feder, B. 2008. “A Heart Device Is Found Vulnerable to Hacker Attacks.” New York Times, 12.
  7. *Felt, A., Finifter, M., Chin, E., Hanna, S., & Wagner, D. 2011. “A survey of mobile malware in the wild”. Proceedings of the 1st ACM Workshop on Security and Privacy in Smartphones and Mobile Devices. ACM, 3-14.
  8. Honeynet Project. 2002. “Know Your Enemy: Passive Fingerprinting, Identifying Remote Hosts, Without Them Knowing”. Retrieved April 5, 2014.
  9. *Irani, D., Balduzzi, M., Kirda, D., & Pu, C. 2011. “Reverse social engineering attacks in online social networks”. Detection of Intrusions and Malware, and Vulnerability Assessment (pp. 55-74). Springer.
  10. Jin, X. 2013. “ENSREdm: E-government Network Security Risk Evaluation Method Based on Danger Model”. Research Journal of Applied Sciences, Engineering and Technology, 5(21), 4988-4993.
  11. Unveiling “Careto – The Masked APT”. 2014. Retrieved September 3, 2015.
  12. Kumar, A. 2014, March. “Internet of Things (IOT): Seven enterprise risks to consider”. Retrieved April 2, 2015, from
  13. History of Malware. (n.d.). Retrieved April 2, 2014, from
  14. *Leavitt, N. 2010. “Researchers fight to keep implanted medical devices safe from hackers”. Computer, 43(8), 11-14.
  15. Luo, W., Liu, J., & Fan, C. 2009. “An analysis of security in social networks”. Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing (DASC ’09). IEEE, 648-.
  16. Matzinger, P. 2002. “The danger model: A renewed sense of self”. Science, 296(5566), 301-305. Retrieved April 5, 2014.
  17. Mittal, S. 2014. The Threats and Opportunities in Cyber Domain. Essay submitted to Cranfield University.
  18. Nazario, J. 2004. Defense and Detection Strategies against Internet Worms. USA: Artech House.
  19. Nazario, J. 2001. “The Future of Internet Worms”. Retrieved September 3, 2015, from
  20. *Ransford, B., Clark, S., Kune, D., & Burleson, W. 2014. “Design Challenges for Secure Implantable Medical Devices”. Security and Privacy for Implantable Medical Devices, 157-173.
  21. *Santucci, G. 2011. “The Internet of Things: The Way Ahead”. Internet of Things-Global Technological and Societal Trends From Smart Environments and Spaces to Green ICT, 53.
  22. Scarfone, K., & Mell, P. 2007. “Guide to Intrusion Detection and Prevention Systems”. NIST Special Publication 800-94. Retrieved April 5, 2014.
  23. Skardhamar, R. 1996. Virus Detection And Elimination (UK ed.). Academic Press.
  24. *Stammberger, K. 2009. “Current trends in cyber attacks on mobile and embedded systems”. Embedded Computing Design, 7(5), 8-12.
  25. Symantec. 2014. Worms. Retrieved September 3, 2015, from
  26. Vance, J. 2011. “The Future of Malware”. Network World, (October). Retrieved April 5, 2014, from
  27. Viega, J., & Thompson, H. 2012. “The State of Embedded-Device Security (Spoiler Alert: It’s Bad)”. IEEE Security & Privacy, 10(5), 68-70.
  28. Zalewski, M. 2000. “I Don’t Think I Really Love You, or Writing Internet Worms for Fun and Profit”. Retrieved April 1, 2014.
  29. Zetter, K. 2012. “’Flame’ spyware infiltrating Iranian computers”. Wired. Retrieved April 1, 2014, from
  30. Zetter, K. 2011. “Son of Stuxnet in the Wild”. Wired. Retrieved April 1, 2014.
  31. Zoller, T. 2011. “Musings on Information Security – Luxembourg / A blog by Thierry Zoller.: Attacker Classes and Pyramid (Version 3)”. Retrieved April 1, 2014, from

* Indicates that only the abstract of this reference was read on Google Scholar, as these references were not available to the author.

*Shri Sandeep Mittal, I.P.S., joined the I.P.S. in 1995 and has been working as Deputy Inspector General of Police at the LNJN National Institute of Criminology and Forensic Science, Ministry of Home Affairs, Government of India, Delhi, since 2012. He has served in various communally sensitive districts in Tamil Nadu. He specialises in cyber security and was instrumental in neutralising a number of online drug-trafficking syndicates globally. He is a Life Member of the USI, an Associate Member of the IDSA and a Life Member of the Indian Society of Criminology. He is a Chevening Cyber Policy Scholar sponsored by the Foreign & Commonwealth Office, United Kingdom.