Cyber security

Old Wine With a New Label : Rights of Data Subjects under GDPR


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 7, July – August 2017

Sandeep Mittal
Cyber Security & Privacy Researcher
Former Director, LNJN NICFS (MHA)
New Delhi, India


Abstract: Recent reforms in the data privacy protection framework of the European Union have led to the enactment of the General Data Protection Regulation (GDPR). However, it remains debatable whether the GDPR will lead to a significant improvement in the protection of the privacy rights of individuals, which has always been considered a fundamental right. The advent of technology, the movement of data across geographical barriers, and the outsourcing of data-processing jobs to countries outside the EU necessitated the enactment of the GDPR. An analysis is done to demonstrate that, though some provisions of the GDPR remain generically similar to those of the Data Protection Directive, the GDPR has incorporated new provisions: choosing a ‘regulation’ as the instrument of law for better harmonisation, expanding the ‘right to be forgotten’, legitimising the role of consent, providing data protection by design and by default, increasing the accountability of data controllers, and extending the scope of the Directive’s provisions to extra-territorial jurisdiction. It remains to be seen whether the GDPR is an old wine with a new label or something else in the wine bottle.

Keywords: Rights of Data; Data Protection Regulation; Accessing of Personal data; Internet of Things; Control of Users over Their Personal Data; Data Protection Framework; General Data Protection Regulation


With about 46 per cent of the world’s population having access to it, the Internet has emerged as the most popular medium of free expression, a tool for conducting free trade, and a platform for the use of smart devices. This propensity to use the Internet for various applications has resulted in the generation of a large volume of personal data online, including (but not limited to) the name, address, mobile number, date of birth, email address, geographical location, and health records of the user. This data has a high potential for secondary use, which necessitates the protection of the privacy and confidentiality of this personal data, both at rest and in motion across borders.[1] [2] [3] European Union Directive 95/46/EC (the Directive) [4] remained the basic instrument for the protection of data privacy in the European Union (EU) for over 20 years, recognising privacy as a fundamental human right.[5] However, the practical implementation of the Directive across the EU states and the seminal decisions of the Court of Justice of the European Union (CJEU) raised several issues regarding the understanding of, and need for, individual rights to protection on the Internet in the EU.[6] This, in turn, triggered the process of reform in the data privacy protection framework, leading to the enactment of the General Data Protection Regulation (GDPR),[7] which is slated to usher in reforms and changes in the EU data protection framework. The scope of this essay is to discuss whether the GDPR signifies any improvement over the current Directive in terms of the rights of individual data subjects.


The Directive had almost become antiquated in view of the evolution of new technologies such as the Internet of Things (IoT) and the cloud, among others, giving rise to types of risk that were unknown when the Data Protection Directive was enacted. With the advent of advanced technology and the outsourcing of online services across borders, the divergent approaches to privacy prevalent both within and outside Europe gave rise to concern for the protection of data privacy in the EU.[8] [9] [10] [11] [12] However, the more immediate trigger for reform of this policy was a series of seminal decisions of the CJEU, which led to important changes in the understanding of the data protection legal framework. In Google Spain,[13] [14] [15] it was ruled that Google would be classified as a controller, as the searching, indexing, and storage of information implied the processing of personal data as defined by the Directive. Therefore, search engines are obliged to remove links to web pages from their results if so requested by the data subject. This had serious consequences for search engines and their credibility, as also for the role of intermediaries, as the judgement empowered individuals to assert their ‘right to be forgotten’, affecting the free flow of information on the Internet in the process. Another case in which the decision changed the legal situation relating to data protection law was the Schrems judgement,[16] wherein the CJEU ruled that a finding that a third country ensures an adequate level of protection cannot eliminate or reduce the power of a national supervisory authority to assess the adequacy of data protection under the Directive. Further, the court declared the Safe Harbor Agreement [17] with the USA invalid.
This judgement highlighted the various challenges that the existing data protection framework was facing in an environment overwhelmingly shaped by advanced technology in the two decades since the enactment of the Directive.[18] The following section presents a discussion of selected key provisions of the GDPR, which could prove significant in terms of their implications for the protection of the rights of individual data subjects.


The legal instruments used by the EU take the form of Communications, Directives and Regulations. A directive has to be transposed into national law by enacting amendments or new laws applicable within the national territory of each member-state, whereas a regulation can be applied directly as law. Therefore, the problem of harmonisation of the Directive across the EU member-states has been overcome through the choice of a regulation during the enactment of the GDPR.[19] The Commission has promised a “strong, clear and uniform legislative framework at [the] EU level” that will “do away with the patchwork of the legal regime across the 27 member-states and remove the barrier to market entry”.[20] However, coordinating the member countries, their respective data protection authorities, national laws and courts will not be an easy task to achieve by 2018, when the Regulation comes into force.


The 1995 Directive specifies that “personal data shall mean any information relating to an identified or identifiable natural person (‘data subject’)”.[21] While the notion of an identified individual is more or less clear, identifiability is not explained in the Directive. It has been elaborated in the Article 29 Working Party opinion,[22] and Article 4(1) of the GDPR has adopted the same approach. However, Recital 23 has introduced a proportionality test (positing that identifiability should be assessed by reference to the “means reasonably likely to be used”, taking account of “all objective factors such as technology, effort and cost”) in order to assess in each case whether the nature of the data may help identify the individual. If the proportionality test is not passed, such data will not be considered personal data, and the GDPR does not apply to anonymous data.[23] The Regulation has also introduced a new class of data, that is, “pseudonymous data”, which alludes to “the processing of personal data in such a way that the data can no longer be attributed to a specific data subject without the use of additional information, as long as such additional information is kept separately and is subject to technical and organisational measures for ensuring its non-attribution to an identified or identifiable person”.[24] However, the questions that arise are: What is the relationship between pseudonymous data and personal data? Is pseudonymous data a sub-category of personal data, and does it fall under the scope of the GDPR? According to Recital 23, “data which has undergone pseudonymisation, which could be attributed to [a] natural person by use of additional information, should be considered as information on an identifiable natural person”.[25] If this is so, then the proportionality test would have to be applied to the information pertaining to an identifiable person, and only then should it be considered personal data for the purpose of data protection legislation. The GDPR also does not apply to information concerning a deceased person.[26] As regards sensitive data, the Regulation has adopted the same approach as the Directive. It propounds that sensitive data are data which reveal “racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data for the purpose of uniquely identifying a person, or data concerning health or sex life or sexual orientation”.[27] Thus, genetic data, biometric data, and sexual-orientation data are new categories included under sensitive data. The processing of data relating to criminal convictions and offences, or related security measures, is allowed only under the control of an official authority or where adequate safeguards have been provided under the law.[28] Thus, Articles 4 and 9 of the GDPR, while remaining similar to the Directive at the generic level, provide some improvement in terms of privacy protection.
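The distinction between pseudonymous and anonymous data can be made concrete with a short sketch. The following Python fragment is purely illustrative (the GDPR does not mandate any particular technique): it pseudonymises an identifier with a keyed hash, where the secret key plays the role of the “additional information” that must be kept separately.

```python
import hashlib
import hmac
import secrets

def make_key() -> bytes:
    """Generate the separately-stored secret key."""
    return secrets.token_bytes(32)

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym.

    Without the key, the pseudonym cannot be attributed back to the
    data subject; with the key, the same identifier always maps to the
    same pseudonym, so records remain linkable for processing.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = make_key()  # kept separately, under technical and organisational controls
p1 = pseudonymise("alice@example.com", key)
p2 = pseudonymise("alice@example.com", key)
assert p1 == p2   # deterministic: records stay linkable to each other
```

Because re-identification remains possible for whoever holds the key, such data stays within the Regulation’s scope, unlike truly anonymous data.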


The “right to be forgotten” is currently one of the most hotly debated issues because of the Google Spain judgement, and it has been incorporated in Article 17 of the GDPR. A data subject can now have his personal data erased, and put an end to its further processing, where the data in question is no longer necessary for the purpose for which it was collected.[29] However, this right is not absolute.[30] The right to be forgotten includes an obligation on the part of a data controller who has made the personal data public to inform other controllers processing such personal data to erase any links to, or copies or replications of, that personal data. In doing so, the data controller concerned has to take reasonable steps in accordance with the technology and resources available to him, including technical measures.[31] However, Article 17 may lead to certain problems, some of which are delineated below:

i) The controller may not even know or be able to contact all the third parties.

ii) The third party may have different legal grounds for refusing the erasure request of the original controller.

iii) The issue of who the third party controller would be in the case of ‘Internet-bounces’ is ambiguous, as the modern Internet has blurred the distinction between the controller and the data subject, leading to a grey area in the data protection law.

However, it is claimed that the right to be forgotten would become effective only when the data is removed by every controller; ironically, modern technological developments do not allow data subjects to know the identity of the controller(s) processing their data.[32] Therefore, theoretically it may be claimed as a ‘right to be forgotten’, but in practical implementation it may become ‘a right forgotten’.
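The notification obligation of Article 17(2), and its practical limits, can be sketched in code. This is a hypothetical illustration (the class and method names are invented): a controller erases its own copy and notifies only the downstream controllers it actually knows about, so any unknown third party (problem (i) above) is necessarily missed.

```python
class Controller:
    """A toy data controller holding records keyed by data-subject ID."""

    def __init__(self, name: str):
        self.name = name
        self.records: dict[str, dict] = {}
        self.downstream: list["Controller"] = []  # known recipients only

    def erase(self, subject_id: str) -> list[str]:
        """Erase locally, then inform every *known* downstream controller.

        Returns the names of controllers that were notified; controllers
        the origin does not know about cannot, by definition, be reached.
        """
        self.records.pop(subject_id, None)
        notified = []
        for c in self.downstream:
            c.records.pop(subject_id, None)
            notified.append(c.name)
        return notified
```

The gap between “every controller” and “every known controller” is exactly where the claimed right weakens in practice.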


A host of other rights are included in the GDPR, including the right to transparent information,[33] the right of access to personal data,[34] the right to data portability,[35] and the right to object.[36] A data subject cannot be subjected to a decision based solely on automated processing, including profiling, which has legal or other considerable effects on him. However, this right is limited if the processing is necessary for a contractual obligation between the data subject and the data controller, is authorised by law applicable in the EU or in a member-state to which the data controller is subject, or is based on the data subject’s explicit consent.[37] The right to data portability is a considerable and significant protection for users, who now have the right to receive their personal data in a structured, commonly used and machine-readable format. This data can be transmitted to another controller without hindrance from the controller holding the original personal data.[38] However, it has been argued by a few that data portability may hamper innovation by making data freely available, thereby hurting the self-correcting power of the market.[39]
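As an illustration of data portability, the “structured, commonly used and machine-readable format” is, in practice, often satisfied with formats such as JSON or CSV. A minimal sketch follows; the field names are invented for illustration and are not prescribed by the Regulation.

```python
import json

def export_for_portability(records: list) -> str:
    """Serialise a data subject's records as JSON, suitable for
    transmission to another controller."""
    return json.dumps({"format": "json", "records": records}, indent=2)

export = export_for_portability(
    [{"field": "email", "value": "alice@example.com"}]
)
```

Any receiving controller that can parse JSON can then import the records without depending on the exporting controller’s internal systems.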

The GDPR, however, limits the access right of the data subject in situations where the data controller is not in a position to identify the subject. The right to confirmation and the right of access to data represent a greater risk of harm if the information is disclosed to someone who is not the data subject.[40] If the person requesting the data provides additional information to facilitate his identification and so restore the right of full access, that very information can itself become a risk.[41] For example, if the data subject is asked to prove his identity by providing a copy of his passport, this proves only that the requester is someone with the same name as the data subject, not that he is himself the data subject.[42] Disclosure in such cases would entail an undue risk to the privacy of the individual concerned, so this limitation of the data protection right is a necessary one.


Article 2(h) of the Directive defines the data subject’s consent as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed”.[43] Article 7(a) of the Directive also lists the legal grounds that make data processing legitimate, the unambiguous consent of the data subject being one of them.[44] However, the Directive does not define how the unambiguity and the consent would be validated, as both are affected by cognitive factors attributable to the data subject’s behaviour, which become even more complex in the online environment. In the context of the EU’s data privacy framework, consent is an important instrument in the hands of data subjects for controlling their personal data. The GDPR has placed a responsibility on the data controller to demonstrate that consent was given by the data subject.[45] It stipulates that where consent to process personal data is made conditional on the performance of a contract, it will not be considered to have been ‘given freely’.[46] The GDPR also provides that the processing of the personal data of a child below 16 years of age is unlawful in the absence of the consent of the person having parental responsibility for the child.[47] The data controller also has the responsibility of making a reasonable effort to verify that such consent is lawful.[48]

However, it remains to be seen whether, in practice, the consent of the data subject correlates autonomy [49] with its legitimacy. Several cognitive and psychological limitations, coupled with the demographic, cultural and racial profile of the data subject, affect and influence the complex process of giving or withholding consent. The data subject has the right to withdraw his consent at any time, and the Regulation stipulates that “it shall be as easy to withdraw as [to] give consent”.[50]


It has been widely claimed that a right to explanation of a decision made by an automated or artificial-intelligence algorithmic system will be legally mandated by the GDPR,[51] which is viewed as a mechanism for ensuring better accountability and transparency.

The right to explanation could possibly be derived from:[52]

i) Safeguard against automated decision making;[53]
ii) Notification duties; [54] and
iii) Right to access [55]

Scholars have argued that Article 22 of the GDPR has the potential for dual interpretation, as a ‘prohibition’ or as a ‘right to object’, and that this would need to be clarified before the GDPR is implemented in 2018. Without such clarification prior to enforcement, Article 22 will allow conflicting interpretations of the data subject’s right to control automated decision-making across the EU member-states. This conflict would become inevitable especially because the different interpretations protect very different interests. Interpreted as a prohibition, Article 22 offers the greatest protection to the data subject. On the other hand, if interpreted as a right to object, Article 22 creates a loophole that allows the data controller to carry on automated decision-making unless the data subject raises an objection against it.[56] Thus, the GDPR does not guarantee transparent and accurate automated decision-making, and there is no legally binding right to an explanation in this context.


Article 25 of the GDPR provides new obligations under the title of “Data Protection by Design[57] and by Default”.[58] This obligation requires the data controller to build data protection functionality into his systems. It has been suggested that ‘Data Protection by Design and by Default’ may become a real game-changer if implemented by data controllers, processors, producers, and the supervisory authorities. However, it will not be an easy task for all stakeholders to benefit from this right, as it requires in-depth knowledge, resources, and access to state-of-the-art technology, unless researchers, practitioners and supervisory authorities collaborate with each other for a meaningful implementation of the said right.[59]
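A hedged sketch of what “data protection by default” can mean at the implementation level: optional processing is switched off unless the user opts in, and only the minimum necessary fields are retained. All names here are illustrative, not drawn from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Defaults embody the most protective choice: everything optional is off.
    analytics: bool = False
    marketing_emails: bool = False
    location_tracking: bool = False

def collect(profile: dict, settings: PrivacySettings) -> dict:
    """Retain only the minimum fields; add optional data only on opt-in."""
    minimal = {"user_id": profile["user_id"]}
    if settings.location_tracking and "location" in profile:
        minimal["location"] = profile["location"]
    return minimal
```

With the default settings, a profile containing a location is stored without it; the extra field is kept only after an explicit opt-in.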


The GDPR has also introduced the novel concept of the Data Protection Impact Assessment (DPIA).[60] Where data processing based on the use of new technology is likely to result in a high risk to the rights and freedoms of natural persons, the data controller is obliged to carry out an impact assessment.[61] The Regulation prescribes the minimum elements to be covered by a DPIA: a description of the processing operations and their purposes, an assessment of the necessity and proportionality of the processing, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks, including safeguards.[62] The data controller must consult the supervisory authority before processing the data wherever the DPIA points to a high risk. The supervisory authority has been given the power to impose limitations, including banning the processing of data.[63] The data protection authority can also impose a fine of up to 20 million Euros or, in the case of a business, 4 per cent of the total worldwide annual turnover, whichever is higher.[64]
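The fine ceiling described above is simply the greater of two quantities, which can be made concrete with a small calculation (turnover figures are hypothetical examples):

```python
def max_fine(annual_turnover_eur: float) -> float:
    """Upper limit of the administrative fine: the greater of EUR 20 million
    or 4 per cent of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A firm with EUR 1 billion turnover faces a ceiling of EUR 40 million,
# while a firm with EUR 10 million turnover still faces the flat
# EUR 20 million ceiling.
```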


Article 3 of the GDPR extends the territorial scope of application to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the EU, regardless of whether the processing itself takes place in the EU. Thus, independent obligations have been imposed on the person responsible for processing the data. The GDPR may also apply to a controller or processor not established in the EU under certain conditions having wide ramifications.[65] This would potentially mean that many companies incorporated outside the EU but targeting the EU market would be brought within its ambit.[66]


The protection of the privacy of an individual has always been considered a fundamental right in the EU, and is the hallmark of its data protection framework. The advent of technology, the movement of data to the cloud across geographical barriers, and the outsourcing of data-processing jobs to countries outside the EU rendered the Data Protection Directive of 1995 somewhat redundant in terms of its ability to cope with practical difficulties and judicial pronouncements. The GDPR has, therefore, been enacted to provide better privacy protection to individuals. It has also been demonstrated that, though the basic principles and guidelines of the Data Protection Directive and the GDPR are generically similar, the inclusion of some new provisions in the GDPR provides for better protection of the privacy rights of individual data subjects. The provisions of the new Regulation that signify better protection of the rights of individual data subjects include the choice of a ‘regulation’ as the instrument of law for better harmonisation, the expansion of the scope of the ‘right to be forgotten’ in the case of personal data, improved control of users over their personal data, better legitimisation of the role of consent in data processing, data protection by design and by default, increased accountability of data controllers for their actions, and the extra-territorial scope of application of the Regulation’s provisions. However, some provisions, like Article 22 of the GDPR, need to be clarified before the GDPR is implemented next year in order to avoid conflicting dual interpretations. It remains to be seen how the GDPR is actually implemented and what its impact will be when it comes into force in 2018.

[1] M. M. Group. (2015, 24.11.2015). World Internet Users Statistics and 2015 World Population Stats. Available:
[2] S. R. Salbu, “European Union Data Privacy Directive and International Relations, The,” Vand. J. Transnat’l L., vol. 35, p. 655, 2002.
[3] J. Kang, “Information privacy in cyberspace transactions,” Stanford Law Review, pp. 1193-1294, 1998.
[4] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data Official Journal L 281 , 23/11/1995 P. 0031 – 0050 (Accessed at: on 14 November 2016), 1995.
[5] ibid.
[6] M. Burri and R. Schär, “The Reform of the EU Data Protection Framework,” Journal of Information, vol. 6, 2016.
[7] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) OJ L 119, 4.5.2016, p. 1–88 2016.
[8] D. R. Nijhawan, “Emperor Has No Clothes: A Critique of Applying the European Union Approach to Privacy Regulation in the United States, The,” Vand. L. Rev., vol. 56, p. 939, 2003.
[9] J. R. Reidenberg, “E-commerce and trans-atlantic privacy,” Hous. L. Rev., vol. 38, p. 717, 2001.
[10] D. Zwick and N. Dholakia, “Contrasting European and American approaches to privacy in electronic markets: property right versus civil right,” Electronic Markets, vol. 11, pp. 116-120, 2001.
[11] M. Boban, “DIGITAL SINGLE MARKET AND EU DATA PROTECTION REFORM WITH REGARD TO THE PROCESSING OF PERSONAL DATA AS THE CHALLENGE OF THE MODERN WORLD,” in Economic and Social Development (Book of Proceedings), 16th International Scientific Conference on Economic and Social, 2016, p. 191.
[12] G. Shaffer, “Globalization and social protection: the impact of EU and international rules in the ratcheting up of US data privacy standards,” Yale Journal of International Law, vol. 25, pp. 1-88, 2000.
[13] S. Singleton, “Balancing a Right to be Forgotten with a Right to Freedom of Expression in the Wake of Google Spain v. AEPD,” Ga. J. Int’l & Comp. L., vol. 44, pp. 165-195, 2015.
[14] A. Bunn, “The curious case of the right to be forgotten,” Computer Law & Security Review, vol. 31, pp. 336-350, 2015.
[15] C. Rees and D. Heywood, “The ‘right to be forgotten’ or the ‘principle that has been remembered’?,” ibid., vol. 30, pp. 574-578, 2014.
[16] “Maximillian Schrems v Data Protection Commissioner, C-362/14, Court of Justice of the European Union,” ed: Court of Justice of the European Union 2015.
[17] M. A. Weiss and K. Archick, “US-EU Data Privacy: From Safe Harbor to Privacy Shield,” Congressional Research Service, 2016.
[18] M. Burri and R. Schär, “The Reform of the EU Data Protection Framework,” Journal of Information, vol. 6, 2016.
[19] P. de Hert and V. Papakonstantinou, “The new General Data Protection Regulation: Still a sound system for the protection of individuals?,” Computer Law & Security Review, vol. 32, pp. 179-194, 2016.
[20] V. Reding, “The European data protection framework for the twenty-first century,” International Data Privacy Law, p. ips015, 2012.
[21] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data Official Journal L 281 , 23/11/1995 P. 0031 – 0050 (Accessed at: on 14 November 2016), 1995.
[22] Article 29 Working Party Opinion 4/2007
[23] Regulation (EU) 2016/679, 2016. Recital 23
[24] ibid. Article 43 (b)
[25] ibid. Article 23
[26] ibid. Article 23a
[27] ibid. Article 9
[28] ibid. Article 23
[29] ibid. Article 17 (1)
[30] ibid. Article 17 (3), Recital 65
[31] ibid. Article 17 (2), Recitals 66 & 67
[32] A. Mantelero, “The EU Proposal for a General Data Protection Regulation and the roots of the ‘right to be forgotten’,” Computer Law & Security Review, vol. 29, pp. 229-235, 2013.
[33] Regulation (EU) 2016/679, 2016. Article 12
[34] ibid. Articles 13, 14, 15, 19
[35] ibid. Article 20
[36] ibid. Articles 21, 22
[37] ibid. Article 22 (2)
[38] ibid. Article 21
[39] M. Burri and R. Schär, “The Reform of the EU Data Protection Framework,” Journal of Information, vol. 6, 2016.
[40] Regulation (EU) 2016/679, 2016.
[41] A. Cormack, “Is the Subject Access Right Now Too Great a Threat to Privacy,” Eur. Data Prot. L. Rev., vol. 2, p. 15, 2016.
[42] ibid.
[43] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data Official Journal L 281 , 23/11/1995 P. 0031 – 0050 (Accessed at: on 14 November 2016), 1995.Article 2H
[44] ibid. Article 7 (a)
[45] Regulation (EU) 2016/679, 2016. Article 7 (1)
[46] ibid. Article 4 (4)
[47] ibid. Article 8 (1)
[48] ibid. Article 8 (2)
[49] E. Carolan, “The continuing problems with online consent under the EU’s emerging data protection principles,” Computer Law & Security Review, vol. 32, pp. 462-473, 2016.
[50] Regulation (EU) 2016/679, 2016. Article 7(3)
[51] S. Wachter, B. Mittelstadt, and L. Floridi, “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation,” 2016.
[52] ibid.
[53] Regulation (EU) 2016/679, 2016. Article 20 (3) read with Recital 71
[54] ibid. Article 13, 14 read with Recital 60, 61, 62
[55] ibid. Article 15 read with Recital 63
[56] S. Wachter, B. Mittelstadt, and L. Floridi, “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation,” 2016.
[57] Regulation (EU) 2016/679, 2016. Article 25(1)
[58] ibid. Article 25(2)
[59] E. Hanson, “The History of Digital Desire, vol. 1: An introduction,” South Atlantic Quarterly, vol. 110, pp. 583-599, 2011.
[60] Regulation (EU) 2016/679, 2016. Article 33
[61] ibid. Article 35
[62] ibid. Article 35(7)
[63] ibid. Article 58
[64] ibid. Article 83, 85, 86
[65] ibid. Article 3(2)
[66] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) OJ L 119, 4.5.2016, p. 1–88 2016. Recital 23


Critical Analysis of Divergent Approaches to Protection of Personal Data


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 7, July – August 2017

Sandeep Mittal
Cyber Security & Privacy Researcher
Former Director, LNJN NICFS (MHA)
New Delhi, India


Abstract: The protection of the privacy and confidentiality of personal data generated on the Internet, at rest and in motion within and across borders, is a cause for concern. The European Union and the United States have adopted divergent approaches to this issue, mainly due to their varying socio-cultural backgrounds. With the globalisation of businesses facilitated by the Internet revolution, economic considerations have outweighed rights considerations, and the rights-based approach started buckling under the pressure of the economics-based approach until it was checked by the Schrems case. The negotiations under the TPP and TTIP have a tendency to forgo the privacy rights of individuals in favour of business considerations, in tune with the US tactic of weakening privacy laws through free trade agreements. It is demonstrated that a balanced approach, in which individual control over data is recognised but not absolute and control rights are reinforced by structural safeguards or architectural controls, would be desirable.

Keywords: Personal Data; Internet Governance; Right to Privacy; Data Privacy Protection; Trans-Pacific Partnership (TPP); Transatlantic Trade and Investment Partnership (TTIP); Protection of Privacy;


The number of Internet users in the world has increased by 826 per cent, from 16 million in 1995 to 3,270 million over the last 15 years, accounting for about 46 per cent of the world population.[1] The Internet has emerged as a preferred medium for the expression of free speech, for conducting trade and business, and for running daily errands like controlling multipurpose home devices, thereby generating large volumes of personal data. This data includes names, addresses, mobile numbers, dates of birth, email addresses, geographical locations, and health records like the BMI, and can aid advertising for marketing purposes. Internet users access the Internet through an ‘Internet Service Provider’ (ISP), who provides the infrastructure allowing users to access the Internet and user-generated content. This big data, which has been disclosed voluntarily or incidentally through interactive means (for example, online surveys) or technological means (for example, cookies), has a high potential for secondary uses. The right of privacy in general is “the right of the individual to be left alone; to live quietly, to be free from unwarranted intrusion to protect his name and personality from commercialisation.” [2] [3] The protection of the privacy and confidentiality of this personal data, at rest and in motion within and across borders, is a cause for concern, [4] [5] [6] [7] more particularly in developed economies like the European Union (EU) and the US. The EU and the US have adopted divergent approaches [8] [9] [10] [11] to this issue. The scope of this essay is to critically analyse these comparative but divergent approaches to protecting privacy.


The basic premise of the EU privacy protection approach is embodied in the EU Directive 95/46, [12] recognising privacy as a fundamental human right as demonstrated by the repetition of the term ‘fundamental right and freedom’ 16 times in the Directive. Para 10 of the adoption statement of the Directive states,

“Whereas the object of the national laws on the processing of personal data is to protect fundamental rights and freedoms, notably the right to privacy, which is recognized both in Article 8 of the European Convention for the Protection of Human Rights and Fundamental Freedoms and in the general principles of Community law; whereas, for that reason, the approximation of those laws must not result in any lessening of the protection they afford but must, on the contrary, seek to ensure a high level of protection in the Community;” [13]

Directive 95/46 [14] gives far-reaching powers and complete control over personal data to individuals, thus creating serious legal issues not only for domestic and international businesses but also for sovereign nations dealing with personal data. [15] The basic framework of the Directive is summarised [16] as follows:

a) Companies to inform users regarding their policy in handling the personal data collected from them.
b) Affirmative consent of users to be obtained to collect, use, and disseminate the data.
c) Documentation and registration of the above consent with ‘data authorities’, who would retain the data in their own databases.
d) Accessibility of the database to individuals for amendments and/or rectifications in their data.
e) Identity of the companies collecting the data to be disclosed to the consumers.
f) Explicit bar on trans-border data transfer if the destination country’s laws lack adequate data protection.

The spirit of fundamental rights has been further reiterated and refined in EU Directive 2002/58/EC [17]. This Directive prohibits any form of interception or surveillance, mandates the erasure or anonymisation of processed data and location-related data, and provides an opt-out regime for itemised billing and calling-line identification. Most importantly, it introduces an opt-in regime for cookies [18] to be stored in the browser, with all these conditions being subject to consent, with certain exceptions such as security or criminal acts.
The ‘consent’ of the 2002 Directive was replaced with ‘informed consent’ in Directive 2009/136/EC.[19] Recently, the EU passed Regulation (EU) 2016/679, which will replace the existing privacy law in the EU from 25 May 2018. It is a comprehensive regulation covering businesses outside the EU, even where the data resides outside the EU. It also incorporates provisions requiring a parent’s or custodian’s explicit, informed and verifiable consent for children below 16 years of age (an age Member States may lower to 13), and a penalty of up to 4 per cent of global annual turnover for the preceding financial year in case of violation of privacy. Thus, the EU approach to protecting the privacy of an individual remains essentially ‘regulatory, State-controlled and penal’, and devoid of self-management. [20] [21] [22] [23]
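The penalty ceiling mentioned above lends itself to a simple illustration. The following sketch is illustrative only: the EUR 20 million alternative floor comes from Article 83(5) of the Regulation (which sets the fine at whichever of the two limits is higher), and the turnover figures and function name are hypothetical.

```python
# Illustrative sketch of the GDPR Article 83(5) administrative fine
# ceiling: the greater of EUR 20 million or 4 per cent of the preceding
# financial year's worldwide annual turnover. Figures are hypothetical.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine under Art. 83(5) GDPR."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A multinational with EUR 2 billion turnover faces a ceiling of EUR 80 million,
# while a smaller firm with EUR 100 million turnover still faces the EUR 20 million floor.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
print(gdpr_max_fine(100_000_000))    # 20000000.0
```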


The US approach to the protection of online privacy is ‘self-regulatory’, favouring voluntary, market-based approaches over central regulation and depending mainly on industry norms and codes of conduct, among other things. The laws are piecemeal, sporadic, inadequate or non-existent, demonstrating that the protection of privacy is not a priority for the political and democratic systems in the US. [24] Most of the privacy provisions in US statutes such as the Driver’s Privacy Protection Act of 1994, the Video Privacy Protection Act of 1988, the Electronic Communications Privacy Act of 1986, and the Cable Communications Policy Act of 1984 are akin to knee-jerk reactions to public scandals and outcries.[25] [26] There is neither a comprehensive law nor any comprehensive mechanism to enforce the protection of privacy in the US, leaving everything to ‘industry self-regulation’.[27] However, owing to the interdependence of EU and US businesses and the presence of a well-crafted law in the EU, there is a tendency among US companies to draft some kind of voluntary code for data protection, which acts as a ‘privacy-protection face-mask’ purporting respect for privacy, on the one hand, and as a smoke-screen to keep government regulation at bay, on the other. The US also negotiated the ‘Safe Harbour Privacy Principles’ as an alternative to the adequacy clause in Article 25 of Directive 95/46/EC, wherein US businesses qualifying as ‘safe harbours’ would be deemed to have provided adequate privacy protection. [28] This ‘safe harbour’ concept is a self-certifying framework based on seven principles,[29] as enumerated below:[30]

a) Notice to individuals regarding the likely uses of their data and the mechanism available to them for complaint and grievance redressal.
b) ‘Opt-out’ choice to individuals with regard to the collection of data and its dissemination to third parties.
c) Transfer of data only to third parties having adequate privacy protection.
d) Reasonable security assurance measures to prevent the loss of collected information.
e) Measures to ensure the integrity of data.
f) Accessibility of data to individuals for correction or deletion of incorrect data.
g) Enforcement mechanism for these guidelines.

However, there is little or no regulation by the Government beyond ‘safe harbour’ registration on payment of a nominal fee, and the guidelines’ implementation is self-certified, either through trained employees or through private, industry-funded bodies. For example, TRUSTe investigates the very companies that fund it, thus inviting criticism. [31] The ‘safe harbour’ provision was struck down as invalid [32] by the Court of Justice of the European Union in 2015, as below,

“1. Article 25(6) of Directive 95/46/……. as amended by Regulation (EC) No 1882/2003….., read in the light of Articles 7, 8 and 47 of the Charter of Fundamental Rights of the European Union, must be interpreted as meaning that a decision adopted pursuant to that provision, such as Commission Decision 2000/520/EC of 26 July 2000 pursuant to Directive 95/46 on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce, by which the European Commission finds that a third country ensures an adequate level of protection, does not prevent a supervisory authority of a Member State, within the meaning of Article 28 of that directive as amended, from examining the claim of a person concerning the protection of his rights and freedoms in regard to the processing of personal data relating to him which has been transferred from a Member State to that third country when that person contends that the law and practices in force in the third country do not ensure an adequate level of protection.
2. Decision 2000/520 is invalid.” [33]

Subsequently, in view of the invalidation of the ‘safe harbour’ framework and with Regulation (EU) 2016/679 [34] due to take effect on 25 May 2018, with its provisions for heavy penalties of up to 4 per cent of the worldwide annual turnover of the preceding financial year, the US Government has negotiated an “EU-U.S. Privacy Shield” with the European Commission, which is purportedly more stringent and robust than the ‘safe harbour’ framework.[35] In the future, the US is likely to bring pressure upon the EU to include a privacy protection framework while negotiating the TTIP, but the EU would have to limit itself to the framework prescribed by the CJEU.[36] [37] [38]


While the EU approach recognises the protection of privacy as a fundamental human right, the US approach is to avoid even an iota of interference in the privacy rights of individuals, treating these rights as a commodity and thus leaving the issue to market forces, as stated by scholars.[39] [40]

“The US approach contrasts with the EU approach to data privacy. [41] Whereas in the EU it is the responsibility of the government to protect citizens’ right to privacy, in the U.S., markets and self-regulation, and not law, shape information privacy. In the EU, privacy is seen as a fundamental human right; in the U.S., privacy is seen as a commodity subject to the market and is cast in economic terms. David Aaron, who negotiated the Safe Harbor, noted that in Europe: ‘Privacy protection is an obligation of the state towards its citizens. In America, we believe that privacy is a right that inheres in the individual. We can trade our private information for some benefit. In many instances Europeans cannot. This can have important implications when it comes to e-commerce.’”[42]

Does this statement give the impression that the US has closed its eyes to the stringent data privacy laws in the EU? Superficially, it may appear so, but that is only an illusion. The US is vigorously using its negotiating skills in drafting Free Trade Agreements (FTAs) with trading partners across the globe, incorporating crippling provisions that put fetters on data privacy concerns in the name of facilitating free trade. Disguised in this is the message that if a partner wants free trade with the US, its data privacy laws should not act as impediments to the free flow of data to the US. Two such FTAs of interest are the Trans-Pacific Partnership (TPP), which has been signed but is not yet in force, and the Transatlantic Trade and Investment Partnership (TTIP), being negotiated between the EU and the U.S. in secrecy, wherein the U.S. has been making moves to soften the relatively stringent privacy law, thus giving US businesses a protection shield against prosecution under the ‘post-Schrems EU law’ [43]. The TTIP is still under negotiation, but the intentions of the US with regard to the protection of privacy are obvious in the TPP agreement.

The TPP is the first legally binding international agreement affecting data privacy, with provisions for the enforcement of violations. “The TPP only imposes the most limited positive requirements for privacy protection, but imposes stronger and more precise limits on the extent of privacy protection that TPP parties can legally provide.”[44] Let us examine the TPP’s provisions affecting data privacy, as enumerated in Table 1. [45] [46] [47]

A perusal of the TPP’s provisions, as delineated in Table 1, would send a chill down the spines of proponents of data privacy protection. The entire exercise seems to be an attempt by the US to bypass local data privacy laws in order to protect businesses operating from its soil and to pre-empt litigation against its own business interests. The vigour with which the US is pursuing these FTAs is evident from the passage of the Trade Promotion Authority Bill by the Senate, which was termed “……an important step toward ensuring [that] the United States can negotiate and enforce strong, high-standards trade agreements…..” by the US President. [48]

Table 1: Effects of TPP on Data Privacy Protection [49] [50] [51]

1. Article 14.2.2 — Scope includes any measure affecting trade by electronic means
   a) The scope is much wider than it looks, as it applies to measures affecting trade (not limited to measures governing or applicable to trade) by electronic means (not limited to electronic commerce).
   b) Measures affecting the supply of a service performed or delivered electronically are subject to the obligations contained in the relevant articles of Chapters 9 (Investment), 10 (Cross-Border Trade in Services) and 11 (Financial Services).
2. Article 14.8 — Vague and unenforceable requirements for the protection of personal information
   a) Parties are obliged to provide a legal framework for the protection of the personal information of users of electronic commerce only; the obligation does not apply if electronic commerce is not involved.
   b) There is no mention of protecting personal information as a matter of human rights.
   c) ‘Measure’ is defined to include a ‘practice’ as well as a ‘law’ (Article 1.3), implying that even the legal-framework requirement can be satisfied by the ‘self-regulation’ practice of the US.
   d) Parties are free to adopt different legal approaches but should encourage cross-border compatibility, which is left vague, with no standards or enforcement mechanism included.
   e) The stipulation that a Party shall endeavour to adopt non-discriminatory practices in providing data privacy protection means that protection would not be limited to citizens but would extend equally to non-residents.
3. Article 14.11 — Restrictions on data export limitations
   Each Party may have its own regulatory requirements regarding the transfer of information by electronic means, and must allow the cross-border transfer of data pertaining to the business of a service supplier from one of the TPP Parties. Any exception must be justified by the four requirements of Article 14.11.3, as follows:
   (i) a legitimate public policy objective;
   (ii) not an arbitrary or unjustifiable discrimination;
   (iii) not a disguised restriction on trade; and
   (iv) restrictions imposed on the transfer of data no greater than required to achieve the objective.
   The burden of proving clauses (ii) and (iii) above lies on the Party imposing the restrictions.
4. Article 14.13 — Ban on data localisation
   a) A service supplier of a TPP Party is not required to use computing facilities or data localisation facilities in the territory of the TPP Party where it wants to conduct business.
   b) Any exception is subject to the same four-step test as for data export limitations.
5. Article 28 — Complex dispute settlement procedures
   The dispute settlement procedures are lengthy and complex and could even lead to revocation of the benefits under free trade.
6. Article 9 — Investor-State Dispute Settlement (ISDS)
   An investor from one Party, in the territory of another Party, must for dispute settlement purposes be accorded:
   a) ‘national treatment’;
   b) ‘most-favoured-nation’ status;
   c) fair and equitable treatment;
   d) full protection and security; and
   e) prohibition of direct or indirect expropriation of investment except for a public purpose or with fair compensation.

A study of the TTIP text, [52] which was being negotiated in secrecy, reveals that privacy concerns are being sacrificed to so-called free trade. The salient features of its privacy provisions are as follows: [53]
a) Article 33(2) provides for only ‘adequate safeguards’ and ‘not legislation’ for protection of privacy, and is thus very mild.
b) Article 33(1) provides unrestricted cross-border transfer of personal data for providing financial services.
c) Article 7(1) provides general exceptions exempting measures for protecting the privacy of personal data subject to three qualifications, [54] that the measures:
(i) must be necessary,
(ii) must not constitute ‘arbitrary or unjustifiable discrimination between countries where like conditions prevail’, and
(iii) must not be ‘a disguised restriction on establishment of enterprises, the operation of investments or cross-border supply of services’.
It remains to be seen how the two contrasting approaches to the protection of privacy will converge in the name of free trade. The rights-based approach is being crushed under the growing weight of the economics-based approach adopted by the combined might of the EU-US nexus.


The varying cultural backgrounds of the societies of the EU and US were initially reflected in their contrasting approaches to the protection of privacy. With the globalisation of businesses facilitated by the Internet revolution, economic considerations outweighed rights considerations, and the rights-based approach started buckling under the pressure of the economics-based approach. However, the Schrems case put a brake on this tendency. The EU may be reminded that it cannot negotiate away the privacy rights of individuals. However, the TTIP text discloses the position of the EU on privacy protection, and this stance is not very conducive to the protection of privacy: the EU seems eager to forgo the privacy rights of individuals for business considerations, in tune with the tactics adopted by the US to weaken privacy laws through FTAs. Recent developments like BREXIT, the trade-expansionist policy followed by the US, and the probable future dependence of the EU on the US for its economic survival and stability will decide whether these two comparable but contrasting approaches to the protection of privacy remain so, or evolve into a ‘willingly-accepted-forced’ compromise sacrificing the privacy rights of individuals. What is desirable is a balanced approach in which individual control over data is real but not absolute, control rights are reinforced by structural safeguards or architectural controls, and self-management is possible [55] for protecting privacy in an age of voluntary disclosure and secondary uses of personal data.


[1] M. M. Group. (2015, 24.11.2015). World Internet Users Statistics and 2015 World Population Stats. Available:
[2] A. Lindey, Lindey on Entertainment, Publishing, and the Arts: Agreements and the Law vol. 2: C. Boardman Company, 2005.
[3] S. Sorensen, “Protecting Children’s Right to Privacy in the Digital Age: Parents as Trustees of Children’s Rights,” Child. Legal Rts. J., vol. 36, p. 156, 2016.
[4] S. R. Salbu, “European Union Data Privacy Directive and International Relations, The,” Vand. J. Transnat’l L., vol. 35, p. 655, 2002.
[5] J. Kang, “Information privacy in cyberspace transactions,” Stanford Law Review, pp. 1193- 1294, 1998.
[6] J. P. Graham, “Privacy, computers, and the commercial dissemination of personal information,” Tex. L. Rev., vol. 65, p. 1395, 1986.
[7] D. H. Flaherty, “On the utility of constitutional rights to privacy and data protection,” Case W. Res. L. Rev., vol. 41, p. 831, 1990.
[8] J. M. Assey Jr and D. A. Eleftheriou, “EU-US Privacy Safe Harbor: Smooth Sailing or Troubled Waters, The,” CommLaw Conspectus, vol. 9, p. 145, 2001.
[9] D. R. Nijhawan, “Emperor Has No Clothes: A Critique of Applying the European Union Approach to Privacy Regulation in the United States, The,” Vand. L. Rev., vol. 56, p. 939, 2003.
[10] J. R. Reidenberg, “E-commerce and trans-atlantic privacy,” Hous. L. Rev., vol. 38, p. 717, 2001.
[11] D. Zwick and N. Dholakia, “Contrasting European and American approaches to privacy in electronic markets: property right versus civil right,” Electronic Markets, vol. 11, pp. 116-120, 2001.
[12] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data Official Journal L 281 , 23/11/1995 P. 0031 – 0050 (Accessed at: on 14 November 2016), 1995.
[13] ibid. paras 1, 2, 10 and art 1, para1.
[14] ibid.
[15] J. S. Bauchner, “State sovereignty and the globalizing effects of the Internet: A case study of the privacy debate,” Brook. J. Int’l L., vol. 26, p. 689, 2000.
[16] D. R. Nijhawan, “Emperor Has No Clothes: A Critique of Applying the European Union Approach to Privacy Regulation in the United States, The,” Vand. L. Rev., vol. 56, p. 939, 2003.
[17] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) Official Journal of the European Union, Vol. L 201 (2002), pp. 0037-0047 by European Parliament and the Council of the European Union ( Accessed at: on 14 November 2016), 2002. Recital 1,2,3 and 11.
[18] ibid. Recitals 24, 25, art 5(3)
[19] Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009 amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services, Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector and Regulation (EC) No 2006/2004 on cooperation between national authorities responsible for the enforcement of consumer protection laws (Text with EEA relevance) OJ L 337, 18.12.2009, p. 11–36, 2009. Art 3 (5).
[20] F. Giampaolo, “Overview of the main topics of EU Regulation 2016/679-General Data Protection Regulation.”
[21] F. Mauro and D. Stella, “Brief Overview of the Legal Instruments and Restrictions for Sharing Data While Complying with the EU Data Protection Law,” in International Conference on Web Engineering, 2016, pp. 57-68.
[22] M. Boban, “DIGITAL SINGLE MARKET AND EU DATA PROTECTION REFORM WITH REGARD TO THE PROCESSING OF PERSONAL DATA AS THE CHALLENGE OF THE MODERN WORLD,” in Economic and Social Development (Book of Proceedings), 16th International Scientific Conference on Economic and Social, 2016, p. 191.
[23] H. Kranenborg, “O. Lynskey, The Foundations of EU Data Protection Law,” ed: Oxford University Press, 2016.
[24] F. H. Cate, “Principles of Internet Privacy,” Conn. L. Rev., vol. 32, p. 877, 1999.
[25] G. Shaffer, “Globalization and social protection: the impact of EU and international rules in the ratcheting up of US data privacy standards,” Yale Journal of International Law, vol. 25, pp. 1-88, 2000.
[26] J. R. Reidenberg, “E-commerce and trans-atlantic privacy,” Hous. L. Rev., vol. 38, p. 717, 2001.
[27] S. Listokin, “Industry Self-Regulation of Consumer Data Privacy and Security,” J. Marshall J. Info. Tech. & Privacy L., vol. 32, p. 15, 2015.
[28] J. M. Assey Jr and D. A. Eleftheriou, “EU-US Privacy Safe Harbor: Smooth Sailing or Troubled Waters, The,” CommLaw Conspectus, vol. 9, p. 145, 2001.
[29] Safe Harbor Framework Overview available at, (Accessed 15 November 2016)
[30] Original documents can be retrieved at, (Accessed on 15 November 2016)
[31] G. Shaffer, “Globalization and social protection: the impact of EU and international rules in the ratcheting up of US data privacy standards,” Yale Journal of International Law, vol. 25, pp. 1-88, 2000.
[32] “Maximillian Schrems v Data Protection Commissioner, C-362/14, Court of Justice of the European Union,” ed: Court of Justice of the European Union 2015. Accessed at, (Accessed on 15 November 2016)
[33] ibid.
[34] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) OJ L 119, 4.5.2016, p. 1–88 2016.
[35] EU-U.S. Privacy Shield Framework Principles Issued by the U.S. Department of Commerce. (2016) Accessed at, ( accessed on 15 November 2016).
[36] D. Bender, “Having mishandled Safe Harbor, will the CJEU do better with Privacy Shield? A US perspective,” International Data Privacy Law, p. ipw005, 2016.
[37] L. J. Sotto and C. D. Hydak, “The EU-US Privacy Shield: A How-To Guide,” Law360, pp. 1-4, 2016.
[38] M. A. Weiss and K. Archick, “US-EU Data Privacy: From Safe Harbor to Privacy Shield,” Congressional Research Service, 2016.
[39] S. J. Kobrin, “Safe harbours are hard to find: the trans-Atlantic data privacy dispute, territorial jurisdiction and global governance,” Review of International Studies, vol. 30, pp. 111-131, 2004.
[40] L. B. Movius and N. Krup, “US and EU privacy policy: comparison of regulatory approaches,” International Journal of Communication, vol. 3, p. 19, 2009.
[41] S. J. Kobrin, “Safe harbours are hard to find: the trans-Atlantic data privacy dispute, territorial jurisdiction and global governance,” Review of International Studies, vol. 30, pp. 111-131, 2004.
[42] L. B. Movius and N. Krup, “US and EU privacy policy: comparison of regulatory approaches,” International Journal of Communication, vol. 3, p. 19, 2009.
[43] “Maximillian Schrems v Data Protection Commissioner, C-362/14, Court of Justice of the European Union,” ed: Court of Justice of the European Union 2015.
[44] G. Greenleaf, “The TPP & Other Free Trade Agreements: Faustian Bargains for Privacy?,” Available at SSRN 2732386, 2016. Accessed on 20/11/2016 at,
[45] ibid.
[46] B. K. T. Israel, “The Highlights of the Trans-Pacific Partnership E-commerce Chapter,” 2015. Accessed at on 20/11/2016.
[47] G. Greenleaf, “International Data Privacy Agreements after the GDPR and Schrems,” 2016.
[48] “Statement by the President on Senate Passage of Trade Promotion Authority and Trade Adjustment Assistance,” ed. Washington DC: The White House, 2015.
[49] B. K. T. Israel, “The Highlights of the Trans-Pacific Partnership E-commerce Chapter,” 2015. Accessed at on 20/11/2016.
[50] G. Greenleaf, “The TPP & Other Free Trade Agreements: Faustian Bargains for Privacy?,” Available at SSRN 2732386, 2016.
[51] G. Greenleaf, “International Data Privacy Agreements after the GDPR and Schrems,” 2016.
[52] TTIP Text available at (Accessed on 20/11/2016)
[53] G. Greenleaf, “The TPP & Other Free Trade Agreements: Faustian Bargains for Privacy?,” Available at SSRN 2732386, 2016.
[54] TTIP Text available at (Accessed on 01/12/2016)
[55] H. Kranenborg, “O. Lynskey, The Foundations of EU Data Protection Law,” ed: Oxford University Press, 2016.


Enough Law of Horses and Elephants Debated…, …Let’s Discuss the Cyber Law Seriously


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 5, May-June 2017

Sandeep Mittal, IPS
LNJN National Institute of Criminology & Forensic Science
Ministry of Home Affairs, New Delhi, India
Prof. Priyanka Sharma
Professor & Head
Information Technology & Telecommunication,
Raksha Shakti University, Ahmedabad, India


Abstract: The unique characteristics of cyberspace, like anonymity in space and time, the absence of geographical borders, the capability to spring surprises with rapidity, and the potential to compromise assets in the virtual and real worlds, have attracted criminal minds to commit crimes in cyberspace. The law of crimes in the physical world faces challenges in its application to crimes in cyberspace due to issues of sovereignty, jurisdiction, trans-national investigation and extra-territorial evidence. In this paper an attempt has been made to apply the routine activity theory (RAT) of crime in the physical world to the crime scene of cyberspace. A model for crime in cyberspace has been developed, and it is argued that the criminal law of the physical world is inadequate in its application to crimes in the virtual world. To handle crime in cyberspace, the issues of ‘applicable law’ and ‘conflicting jurisdiction’ need to be addressed by regulating the architecture of the Internet through special laws of cyberspace. A case has been put forward for an International Convention on Cybercrime, with the Council of Europe Convention on Cybercrime as a yardstick.

Keywords: Cybercrime; Cyber Law; Cyberspace; Routine Activity Theory (RAT); Cyber-criminology; EU Convention on Cybercrime; Law of Horse


The Internet has today become an essential part of our lives and has revolutionised the way communication and trade take place, far beyond the ambit of national and international borders. It has, however, also allowed unscrupulous criminals to misuse and exploit it for committing numerous cybercrimes pertaining to pornography, gambling, lottery, financial frauds, identity thefts, drug trafficking, and data theft, among others [1]. Cyberspace is under both perceived and real threat from various state and non-state actors [2] [3] [4]. The incidence of cyber-attacks on information technology assets symbolises a thin line between cybercrime and cyber war, both of which have devastating outcomes in the physical world [5] [6]. The scenario is further complicated by the very nature of cyberspace, manifested in its anonymity in both space and time, in asymmetric results that are disproportionate to the resources deployed, and in the fact that the absence of international borders in cyberspace makes it nearly impossible to attribute the crime to a tangible source [7]. In the context of these characteristics of cyberspace, ‘the transnational dimension of a cybercrime offence arises where an element or substantial effect of the offence, or part of the modus operandi of the offence, is in another territory’, bringing forth the issues of ‘sovereignty, jurisdiction, transnational investigations and extraterritorial evidence’ and thus necessitating international cooperation [8]. The evolution of cybercrimes from simple acts perpetrated by immature youngsters to complex cyber-attack vectors deploying advanced technology in cyberspace has necessitated the development of a distinct branch of law, the Law of Cyberspace.
However, the question of whether ‘the law of cyberspace’ can evolve into an independent field of study or would remain just an extension of the criminal laws of the physical world in the virtual world has become the subject of an interesting debate among legal and social science scholars. The scope of this essay is to critically analyse and compare traditional crimes with cybercrimes to assess if a new set of laws is required for tackling crimes in cyberspace or otherwise.


In his poem ‘The Blind Men and the Elephant’, John Godfrey Saxe describes the dilemma of six blind men trying to describe an elephant, which “in (this) sense represents reality, and each of the worthy blind sages represents a different approach to understanding this reality. In all objectivity, and in line with the poem of John Godfrey Saxe, all the sages (blind men) have correctly described their piece of reality, but fail by arguing that their reality is the only truth.” [9] To quote,

“And so these men of Indostan,
Disputed loud and long,
Each in his own opinion,
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!”[10]

In the context of this article, cyberspace can be compared with the elephant, which is understood and described differently by different stakeholders in the realms of sociology, criminology, law, technology, and commerce, among other disciplines. However, each stakeholder largely ignores the perspective of the others while also understating or overstating the complexity inherent in the physical and virtual processes manifested through the interplay of ‘technology with technology’ and ‘technology with humans’ in virtual space, which, in turn, is not constrained by the barriers of geography, culture, ethnicity and state sovereignty, but still has manifestations in the physical world. A few legal scholars have also explored the concept of the cyber elephant for determining the principles needed to regulate cyberspace [11].

In 1996, Judge Frank Easterbrook delivered a lecture [12] at the University of Chicago where he discussed his ideas on ‘property in cyberspace’. He explained that coalescing two fields, without knowing much about either, in the name of ‘cross-sterilisation of ideas’ puts [lawyers] at the ‘risk of multi-disciplinary dilettantism’. He argued that there are a large number of cases relating to various aspects of dealing with horses, such as the sale of horses, people being kicked by horses, the theft of horses, the racing of horses or the medical care of horses, but this alone cannot be the reason for designing a course on “The Law of Horses”, as that would signify shallow efforts towards understanding the unifying principles of such a law [13]. This led to the current debate on the need for a separate law of cyberspace [14]. However, scholars have strongly challenged the position taken by Judge Easterbrook [15] [16] [17].


Acquiring a deep understanding of the theories of traditional crime in the physical world and their application to crimes in cyberspace would help us in identifying the factors that might govern the regulation of cyberspace. The basic components of acts of crime in the real world and how they intrinsically differ from crimes in cyberspace have been discussed and summarised in Table 1 [18]. Brenner concludes that “cybercrime differs in several fundamental respects from real-world crime and the traditional model is not an effective means of dealing with cybercrimes” [19] and that the “matrices for the real world crime do not apply to cybercrime, as it differs in the methods that are used in its commission and in the nature and extent of the harms it produces” [20]. Interestingly, Brenner had earlier adopted a more conservative stand on the law applying to cybercrime [21].
Theories of criminology have been applied to cyberspace to explore its interaction with the human dimension, as perceived by criminologists (potential dilettantes) [23] [24]. The Routine Activity Theory (RAT) relating to crime in the real world has been studied by scholars to analyse whether it can be transposed to cybercrime [25]. RAT assumes that a minimum of three factors is required for a crime: an ‘opportunity’ in the form of a suitable target (victim), a ‘motivated offender’ with criminal inclination, and the ‘absence of a capable guardian’ (a law enforcement agency, the neighbourhood, etc.). The lack of any one of these factors would prevent the occurrence of the crime [26] [27]. The different controls in traditional crimes and cybercrimes, seen in the context of RAT, are depicted in Figure 1 [28] [29] [30].

The three constituents of RAT, viz. the Victim, Offender and Guardian, are represented by the three vertices of the largest triangle. Each of these three controls is further dependent on sub-factors, which, in turn, are represented as three triangles (for each of these sub-factors, a low value is assigned to the centre and a high value to the vertex) placed, respectively, at each of the vertices of the main triangle. The distinction between traditional crime (red) and cybercrime (blue) due to the complex interplay of multiple factors is obvious. Last but not least, the blue triangle in the centre characterises cybercrime. The basic tenets of RAT thus fit well with the paradigm of cybercrimes.
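The minimal logic of RAT described above can be sketched as a toy predicate. This is an illustrative model of the theory's three-factor requirement, not code drawn from the cited studies; the names and structure are chosen only for clarity:

```python
# Toy sketch of Routine Activity Theory (RAT): a crime requires all three
# factors simultaneously; the absence of any one prevents its occurrence.
from dataclasses import dataclass

@dataclass
class Situation:
    suitable_target: bool      # an 'opportunity': a vulnerable victim
    motivated_offender: bool   # an actor with criminal inclination
    capable_guardian: bool     # police, neighbourhood, security controls

def crime_possible(s: Situation) -> bool:
    # RAT: target and offender must coincide in the absence of a guardian.
    return s.suitable_target and s.motivated_offender and not s.capable_guardian

# Cyberspace often scores high on the first two factors and low on guardianship:
print(crime_possible(Situation(True, True, False)))  # True
print(crime_possible(Situation(True, True, True)))   # False: guardian present
```

The sketch also mirrors the figure's point: shifting any one vertex (say, supplying a capable guardian) flips the outcome.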

Table 1: Traditional Crimes versus Cybercrimes [22]

1. Proximity. Traditional crime: the perpetrator and the victim are physically proximate at the time the crime is committed. Cybercrime: no physical proximity between offender and victim is required.
2. Scale. Traditional crime: the crime is a ‘one-to-one’ event involving the perpetrator(s) and victim(s). Cybercrime: a perpetrator can automate the process of victimisation and commit thousands of cybercrimes simultaneously at high speed.
3. Physical constraints. Traditional crime: commission of the crime is subject to the ‘physical constraints’ governing all activities in the physical world. Cybercrime: real-world constraints do not affect perpetrators, who can act anonymously, at lightning speed, and across national borders.
4. Patterns. Traditional crime: the demographic contours and geographical patterns of the incidence of crime are identifiable. Cybercrime: patterns and contours are difficult to identify owing to the lack of a uniform definition of cybercrime, the absence of laws, rapidly evolving technologies, the anonymity the perpetrator enjoys in space and time, and under-reporting driven by reputational risk.

It has been argued that the routine activity approach has both significant continuities and discontinuities in the configuration of terrestrial and virtual crimes. “While motivated offenders are likely to be almost homogeneous in both environments, the construction of suitable targets is complex, with similarity on value scale but significantly different in respect of inertia, visibility and accessibility.” [31] The concept of the ‘capable guardian’ fits well in both settings, but the degree of fit varies. However, the spatio-temporal environment of routine activities is organised in the real world but organically disorganised in the virtual world [32]. These features of cyberspace make it a domain distinct from the real world,[33] resulting in a noticeably lower level of reporting of cybercrimes as compared to traditional crimes, as depicted in Figure 2 [34].

Figure 1: RAT and Interplay of Different Controls in Traditional Versus Cyber Crimes


Figure 2: A Comparison of Traditional Property Crimes versus Cybercrimes over a Period of Five Years in India
(Source of Statistics: Crime in India Statistics, NCRB)


Thus, the various factors that incite an individual to commit a cybercrime include the lack of deterrents, increased anonymity, and repressed desires to offend in the real world [35]. While the issue of repressed desires can be handled in traditional ways, the other two issues need to be handled through regulation of both the law and technology, or one of the two facilitating regulation of the other. The absence of any perimeter in cyberspace also makes it easily permeable, thereby making it difficult to assign an appropriate capable guardian for overseeing activities in cyberspace [36].

Some economists have averred that people are actively involved in “transforming their relationships into social capital and their experiences into human capital (conventional or criminal)” and that these economic considerations are more compelling than the criminologist’s simple theory that a crime occurs in response to ‘associations’ and ‘events’ [37]. In fact, altering the criminal’s economic choice pattern may also help alter his behaviour [38] [39]. The model of cybercrime portrayed in Figure 1 does not contradict this contention.
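The economists' point about choice patterns can be illustrated with a simple Becker-style expected-utility sketch. The function name and the figures below are hypothetical, chosen only for illustration: an offender 'profits' when the expected gain exceeds the probability-weighted punishment, so raising either detection probability or penalty alters the choice.

```python
# Illustrative economic model of offending (not from the cited paper):
# offending 'pays' when expected gain exceeds expected punishment cost.
def offending_pays(gain: float, p_detection: float, penalty: float) -> bool:
    expected_cost = p_detection * penalty  # probability-weighted punishment
    return gain > expected_cost

# Low detection probability (typical of cybercrime) makes offending pay:
print(offending_pays(gain=1000, p_detection=0.01, penalty=50000))  # True (1000 > 500)
# Raising detection probability tenfold reverses the choice:
print(offending_pays(gain=1000, p_detection=0.10, penalty=50000))  # False (1000 < 5000)
```

This is the sense in which law and technology can "alter the criminal's economic choice pattern": they move the cost side of the inequality.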


After analysing and understanding the various factors that contribute to the commission of a crime in cyberspace, it may be suggested that any law enacted to regulate cyberspace would have to address the following three unique features of cyberspace [40]:

(a) As ‘computer-assisted’ low-cost efforts produce asymmetric results disproportionate to the resources deployed, the law should develop mechanisms for increasing the cost entailed in the crime and decreasing the probability of its success. For example, crimes in which victims had implemented security measures to make their systems foolproof and had exercised due diligence should be investigated thoroughly, whereas an enhanced-sentencing regime should be employed where dual-use technology like encryption or anonymity has been used to commit the crime.

(b) There is a need to add third parties (such as Internet Service Providers or ISPs) to the traditional ‘offender-victim’ scenario of the crime. The law could consider imposing responsibilities on these third parties though it may be difficult to implement in view of the costs and liabilities implied in such actions. For example, in the United States, the Digital Millennium Copyright Act (DMCA) specifies the liability of ‘online-intermediaries’ in case of intellectual property right violations but no liability of ‘online-intermediaries’ is provided for defamation under The Communications Decency Act (CDA).

(c) The invisibility of action in cyberspace and the anonymity of the offender limit the capability of the guardian to regulate. It is possible for the law to address this issue. For example, the law may make the implementation of IPv6 mandatory for more specific attribution of acts in cyberspace, or it may mandate a change in the Internet architecture to include controls that would help in the identification of perpetrators. As most of the Internet architecture is designed, maintained, controlled and governed by private bodies, the law would have to factor in the responsibilities and liabilities of these private stakeholders through either state regulation or self-regulation. Another example would be to make the use of digital signatures (using PKI) mandatory for communication in cyberspace, which would not only prevent the occurrence of many crimes but also assist in the detection of crimes that are still perpetrated despite the imposition of stringent checks.
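As a sketch of why digital signatures aid attribution, the following toy example uses textbook RSA with deliberately tiny, insecure parameters. Real PKI relies on certificates, padding schemes and vetted cryptographic libraries; this shows only the principle that a valid signature binds a message to the sole holder of the private key.

```python
# Toy textbook-RSA signature (insecure parameters, illustration only).
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 2753             # private exponent: e*d = 1 (mod (p-1)*(q-1))

def sign(m: int) -> int:
    # Only the private-key holder can compute this (messages must be < n).
    return pow(m, d, n)

def verify(m: int, sig: int) -> bool:
    # Anyone can check with the public key (e, n).
    return pow(sig, e, n) == m

sig = sign(65)
print(verify(65, sig))  # True: the act is attributable to the key holder
print(verify(66, sig))  # False: any tampering with the message is detected
```

In practice one signs a cryptographic hash of the message rather than the message itself, but the attribution argument is the same: a verifiable signature ties an action in cyberspace to an identifiable actor.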

Therefore, technology-intensive cybercrimes compel us to revisit the role and limitations of criminal law, just as criminal law forces us to reinvent the role and limitations of technology [41]. Indeed, there is a symbiotic relationship between the two.

The adage, “On the Internet, nobody knows that you’re a dog” [42] is as true today as it has been throughout the history of the Internet, but the problem plaguing law enforcement agencies today is that, “on the Internet, nobody knows where the dog is” [43]. This is because the functionality of the Internet and its architecture are technologically indifferent to geographical location [44], leaving no scope for coherence between real space and cyberspace, the latter being characterised by ‘geographical indeterminacy’ [45]. This gives rise to the legal issue of ‘appropriate jurisdiction’ or even ‘conflicting jurisdiction’ for cybercrimes. Criminal law is territorial in its applicability, and as territory itself is indeterminate in cyberspace, the applicable law and the appropriate jurisdiction would need to be determined in accordance with the principles of private international law, as is being done in the resolution of e-commerce disputes. But do the principles of civil liability transpose well into the realm of criminal liability? Although this is procedurally possible, the answer would still be substantively ‘no’, particularly when the definition of cybercrime itself may not be known in many jurisdictions. These legal issues need to be addressed for the detection, investigation, prosecution and conviction of criminals in cyberspace. International cooperation is imperative in order to find where the ‘dog’ is, as doing so involves issues of sovereignty, jurisdiction, transnational investigations and the examination of extraterritorial evidence.


Lawrence Lessig, in his theoretical model of cyberspace regulation [46], argued that behaviour is regulated by four constraints, viz., laws, social norms, markets, and nature [47]. The law regulates behaviour indirectly, by directly influencing the other three constraints. Applying this concept to cyberspace, Lessig postulated that the equivalent of ‘nature’ in cyberspace is ‘code’ [48], the latter being a more pervasive and effective constraint there. Code is also more susceptible to being changed by law than nature is. Therefore, both ‘code’ and ‘law’ have the potential to regulate behaviour in cyberspace [49]. It has been argued that regulation in cyberspace would be more efficient and effective if the law regulated code rather than individual behaviour [50].

The ‘code’ expounded by Lessig was meant to include merely the software. With the advent of advanced technology in cyberspace, however, it is obvious that code would have to include not only the software, but also the concomitant hardware, Internet protocols, standards, biometrics, and privately controlled governance structures. All these components collectively contribute to the character and peculiarities of the Internet. The code could then safely be given a new name, viz., ‘cyberspace architecture’ [51], with every component of this architecture having the potential to be regulated by law.

However, as pointed out earlier, even though various national governments have enacted some type of law pertaining to cybercrime, inconsistencies and disharmony remain in their application in transnational environments, as criminal law is territorial. This necessitates international cooperation, whether informal or formal. Further, evidence gathered through the former is not admissible in courts, while evidence gathered through the latter is delayed by long-drawn procedures, resulting in the escape of the ‘dog’. The solution could thus lie in the creation of an ‘International Framework on Cybercrime’ for addressing various legal issues relating to cyberspace.

The Council of Europe Convention on Cybercrime (the Convention) [52] is the first comprehensive framework on cybercrime, putting forth ‘instruments to improve international cooperation’ [53] that ‘duly take into account the specific requirements of the fight against cybercrime’ [54]. The Convention has the potential of becoming an international cyber law, much like the private international law that has evolved over a period of time, but it would have to be applied in harmony with the substantive criminal law of the territory. The complex interaction between the two underscores the necessity of enacting a separate set of laws to handle cybercrime.


Cyberspace is increasingly becoming a favourite domain for criminals, not only for committing crimes but also for maintaining secret global criminal networks. This is because the organic nature of cyberspace is manifested in anonymity in space and time, immediacy of effects, non-attribution of action, and the absence of international borders. Due to the unique nature of cyberspace, it is difficult to apply the laws of criminal liability for traditional crimes to cybercrimes. An examination of the traditional theories reveals that cybercrime is fundamentally different from crime in the real world, and the traditional models are not effective in dealing with it. However, the dynamics of cybercrime were explained by transposing the factors operating in Routine Activity Theory (RAT) to cyberspace. It was demonstrated that the higher levels of anonymity, confidence and technological skill enjoyed by the offender motivate him to choose and target a victim who has been rendered vulnerable by the prevalent low levels of security, trust and crime-reporting emanating from poorly defined laws, poor technical skills, and a deficit of trust in the law enforcement machinery. The detection, investigation, prosecution, and successful conviction of the perpetrator of a cybercrime require the law to address the specific features of crime in virtual space. The anonymity and invisibility of action in cyberspace and its ‘geographic indeterminacy’ give rise to the legal issues of ‘applicable law’ and ‘conflicting jurisdiction’. The architecture of the Internet needs to be governed by law, which has the potential to regulate the behaviour of criminals in cyberspace. This would also entail international cooperation to address the issues of sovereignty, jurisdiction, transnational investigations, and extraterritorial evidence. It is suggested that the Council of Europe Convention on Cybercrime could be a yardstick for initiating measures in this direction.
However, all this does not preclude the need for a separate set of laws for handling cybercrimes and providing legal remedies against them.


[1] Sandeep Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’ (2012) 33 Indian Journal of Criminology and Criminalistics 86
[2] Marco Gercke, Europe’s legal approaches to cybercrime (Springer 2009)
[3] Marco Gercke, ‘Understanding cybercrime: a guide for developing countries’ (2011) 89 International Telecommunication Union (Draft) 93
[4] David L Speer, ‘Redefining borders: The challenges of cybercrime’ (2000) 34 Crime, law and social change 259
[5] Sandeep Mittal, ‘Perspectives in Cyber Security, the future of cyber malware’ (2013) 41 The Indian Journal of Criminology 18
[6] Sandeep Mittal, ‘The Issues in Cyber- Defense and Cyber Forensics of the SCADA Systems’ (2015) 62 Indian Police Journal 29
[7] Sandeep Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’
[8] Open-ended Intergovernmental Expert Group on Cybercrime, ‘Comprehensive Study on Cybercrime’ (UNODC, 2013)
[9] (Accessed on 13/04/2017)
[10] (Accessed on 13/04/2017)
[11] Martina Gillen, ‘Lawyers and cyberspace: Seeing the elephant’ (2012) 9 ScriptED 130
[12] Frank H Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) U Chi Legal F 207
[13] Ibid at 207, para 3
[14] Joseph H Sommer, ‘Against cyberlaw’ (2000) Berkeley Technology Law Journal 1145
[15] Lawrence Lessig, ‘The law of the horse: What cyberlaw might teach’ (1999) 113 Harvard law review 501
[16] Andrew Murray, ‘Looking back at the law of the horse: Why cyberlaw and the rule of law are important’ (2013) 10 SCRIPTed 310
[18] Susan W Brenner, ‘Toward a criminal law for cyberspace: A new model of law enforcement’ (2004) 30 Rutgers Computer & Tech LJ 1
[19] Ibid at page 104
[20] Susan W Brenner, ‘Cybercrime Metrics: Old Wine, New Bottles?’ (2004) 9 Va JL & Tech 13
[21] Susan W Brenner, ‘Is There Such a Thing as “Virtual Crime”?’ (2001)
[22] Brenner, ‘Toward a criminal law for cyberspace: A new model of law enforcement’
[23] Miltiadis Kandias and others, An insider threat prediction model (Springer 2010)
[24] Sandeep Mittal, ‘Understanding the Human Dimension of Cyber Security’ (2015) 34 Indian Journal of Criminology and Criminalistics 141
[25] Majid Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’ (2005) 2 European Journal of Criminology 407
[26] Ibid
[27] Lawrence E Cohen and Marcus Felson, ‘Social change and crime rate trends: A routine activity approach’ (1979) American sociological review 588
[28] Nir Kshetri, ‘The simple economics of cybercrimes’ (2006) 4 IEEE Security & Privacy 33
[29] Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’
[30] Majid Yar, Cybercrime and society (Sage 2013)
[31] Yar, ‘The Novelty of ‘Cybercrime’ An Assessment in Light of Routine Activity Theory’, at page 424
[32] Ibid
[33] Mittal, ‘A Strategic Road-map for Prevention of Drug Trafficking through Internet’
[34] Statistics Source: Crime in India Statistics, NCRB, Ministry of Home Affairs, Government of India, New Delhi.
[35] Karuppannan Jaishankar, ‘Establishing a theory of cyber crimes’ (2007) 1 International Journal of Cyber Criminology 7
[36] Susan W Brenner, ‘Toward a criminal law for cyberspace: Product liability and other issues’ (2004) 5 Pitt J Tech L & Pol’y i
[37] Bill McCarthy, ‘New economics of sociological criminology’ (2002) 28 Annual Review of Sociology 417
[38] JR Probasco and William L Davis, ‘A human capital perspective on criminal careers’ (1995) 11 Journal of Applied Business Research 58
[39] Kshetri, ‘The simple economics of cybercrimes’
[40] Neal Kumar Katyal, ‘Criminal law in cyberspace’ (2001) 149 University of Pennsylvania Law Review 1003
[41] Ibid
[43] Alexandre López Borrull and Charles Oppenheim, ‘Legal aspects of the Web’ (2004) 38 Annual review of information science and technology 483
[44] Though every computer or smart device has a machine address, which can be easily spoofed, we are talking here specifically about geographical location. Remote access, incognito logins, encrypted communication platforms, anonymous remailers and the availability of ‘cached’ copies of frequently accessed Internet resources further complicate, and can make impossible, the attribution of actions in cyberspace.
[45] Dan L Burk, ‘Jurisdiction in a World without Borders’ (1997) 1 Va JL & Tech 1
[46] Lessig, ‘The law of the horse: What cyberlaw might teach’
[47] In real space nature is represented by architecture.
[48] That is, the software that makes the Internet behave as it does.
[49] Graham Greenleaf, ‘An endnote on regulating cyberspace: architecture vs law?’ (1998)
[50] Lessig, ‘The law of the horse: What cyberlaw might teach’
[51] Greenleaf, ‘An endnote on regulating cyberspace: architecture vs law?’
[52] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017]
[53] Ibid. Articles 23-35
[54] Ibid. Preamble


A Review of International Legal Framework to Combat Cybercrime


International Journal of Advanced Research in Computer Science, ISSN No. 0976-5697, Volume 8, No. 5, May-June 2017

Sandeep Mittal, IPS
LNJN National Institute of Criminology & Forensic Science
Ministry of Home Affairs, New Delhi, India
Prof. Priyanka Sharma
Professor & Head
Information Technology & Telecommunication,
Raksha Shakti University, Ahmedabad, India


Abstract: Cyberspace is under perceived and real threat from various state and non-state actors. This scenario is further complicated by the distinct characteristics of cyberspace, manifested in its anonymity in space and time, geographical indeterminacy and the non-attribution of acts to a tangible source. The transnational dimension of cybercrime brings forth issues of sovereignty, jurisdiction, transnational investigation and extraterritorial evidence, necessitating international cooperation. This requires an international convention on cybercrime, which is missing till date. The Council of Europe Convention on Cybercrime is the lone instrument available. Though it is a regional instrument, non-member states like the US, Australia, Canada, Israel and Japan have also signed and ratified it, and it remains the most important and acceptable international instrument in the global fight to combat cybercrime. In this paper, the authors argue that the Council of Europe Convention on Cybercrime should be the baseline for framing an International Convention on Cybercrime.

Keywords: Cybercrime, International Convention on Cybercrime, Cyber Law, Cyber Criminology, International Cooperation on Cybercrime, Internet Governance, Transnational Crimes.


Information societies have a high dependency on the availability of information technology, which is proportional to the security of cyberspace [1] [2]. The availability of information technology is under continuous real and perceived threat from various state and non-state actors [3]. A cyber-attack on the availability of information technology sits on a thin line between cybercrime and cyber war, with devastating effects in the physical world. The discovery of ‘cyber-attack vectors’ like Stuxnet, Duqu, Flame, Careto and Heartbleed in the recent past demonstrates the vulnerability of the confidentiality, integrity and availability of information technology resources [4] [5]. The scenario is further complicated by the very nature of cyberspace, manifested in anonymity in space and time, rapidity of actions resulting in asymmetric results disproportionate to the resources deployed, non-attribution of actions and the absence of international borders [6]. By virtue of these features, ‘the transnational dimension of cybercrime offence arises where an element or substantial effect of the offence or where part of the modus operandi of the offence is in another territory’, bringing forth the issues of ‘sovereignty, jurisdiction, transnational investigations and extraterritorial evidence’, thus necessitating international cooperation [7]. In this essay, international efforts and their efficacy in combating cybercrime will be analysed.


Although several bilateral and multilateral efforts have been made to combat cybercrime, the European Union remains at the forefront of creating a framework on cybercrime [8] [9] [10] [11]. Going beyond the European Union by inviting even non-member states, and incorporating substantive criminal law provisions and procedural instruments, the Council of Europe Convention on Cybercrime (the Convention) [12] puts forth ‘instruments to improve international cooperation’ [13]. The Convention makes clear its belief ‘that an effective fight against cybercrime requires increased, rapid and well-functioning international cooperation in criminal matters’ [14]. As of December 2016, 52 states had ratified the Convention and 4 states had signed but not ratified it. As of July 2016, the non-member states of the Council of Europe that had ratified the treaty were Australia, Canada, the Dominican Republic, Israel, Japan, Mauritius, Panama, Sri Lanka and the US. The Convention is today the most important and acceptable international instrument in the global fight to combat cybercrime [15] [16] [17], thereby limiting the scope of discussion to the Convention for the purpose of this essay.

The Convention seeks to harmonise substantive criminal law by defining ‘offences against the confidentiality, integrity and availability of computer data and systems’ [18], ‘computer related offences’ [19], ‘content related offences’ [20], ‘offences related to infringement of copyright and related rights’ [21] and ‘ancillary liability and sanctions’ [22]. The Convention also seeks to harmonise procedural law by providing the scope, conditions and safeguards of procedures [23]; the expedited preservation of stored computer data, traffic data and partial disclosure of traffic data [24]; the search and seizure of stored computer data [25]; and the collection of real-time data [26]. The jurisdiction over the offences established by the Convention is also sought to be harmonised [27]. However, the strength of the Convention lies in the detail in which general and specific principles relating to international co-operation, including extradition and mutual assistance, are enumerated [28]. To sum up, the Convention intends to provide ‘a swift and efficient system of international cooperation, which duly takes into account the specific requirements of fight against cybercrime’ [29]. A few scholars [30] have raised doubts about the effectiveness of the Convention in improving international co-operation and thus enabling law enforcement agencies to fight cybercrime, terming it a merely symbolic instrument. Nevertheless, the Convention ‘is an important step in right direction’ [31] and remains ‘the most significant treaty to address cybercrimes’ [32].


A number of contentious legal and procedural issues generally arise while investigating cybercrimes with a transnational dimension, acting as impediments to the very process of investigation [33] [34] [35]. Cyberspace has evolved exponentially since the Convention was drafted. The deployment of ‘military-grade precision-vectors’ and advanced persistent threats (APTs) to attack infrastructure in the virtual and real worlds is the order of the day. The Internet of Things is beginning to become a botnet of things. Nation-states have also realised that cyberspace has almost become the fifth domain of war.[36] In view of this escalated scenario, while formal channels like extradition and mutual assistance are delayed to the extent of killing the investigation, informal requests between law enforcement agencies (LEAs) are viewed with suspicion.

The Convention only seeks to harmonise domestic law, but many nation-states have no cybercrime legislation at all. This, combined with the heterogeneity of skills, capacity, technology access and sub-cultures among LEAs, cybercriminals and victims, forms a ‘vicious circle of cybercrime’ [37]. The role of consent, with its cognitive and cultural limitations, in accessing stored computer data in accordance with Article 32 of the Convention is not well defined and is therefore open to the interpretation of courts, making this provision rather an instrument of international non-cooperation. Moreover, EU primary law, viz., the Charter of Fundamental Rights (CFR) of the European Union of 2000 [38], the Treaty on European Union [39] and the jurisprudence of the CJEU [40], now recognises data protection as a fundamental right. The shield of human rights is very effectively used to prevent international co-operation. The domestic laws of some nation-states, e.g., Section 230 of the CDA [41] in the US, have become a judicial oak hampering international co-operation in cybercrime investigations, as they provide blanket immunity to search engines like Google.

The very nature of the Internet-governance structure, tilted heavily toward private players, leaves very little in the hands of states. The efforts to strengthen international co-operation to combat cybercrime, including the Convention, have failed miserably to tap this private element of governance, mainly due to the conflict between private and public interests.


As cyberspace rapidly evolves with the advent of new technologies, cybercrime is assuming new dimensions in space and time, impeding its investigation in ways never before contemplated. The law and the capacity building of LEAs are not able to keep pace with these developments. While cyberspace has no borders for cybercriminals, law enforcement agencies have to respect the sovereignty of other nations. The national disparities in ‘law’, ‘legal systems’ and ‘capacity’ to combat cybercrime are so wide that international co-operation remains the only hope. The Convention on Cybercrime is, though symbolic, a great effort to identify issues and provide solutions to the existing legal and procedural gaps in fighting cybercrime. As laws were, and will always remain, inadequate for enforcement, only a concerted effort at international co-operation can make cybercrime a very high-cost and high-risk proposition. The UN has recently woken up to the situation [42] and would do well to take the Convention on Cybercrime as the baseline for framing an International Convention on Cybercrime.


[1] M. Gercke, “Europe’s legal approaches to cybercrime,” in ERA forum, 2009, pp. 409-420.
[2] M. Gercke, “Understanding cybercrime: a guide for developing countries,” International Telecommunication Union (Draft), vol. 89, p. 93, 2011.
[3] D. L. Speer, “Redefining borders: The challenges of cybercrime,” Crime, law and social change, vol. 34, pp. 259-273, 2000.
[4] S. Mittal, “Perspectives in Cyber Security, the future of cyber malware,” The Indian Journal of Criminology, vol. 41, p. 18, 2013.
[5] S. Mittal, “The Issues in Cyber- Defense and Cyber Forensics of the SCADA Systems,” Indian Police Journal, vol. 62, pp. 29- 41, 2015.
[6] S. Mittal, “A Strategic Road-map for Prevention of Drug Trafficking through Internet,” Indian Journal of Criminology and Criminalistics, vol. 33, pp. 86- 95, 2012.
[7] Open-ended Intergovernmental Expert Group on Cybercrime, “Comprehensive Study on Cybercrime,” UNODC, 2013.
[8] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a Safer Information Society by Improving the Security of Information Infrastructures and Combating Computer-related Crime,” ed, 2001.
[9] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a safer information society by improving the security of information infrastructures and combating computer-related crime [COM(2000) 890 final – not published in the Official Journal].”
[10] “Council Framework Decision 2005/222/JHA of 24 February 2005 on attacks against information systems,” vol. OJ L 69, 16.3.2005, p. 67–71, ed.
[11] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017].
[12] ibid.
[13] ibid., Articles 23-35.
[14] ibid. Preamble
[15] “Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions: Creating a safer information society by improving the security of information infrastructures and combating computer-related crime [COM(2000) 890 final – not published in the Official Journal].”
[16] Open-ended Intergovernmental Expert Group on Cybercrime, “Comprehensive Study on Cybercrime,” UNODC, 2013.
[17] “United Nations, UN General Assembly Resolution 55/63: Combating the Criminal Misuse of Information Technologies (Jan. 22, 2001),” ed.
[18] Council of Europe, Convention on Cybercrime, 23 November 2001, available at: [accessed 26 February 2017], Articles 2-6.
[19] ibid., Articles 7, 8.


Risks and Opportunities provided by the Cyber-Domain and Policy-Needs to address the Cyber-Defense


International Research Journal On Police Science, ISSN 2454-597X Volume 2, Issue 1&2

Sandeep Mittal, I.P.S.


International Research Journal On Police Science. ISSN: 2454-597X, Issue 1&2, December 2016


The term ‘Cyber Domain’ has been used widely by various experts, sometimes interchangeably with ‘Cyber Space’, to imply “the global domain within the information environment that encompasses the interdependent networks of information technology infrastructures, including the internet and telecommunication networks” (Camillo & Miranda, 2011). Today it has become “the fifth domain of warfare after land, sea, air and space”, and arriving at a common definition of the cyber domain is a challenge, but for the purpose of this essay the definition given above suffices. Any entity operating in the cyber domain, whether a nation state or an enterprise, needs to maintain the confidentiality, integrity and availability of its deployed resources. The dynamics of the cyber domain are complex and complicated in time and space. Humans, machines, things and their interactions are evolving continuously, posing both risks and opportunities in the cyber domain; one actor's risk becomes another's opportunity. In this essay, the ‘risks presented by’ and ‘opportunities available in’ the cyber domain will be identified, discussed and analysed to consider key strategic policy elements for defending the cyber domain.

Risks and Opportunities in Cyber Domain

‘Very low-cost efforts’ yielding asymmetric results, coupled with anonymity in space and time, make the cyber domain attractive (Cyber Security Strategy of UK, 2009) to various actors with malicious objectives. This faceless and boundaryless domain is highly dynamic, springing surprises with rapidity and having the potential to cause damage (real and virtual) disproportionate to the resources deployed. Let us look at various realms in terms of the risks associated with them.

  1. The information system platforms and the equipment supporting the cyber ecosystem are susceptible to conventional physical attacks. Electronic equipment can be destroyed by generating high-energy radio frequencies and electromagnetic pulses.
  2. Services in cyber space may be disrupted by direct attack, e.g., denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. This is the most common form of attack and has the potential to paralyze lines of communication, bring down banking services and sabotage military operations. It has been deployed successfully over the years not only by novice script kiddies but also by sophisticated state-sponsored agencies. Botnets working round the clock have become a serious challenge.
  3. Sensitive data (in storage and on the move) may be accessed, stolen or manipulated to produce the desired effect immediately or at a later date. The technology and deployment methodology are evolving with time, and simple malware tools have been replaced by complex, intelligent and well-crafted attacks generally known as Advanced Persistent Threats (APTs). The stealth, patience and dedicated consistency of APTs can bypass the best firewalls (including new-generation firewalls) and intrusion detection and prevention systems to exploit zero-day vulnerabilities (FireEye White Paper, 2014).
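The flooding attacks described in point 2 above are commonly mitigated, in part, by throttling abusive clients at the service edge. The following is a minimal sketch of one such control, a sliding-window rate limiter; the `RateLimiter` interface and the thresholds are illustrative assumptions for this essay, not a production design.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Reject a client's requests once it exceeds max_requests within
    a sliding window of window_seconds. Thresholds are illustrative."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Discard timestamps that have fallen out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: drop or challenge the request
        q.append(now)
        return True
```

A real deployment would combine such per-client throttling with upstream filtering, since DDoS traffic arrives from many sources at once.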

The risks associated with the various realms discussed above may manifest themselves in several dimensions of society: civic infrastructural breakdown (e.g., failure of electric power grids, disruption of fuel pipelines, disruption of the water supply chain), economic disruption (e.g., disruption of banking services, business continuity and maintenance-related costs), social and behavioral effects (e.g., gambling, spamming, pornography, drug supply, propagation of extremist ideology) and, last but not least, hacking and intrusion into privacy, compromising a nation’s morale through the use of social media, leading to civic unrest and hampered diplomatic relations (e.g., WikiLeaks), thus finally setting the stage for cyber warfare. Eventually, the cyber domain becomes a ‘means’ to the most serious ‘end’, that is, cyber warfare (Cornish et al, 2009). The ‘research tool of yesteryear’ has evolved into a strong medium of mass communication. The Chatham House report titled ‘Cyberspace and the National Security of the United Kingdom’ (2009) introduced the concept of cyber threat domains.

Let us look at the challenges and opportunities in cyber security in terms of the four ‘cyber threat domains’ (Cornish et al, 2009).

  1. State-sponsored cyber attacks: The complete dependence of a nation’s economy and critical infrastructure on networked information systems presents an opportunity for nation states to deploy cyber tools to gain information dominance in the cyber domain, transmitting information while denying or restricting it to an enemy state, as well as to collect tactical information. Going further, crippling a nation by paralyzing its critical infrastructure through the deployment of stealthy, well-crafted tools that exploit zero-day vulnerabilities is a matter of hours, not days. Cyber attacks that raise the temperature of furnaces in nuclear power plants or increase the flow speed of liquids in fuel pipelines may be used as weapons of mass destruction.
  2. The concepts of war maneuvering have been compared with cyber maneuver (Applegate, 2012), where it is recognized that blatantly hostile acts in cyber space are characterized by rapidity, anonymity and difficulty of attribution, and are dispersed in space and time. Even the territory of an enemy, or of one of its allies, can be used to achieve the desired asymmetric results.
  3. Cyber terrorism/extremism: There is no medium more powerful and anonymous than cyberspace, where asymmetric results can be achieved with ease by deploying minimal resources. The internet is an anarchic playground, an ungoverned space, which can be exploited by extremists for communication and information sharing, designing strategies, training members, procuring resources, infiltrating state assets and forming alliances with organizations having common objectives but different motivations. The use of social media by political extremists to propagate their ideology and take on the government machinery may spearhead insurgency by exploiting public sentiment.
  4. Serious and organized criminal groups are exploiting cyber space not only to maintain their criminal networks but also for money laundering, drug trafficking, extortion, credit card fraud, industrial espionage, etc. “In the cyber space, physical strength is insignificant […….] , strength is in software, not in numbers of individuals” (Brenner, 2002). Tackling cyber criminality poses a great challenge to law enforcement agencies (LEAs). The need for operational-level coordination with international LEAs cannot be overstated, as the existing mechanisms of MLATs etc. have not given the desired results. The thrust of LEAs is on the acquisition of hardware and software, while the training of human resources is lacking.
  5. Lower-level individual attacks are acts of individuals and may give results disproportionate to the skills deployed. These attacks may not be technologically advanced but have the capability to create panic and day-to-day disruption. Sometimes fools pose great questions. The free availability of numerous hacking and penetration testing tools on the internet assists script kiddies in venturing into the world of hacking.

Thus it is amply clear from the foregoing that the cyber domain presents unimaginable opportunities spread over space and time, with rapidity, anonymity and almost no investment.

Policies to Address Cyber Defense

Any policy for cyber defense has to be multipronged, tiered and dynamic. There are many approaches to deciding strategic policy: one is the systematic approach, while another is to keep national security as the central theme and weave the other defenses around it. What should be the strategy for a secure information society? For the purpose of this essay we may define network and information security as “the ability of a network or an information system to resist, at a given level of confidence, accidental events or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted data and the related services offered by or accessible via these networks and systems” (Commission of the European Communities, 2006). Though this is a network-system-centric definition, the author feels that if the strategic policy takes care of this approach, the other considerations will fall in line. The approach should not be like that of the ‘elephant and the five blind men’; rather, it should be an integrative approach addressing the various risks, issues and opportunities in the cyber domain. We will try to build up the key elements which a strategic policy should address in order to defend the cyber domain. “The integrated application of cyberspace capabilities and processes to synchronize in real-time, ability to detect, analyze and mitigate threats and vulnerabilities, and outmaneuver adversaries, in order to defend designated networks is part of cyber defense strategy and includes proactive network operations, defensive counter cyber operations and defensive countermeasures” (U.S. Department of Defense, 2010). As policy should be general and broad, it is beyond the scope of this essay to discuss procedures, the details of technologies and the processes and mechanisms used to deploy them.
We will focus instead on the key elements a security policy should incorporate to achieve the objective of defending the cyber domain, taking account of the ground realities of the scenario in which the policy would be applied. In a lighter vein, three cartoons based on three real incidents in India, conceptualized by the author, are included.

The author has perused the summaries of the national cyber security strategies of nineteen countries (Luiijf, Besseling & de Graaf, 2013) and, based on them, has tried to identify the key elements of a strategic policy to defend the cyber domain.

  1. Legislation/Legal Framework:

    The cyber domain has no boundary. The various stakeholders and players may be spread around the globe irrespective of national jurisdictions. Hence, a law which is progressive and aligned with international conventions on cyber crime and with the laws of other nation states is a basic requirement for defending the cyber domain. Additionally, the judiciary needs to be sensitized to various aspects of cyber law for better appreciation when dealing with such cases.

  2. Mandating the Security Standards:

    Mandating minimal security standards in information security is like preparing the ground before the seeds are sown. Security assurance measures for products (ISO/IEC 15408), security assurance measures for the development process (ISO/IEC 21827) and measures for security management (ISO/IEC 27001) should be implemented with zero tolerance for non-compliance. Personnel expertise and knowledge should be mandated through professional certifications.

  3. Secure Protocols, Software and Products:

    At present there is no system in place for cyber-supply-chain security ratings. This is a serious loophole: hardware and software have to be changed frequently and have the potential to be compromised, putting cyber security at stake. Such software and hardware become the gateway to attacks in the cyber domain.

  4. Active-Dynamic Security Measures for Prevention, Detection and Response Capabilities:

    The technology of malware and the methodology of its deployment in the cyber domain have evolved radically over the years. “The attacks are advanced, targeted, stealthy and persistent and cut across multiple threat vectors [web, email, file shares, and mobile devices] and unfold in multiple stages, with calculated steps to get in, signal back out of the compromised network, and get the valuables out” (FireEye White Paper, 2013). While firewalls, new-generation firewalls, intrusion prevention systems etc. are important security defenses, they cannot stop dynamic attacks that exploit zero-day vulnerabilities. Hence integrated platforms are needed that can identify and block these sophisticated attacks and thus safeguard critical and sensitive assets. Attack attribution analysis should be deployed to identify the attackers (Lewis, 2014). The Zero Trust model of information security also helps in reducing attacks from digitally signed malware (IBM Forrester Research Paper, 2013).

  5. Threat and vulnerability Analysis:

    A detailed threat and vulnerability analysis of resources should be maintained and updated periodically; at a minimum, a broad 3×3 matrix as per the NIST FIPS 199 standard is suggested. A risk profile dashboard should be kept ready. Assets which are critical need to be identified clearly, and SOPs for their protection put in place.
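The FIPS 199 categorization suggested above can be sketched in a few lines of code. The scheme rates each information type LOW / MODERATE / HIGH against the three security objectives (confidentiality, integrity, availability) and takes the ‘high-water mark’ across types and objectives; the information types below are hypothetical examples for illustration, not drawn from the source.

```python
# Security categorization per NIST FIPS 199: the system category is the
# high-water mark of per-objective impact levels across information types.
LEVELS = {"LOW": 1, "MODERATE": 2, "HIGH": 3}
OBJECTIVES = ("confidentiality", "integrity", "availability")

def system_category(info_types):
    """Per-objective high-water mark across all information types."""
    return {obj: max((t[obj] for t in info_types), key=LEVELS.get)
            for obj in OBJECTIVES}

def overall_impact(category):
    """Single overall impact level for the system (highest of the three)."""
    return max(category.values(), key=LEVELS.get)

# Hypothetical information types hosted on one system
payroll = {"confidentiality": "MODERATE", "integrity": "MODERATE",
           "availability": "LOW"}
web_content = {"confidentiality": "LOW", "integrity": "MODERATE",
               "availability": "HIGH"}
```

Feeding both types into `system_category` yields a MODERATE/MODERATE/HIGH profile, so the system as a whole would be protected at the HIGH level.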

  6. Continuity and contingency plans should be prepared and kept ready. Many nations deploy in-house “government off the shelf” (GOTS) technology for sensitive defense and critical infrastructure systems. Attacks are inevitable, but if services are maintained, the confidence and trust of stakeholders are vindicated. Governments should also work towards a mechanism of cyber liability and cyber insurance, which at present is generally lacking.

  7. Information Sharing: In most countries there is a mechanism to share information on security breaches and related developments through Computer Emergency Response Teams (CERTs), and these national CERTs also interact with each other at the international level. However, the author’s personal experience is that many enterprises do not share information on breaches in order to protect their corporate image; sometimes security breaches may not even be known for months. There is an urgent need to devise a mechanism making the reporting of security breaches mandatory, with penalties for non-compliance.

  8. Awareness, education and training: Practice makes perfect. Continuous awareness and education campaigns on dos and don’ts have to be run repeatedly for the various stakeholders, and training workshops should be organized for the workforce. We should always remember that human behavior is the greatest risk to security, and this risk can be minimized only by education and training.

  9. Reforms in School and Collegiate Education: If cyber security is included as a subject in school and college curricula, a ready cyber workforce will be available for deployment across various sectors. Online training courses in cyber security should be designed, and incentives offered to workers who attend and successfully complete them.

  10. International Collaboration: The cyber domain has no boundaries. An attacker sitting in one country, using the systems and resources of a second country, may compromise a sensitive database in a third country. Without international collaboration, whatever strategy we design is bound to fail. Although there is a regional convention on cyber crime, there is unfortunately no such convention on cyber security [the Council of Europe (Budapest) Convention on Cybercrime, 2004]. There is a need for comprehensive international cooperation to sort out issues regarding jurisdiction, mutual assistance, extradition, the 24/7 network, etc. (Clough, 2013). However, the author’s personal experience is that international cooperation needs to be galvanized, as it is presently almost ineffective at the operational level.

However, to achieve the desired objectives, the strategies need to be implemented through the acquisition and effective allocation of sufficient resources under accountable responsibilities (Ward & Peppard, 2002). But even if all this is done, things will not always turn out as desired (Johnson & Scholes, 2002), as demonstrated in the following figure. A strategic management process that can adapt to changing scenarios during the implementation of the original strategy is therefore not a substitute for that strategy but a way of making it work.


The cyber domain, by virtue of its unique characteristics of anonymity, availability and maneuverability in space and time, its lack of international borders, and its capacity to give asymmetric results hugely disproportionate to the resources deployed, offers tremendous risks and opportunities for various stakeholders. It is rapidly expanding in scope from an internet of human beings and machines to an internet of things. It has the potential to disrupt a nation’s economy, polity, and civic and military infrastructure and, last but not least, may lead to cyber warfare. Any policy and strategy to defend the cyber domain should be dynamic enough to adjust to the rapidly changing nature of attacks and technology. Futuristic scenarios like a ‘botnet of things’ have the potential to disrupt the normal life of humans. The strategic policy explained in this essay, if implemented, should take care of the various aspects of defending the cyber domain. However, as the attacks, technologies and attackers evolve, the policy should evolve with the same rapidity. The ‘unknown unknowns’ of the cyber domain are yet to be seen by the world.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study. The author is also thankful to his student Ms. Avinash Kaur @ NICFS, who skillfully converted the situations depicted by the author into the cartoons included in this paper.


Applegate, S. 2012, ‘The Principle of Maneuver in Cyber Operations’, accessed on 14/03/2014.

Brenner, S.W. 2002, ‘Organized Cybercrime? How Cyberspace May Affect the Structure of Criminal Relationships’, North Carolina Journal of Law & Technology, vol. 4, no. 1, p. 24.

Clough, J. 2013, ‘The Budapest Convention on Cybercrime: Is Harmonisation Achievable in a Digital World?’, presentation, 2nd International Serious and Organised Crime Conference, Monash University, Brisbane, 29–30 July 2013, accessed on 13/03/2014.

Cornish, P., Livingstone, D., Clemente, D. & Yorke, C. 2009, Cyber Security and the UK’s Critical National Infrastructure, A Chatham House Report, United Kingdom, accessed on 13/03/2014.

Cornish, P., Hughes, R. & Livingstone, D. 2009, Cyberspace and the National Security of the United Kingdom: Threats and Responses, A Chatham House Report, United Kingdom, accessed on 14/03/2014.

Cornish, P., Livingstone, D., Clemente, D. & Yorke, C. 2010, On Cyber Warfare, A Chatham House Report, United Kingdom, accessed on 11/03/2014.

Di Camillo, F. & Miranda, V. 2011, Ambiguous Definitions in the Cyber Domain: Costs, Risks and the Way Forward, Istituto Affari Internazionali, Rome.

FireEye White Paper 2014, Advanced Attacks Require Federal Agencies to Reimagine IT Security, online, accessed on 11/03/2014.

FireEye White Paper 2013, Thinking Locally, Targeted Globally: New Security Challenges for State and Local Governments, online, accessed on 11/03/2014.

IBM 2013, Supporting the Zero Trust Model of Information Security: The Important Role of Today’s Intrusion Prevention Systems, IBM Forrester Research Paper, online, accessed on 13/03/2014.

Luiijf, E., Besseling, K. & de Graaf, P. 2013, ‘Nineteen national cyber security strategies’, International Journal of Critical Infrastructures, vol. 9, nos. 1/2, pp. 3–31.

NIST, Managing Information Security Risk: Organization, Mission and Information System View, NIST Special Publication 800-39, USA.

NIST, Guide for Applying the Risk Management Framework to Federal Information Systems, NIST Special Publication 800-37, USA.

NIST, Recommended Security Controls for Federal Information Systems and Organizations, NIST Special Publication 800-53, USA.

NIST, Standards for Security Categorization of Federal Information and Information Systems, FIPS Publication 199, USA.

Purser, S. 2004, A Practical Guide to Managing Information Security, Artech House, Boston, MA/London.

Stevens, T. 2010, ‘US Cyber Command achieves “full operational capability,” international cyberbullies be warned’, 5 November 2010, accessed on 11/03/2014.

The Joint Chiefs of Staff 2010, Memorandum for the Chiefs of the Military Services, US Department of Defense, Washington, D.C.

UK Cabinet Office 2010, Securing Britain in an Age of Uncertainty: The Strategic Defence and Security Review, Cm 7948, The Stationery Office, London, p. 47, accessed on 11/03/2014.

UK Cabinet Office 2009, Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space, Cm 7642, The Stationery Office, London, p. 12.

Reputational Risk, Main Risk Associated with Online Social Media


The Indian Journal of Criminology & Criminalistics (IJCC), ISSN 0970-4345, Volume XXXIV, No. 2, July–December 2015

Sandeep Mittal, I.P.S.,*




Social media is undoubtedly a revolution in the business arena, giving organizations the power to connect with their consumers directly. However, as the saying goes, nothing comes without a cost, and there is a cost involved here as well. This article examines the risks and issues related to social media at a time when the world is emerging as a single market. Social networking and online communication are no longer just a fashion but an essential feature of organizations in every industry. Unfortunately, inappropriate use of this media has resulted in increasing risks to organizational reputation, threatening long-run survival and necessitating the management of these reputational risks.

This article attempts to explore the various risks associated with social media. Its main aim is to focus in particular on reputational risks and to evaluate their intensity from the perspectives of the public relations and security staff of an organization. The article first explains the concept of social media, then identifies the various social media risks and analyzes reputational risk from the perspectives of public relations and organizational security staff. Based on this analysis, it provides recommendations to help contemporary organizations overcome such risks and thus enhance their effectiveness and efficiency to gain competitive advantage in the long run.

Keywords: Reputational Risk, Online Social Media, OSM Security, OSM Risk, Organizational Reputation, Cyber Security, Information Assurance, Cyber Defence, Online Communication.


With changing times, the concept of socializing has been transforming, and globalization and digitalization are to a large extent responsible. With the internet, it is possible to stay connected with people located in various regions of the world, and one such medium of socializing is social media. Today, online social media services are among the most vibrant tools adopted not only by individuals but also by corporate and government organizations (Picazo-Vela et al., 2012). Corporates have in fact been adopting social media extensively, as it is one of the cheapest ways of communicating with the masses. The importance of social media can be understood from the fact that at present there are more than 100 million highly active blogs connecting people from across the world (Kietzmann et al., 2010). Further, there has been a surge in membership of websites like Facebook or Twitter, with over 800 million active Facebook users in 2012 and 300 million Twitter users (Picazo-Vela et al., 2012). In spite of being a very powerful mode of communication, social media is subject to a large number of risks.

Organizations do not operate in a vacuum; thus the management of reputation is crucial for them, as it affects their markets as well as the overall environment. Organizational reputation not only impacts existing relations but also affects future courses of action (McDonnell and King, 2013). In this article, an attempt is made to understand the various reputational risks associated with social media that affect an organization’s working, and to suggest some ways to overcome them.

Concept of Social Media

The foundations of social media were laid by the emergence of Web 2.0 (Kaplan and Haenlein, 2010). It is thanks to this technological development that social media is accessed at such a wide scale and is available on devices like cell phones and tablets as well as personal computers and laptops. Social media is gaining importance in the corporate world as decision makers and consultants explore its various aspects in order to exploit its potential optimally (Kaplan and Haenlein, 2010). Social media is an online communication system through which information is generated, initiated, distributed and utilized by a set of consumers who aim to inform themselves about various aspects of a product, service, brand, problem or persona (Mangold and Faulds, 2009). It is also known as consumer-generated media. In simple terms, it can be described as a platform to create and sustain relationships through an internet-based interactive medium.

Social media can be categorized into collaborative projects, blogs, content communities, social networking sites, virtual game worlds and virtual social worlds (Kaplan and Haenlein, 2010). Examples of the various communication systems under social media are provided in Table 1 for ready reference.

Organizations have realized the importance of social media and have been using it alongside other integrated marketing communication tools to converse with target audiences effectively and efficiently (Michaelidou et al, 2011). This is mainly because modern-day consumers are shifting from traditional promotional sources to these newer sources. Social media has a very strong hold and influences consumer behavior to a large extent. Of these, Twitter has emerged as one of the most powerful social media tools: at present, approximately 145 million users communicate by sending around 90 million ‘tweets’ per day of 140 characters or fewer (Kietzmann et al, 2010). Another example is YouTube, where videos can go viral in a few seconds and a single video can attract more than 9.5 million views (Kietzmann et al, 2010).

Table 1: Examples of Social Media Types

Social networking websites: MySpace, Facebook, Faceparty, Twitter
Innovative sharing websites: video sharing (YouTube), music sharing, photo sharing (Flickr), content sharing, general intellectual property sharing (Creative Commons)
User-sponsored blogs: The Unofficial Apple Weblog
Company-sponsored websites/blogs: P&G’s Vocalpoint
Company-sponsored cause/help sites: Dove’s Campaign for Real Beauty
Invitation-only social networks
Business networking sites: LinkedIn
Collaborative websites: Wikipedia
Virtual worlds: Second Life
Commerce communities: eBay, Craig’s List, iStockphoto
Podcasts: For Immediate Release: The Hobson and Holtz Report
News delivery sites: Current TV
Educational materials sharing: MIT OpenCourseWare, MERLOT
Open source software communities: Mozilla’s
Social bookmarking sites which permit browsers to suggest online news stories, music, videos: Digg, Newsvine, Mixx it, Reddit

Source: Mangold and Faulds, 2009.


Risks Associated with Social Media

Before discussing the various risks associated with social media, it is essential to understand the various risks faced by an organization while using the internet. This can be depicted with the help of a diagram provided as Figure 1.

Figure 1: Internet Related Risks for Organizations
Source: Lichtenstein and Swatman, 1997

In Figure 1, ‘other internet participants’ refers to other members of the internet society. These risks are very general and are experienced by organizations even when they are not connected to the internet, such as the risks associated with corrupted software (Lichtenstein and Swatman, 1997).

The horizon of risk has expanded considerably, with matters becoming more critical and complicated as the popularity and usage of social media have grown (Armstrong, 2012). Organizations are challenged with new and unique risks which need to be addressed proactively. These risks threaten the effectiveness of this medium, and organizations consequently fail to reap its benefits completely. It is due to such risks that many organizations have either limited their use of social media or avoid it altogether. Such risks range from data leakage and legal complications to risks associated with reputation (Everett, 2010).

These risks can be categorized under two heads: user-related issues and security-related issues (Chi, 2011). User-related risks include inadequate certification controls, phishing, information seepage and information truthfulness (Chi, 2011). Security-related risks include cross-site scripting (XSS), cross-site request forgery (CSRF), injection defects and deficient anti-automation (Chi, 2011).
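To make the XSS risk above concrete: the attack works by smuggling executable markup into content that other users view. Below is a minimal, hypothetical illustration of the standard mitigation, escaping user-supplied content before rendering it into a page; the `render_comment` helper is invented for this sketch, and real applications should rely on a templating engine with auto-escaping.

```python
# Minimal XSS mitigation sketch: HTML-escape user input before rendering.
from html import escape

def render_comment(username, comment):
    # escape() neutralizes <, >, &, and quotes so that injected markup
    # is displayed as plain text instead of being executed by the browser
    return "<p><b>{}</b>: {}</p>".format(escape(username), escape(comment))
```

With this in place, a comment such as `<script>alert(1)</script>` is rendered as harmless text rather than as a script the victim’s browser runs.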

Out of all the risks related to social media, an organization is mainly threatened by risks related to information confidentiality, organizational reputation and legal conformity (Thompson, 2013). Issues related to information confidentiality emerge mainly because information is shared digitally over social media, so there are chances of such information being hacked or shared unintentionally. This may raise privacy risks, thus affecting information integrity.

Legal issues are bound to arise while using social media, mainly because the medium has global reach and is therefore affected by international rules and regulations. It is challenging for an organization to understand the varied legal obligations of different countries and then determine a universally accepted legal protocol. Risks related to organizational reputation are discussed in detail in the next section.

Reputational Risk

The reputation of an individual or organization relates to their reliability and uprightness, so managing and securing reputation becomes highly critical. As organizations resort to social media extensively, they are bound to experience reputational risks that affect their goodwill negatively. Reputational risks arise from the fact that organizations share all-embracing information with customers and browsers (Woodruff, 2014), and in many circumstances this information is misused, damaging organizational reputation. The various adverse effects of reputational damage include a negative impact on goodwill in the real world, restricted development of social contacts and contracts, and a detrimental impact on attracting potential customers (Woodruff, 2014). In one research study, 74 per cent of employees acknowledged the ease of causing reputational damage to organizations through social media (Davison et al., 2011). It is for this reason that organizations to a large extent scrutinize their employees’ use of social networking sites.

Public Relations

Public relations reflect an organization’s relations with its various stakeholders. Organizations use the social media platform to interact with their stakeholders and thus develop a strong and positive public image; in fact, social media, organizations and stakeholders interact together within the dynamic business world (Aula, 2010). These interactions are shaped by the organization’s public relations objectives and the extent to which social media is used to develop organizational reputation. But developing and sustaining positive public relations is not easy, as they are hampered to a large extent when subjected to reputational risks. An organization’s very identity is at stake, as it can be plagiarized and used without authorization (Weir et al, 2011).

Reputational risks relate to organizational credibility and result from security risks such as identity theft and profiling risks. These risks challenge organizational reputation by questioning its compliance with societal rules and regulations (McDonnell and King, 2013). Organizations to a large extent fail to integrate social media with organizational and stakeholder objectives, resulting in ineffective reputation management.

Social media has made organizations global, due to which even minor incidents get highlighted internationally. Local issues gain international fame, resulting in a negative reputation for the organization globally. Further, with social media being so active, organizations cannot escape the clutches of negative publicity (Kotler, 2011). One example of a failure of reputation management that earned negative fame across the world is Nestle. In 2010, Greenpeace uploaded a video on YouTube against KitKat by Nestle (Berthon et al, 2012). The video went viral and resulted in negative publicity for the organization. Though the campaign was aimed mainly at consumers in Malaysia and Indonesia, over the conservation of rainforests, it was acknowledged by the world at large.

Another risk faced by organizations arises from creating a public image through standardized marketing programs. Differing stakeholders in different countries use different social media platforms, which makes it essential for organizations to clearly analyze and understand their usage requirements and patterns. This is where most organizations fail, and they are thus unable to use social media appropriately.

Below is a graph that depicts the usage of different social media platforms in different countries, as per statistics from 2011 (Berthon et al, 2012).

Figure 2: Relative Frequency of Search Terms from Google Insights: Social Media by Country

Source: Berthon et al, 2012

Organizational Security Staff

Organizational employees are indispensable for success. But these employees can also be a threat to the organization, mainly because they have access to the organization's confidential and important information, which they can leak to outsiders. With social media's growing popularity, the line between personal and professional conversations on the web has become blurred. Further, despite this information being kept under security, employees can evade such systems through illegal measures. Research has shown that in the USA alone approximately 83 per cent of staff use organizational resources to access their social media (Zyl, 2009). Other than using these resources for exchanging personal messages over social media, 30 per cent of employees in the USA and 42 per cent of employees in the UK also exchanged information related to their work and organization (Zyl, 2009). This depicts the intensity of the problem of security risks related to social media. Thus, organizational security staff have to be on their toes to ensure that such information is highly secured and not utilized inappropriately.

In 2002, an employee of an international financial services organization in the USA infiltrated the organization's digital security systems and used a 'logic bomb' to delete approximately 10 billion files from 1,300 of the organization's servers. This resulted in a financial loss of around $3 million, and the organization also suffered from negative publicity. This depicts a failure of organizational security staff to combat risks. Such issues have become very common in the social networking world. Employees have the freedom to generate nasty and unsecured comments or links that harm organizational reputation and finances and create security-related risks (Randazzo, 2005).

With the help of social media, social engineering attacks are possible due to the easy access that hackers, spammers and virus creators have to large amounts of information. They can easily misuse it by creating fake profiles, stealing identities and collecting details such as job titles, phone numbers and e-mail addresses. They can also corrupt systems using malware that ultimately threatens organizational data. Data infiltration and loss ultimately impact organizational reputation negatively, as the leaked data are used for unauthorized and illegal activities.


Organizations that are either unaware of these risks or unable to defend themselves can face dire consequences at times. Organizations are aware of the gains that they would derive from using social media networking and thus take such risks readily. Since these risks cannot be avoided completely, organizations need to work out measures through which they can manage them and mitigate their negative influences.

In order to overcome issues related to privacy, which ultimately hamper reputation, organizations should take proactive measures before using social media. During the sign-up phase or creation of social networking profiles, specific concerns related to privacy and confidentiality should be resolved and proper regulations designed (Fogel and Nehmad, 2009). These rules and regulations should be very clearly communicated to organizational employees so that they have complete information regarding social media dos and don'ts. Further, the organization should not only design strict punishments but also execute them against those who break such rules (Hutchings, 2012).

One of the ways to overcome reputational risks related to social media is by appointing an efficient social media manager. These managers are specialists responsible for determining the social media protocol based on the organization's confidential information, contemporary issues and prospective plans (Bottles and Sherlock, 2011). The social media manager should have a responsibility towards the organization and its various stakeholders and thus engage with them sincerely and empathetically (Brammer and Pavelin, 2006). The manager should also have a vigilant eye and an analytical attitude in order to identify the facts, figures and events that can impact organizational reputation and to take corrective actions. As security staff play a crucial role in determining organizational security standards, organizations should be very specific in recruiting and selecting them. Besides, there should be a greater emphasis on developing culture, values and ethics within the organization.

Organizations should also understand that the management of reputational risks requires a collaborative and innovative approach. The organization needs to develop a social media involvement protocol by consulting and taking advice from diverse sources such as legal experts, marketing experts, international business experts, media experts and other stakeholders (Montalvo, 2011). The organization should also be innovative in selecting and distributing content through social media so that it can deal with issues responsibly.


Organizations today prefer to use social media over traditional media (Hutchings, 2012), mainly because of the various benefits associated with it, but they cannot overlook the associated risks either. It takes ages for an organization to develop a positive reputation, and careful measures thus need to be taken to maintain and sustain it. Organizations cannot exercise complete control over social media, but they can take restrictive measures to ensure that reputational risks are minimized and their ill effects are combated.

The article identified that the major reputational risks related to social media for organizations arise from data outflow, identity theft, profiling risks, an inappropriate choice of public relations strategy, an inability to control external environmental factors, inappropriate information management and security policy, and the failure to have efficient and effective security staff. In order to overcome such issues, organizations need to appoint social media managers and hire employees skilled in social media management. Further, they should take a collaborative and creative approach and design a social media protocol to mitigate such risks.

To conclude, it can be stated that organizations need to be proactive and keep a vigilant eye on environmental factors to secure themselves and benefit from online social media.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


A. Kaplan and M. Haenlein, "Users of the world, unite! The challenges and opportunities of Social Media," Business Horizons, vol. 53, no. 1, 2010, pp. 59-68.

A. Woodruff, "Necessary, unpleasant, and disempowering: reputation management in the internet age," in Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM, 2014, pp. 149-158.

A. Zyl, "The impact of Social Networking 2.0 on organisations," The Electronic Library, vol. 27, no. 6, 2009, pp. 906-918.

C. Everett, "Social media: opportunity or risk?" Computer Fraud & Security, vol. 2010, no. 6, 2010, pp. 8-10.

C. Hutchings, "Commercial Use of Facebook and Twitter: Risks and Rewards," Computer Fraud & Security, no. 6, 2012, pp. 19-20.

G. Weir, F. Toolan, and D. Smeed, "The threats of social networking: Old wine in new bottles?" Information Security Technical Report, vol. 16, 2011, pp. 38-43.

H. Davison, C. Maraist and M. Bing, "Friend or Foe? The Promise and Pitfalls of Using Social Networking Sites for HR Decisions," Journal of Business Psychology, vol. 26, 2011, pp. 153-159.

I. Ahmed, Fascinating #SocialMedia Stats 2015: Facebook, Twitter, Pinterest, Google+, 2015.

J. Fogel and E. Nehmad, "Internet social network communities: Risk taking, trust, and privacy concerns," Computers in Human Behavior, vol. 25, 2009, pp. 153-160.

J. Kietzmann, K. Hermkens, I. McCarthy and B. Silvestre, "Social media? Get serious! Understanding the functional building blocks of social media," Business Horizons, vol. 54, no. 3, 2011, pp. 241-251.

K. Bottles and T. Sherlock, "Who should manage your social media strategy," Physician Executive, vol. 37, no. 2, 2011, pp. 68-72.

K. Armstrong, "Managing your Online Reputation: Issues of Ethics, Trust and Privacy in a Wired, 'No Place to Hide' World," World Academy of Science, Engineering and Technology, vol. 6, 2012, pp. 716-721.

M. Chi, Security Policy and Social Media Use, The SANS Institute, 2011.

M. Langheinrich and G. Karjoth, "Social networking and the risk to companies and institutions," Information Security Technical Report, vol. 15, 2010, pp. 51-56.

M. McDonnell and B. King, "Keeping up Appearances: Reputational Threat and Impression Management after Social Movement Boycotts," Administrative Science Quarterly, vol. 58, no. 3, 2013, pp. 387-419.

M. Randazzo, M. Keeney, E. Kowalski, D. Cappelli, and A. Moore, Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector (No. CMU/SEI-2004-TR-021), Carnegie Mellon University, Software Engineering Institute, Pittsburgh, PA, 2005.

N. Michaelidou, N. Siamagka and G. Christodoulides, "Usage, barriers and measurement of social media marketing: An exploratory investigation of small and medium B2B brands," Industrial Marketing Management, vol. 40, no. 7, 2011, pp. 1153-1159.

P. Aula, "Social media, reputation risk and ambient publicity management," Strategy & Leadership, vol. 38, no. 6, 2010, pp. 43-49.

P. Berthon, L. Pitt, K. Plangger and D. Shapiro, "Marketing meets Web 2.0, social media, and creative consumers: Implications for international marketing strategy," Business Horizons, vol. 55, no. 3, 2012, pp. 261-271.

P. Kotler, "Reinventing marketing to manage the environmental imperative," Journal of Marketing, vol. 75, no. 4, 2011, pp. 132-135.

R. Montalvo, "Social Media Management," International Journal of Management & Information Systems, vol. 15, no. 3, 2011, pp. 91-96.

S. Brammer and S. Pavelin, "Corporate reputation and social performance: The importance of fit," Journal of Management Studies, vol. 43, no. 3, 2006, pp. 435-455.

S. Picazo-Vela, I. Gutierrez-Martinez and L. Luna-Reyes, "Understanding risks, benefits, and strategic alternatives of social media applications in the public sector," Government Information Quarterly, vol. 29, 2012, pp. 504-511.

T. Thompson, J. Hertzberg and M. Sullivan, Social Media Risks and Rewards, Financial Executive Research Foundation, 2013.

W. Mangold and D. Faulds, "Social media: The new hybrid element of the promotion mix," Business Horizons, vol. 52, no. 4, 2009, pp. 357-365.

9TH ASSOCHAM ANNUAL SUMMIT on Cyber and Network security


Friday 29th July 2016 Hotel Hyatt, Bhikaji Cama Place, New Delhi

ASSOCHAM organised the 9th Annual Summit on Cyber & Network Security with the official support of the Ministry of Electronics and IT, Government of India, CERT-In and the Council of Europe in New Delhi.

The Summit was a huge success with active participation of more than 300 Senior Officers from CISF; CRPF; SPG; BSF; ECIL; Bharat Petroleum; GAIL; ONGC; PNB; RBI; CCIL; NSIC; TCIL; Delhi Metro; Embassy of Sweden, USA, Israel, Malaysia, Japan, Germany, China, Indonesia; Enforcement Directorate; Intelligence Bureau; National Investigation Agency; Government of Manipur, Uttarakhand, Haryana, Nagaland, Punjab, Chhattisgarh, Uttar Pradesh, Delhi; Bureau of Police Research and Development; HQ IDS; Navy; Air Force; Ministry of Defence, Rural Development, Women & Child Development, Railways, Department of Revenue, Directorate of Income Tax, CBEC, Statistics and Program Implementation, CBDT, Directorate General of Systems and Data, Ministry of Communications, Government of India; DRDO; National Institute of Criminology and Forensic Science; Cabinet Secretariat; IDSA, apart from industry representatives, academicians, journalists and other stakeholders in large numbers.

The Valedictory Session “Targeted Attacks: Protection of Critical Infrastructure of the Country & Capacity Building” was graced by the auspicious presence of Shri R. Gandhi, Deputy Governor, RBI, Shri Sandeep Mittal, IPS, Officiating Director, National Institute of Criminology and Forensic Science, Shri R. N. Dhoot (M. P.), Past President, ASSOCHAM.

Sh. Sandeep Mittal, IPS, being welcomed with a bouquet by Sh. D.S. Rawat, Secretary General, ASSOCHAM

Sh. Sandeep Mittal, IPS, DIG (Admin), addressing the Annual Summit in the Valedictory Session


Understanding the Human Dimension of Cyber Security


Indian Journal of Criminology & Criminalistics (ISSN 0970-4345), Vol. 34, No. 1, Jan-June 2015, pp. 141-152

Sandeep Mittal, I.P.S.,*



It is globally realized that humans are the weakest link in cyber security, to the extent that the dictum 'users are the enemy' has been debated for about two decades in an attempt to understand the behaviour of users dealing with cyber security issues. Attempts have been made to identify user behaviour through various theories in criminology in order to understand the motives and opportunities available to the user while he interacts with the computer system. In this article, the available literature on the interaction of the user with the computer system has been analyzed, and an integrated model for user behaviour in information system security has been proposed by the author. This integrated model could be used to devise a strategy to improve user behaviour by strengthening the factors that have a positive impact, and reducing the factors that have a negative impact, on information system security.


Most system security organizations work on the premise that the human factor is the weakest link in the security of computer systems, yet not much research has hitherto been undertaken to explore the scientific basis of this presumption. The interaction between computers and humans is not a simple mechanism but is instead a complex interplay of social, psychological, technical and environmental factors operating in a continuum of organizational externality and internality.1 This article tries to examine various aspects of the interaction between humans and computers, with particular reference to the 'users'. The taxonomy adopted for understanding who is actually a user is based on the available literature. It is also imperative to explore the following questions: Why do users behave the way they do? Is there a psychological basis for the specific behaviour of users during human-computer interaction, and if yes, how does it affect the security of the computer system? Various hypotheses and suggestions offered by different experts are thus reviewed in order to identify ways to improve both user behaviour and the overall security of computer systems. The debate on this issue was initiated by an article entitled 'Users Are Not the Enemy'2, where the authors studied the behaviour and perceptions of users relating to password systems, and challenged the conclusion drawn in a previous work3 (DeAlvare, 1988, quoted in Adams and Sasse, 1999) that many password users do not comply with password security rules because 'users are inherently careless and therefore insecure'.

Adams and Sasse (1999) concluded that the possession of a large number of passwords prevents users from memorising all of them, thereby compromising password security; that users are generally not aware of the concept of secure passwords; and that they have insufficient information about security issues. The earlier perceptions of security managers were thus challenged, and users were no longer seen as the 'enemy'. Since then, a number of studies have been undertaken by researchers who have adopted either of these two positions, viz., 'the user is the enemy' or 'the user is not the enemy'. In this article, we examine various hypotheses before taking either of these two positions.

Taxonomy of Users’ Behaviours

It has been found that the effectiveness of technology is impacted by the behaviour of the human agents or users who access, administer and maintain information system resources4. These users could be physically or virtually situated inside or outside the organisation, thus bringing into interplay a range of environmental factors that influence their behaviour. Most organizations tend to be more concerned with threats from external users, even though surveys conducted by professional bodies indicate that three-quarters of the security breaches in computer systems originate from within the user fraternity.5 Therefore, it is necessary to foster a systematic understanding of the behaviour of users and how it impacts information security. In this context, researchers have developed a taxonomy of the behaviour of information security end-users.6 This taxonomy of security behaviour, comprising six elements (as depicted in Figure 1), is dependent upon two factors, viz., intentionality and technical expertise. On the one hand, the intentionality dimension indicates whether a particular behaviour was intentionally malicious or beneficial, or whether there was no intent at all. The dimension of technical expertise, on the other hand, takes into consideration the degree of technological knowledge and skill required for the performance of a particular behaviour.
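The two-dimensional structure described above can be sketched as a small lookup table. The six category labels used below are those commonly attributed to Stanton et al. (2005); the Python modelling itself (the enums, the dictionary, the `classify` helper) is this sketch's own illustration, not part of the original article.

```python
# Illustrative sketch of Stanton et al.'s (2005) two-factor taxonomy of
# end-user security behaviour: each (intentionality, expertise) cell maps
# to one of six behaviour categories.

from enum import Enum

class Intent(Enum):
    MALICIOUS = "malicious"
    NEUTRAL = "neutral"
    BENEVOLENT = "benevolent"

class Expertise(Enum):
    LOW = "low"
    HIGH = "high"

# Six behaviour categories, one per (intent, expertise) cell.
TAXONOMY = {
    (Intent.MALICIOUS,  Expertise.HIGH): "intentional destruction",
    (Intent.MALICIOUS,  Expertise.LOW):  "detrimental misuse",
    (Intent.NEUTRAL,    Expertise.HIGH): "dangerous tinkering",
    (Intent.NEUTRAL,    Expertise.LOW):  "naive mistakes",
    (Intent.BENEVOLENT, Expertise.HIGH): "aware assurance",
    (Intent.BENEVOLENT, Expertise.LOW):  "basic hygiene",
}

def classify(intent: Intent, expertise: Expertise) -> str:
    """Map an observed behaviour's two dimensions to its category."""
    return TAXONOMY[(intent, expertise)]

print(classify(Intent.NEUTRAL, Expertise.LOW))     # a well-meaning novice's slip
print(classify(Intent.MALICIOUS, Expertise.HIGH))  # a skilled insider attack
```

Such a lookup makes the classification paths mentioned in the article concrete: raw observations of user behaviour are first scored on the two dimensions, and the cell then suggests which intervention (training, monitoring, or sanction) fits.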

Figure 1: Taxonomy of end-user security behaviours (Source: Adapted from Stanton et al., 2005)





The taxonomy of end-user behaviour, as delineated in Figure 1, helps in classifying the raw data on users’ behaviours and also in selecting the paths that could be followed for improving the information security behaviour of a particular user within an organization.


Exploring ‘What the Users Do?’

A fundamental postulate is that users' behaviour is guided by the risk which they perceive to be associated with their interaction with the information system in everyday situations. However, research has revealed that users normally fail to take optimal or reasoned decisions about the risks concerning the security of information systems. The decision-making process of users exhibits the following predictable characteristics, and understanding them would be of great use in positively impacting the decision-making ability of users7:

  1. Users often do not consider themselves to be at risk. In fact, as users increase the security measures for their computer systems, they start indulging in more risky behaviours.
  2. Although users are not, by and large, imbecile or obtuse in their thinking, they lack both the motivation and the capacity to devote full attention to information processing, especially since they resort to multi-tasking, which prevents them from concentrating fully on a single task at a time.
  3. The concept of safety per se is unlikely to be a persuasive element in determining human behaviour, especially because the argument that safety prevents something bad from happening is a rather abstract one; consequently, human beings do not perceive adherence to safety norms as a gain or a beneficial exercise.
  4. It has been observed that adherence to safety and security norms does not always produce instant results. In fact, the results often come weeks or months later, if at all, which prevents human beings from immediately comprehending the positive outcomes of their actions, thereby making them complacent. The same delay in the perception of outcomes is also evident in the case of negative actions. Thus, human beings realize the impact of their actions only when the results can be seen instantaneously, as in the case of disasters.
  5. Research on the association between the concepts of risk, losses and gains indicates that 'people are more likely to avoid risk when alternatives are presented as gains and take risks when alternatives are presented as losses. When evaluating a security decision, the negative consequences are potentially greater, but the probability is generally less and unknown. When there is a potential loss in a poor security decision as compared to the guaranteed loss of making a pro-security decision, the user may be inclined to take the risk'.8 This study, therefore, shows a strong likelihood of users gambling to offset a potential loss rather than accepting a guaranteed loss in toto. This observation is depicted in Figure 2 (West, 2008, adapted from Tversky and Kahneman, 1986).9
Figure 2: Losses carry more value as compared to gains when both are perceived as equal. For non-zero values, if the value of loss (X) = value of gain (Y), then motivation of loss (A) > motivation of gain (B) (West, 2008, adapted from Tversky and Kahneman, 1986).
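The loss-gain asymmetry behind Figure 2 comes from prospect theory's S-shaped value function. A minimal numerical sketch follows; the parameter values (curvature 0.88, loss-aversion coefficient 2.25) are Tversky and Kahneman's widely cited median estimates, while the helper function and the chosen example amounts are this sketch's own illustration.

```python
# Illustrative sketch of the prospect-theory value function: an equal-sized
# loss is weighted roughly twice as heavily as a gain, which is why users
# gamble to offset a potential loss rather than accept a guaranteed one.

ALPHA = 0.88   # curvature for gains (diminishing sensitivity)
BETA = 0.88    # curvature for losses
LAM = 2.25     # loss-aversion coefficient (losses loom larger)

def value(x: float) -> float:
    """Subjective value of an outcome x: gain if x >= 0, loss if x < 0."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * ((-x) ** BETA)

gain, loss = value(100), value(-100)
print(f"value(+100) = {gain:.1f}")   # the 'motivation of gain' B in Figure 2
print(f"value(-100) = {loss:.1f}")   # the 'motivation of loss' A in Figure 2
print(abs(loss) > gain)              # losses loom larger: True
```

Running this for equal amounts X = Y = 100 shows |A| > B, matching the figure: the motivation generated by a prospective loss exceeds that of an equal prospective gain.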

The author is tempted to undertake a detailed literature survey of the influence of human factors on the security of information systems in order to gain an insight into the entire scenario. However, in view of the limited scope of the present article, the author restricts himself to presenting only a summary of the important available literature on users' behaviour vis-à-vis information system security (Table 1), leaving it to the readers to probe the matter further.

Table 1: Summary of Research on Users’ Behaviour and Information System Security

1. Users' Behaviour:

   a) There is a relation between end-user security behaviour and a combination of situational factors (Stanton et al., 2004).

   b) The factors believed to influence security-related behaviour include users' perceptions of their own susceptibility and efficiency, and the possible benefits they are likely to derive from security (Ng, Kankanhalli and Xu, 2009).

   c) It is extremely difficult to audit employee behaviour and the reasons thereof, as individuals react differently in each situation, depending upon organizational culture (Vroom and von Solms, 2004).

2. Familiarity with information security aspects:

   a) Shared knowledge about information security is important as it contributes towards bringing about a change in individual behaviour and eventually in an organization's behaviour (Vroom and von Solms, 2004).

   b) Three factors have been identified as barriers to information security awareness: general security awareness, users' computer skills, and organizational budgets (Shaw et al., 2009).

3. Awareness: Three levels of security awareness have been identified among users: perception of potential security risks; comprehension, the know-how to perceive and interpret risks; and projection, the ability to predict future situational events (Shaw et al., 2009).

4. Organizational Environment: In a positive work environment, users understand their role in the complex information security system, which helps them improve their behaviour. An organization with a positive climate may influence the behaviour and commitment of users (Shaw et al., 2009).

5. Work Conditions: Unsatisfactory and negative work conditions can contribute negatively to work. Tiredness and fatigue may also lead users to fail to follow policies and procedures, thereby disregarding information security (Kelloway et al., 2010).


Unfolding Criminology Theories to Understand Users’ Behaviour

The theoretical foundation for several research models designed for studying users' behaviour has been provided by criminology theories. These theories have been categorized according to their focal concepts and aims, as enumerated in Table 2.10 As pointed out in the last column of Table 2, a number of researchers have tried to apply these criminology theories, in isolation or in combination with each other, to the information security system. These theories explain the behaviour of users as perceived by criminologists, most of whom have deep foundations in psychology.

Table 2: Criminology Theories, Concepts and Principles in Information Security (IS) Literature (after Theoharidou et al., 2005)

General Deterrence Theory (GDT) (Blumstein, 1978, 1986): A person commits a crime if the expected benefits outweigh the cost of sanctions. Related IS security research: Goodhue and Straub, 1991; Straub and Welke, 1998.

Social Bond Theory (Hirschi, 1969): A person commits a crime if the social bonds of attachment, involvement and belief are weak. Related IS security research: Lee and Lee, 2002; Agnew, 1995; Hollinger, 1986; Lee et al., 2003.

Social Learning Theory (Sutherland, 1924, quoted in Akers, 2011), focal concept Motive: A person commits a crime if he or she associates with delinquent peers, who transmit delinquent ideas, reinforce delinquency, and function as delinquent role models. Related IS security research: Lee and Lee, 2002; Skinner and Fream, 1997; Hollinger, 1993.

Theory of Planned Behaviour (TPB) (Ajzen and Fishbein, 2000): A person's intention towards crime is a key factor in predicting his or her behaviour. Intentions are shaped by attitude, subjective norms and perceived behavioural control. Related IS security research: Lee and Lee, 2002; Leach, 2003.

Situational Crime Prevention (SCP), focal concept Opportunity: A crime occurs when there is both motive and opportunity; crime is reduced when no opportunities exist. Related IS security research: Willison, 2000.

Models of User Behaviour

Researchers have drawn on theories from general criminology, together with literature on the interaction between humans and technology in information security systems, to develop theoretical and research models for understanding users' behaviour. Figure 3 depicts an integrated model of this behaviour derived and designed by the present author from two research studies.11

Figure 3: An integrated model for user behaviour in information system security, developed by integrating the models proposed by Luchiano et al., 2010 and Herath and Rao, 2009.

The findings of these studies can be summarized as follows12:

  1. A constructive organizational environment has a positive impact on the responsible behaviour of users towards information security.
  2. Stressful work conditions would negatively impact the responsible behaviour of users towards information security.
  3. The adoption of responsible behaviour by users in terms of adhering to information security policies and procedures would negatively impact the vulnerabilities of users to information security breaches.
  4. Familiarity with information security policies and procedures among users would:

    a) Positively impact their responsible behaviour towards information security;

    b) Negatively impact their vulnerability to information security breaches; and

    c) Positively impact their awareness of potential information security threats.

  5. Awareness of potential information security threats among users would:

    a) Positively impact their responsible behaviour towards information security; and

    b) Negatively impact their vulnerability to information security breaches.

  6. Some of the key elements that play a vital role in users’ behaviour include gender, work experience, age, and educational qualifications.
  7. The intentions of users to follow security policies are determined by both internal and external motivating factors.
  8. The security behaviour of users is positively affected by both standard prescriptive beliefs as well as peer influences.
  9. The security-related behavioural intentions of users are positively impacted if detection is certain.
  10. The security-related behavioural intentions of users are negatively impacted if the prospective penalty for neglecting security is expected to be severe.
  11. The perceptions of users regarding compliance by others with security behaviour also play an important role in determining their own behaviour towards security.
  12. The vulnerability of users to any breaches in information security is inversely related to their compliance with security procedures. This implies that the stronger the users' intention to adhere to security behaviour, the lower would be their vulnerability to any security failures.
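The direction of influence in the findings above can be mocked up as a toy linear scoring model. The signs of the weights follow the findings; the numerical weights themselves are arbitrary illustrations chosen for this sketch and were not estimated in the underlying studies.

```python
# Toy sketch of the integrated model's directional claims: responsible
# behaviour rises with a constructive environment, familiarity and
# awareness, and falls with work stress; vulnerability moves inversely
# with responsible behaviour. Weights only encode the signs.

def responsible_behaviour(environment, stress, familiarity, awareness):
    """All inputs on a 0..1 scale; returns an unbounded behaviour score."""
    return (+1.0 * environment     # finding 1: positive impact
            - 1.0 * stress         # finding 2: negative impact
            + 1.0 * familiarity    # finding 4a: positive impact
            + 1.0 * awareness)     # finding 5a: positive impact

def vulnerability(behaviour_score):
    """Findings 3 and 12: vulnerability is inversely related to compliance."""
    return -behaviour_score

baseline = responsible_behaviour(0.5, 0.5, 0.2, 0.2)
trained  = responsible_behaviour(0.5, 0.5, 0.8, 0.8)   # after awareness training
print(trained > baseline)                                # True
print(vulnerability(trained) < vulnerability(baseline))  # True
```

Even this crude sketch captures the practical point of the model: interventions that raise familiarity and awareness (training, policy communication) should lower vulnerability even when the work environment and stress levels are unchanged.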

While the element of technology remains constant during human–computer interaction, it is the human element that remains highly dynamic, mainly due to the complexity of human behaviour. Suggestions for the relevant implications of human behavioural science in improving cyber security are as follows:13

  1. The implication of the ‘Identifiable Victim Effect’ (the tendency of an individual to offer greater help when an identifiable person is observed in hardship, as compared to a vaguely defined group in the same need) may lead a user to choose a stronger security system when possible negative outcomes are real and personal, rather than abstract.14
  2. The ‘Elaboration Likelihood Model’ describes how human attitudes form and persist. There are two main routes to attitude change, viz., the central route (the logical, conscious and thoughtful route, resulting in a permanent change in attitude) and the peripheral route (that is, when people do not pay attention to persuasive arguments, and are instead influenced by superficial characteristics, and the change in their attitude is consequently temporary). Efforts should thus be made to motivate users to take the central route while receiving cyber security training and education. Fear can also be used to compel users to pay attention to security, but this is effective only when fear levels are moderate and, simultaneously, a solution is offered to the fear-inducing situation. The inducement of strong fear, on the other hand, would lead to ‘fight or flight’ reactions from users.15
  3. Cognitive Dissonance (a feeling of discomfort due to conflicting thoughts) acts as a powerful motivator, making people react in the following three ways:

    a) Change in their behaviour;

    b) Justification of their behaviour through a rejection of any conflicting attitude; or

    c) Addition of new attitudes for justifying their behaviour.

  4. Cognitive dissonance is hence used to persuade users to change their attitude towards cyber security and eventually adopt a behaviour that motivates them to choose better security.16
  5. Social Cognitive Theory stipulates that learning among people is based on two key elements: watching others, and the effect of their own personality. Thus, by incorporating the demographic elements of age, gender and ethnicity, one could initiate a cyber awareness campaign that would help reduce cyber risk by enabling users to identify with their recognisable peers and thereby imitate the secure behaviour of the latter.17
  6. Status Quo Bias (the tendency of a person not to change an established behaviour without being offered a compelling incentive to do so) necessitates the introduction of strong incentives for users to change their cyber behaviour. This can be exploited positively by information system designers.18
  7. Prospect Theory helps us present user choices about cyber security by framing them as gains rather than losses.19
  8. Another factor to be considered is Optimism Bias, which leads users to underestimate security risk, thereby making them perceive that they are immune to cyber-attacks. To enable users to overcome this attitude, the security system could be designed to incorporate the real experiences of users, effectively conveying the impact of the risk.20
  9. Control Bias, or the belief among users that they have strong control over, or the capacity to determine, outcomes, hinders people from following security measures. This bias should be kept in mind while designing systems and training programmes for users.21
  10. Confirmation Bias (looking for evidence to confirm a position) closes the users’ minds to new ideas. To overcome this bias, the system must provide evidence to change their current beliefs (for example, regular security digests may be e-mailed to them).22
  11. While trying to improve the cyber behaviour of users, the Endowment Effect, wherein people place a higher value on objects they own as compared to objects they do not own, could be used. Users may thus be persuaded to pay more for security when it allows them to safely keep something that they already have (for example, the privacy of data).23
  12. It is amply clear from the foregoing discussion that human–computer interaction is not a simple process but a complex and dynamic mechanism, characterized by the interplay of a large number of technological, human and environmental factors in space and time. Being human, users do not have the biological capacity to handle these numerous factors simultaneously in space and time, which is why they behave the way they do, unintentionally or accidentally (and sometimes maliciously) compromising information system security. In this way, users themselves become the enemy of information security, and are therefore categorized as the weakest link in the information security chain.


The most important and dynamic aspect of the interaction between humans and computers is the behaviour of the user, which varies in space and time. It is also influenced by psychological, intrinsic and extrinsic factors, which in turn are governed by peer behaviour, normative beliefs and social pressures, among other things. Therefore, the behaviour of the user is not solely dependent on the user himself; in other words, he might have little control over his own behaviour while interacting with the security of information systems. The integrated model discussed in this article may thus be used to devise a strategy for improving users’ behaviour by strengthening the factors that have a positive impact on the security of the information system and reducing or even eliminating the factors that have a negative impact. However, this is a complex task and should not be considered as simple as, for instance, selling a non-durable consumer item like soap!


1 E.M. Luciano, M.A. Mahmood and A.C.G. Maçada, ‘The Influence of Human Factors on Vulnerability to Information Security Breaches’, Proceedings of the Sixteenth Americas Conference on Information Systems, Lima, Peru, August 2010, p. 12. Accessed on 29 June 2014.

2 A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999, pp. 40-46.

3A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999.

4 C. Vroom and R. Von Solms, ‘Towards Information Security Behavioural Compliance’, Computers & Security, vol. 23, no. 3, 2004, pp. 191-98. Accessed on 2 July 2014.

5 J.M. Stanton et al., ‘Analysis of end user security behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp. 124-33.

6J.M. Stanton, et al. ‘Analysis of end user security behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp.124-133.

7 R. West, ‘The psychology of security’, Communications of the ACM, vol. 51, no. 4, 2008, pp. 34-40.

8 R. West, ‘The psychology of security’, Communications of the ACM, vol. 51, no. 4, 2008; R. West et al., ‘The Weakest Link: A Psychological Perspective on Why’, Social and Human Elements of Information Security: Emerging Trends, 2009.

9 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278. Accessed on 29 June 2014.

10M. Theoharidou et al., ‘The insider threat to information systems and the effectiveness of ISO17799’, Computers & Security, vol. 24, no. 6, 2005, pp. 472-84.

11 D.L. Goodhue and D.W. Straub, ‘Security Concerns of System Users: A Study of Perceptions of the Adequacy of Security’, Information & Management, vol. 20, no. 1, pp. 13-27; T. Herath and R.H. Rao, ‘Protection Motivation and Deterrence: A Framework for Security Policy Compliance in Organisations’, European Journal of Information Systems, vol. 18, no. 2, 2009, pp. 106-25.

12 Ibid.

13 S.L. Pfleeger and D.D. Caputo, ‘Leveraging Behavioral Science to Mitigate Cyber Security Risk’, Computers & Security, vol. 31, no. 4, 2012, pp. 597-611. Accessed on 1 July 2014.

14 K. Jenni and G. Loewenstein, ‘Explaining the Identifiable Victim Effect’, Journal of Risk and Uncertainty, vol. 14, no. 3, 1997, pp. 235-57. Accessed on 1 July 2014.

15 R.E. Petty and J.T. Cacioppo, ‘The Elaboration Likelihood Model of Persuasion’. Accessed on 1 July 2014.


17 A. Bandura, ‘Human Agency in Social Cognitive Theory’, American Psychologist, vol. 44, no. 9, 1989, p. 1175.

18 W. Samuelson and R. Zeckhauser, ‘Status Quo Bias in Decision Making’. Accessed on 1 July 2014.

19 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278. Accessed on 29 June 2014.

20 D. Dunning, C. Heath and J.M. Suls, ‘Flawed Self-Assessment’. Accessed on 1 July 2014.

21 J. Baron and J.C. Hershey, ‘Outcome Bias in Decision Evaluation’, Journal of Personality and Social Psychology, vol. 54, no. 4, 1988, p. 569. Accessed on 1 July 2014.

22 M. Lewicka, ‘Confirmation Bias’, Personal Control in Action, Springer, 1998, pp. 233-58. Abstract accessed on 1 July 2014.

23 R. Thaler, ‘The Psychology of Choice and the Assumptions of Economics’, Laboratory Experimentation in Economics, p. 99. Accessed on 1 July 2014.

Perspectives in Cyber Security: The Future of Cyber Malware

Posted on

Published in The Indian Journal of Criminology (ISSN 0974-7249), Vol. 41 (1) & (2), Jan. & July 2013, pp. 210-227

Sandeep Mittal, I.P.S.,*



The term ‘malware’ has become a fashionable word to throw around nowadays. However, it should not be thought of as something very sophisticated only. In this paper, we give a brief definition and description of the term ‘malware’ and related concepts, including the evolutionary and historical timeline. The future of malware is then dealt with from four perspectives, which may be dependent upon one another at least at some point in space and time. The first is ‘malware design’, as malware experts are using increasingly complex designs, taking malware to the scale of a ‘war-grade weapon’ in the recent past. The second is ‘malware deployment’, which depends on the intention and motivation of the attacker. The third is the ‘terrain’ of the cyber domain where the malware operates or is deployed. The fourth is the ‘technologies’ used to detect these malware; as malware become multiplatform and complex, the technologies have to keep pace with their evolution. However, it is made clear at the outset that this paper deals only with the basics of the issues raised, and technical details have been kept to a minimum, being beyond the scope of the present work.

The Malware Understood

‘Malware’ is a unitary term for the different types of software codes which are called ‘virus’, ‘Trojan horse’ and ‘worm’ at different stages of their evolution. It could be as simple in its design as a ‘virus’ or as extremely complex as some of the ‘worms’ discovered recently. It would be useful to understand these terms clearly before we venture into malware proper. A ‘virus’ is a self-replicating program whose only purpose is to propagate itself by modifying another program to include itself, through an act of the user of the system in which it exists (modified after Skardhamar, 1996). The ‘Trojan horse’ (named after the wooden horse the ancient Greek army used to conquer the city of Troy) is a simple program that purports to do one thing but actually does something else entirely, often very destructive. A Trojan’s spreading potential is not very big as, once they are run, they cease to be Trojans; but its simplicity can be extremely deceptive in terms of damage. “A ‘worm’ is a type of non-parasitic code (unlike a virus) that purposely replicates a possibly evolved copy of itself by exploiting security vulnerabilities on systems. The vulnerabilities that a worm exploits need not be exclusively software faults; it may exploit configuration errors or operator errors. Unlike viruses, worms do not replicate by attaching themselves to a host executable or by modifying the system environment to execute the malicious code” (Symantec, 2014). In the present scenario, malicious researchers are concentrating on worms, and the term ‘worm’ has become synonymous with ‘malware’; the two are used interchangeably at places in this paper. A more crisp and modern definition of a worm is “an independently replicating and autonomous infection agent, capable of seeking out new host systems and infecting them via a network” (Nazario, 2004).
As most of the malware encountered in the recent past belong to the category of worms, let us take a closer look at the basic components of worms. A worm must have at least one of the following five components, the attack component being the minimum set of one (Nazario, 2004):

  1.  Reconnaissance Component hunts down other network nodes to infect. This component is responsible for identifying hosts on the network that are capable of being compromised by the worm’s known methods.
  2.  Attack Component launches an attack against a target. The attacks can be the age-old buffer or heap overflows, string formatting attacks, Unicode misinterpretations and misconfigurations.
  3.  Communication Component gives the worm the interface to send messages between nodes or to some other central location.
  4.  Command Component provides the interface for the worm node to issue and act on commands.
  5.  Intelligence Component provides the intelligence required to contact the various worm nodes.

An assembly of the components of a worm is depicted in following figure (Nazario, 2004).
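From an analyst’s perspective, this five-component view can be expressed as a simple capability checklist against which an observed malware sample is profiled. The sketch below (all names are assumptions for illustration) models only the taxonomy itself, not any worm behaviour:

```python
# A minimal sketch of Nazario's five-component worm taxonomy, expressed as an
# analyst's capability checklist for a malware sample. Purely descriptive:
# it records which components a sample exhibits and applies the rule from the
# text that the attack component is the minimum set of one.
from dataclasses import dataclass, fields

@dataclass
class WormProfile:
    reconnaissance: bool = False  # hunts for new hosts to infect
    attack: bool = False          # launches exploits against targets
    communication: bool = False   # passes messages between nodes
    command: bool = False         # issues and acts on commands
    intelligence: bool = False    # tracks the location of other worm nodes

    def is_worm(self) -> bool:
        # Per the text, a worm must have at least the attack component.
        return self.attack

    def components(self) -> list:
        return [f.name for f in fields(self) if getattr(self, f.name)]

sample = WormProfile(reconnaissance=True, attack=True, communication=True)
print(sample.is_worm(), sample.components())
```

Such a profile can be filled in during static or behavioural analysis and compared across samples to gauge a worm’s sophistication.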


Many of the characteristics of a worm can be used to defeat it, for example, its predictable behaviour and characteristic signatures, in contrast to manual attacks, where tactics change every now and then. However, worms continue to constitute the majority of malware due to their ease of continuous and exponential propagation, their capacity to penetrate even difficult networks, their persistence in infecting systems despite patching and sanitization, and their broad coverage of networks in space and time.

Hence, in view of the foregoing discussion, future malware will continue to be worm-based.

The History and Evolution of Malware

The future of malware cannot be predicted unless we examine the history of malware and understand its evolution over time.

The historical timeline is depicted in the following table (Lavasoft, 2013) in a generalist manner:

HISTORY OF MALWARE (modified after Lavasoft, 2013)

| S.No. | Year | Name of Malware | Details of Malware |
|---|---|---|---|
| 1 | 1971 | Creeper | First ever computer virus; spread on ARPANET. |
| 2 | 1981 | Elk Cloner | First known microcomputer virus; attached itself to the Apple DOS 3.3 operating system and spread by floppy disk. |
| 3 | 1986 | Brain | First computer virus for MS-DOS; infected the boot sector of storage media formatted with the FAT file system. Written to demonstrate the insecurity of computers. |
| 4 | 1987 | Stoned | A boot sector computer virus. |
| 5 | 1988 | Morris Worm | Infected around 6,000 university, military and NASA computers. Morris, a researcher, released the worm by accident and was the first person to be convicted for such a crime. |
| 6 | 1995 | Concept | First macro virus; hid itself in a Word document and spread by integrating itself into more files each time the host program was run. |
| 7 | 1999 | Happy99, Melissa, Kak | Advanced malware that spread quickly through Microsoft environments. |
| 8 | 2000 | ILOVEYOU | Computer worm that attacked millions of Windows PCs through email; an estimated $15 billion was spent to clean up the mess. |
| 9 | 2001 | Code Red | Worm that attacked computers running the Microsoft IIS server; chose targets pseudo-randomly on the same or different subnets as the infected machines, in accordance with a fixed probability distribution. |
| 10 | 2001 | Nimda | Computer worm and file infector; utilized several propagation techniques and thus became the most widespread worm within 22 minutes. |
| 11 | 2003 | SQL Slammer | Computer worm that caused denial of service on Internet hosts. |
| 12 | 2004 | Cabir | First mobile phone virus; attacked the Symbian OS and spread through Bluetooth. |
| 13 | 2007 | Storm Botnet | A remote-controlled botnet linked by the Storm worm; spread through email and infected 50 million computers. |
| 14 | 2009 | Koobface | Multiplatform worm that attacked users of popular social networking websites; designed to infect Windows, Mac OS and Linux platforms. |
| 15 | 2010 | Geinimi | First Android malware displaying botnet capability. |


An era of weaponization of software code was heralded in 2010 with the discovery of the ‘Stuxnet’ malware, followed by ‘Duqu’ and ‘Flame’, which are distinctively different in stealth, design and complexity, and were deployed for fully targeted attacks. “Stuxnet targeted the Iranian nuclear facility at Natanz. It used four zero-day vulnerabilities and employed Siemens’ default passwords to access the Windows OS that ran the WinCC and PCS 7 programs. It would hunt down frequency-converter drives made by Fararo Paya in Iran and Vacon in Finland. These drives were used to power the centrifuges used in the concentration of the uranium-235 isotope. Stuxnet altered the frequency of the electrical current to the drives, causing them to switch between high and low speeds for which they were not designed. This switching caused the centrifuges to fail at a higher than normal rate” (Farwell & Rohozinski, 2011). In 2011, another worm, ‘Duqu’, which contained components almost identical to Stuxnet, was discovered. However, Duqu was not self-replicating and was devoid of a payload; it seemed designed to conduct reconnaissance on an unknown industrial control system (Zetter, 2011). ‘Flame’ was another Stuxnet-type malware, designed primarily to spy on infected computers, and was detected on the computers of the Iranian Oil Ministry (Zetter, 2012).

Thus, it is seen from the foregoing discussion that malware has evolved over a period of time from simplistic experimental code to highly complex and complicated code synonymous with Internet-wide devastation.

The Future of Malware Design

The ‘Samhain Project’ (Zalewski, 2000), which intended to design an intelligent malware, listed seven requirements and guidelines for an intelligent worm:

  1. Portability across hardware architectures and operating systems, to achieve the largest possible dispersal.
  2. Invisibility from detection.
  3. Independence from manual intervention. The worm must not only spread automatically but must be adaptable to its network.
  4. The worm should be able to learn new techniques. Its ‘database of exploits’ should itself be updatable.
  5. The integrity of the worm host must be preserved. The worm’s executable instances should avoid analysis by outsiders.
  6. Avoidance of static signatures. By using polymorphism, the malware can evade detection methods that rely on signature-based analysis.
  7. Overall worm-net usability. The network created by the worms should be able to be focused to achieve a specific task.

The researchers (Zalewski, 2000) have discussed various options for the implementation and assembly of the ‘Samhain’ worm to form the worm system, the details of which are beyond the scope of this essay. However, it would be pertinent to mention the flaws in the ‘Samhain’ worm architecture which can fail the worm network.

Firstly, the ability to update the database of known attack methods requires a distribution system, which would be either central or hierarchical; an attack at this point may disrupt the growth and capabilities of the worm. Secondly, the mechanism used to prevent repeated worm installation on the same host is a serious flaw. The worm executable, during its initialization, looks for other instances of itself. An attack on the worm system would require forgery of this signal to prevent the installation of the worm executable; in doing so, the worm is not installed on the host and its growth is stopped at that point.

In an earlier part of this paper, we identified five components of a functional worm. However, there are several problems with the design and implementation of current worms (Nazario et al., 2001): the signatures of remote attacks and reconnaissance traffic can be used to identify the source nodes; as the traffic associated with a worm grows exponentially, its life span is reduced, and the growing traffic raises the worm’s profile and thus its detection; there is no direction of spread, making directed attacks against a specific target a matter of chance; and the utilization of a central database of affected hosts makes the worm susceptible to exploitation (Nazario et al., 2001). Nazario and his associates used these components, and the problems associated with their implementation, to propose various adaptations for future worms:

  1.  Instead of actively scanning targets for exploitation, the worm could simply observe network traffic to discover the hosts, remote operating systems and applications in use, and then launch an attack.
  2.  Instead of a central topology, use ‘guerilla’ and ‘directed tree’ topologies to achieve specificity of target attack.
  3.  Instead of a central communication topology, use a system where each node stores messages and forwards them to the appropriate node one hop away, to cut down the generation of traffic.
  4.  Instead of encrypted communication methods, use steganography, e.g., hiding data in media files.
  5.  Attack new targets, e.g., appliances with embedded technologies.
  6.  Instead of static signatures, use polymorphic payloads. Using a modular worm behaviour, where a single basic component is skipped in the design, may give the worm added evasion capability.
  7.  Design to support dynamic updates to the system.

Many of these adaptations have been observed in ‘stuxnet’, ‘duqu’ and ‘flame’ malwares. Many are yet to be seen or discovered by the world.

The Future of Malware Deployment

The deployment of a malware by an attacker depends upon the intention and motivation of the attacker, which in turn define the sophistication of the attack and the typical target groups, as summarized in the following figure (Zoller, 2011):

figure b

Zoller further classified attacks based on the attacker deploying them: opportunists, targeting opportunists, professionals and state-funded attackers. The script kiddies would continue to use their unsophisticated attacks in the ‘mass malware market’. The exploits of targeting opportunists and professionals have resulted in the emergence of a ‘commercial vulnerability market’. However, the cause of worry is future malware like Stuxnet, Flame and Duqu, which are considered acts of nation-states. Take a look at the latest malware to join the list: ‘The Mask’ (or ‘Careto’), discovered recently (Kaspersky, 2014). The Mask is learnt to have targeted, so far, 380 unique victims (e.g., government, diplomatic institutions, the energy, oil and gas sectors, research institutes, private commercial establishments and activists) spread over 31 countries, and is learnt to have been in active cyber espionage since 2007. The Mask is a special malware in view of the complexity of the tool set used by the attackers. This includes an extremely sophisticated malware, a rootkit, a bootkit, 32- and 64-bit Windows versions, Mac OS X and Linux versions, and possibly versions for Android and iPhone/iPad (Apple iOS). When active in a victim system, The Mask can intercept network traffic, keystrokes, Skype conversations and PGP keys, analyse Wi-Fi traffic, fetch all information from Nokia devices, capture screens and monitor all file operations. The malware collects a large set of data from infected systems, e.g., encryption keys, VPN configurations and SSH keys. The time, money and expertise required to design and deploy such an extremely sophisticated malware leave no doubt that it is the handiwork of some nation-state.

The complete dependence of a nation’s economy and critical infrastructure on cyberspace presents an opportunity for nation-states to deploy malware to gain information dominance in the cyber domain, transmitting information while denying or restricting it to the ‘enemy state’. Further, the critical infrastructure of a country can be crippled through the deployment of stealthy and well-crafted tools that exploit a zero-day vulnerability in a matter of hours, if not minutes (Mittal, 2014). The concept of war manoeuvring has been compared with cyber manoeuvre (Applegate, 2012), where it is realized that blatantly hostile acts in cyber space are characterized by rapidity, anonymity and difficulty of attribution, and are dispersed disproportionately in space and time. Even the territory of the enemy, or of one of his allies or adversaries, can be used to deploy such malware attacks.

The Future of Malware Terrain

The author has a strong feeling that the future of malware is not so much about the design and sophistication in the engineering of malware as about how and where the potential victim would be attacked, thus making the terrain of malware deployment a key factor in future attacks. Low-level attacks would continue to exploit small and old vulnerabilities to their advantage. The social networking sites would be the most sought-after terrain, in the foreseeable future, for the deployment of malware (Athanasopoulos, 2008; Luo, 2009; Felt, 2011; Abraham, 2010; Irani, 2011). Recently, a malware was deployed to target the top executives of a major corporation through their spouses. The presumption was that there would be at least a few non-tech-savvy spouses using a poorly secured home PC sharing the connection, and this would provide the backdoor needed to compromise the executive’s computer and gain access to the systems of the target companies (Vance, 2011). Platform-agnostic, web-based malware represents a new frontier. As developers re-engineer websites and applications to work on a variety of devices, malware would target the commonalities, like HTML, XML and JPEGs, that run on any device. The pace with which smart phones are becoming e-wallets, tools of m-commerce, and repositories of flight e-boarding passes and rail tickets would soon make the smart phone a favourable terrain for the deployment of malware.

But the worst is yet to be discussed. Consider the number of embedded devices available all around us: the microwaves, the refrigerators, the washing machines, the Internet cameras, the automated heating and cooling systems, the cars, the routers, the environment monitors, the animal/cattle tags and so on. Soon, these connected devices would be part of our lives, and thus comes the concept of the ‘Internet of Things’, subsequently the ‘Internet of Everything’ and finally the malicious ‘Botnet of Things’.
Having chips embedded in our appliances makes our life simple, but imagine what would happen when the number of internet-connected devices reaches 50 billion by the year 2020 (Kumar, 2014). The main problem with these things is that, unlike computers, security patches are not applied to them. Embedded-device security is a matter of grave concern (John & Thompson, 2012; Stantucci, 2011). I have never seen a company or a user applying security patches to printers, modems, routers, ovens, cameras, etc., as it requires extra time and money. Most embedded chips are old versions, manufactured even two to three years before the device itself, and are therefore susceptible to malware attacks even by script kiddies. The ‘Internet of Things’ would be the favourite terrain for the deployment of malware in future (Stammberger, 2009). As a number of such nano and micro devices are likely to be implanted in the human body in future, malware could be deployed even to commit murder, which at present is committed through conventional means. These ‘Implantable Medical Devices’ (IMDs) often work on software-defined radios so that they can operate on multiple frequencies and use various processors (see figure below, Leavitt, 2010).

figure c

Mostly, these devices have no direct connectivity with the Internet, but may have connectivity with a bedside monitor which, in turn, may be connected to the Internet, thus enabling hackers to deploy malware to exploit the communication channel between the device and external control units. Adding encryption capabilities to IMDs would add complexity and require more battery life and computing power to handle the algorithms (Leavitt, 2010). It would be a great challenge in future to build defences against such vulnerabilities by designing zero-power defence mechanisms for IMDs (Ransford, 2014).

The Future of Malware Detection

Based on the discussion in earlier parts of this paper regarding the components of worms and the future considerations for worms, we now try to understand the methods of detecting worms. The aim of our detection strategies is to detect almost any type of worm with little effort, for which one needs to focus on the common features of worms. The three methods of worm detection are traffic analysis, the use of honey pots and dark network monitors, and the employment of signature-based detection systems; together they form the core of detection strategies for detecting both hackers and worms. It is to be kept in mind that no single method works for all worms; however, a combination of methods would produce near-complete detection. We briefly discuss the three methods of detection in the following part of this essay (modified after Nazario, 2004).

  1.  Traffic Analysis – This is the analysis of a network’s communications and their inherent patterns. One needs to monitor mainly three major features to detect worms, viz., the volume of traffic at a network connection point such as a router or firewall; the number and type of scans occurring, as most worms use active scans to identify new targets of attack; and changes in host traffic patterns when a host is compromised. This method is a relatively simple yet powerful tool for worm detection. It uses the general properties seen in most worms, like active reconnaissance and exponential growth. Even worms using a variety of dynamic methods or polymorphic vectors can be detected, in contrast to signature detection methods. However, this method may have difficulty in detecting ‘slow worms’ and worms using passive mechanisms for identifying and attacking targets. These weaknesses would not, however, prevent the use of traffic analysis in worm detection in the foreseeable future. Furthermore, the data generated by this analysis may also be useful for finding other network anomalies.
  2.  Honey Pots and Dark (Black Hole) Network Monitors – A honey pot could be understood as a functional system that responds to malicious probes in a manner that elicits the desired response from the attacker. It could be designed using an entire system, a single service, or even a virtual host. ‘Dark network monitoring’ watches unused network segments for malicious traffic; these could be local unused subnets or globally unused networks. Together, these tools can be used in the analysis of worms. However, placing honey pots on a production network, or using a black-hole monitor on a network where routine traffic is routed as a destination, would introduce a large vulnerability and could be counterproductive. The details of honey pot and black-hole monitor setup and functionality are beyond the scope of this discussion. It would suffice to say that black-hole monitors are a more effective means of monitoring worm behaviour due to their promiscuous nature, and can capture a wealth of data from a significant portion of the Internet. Honey pots, in contrast, are best used at a time of high worm activity, when a copy of the worm’s executable is needed: a honey pot is quickly crafted and exposed to the network, and upon compromise, a set of worm binaries is obtained for study (Honeynet Project, 2002).
  3.  Signature-Based Detection – A dictionary of known fingerprints is run across a set of input. The dictionary typically contains known bad signatures such as malicious network payloads or the file contents of a worm executable. The three types of signature analysis in worm detection are network payload signatures, log-file analysis and file signatures. The most important weakness of signature-based detection is that it is reactionary and rarely detects a new worm: it can detect only known worms, and cannot detect polymorphic or dynamically updatable worms.
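As an illustration of the signature matching just described, the following minimal sketch runs payloads against a dictionary of known-bad byte patterns. The signature names and byte patterns are purely illustrative assumptions, not real worm fingerprints.

```python
# Minimal sketch of signature-based worm detection (illustrative only):
# a dictionary of known-bad byte patterns is run across each payload.
# The signature names and patterns below are assumed examples, not
# real worm fingerprints.

SIGNATURES = {
    "worm.alpha": b"\x90\x90\xeb\x1f",   # hypothetical shellcode prefix
    "worm.beta": b"GET /default.ida?",   # hypothetical exploit request
}

def match_signatures(payload: bytes) -> list:
    """Return the names of all known signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]
```

A payload containing a listed pattern is reported by name, while an unknown or polymorphic payload returns an empty list — which is precisely the reactionary weakness noted above.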

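The scan-counting feature of traffic analysis described in point 1 can be sketched as a simple heuristic: flag any host that contacts an unusually large number of distinct destinations within one observation window. The threshold value here is an assumption; a real deployment would tune it to the network being monitored.

```python
from collections import defaultdict

# Illustrative traffic-analysis heuristic: a host contacting an unusually
# large number of distinct destinations in one observation window is
# flagged as a likely worm scanner. The threshold is an assumed value.

SCAN_THRESHOLD = 100  # distinct destinations per window (assumed)

def find_scanners(flows, threshold=SCAN_THRESHOLD):
    """flows: iterable of (src_ip, dst_ip) pairs seen in one window.
    Returns the set of source addresses exceeding the scan threshold."""
    destinations = defaultdict(set)
    for src, dst in flows:
        destinations[src].add(dst)
    return {src for src, dsts in destinations.items() if len(dsts) >= threshold}
```

Because this looks only at a general property (fan-out of connections) rather than payload content, it catches polymorphic worms that defeat signature matching, but would miss the slow or passive worms mentioned above.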
A mix of all three technologies discussed would form a robust worm-detection system. A detailed view of such a system is well documented by NIST (Scarfone & Mell, 2007).

What is the direction of future research in this field? Of late, researchers have shown keen interest in applying the principles of biological immune systems to computer systems, since both must maintain their stability in an ever-changing environment. Numerous desirable features of Biological Immune Systems (BIS), viz. diversity, self-tolerance, immune memory, distributed computation, self-learning, self-organization, self-adaptation and robustness, have inspired BIS-based Artificial Immune Systems (AIS) for information security (Jin, 2013). These draw on the ‘danger model’ presented by several researchers (Aickelin & Cayzer, 2002; Matzinger, 2002). According to this model, the adaptive immune system does not distinguish self from non-self; rather, an immune response is triggered when danger signals are generated by damaged cells. The cells of the adaptive immune system are incapable of attacking their host. Because the immune response in the danger model is a reaction to stimuli the body considers harmful, and not a reaction to non-self as such, foreign cells and immune cells are allowed to coexist.

The following figure illustrates the main principle of the danger model and its comparison with an information system, as shown in the accompanying table (Jin, 2013).

[Figure: The danger model and its analogy to an information system (Jin, 2013)]

Cells undergoing distress or unnatural death transmit an alarm signal to Antigen Presenting Cells (APCs), thus stimulating the APCs, which in turn stimulate the adaptive immune system’s B- and T-cells into action in accordance with signals 1 and 2. Signal 1 is the binding of an immune cell to an antigenic pattern presented by an APC; signal 2 is either a help signal to activate a B-cell or a co-stimulation signal given by an APC to activate T-cells (Jin, 2013). Various researchers have attempted to apply this danger model to data processing, worm detection and response, computer-network intrusion detection, security monitoring and so on. Multidisciplinary research is required to build a robust, self-healing system of malware detection and defence in the foreseeable future.
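The two-signal logic of the danger model can be caricatured in a few lines: a response fires only when an antigenic pattern match (signal 1) coincides with a danger alarm from a distressed cell (signal 2), while a match without an alarm is tolerated. This is a toy sketch of the concept with assumed names, not any researcher’s actual implementation.

```python
# Toy sketch of the danger model's two-signal logic: an immune response
# is triggered only when an antigenic pattern match (signal 1) coincides
# with a danger alarm from a distressed cell (signal 2). Function and
# return names are illustrative assumptions.

def immune_response(pattern_matched: bool, danger_alarm: bool) -> str:
    if pattern_matched and danger_alarm:
        return "activate"   # both signals present: mount a response
    if pattern_matched:
        return "tolerate"   # signal 1 alone: non-self may coexist
    return "ignore"         # no antigenic match: nothing to do
```

Translated to intrusion detection, this is why a danger-model AIS reacts to evidence of damage (crashes, anomalous resource use) rather than to mere foreignness of code or traffic.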


Malware designs have become extremely complex, evolving over time from innocent ‘internet joy-rides’ to military-grade ‘precision cyber-weapons’. While script-kiddies will continue to exploit even old vulnerabilities spread across multiple platforms, nation-states now look at the cyber domain as a fifth domain of war and will continue to deploy dangerous weaponised malware to inflict harm in the physical world. The ‘things’ of the ‘Internet of Things’ will act as a ‘watering hole’ for attackers, who will deploy malware on insecure, simple embedded chips to gain entry into relatively secure computer systems. ‘Malware as a Service’ (MaaS) will become a reality very soon. Despite all efforts, it seems that malware is here to stay and will continue to be used by the hacker, the curious mind and the warrior of the information age.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


  1. *Abraham, S. and I. Chengalur-Smith. n.d. “An Overview of Social Engineering Malware: Trends, Tactics, and Implications.” Technology in Society 32(3):183–93.
  2. Applegate, S., C. Cossack, R. Ottis, and K. Ziolkowski. n.d. “The Principle of Maneuver in Cyber Operations.” Retrieved March 2015.
  3. Athanasopoulos, E. et al. 2008. “Antisocial Networks: Turning a Social Network into a Botnet” Information Security. Springer.
  4. *Davis, M. 2010. Hacking exposed malware & rootkits : Malware & rootkits security secrets & solutions. New York: McGraw Hill.
  5. Farwell, J. P., & Rohozinski, R. 2011. “Stuxnet and the Future of Cyber War”. Survival, 53(1), 23-40. Retrieved April 2, 2014.
  6. Feder, B. 2008. “A Heart Device Is Found Vulnerable to Hacker Attacks.” New York Times, 12.
  7. *Felt, A., Finifter, M., Chin, E., Hanna, S., & Wagner, D. 2011. “A survey of mobile malware in the wild”. Proceedings of the 1st ACM Workshop on Security and Privacy in Smartphones and Mobile Devices. ACM, 3-14.
  8. Honeynet Project, “Know Your Enemy: Passive Fingerprinting, Identifying Remote Hosts, Without them Knowing”. 2002. Retrieved April 5, 2014, from
  9. *Irani, D., Balduzzi, M., Kirda, E., & Pu, C. 2011. “Reverse social engineering attacks in online social networks”. Detection of Intrusions and Malware, and Vulnerability Assessment (pp. 55-74). Springer.
  10. Jin, X. 2013. “ENSREdm: E-government Network Security Risk Evaluation Method Based on Danger Model”. Research Journal of Applied Sciences, Engineering and Technology, 5(21), 4988-4993. Retrieved from
  11. Unveiling “Careto – The Masked APT”. 2014. Retrieved September 3, 2015, from
  12. Kumar, A. 2014, March. “Internet of Things (IOT): Seven enterprise risks to consider”. Retrieved April 2, 2015, from
  13. History of Malware. (n.d.). Retrieved April 2, 2014, from
  14. *Leavitt, N. 2010. “Researchers fight to keep implanted medical devices safe from hackers”. Computer, 43(8), 11-14.
  15. Luo, W., Liu, J., & Fan, C. 2009. “An analysis of security in social networks”. Dependable, Autonomic and Secure Computing, 2009. DASC’09. Eighth IEEE International Conference On. IEEE, 648-
  16. Matzinger, P. 2002. “The danger model: A renewed sense of self”. Science, 296(5566), 301-305. Retrieved April 5, 2014, from
  17. Mittal, S. 2014. The Threats and Opportunities in Cyber Domain. Essay submitted to Cranfield University.
  18. Nazario, J. 2004. Defense and Detection Strategies against Internet Worms. USA: Artech House.
  19. Nazario, J. 2001. “The Future of Internet Worms”. Retrieved September 3, 2015, from
  20. *Ransford, B., Clark, S., Kune, D., & Burleson, W. 2014. “Design Challenges for Secure Implantable Medical Devices”. Security and Privacy for Implantable Medical Devices, 157-173.
  21. *Santucci, G. 2011. “The Internet of Things: The Way Ahead”. Internet of Things-Global Technological and Societal Trends From Smart Environments and Spaces to Green ICT, 53.
  22. Scarfone, K., & Mell, P. 2007. “Guide to Intrusion Detection and Prevention Systems (IDPS)”. NIST Special Publication 800-94. Retrieved April 5, 2014, from
  23. Skardhamar, R. 1996. Virus Detection And Elimination (UK ed.). Academic Press.
  24. *Stammberger, K. 2009. “Current trends in cyber attacks on mobile and embedded systems”. Embedded Computing Design, 7(5), 8-12.
  25. Symantec. 2014. Worms. Retrieved September 3, 2015, from
  26. Vance, J. 2011. “The Future of Malware”. Network World, (October). Retrieved April 5, 2014, from
  27. Viega, J., & Thompson, H. 2012. “The State of Embedded-Device Security (Spoiler Alert: It’s Bad)”. IEEE Security & Privacy, 10(5), 68-70.
  28. Zalewski, M. 2000. “I Don’t Think I Really Love You, or Writing Internet Worms for Fun and Profit”. Retrieved April 1, 2014, from
  29. Zetter, K. 2012. “’Flame’ spyware infiltrating Iranian computers”. Wired. Retrieved April 1, 2014, from
  30. Zetter, K. 2011. “Son of the Stuxnet in the Wild”. Wired. Retrieved April 1, 2014, from
  31. Zoller, T. 2011. “Musings on Information Security – Luxembourg / A blog by Thierry Zoller.: Attacker Classes and Pyramid (Version 3)”. Retrieved April 1, 2014, from

* Indicates that the Abstract of this reference was read on Google Scholar as these references were not available to Author.

*Shri Sandeep Mittal, I.P.S., joined the I.P.S. in 1995 and has been working as Deputy Inspector General of Police at the LNJN National Institute of Criminology and Forensic Science, Ministry of Home Affairs, Government of India, Delhi, since 2012. He has served in various communally sensitive districts in Tamil Nadu. He specializes in cyber security and was instrumental in neutralizing a number of online drug-trafficking syndicates globally. He is a Life Member of USI, an Associate Member of IDSA and a Life Member of the Indian Society of Criminology. He is a Chevening Cyber Policy Scholar sponsored by the Foreign & Commonwealth Office, United Kingdom.