Reputational Risk, Main Risk Associated with Online Social Media

IJCC, Volume XXXIV, No. 2, July-Dec. 2015, ISSN 0970-4345

Sandeep Mittal, I.P.S.,*




Social media is undoubtedly a revolution in the business arena, giving organizations the power to connect with their consumers directly. However, as the saying goes, nothing comes without a cost, and social media is no exception. This article examines the risks and issues related to social media at a time when the world is emerging as a single market. Social networking and online communication are no longer just a fashion but an essential feature of organizations in every industry. Unfortunately, inappropriate use of this medium has exposed organizations to growing reputational risks that threaten their very survival in the long run, making the management of these risks a necessity.

This article attempts to explore the various risks associated with social media. Its main aim is to focus on reputational risk in particular and to evaluate its intensity from the perspectives of an organization's public relations and security staff. The article first explains the concept of social media, then identifies the various social media risks, and analyses reputational risk from the perspectives of public relations and organizational security staff. Based on this analysis, it offers recommendations to help contemporary organizations overcome such risks and thus enhance their effectiveness and efficiency to gain competitive advantage in the long run.

Keywords: Reputational Risk, Online Social Media, OSM Security, OSM Risk, Organizational Reputation, Cyber Security, Information Assurance, Cyber Defence, Online Communication.


With changing times, the concept of socializing has been transforming, and globalization and digitalization are to a large extent responsible for this change. With the internet, it is possible to stay connected with people located in various regions of the world, and social media is one such medium of socializing. Today, online social media services are among the most vibrant tools adopted not only by individuals but also by corporate and government organizations (Picazo-Vela et al., 2012). Corporates, in fact, have adopted social media extensively, as it is one of the cheapest ways of communicating with the masses. The importance of social media can be understood from the fact that at present there are more than 100 million highly active blogs connecting people from across the world (Kietzmann et al., 2010). Further, there has been a surge in membership of websites like Facebook and Twitter, with over 800 million active users on Facebook in 2012 and 300 million users on Twitter (Picazo-Vela et al., 2012). In spite of being a very powerful mode of communication, social media is subject to a large number of risks.

Organizations do not operate in a vacuum; the management of reputation is therefore crucial for them, as it affects their markets as well as the overall environment. Organizational reputation not only impacts existing relations but also affects future courses of action (McDonnell and King, 2013). In this article, an attempt is made to understand the various reputational risks associated with social media that affect an organization's working, and some ways to overcome them are suggested.

Concept of Social Media

The foundations of social media were laid by the emergence of Web 2.0 (Kaplan and Haenlein, 2010). It is this technological development that allows social media to be accessed at such a wide scale on devices such as cell phones and tablets, in addition to personal computers and laptops. Social media is gaining importance in the corporate world as decision makers and consultants explore its various aspects to exploit its potential optimally (Kaplan and Haenlein, 2010). Social media is an online communication system through which information is generated, initiated, distributed and used by a set of consumers who aim to inform themselves about various aspects of a product, service, brand, problem or personality (Mangold and Faulds, 2009). It is also known as consumer-generated media. In simple terms, it can be described as an internet-based interactive platform for creating and sustaining relationships.

Social media can be categorized into collaborative projects, blogs, content communities, social networking sites, virtual game worlds, and virtual social worlds (Kaplan and Haenlein, 2010). Examples of the various communication systems under social media are provided in Table 1 for ready reference.

Organizations have realized the importance of social media and have been using it along with other integrated marketing communication tools to converse with target audiences effectively and efficiently (Michaelidou et al., 2011). This is mainly because modern-day consumers are shifting from traditional promotional sources to such modernized sources. Social media has a very strong hold on, and greatly influences, consumer behavior. Among these platforms, Twitter has emerged as one of the most powerful social media tools: approximately 145 million users communicate by exchanging around 90 million 'tweets' per day, of 140 characters or less (Kietzmann et al., 2010). Another example is YouTube, where videos can go viral within seconds and a single video can attract more than 9.5 million views (Kietzmann et al., 2010).

Table 1: Example of Social Media Types

Social Media Type: Examples
Social networking websites: MySpace, Facebook, Faceparty, Twitter
Innovative sharing websites: video sharing (YouTube), music sharing, photo sharing (Flickr), content sharing, general intellectual property sharing (Creative Commons)
User-sponsored blogs: The Unofficial Apple Weblog
Company-sponsored websites/blogs: P&G's Vocalpoint
Company-sponsored cause/help sites: Dove's Campaign for Real Beauty
Invitation-only social networks
Business networking sites: LinkedIn
Collaborative websites: Wikipedia
Virtual worlds: Second Life
Commerce communities: eBay, Craigslist, iStockphoto
Podcasts: For Immediate Release: The Hobson and Holtz Report
News delivery sites: Current TV
Educational materials sharing: MIT OpenCourseWare, MERLOT
Open source software communities: Mozilla
Social bookmarking sites that let users recommend online news stories, music and videos: Digg, Newsvine, Mixx it, Reddit

Source: Mangold and Faulds, 2009.


Risks Associated with Social Media

Before discussing the various risks associated with social media, it is essential to understand the various risks faced by an organization while using the internet. This can be depicted with the help of a diagram provided as Figure 1.

Figure 1: Internet Related Risks for Organizations
Source: Lichtenstein and Swatman, 1997

In Figure 1, 'other internet participants' refers to other members of the internet community. These risks are quite general; some of them, such as those associated with corrupted software, are experienced by organizations even when they are not connected to the internet (Lichtenstein and Swatman, 1997).

The horizon of risk has expanded considerably, with matters becoming more critical and complicated as the popularity and usage of social media have grown (Armstrong, 2012). Organizations are challenged with new and unique risks which need to be addressed proactively. These risks threaten the effectiveness of this medium, and organizations consequently fail to reap its benefits completely. It is because of such risks that many organizations have either limited their use of social media or avoid it altogether. Such risks range from data leakage and legal complications to risks associated with reputation (Everett, 2010).

These risks can be categorized under two heads, namely user-related and security-related issues (Chi, 2011). User-related risks include inadequate authentication controls, phishing, information leakage, and compromised information integrity (Chi, 2011). The security-related risks are cross-site scripting (XSS), cross-site request forgery (CSRF), injection flaws, and deficient anti-automation (Chi, 2011).
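To make the first of these security-related risks concrete, the following sketch (an illustrative example, not drawn from any of the cited studies) shows the standard mitigation for cross-site scripting: escaping user-supplied text before it is embedded in a web page, so that a malicious "comment" posted to a social platform is rendered as inert text rather than executed as script.

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters (<, >, &, quotes) so that user-supplied
    # text cannot inject executable script into the page; this is the
    # basic defence against cross-site scripting (XSS).
    return "<p>" + html.escape(user_input) + "</p>"

# A script tag smuggled into a comment reaches the page as harmless text.
print(render_comment('<script>alert("x")</script>'))
```

Real platforms layer further defences on top of output escaping, such as input validation and anti-CSRF tokens, but the principle of never trusting user-supplied content is the same.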

Out of all the risks related to social media, an organization is mainly threatened by risks related to information confidentiality, organizational reputation and legal conformity (Thompson, 2013). Issues of information confidentiality arise mainly because information shared via social media is shared digitally, so there is a chance of it being hacked or disclosed unintentionally. This raises privacy risks and thus affects information integrity.

Legal issues while using social media are bound to arise mainly because this medium has a global reach and is therefore affected by international rules and regulations. It is challenging for an organization to understand the varied legal obligations of different countries and then determine a universally acceptable legal protocol. Risks related to organizational reputation are discussed in detail in the next section.

Reputational Risk

The reputation of an individual or organization rests on perceptions of their reliability and integrity; managing and securing reputation therefore becomes highly critical. As organizations resort to social media extensively, they are bound to experience reputational risks that affect their goodwill negatively. Reputational risks arise from the fact that organizations share extensive information with customers and browsers (Woodruff, 2014). In many circumstances this information is misused, damaging organizational reputation. The adverse effects of reputational damage include a negative impact on goodwill in the real world, restricted development of social contacts and contracts, and a detrimental impact on attracting potential customers (Woodruff, 2014). In one research study, 74 per cent of employees acknowledged how easily reputational damage can be caused to organizations through social media (Davison et al., 2011). It is for this reason that organizations scrutinize their employees' use of social networking sites to a large extent.

Public Relations

Public relations reflect an organization's relations with its various stakeholders. Organizations use the social media platform to interact with their stakeholders and thus develop a strong and positive public image. In fact, social media, organizations and stakeholders interact together within the dynamic business world (Aula, 2010). These interactions are shaped by the organization's public relations objectives and the extent to which social media is used to develop organizational reputation. But developing and sustaining positive public relations is not easy, as they are hampered to a large extent when subjected to reputational risks. An organization's very identity is at stake, as it can be plagiarized and used without authorization (Weir et al., 2011).

Reputational risks relate to organizational credibility and result from security risks such as identity theft and profiling risks. These risks challenge organizational reputation by questioning its compliance with societal rules and regulations (McDonnell and King, 2013). Organizations largely fail to integrate social media with organizational and stakeholder objectives, resulting in ineffective reputation management.

Social media has made organizations global, due to which even minor incidents get highlighted internationally. Local issues gain international prominence, resulting in a negative reputation for the organization globally. Further, with social media being so active, organizations cannot escape negative publicity (Kotler, 2011). One example of a failure of reputation management that earned negative fame across the world is Nestlé. In 2010, Greenpeace uploaded a video on YouTube targeting Nestlé's KitKat brand (Berthon et al., 2012). The video went viral and resulted in negative publicity for the organization. Though the campaign was aimed mainly at consumers in Malaysia and Indonesia to promote rainforest conservation, it was noticed by the world at large.

Another risk faced by organizations lies in creating a public image through standardized marketing programs. Stakeholders in different countries use different social media platforms, which makes it essential for organizations to clearly analyze and understand their usage requirements and patterns. This is where most organizations fail, and they are thus unable to use social media appropriately.

The graph below depicts the usage of different social media platforms in different countries as per statistics from 2011 (Berthon et al., 2012).

Figure 2: Relative Frequency of Search Terms from Google Insights: Social Media by Country

Source: Berthon et al, 2012

Organizational Security Staff

Organizational employees are indispensable for success, but they can also be a threat to the organization, mainly because they have access to the organization's confidential and important information, which they can leak to outsiders. With social media's growing popularity, the line between personal and professional conversations on the web has become blurred. Further, even when this information is kept secure, employees can evade security systems through illegal measures. Research has shown that in the USA alone approximately 83 per cent of staff use organizational resources to access their social media accounts (Zyl, 2009). Besides using these resources to exchange personal messages over social media, 30 per cent of employees in the USA and 42 per cent in the UK also exchanged information related to their work and organization (Zyl, 2009). This depicts the intensity of the security risks related to social media. The organizational security staff thus has to be on its toes to ensure that such information is highly secured and not used inappropriately.

In 2002, an employee of an international financial services organization in the USA infiltrated the organization's digital security systems and used a 'logic bomb' to delete approximately 10 billion files from 1,300 of the organization's servers. This resulted in a financial loss of around $3 million, and the organization also suffered negative publicity. This depicts the failure of organizational security staff to combat risks. Such issues have become very common in the social networking world. Employees have the freedom to post nasty or insecure comments and links that harm organizational reputation and finances and create security-related risks (Randazzo et al., 2005).

Social media also enables social engineering attacks, as hackers, spammers and virus creators gain easy access to vast amounts of information. They can easily misuse it by creating fake profiles, stealing identities and collecting details such as job titles, phone numbers and e-mail addresses. Further, they can corrupt systems using malware that ultimately threatens organizational data. Data infiltration and loss ultimately impact organizational reputation negatively, as the leaked data are used for unauthorized and illegal activities.


Organizations that are either unaware of these risks or unable to defend themselves can at times face dire consequences. Organizations are aware of the gains they would derive from using social media networking and thus take such risks readily. As these risks cannot be avoided completely, organizations need to work out measures through which they can manage them and mitigate their negative influences.

In order to overcome issues related to privacy, which ultimately hamper reputation, organizations should take proactive measures before using social media. During the sign-up phase or the creation of social networking profiles, specific concerns related to privacy and confidentiality should be resolved and proper regulations designed (Fogel and Nehmad, 2009). These rules and regulations should be communicated very clearly to employees so that they have complete information regarding social media dos and don'ts. Further, the organization should not only design strict punishments but also enforce them against those who break such rules (Hutchings, 2012).

One way to overcome reputational risks related to social media is to appoint an efficient social media manager. These managers are specialists responsible for determining the social media protocol concerning the organization's confidential information, contemporary issues and prospective plans (Bottles and Sherlock, 2011). The social media manager should have a sense of responsibility towards the organization and its various stakeholders, and should engage with them sincerely and empathetically (Brammer and Pavelin, 2006). The manager should also have a vigilant eye and an analytical attitude, so as to identify the facts, figures and events that can impact organizational reputation and take corrective action. As security staff play a crucial role in determining organizational security standards, organizations should be very specific in recruiting and selecting them. Besides, there should be greater emphasis on the development of culture, values and ethics within the organization.

Organizations should also understand that the management of reputational risks requires a collaborative and innovative approach. The organization needs to develop a social media involvement protocol by consulting and taking advice from diverse sources such as legal experts, marketing experts, international business experts, media experts and other stakeholders (Montalvo, 2011). The organization should also be innovative in selecting and distributing content through social media so that it can deal with issues responsibly.


Organizations today prefer to use social media over traditional media (Hutchings, 2012), mainly because of the various benefits associated with it; however, they cannot overlook the associated risks. It takes ages for an organization to develop a positive reputation, so careful measures need to be taken to maintain and sustain it. Organizations cannot exercise complete control over social media, but they can take restrictive measures to ensure that reputational risks are minimized and their ill effects are combated.

This article identified that the major reputational risks which social media poses for organizations arise from data leakage, identity theft, profiling risks, inappropriate choice of public relations strategy, inability to control external environmental factors, inappropriate information management and security policy, and the failure to employ efficient and effective security staff. In order to overcome such issues, organizations need to appoint social media managers and hire employees skilled in social media management. Further, they should take a collaborative and creative approach to designing a social media protocol to mitigate such risks.

To conclude, it can be stated that the organizations need to be proactive and have a vigilant eye on environmental factors to secure themselves and benefit from online social media.

Note: The views expressed in this paper are those of the author and do not necessarily reflect the views of the organizations where he has worked in the past or is working presently. The author conveys his thanks to the Chevening TCS Cyber Policy Scholarship of the UK Foreign and Commonwealth Office, which sponsored part of this study.


References

A. Kaplan and M. Haenlein, "Users of the world, unite! The challenges and opportunities of Social Media." Business Horizons, vol. 53, iss. 1, 2010, pp. 59-68.

A. Woodruff, "Necessary, unpleasant, and disempowering: reputation management in the internet age." In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM, 2014, pp. 149-158.

A. Zyl, "The impact of Social Networking 2.0 on organisations." Electronic Library, vol. 27, iss. 6, 2009, pp. 906-918.

C. Everett, "Social media: opportunity or risk?" Computer Fraud & Security, vol. 2010, iss. 6, 2010, pp. 8-10.

C. Hutchings, "Commercial Use of Facebook and Twitter: Risks and Rewards." Computer Fraud & Security, vol. 2012, iss. 6, 2012, pp. 19-20.

G. Weir, F. Toolan and D. Smeed, "The threats of social networking: Old wine in new bottles?" Information Security Technical Report, vol. 16, 2011, pp. 38-43.

H. Davison, C. Maraist and M. Bing, "Friend or Foe? The Promise and Pitfalls of Using Social Networking Sites for HR Decisions." Journal of Business and Psychology, vol. 26, 2011, pp. 153-159.

I. Ahmed, Fascinating #SocialMedia Stats 2015: Facebook, Twitter, Pinterest, Google+, 2015.

J. Fogel and E. Nehmad, "Internet social network communities: Risk taking, trust, and privacy concerns." Computers in Human Behavior, vol. 25, 2009, pp. 153-160.

J. Kietzmann, K. Hermkens, I. McCarthy and B. Silvestre, "Social media? Get serious! Understanding the functional building blocks of social media." Business Horizons, vol. 54, iss. 3, 2011, pp. 241-251.

K. Bottles and T. Sherlock, "Who should manage your social media strategy?" Physician Executive, vol. 37, iss. 2, 2011, pp. 68-72.

K. Armstrong, "Managing your Online Reputation: Issues of Ethics, Trust and Privacy in a Wired, 'No Place to Hide' World." World Academy of Science, Engineering and Technology, vol. 6, 2012, pp. 716-721.

M. Chi, Security Policy and Social Media Use. The SANS Institute, 2011.

M. Langheinrich and G. Karjoth, "Social networking and the risk to companies and institutions." Information Security Technical Report, vol. 15, 2010, pp. 51-56.

M. McDonnell and B. King, "Keeping up Appearances: Reputational Threat and Impression Management after Social Movement Boycotts." Administrative Science Quarterly, vol. 58, iss. 3, 2013, pp. 387-419.

M. Randazzo, M. Keeney, E. Kowalski, D. Cappelli and A. Moore, Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector (No. CMU/SEI-2004-TR-021). Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2005.

N. Michaelidou, N. Siamagka and G. Christodoulides, "Usage, barriers and measurement of social media marketing: An exploratory investigation of small and medium B2B brands." Industrial Marketing Management, vol. 40, iss. 7, 2011, pp. 1153-1159.

P. Aula, "Social media, reputation risk and ambient publicity management." Strategy & Leadership, vol. 38, iss. 6, 2010, pp. 43-49.

P. Berthon, L. Pitt, K. Plangger and D. Shapiro, "Marketing meets Web 2.0, social media, and creative consumers: Implications for international marketing strategy." Business Horizons, vol. 55, iss. 3, 2012, pp. 261-271.

P. Kotler, "Reinventing marketing to manage the environmental imperative." Journal of Marketing, vol. 75, iss. 4, 2011, pp. 132-135.

R. Montalvo, "Social Media Management." International Journal of Management & Information Systems, vol. 15, no. 3, 2011, pp. 91-96.

S. Brammer and S. Pavelin, "Corporate reputation and social performance: The importance of fit." Journal of Management Studies, vol. 43, iss. 3, 2006, pp. 435-455.

S. Picazo-Vela, I. Gutierrez-Martinez and L. Luna-Reyes, "Understanding risks, benefits, and strategic alternatives of social media applications in the public sector." Government Information Quarterly, vol. 29, 2012, pp. 504-511.

T. Thompson, J. Hertzberg and M. Sullivan, Social Media Risks and Rewards. Financial Executive Research Foundation, 2013.

W. Mangold and D. Faulds, "Social media: The new hybrid element of the promotion mix." Business Horizons, vol. 52, iss. 4, 2009, pp. 357-365.

Understanding the Human Dimension of Cyber Security


Indian Journal of Criminology & Criminalistics (ISSN 0970-4345), Vol. 34, No. 1, Jan-June 2015, pp. 141-152

Sandeep Mittal, I.P.S.,*



It is globally recognized that humans are the weakest link in cyber security, to the extent that the dictum 'users are the enemy' has been debated for about two decades in an effort to understand the behaviour of users dealing with cyber security issues. Attempts have been made to explain user behaviour through various theories in criminology, so as to understand the motives and opportunities available to users as they interact with computer systems. In this article, the available literature on the interaction of users with computer systems is analyzed, and an integrated model of user behaviour in information system security is proposed by the author. This integrated model could be used to devise a strategy to improve user behaviour by strengthening the factors that have a positive impact on information system security and reducing the factors that have a negative impact.


Most system security organizations work on the premise that the human factor is the weakest link in the security of computer systems, yet not much research has hitherto been undertaken to explore the scientific basis of this presumption. The interaction between computers and humans is not a simple mechanism but a complex interplay of social, psychological, technical and environmental factors operating in a continuum of organizational externality and internality.1 This article examines various aspects of the interaction between humans and computers, with particular reference to the 'users'. The taxonomy adopted for understanding who is actually a user is based on the available literature. It is also imperative to explore the following questions: Why do users behave the way they do? Is there a psychological basis for the specific behaviour of users during human-computer interaction, and if so, how does it affect the security of the computer system? Various hypotheses and suggestions offered by different experts are reviewed in order to identify ways to improve both user behaviour and the overall security of computer systems. The debate on this issue was initiated by an article entitled 'Users Are Not the Enemy',2 in which the authors studied the behaviour and perceptions of users relating to password systems and challenged the conclusion drawn in a previous work3 (DeAlvare, 1988, quoted in Adams and Sasse, 1999) that many password users do not comply with password security rules because 'users are inherently careless and therefore insecure'.

Adams and Sasse (1999) concluded that the possession of a large number of passwords prevents users from memorising all of them, thereby compromising password security; that users are generally not aware of the concept of secure passwords; and that they have insufficient information about security issues. The earlier perceptions of security managers were thus challenged, and users were no longer seen as the 'enemy'. Since then, a number of studies have been undertaken by researchers adopting one of these two positions, viz., 'the user is the enemy' or 'the user is not the enemy'. In this article, we examine various hypotheses before taking either position.

Taxonomy of Users’ Behaviours

It has been found that the effectiveness of technology is affected by the behaviour of the human agents, or users, who access, administer and maintain information system resources.4 These users may be physically or virtually situated inside or outside the organization, thus bringing into play a range of environmental factors that influence their behaviour. Most organizations tend to be more concerned with threats from external users, even though surveys conducted by professional bodies indicate that three-quarters of security breaches in computer systems originate from within the user fraternity.5 It is therefore necessary to foster a systematic understanding of the behaviour of users and how it impacts information security. In this context, researchers have developed a taxonomy of the behaviour of information security end-users.6 This taxonomy of security behaviour, comprising six elements (as depicted in Figure 1), depends upon two factors, viz., intentionality and technical expertise. On the one hand, the intentionality dimension indicates whether a particular behaviour was intentionally malicious or beneficial, or whether there was no intent at all. The dimension of technical expertise, on the other hand, takes into consideration the degree of technological knowledge and skill required to perform a particular behaviour.

Figure 1: Taxonomy of end-user security behaviours
Source: Adapted from Stanton et al., 2005
The taxonomy of end-user behaviour, as delineated in Figure 1, helps in classifying the raw data on users’ behaviours and also in selecting the paths that could be followed for improving the information security behaviour of a particular user within an organization.


Exploring ‘What the Users Do?’

A fundamental postulate is that users' behaviour is guided by the risk they perceive to be associated with their interaction with the information system in everyday situations. However, research has revealed that users normally fail to take optimal or reasoned decisions about the risks concerning the security of information systems. The decision-making process of users exhibits the following predictable characteristics, and understanding them would be of great use in positively influencing users' decision-making ability7:

  1. Users often do not consider themselves to be at risk. In fact, as users increase
    the security measures for their computer systems, they start indulging in more risky behaviours.
  2. Although users are not, by and large, imbecile or obtuse in their thinking, they
    lack both the motivation and capacity to devote full attention to information processing, especially since they resort to multi-tasking, which prevents them from concentrating fully on a single task at a time.
  3. The concept of safety per se is unlikely to be a persuasive element in determining human behaviour, especially because the argument that safety prevents something bad from happening is a rather abstract one, and consequently, human beings do not perceive adherence to safety norms as a gain or a beneficial exercise.
  4. It has been observed that adherence to safety and security norms does not always produce instant results. In fact, the results often come weeks or months later, if at all, which prevents human beings from immediately comprehending the positive outcomes of their actions, thereby making them complacent. The same delay in the perception of outcomes is also evident in the case of negative actions. Thus, human beings realize the impact of their actions only when the results can be seen instantaneously, as in the case of disasters.
  5. Research on the association between the concepts of risk, losses and gains indicates that ‘people are more likely to avoid risk when alternatives are presented as gains and take risks when alternatives are presented as losses. When evaluating a security decision, the negative consequences are potentially greater, but the probability is generally less and unknown. When there is a potential loss in a poor security decision as compared to the guaranteed loss of making a pro-security decision, the user may be inclined to take the risk’.8 This study, therefore, shows a strong likelihood of users gambling to offset a potential loss rather than accepting a guaranteed loss in toto. This observation is depicted in Figure 2 (West, 2008, adapted from Tversky and Kahneman, 1986).9
Figure 2: Losses carry more value as compared to gains when both are perceived as equal. For non-zero values, if the value of loss (X) = value of gain (Y), then motivation of loss (A) > motivation of gain (B) (West, 2008, adapted from Tversky and Kahneman, 1986).
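The asymmetry described in Figure 2 can be made concrete with the prospect-theory value function. The functional form and parameter values below are the median estimates reported in Tversky and Kahneman's later (1992) work, not figures from this article; the sketch only illustrates why a loss carries more motivational weight than an equal gain:

```python
# Illustrative sketch of the prospect-theory value function behind Figure 2.
# Parameters (alpha = beta = 0.88, lam = 2.25) are the median estimates from
# Tversky and Kahneman (1992), used here only to make the asymmetry concrete.

def value(x: float, alpha: float = 0.88, beta: float = 0.88,
          lam: float = 2.25) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha            # gains: concave, diminishing returns
    return -lam * ((-x) ** beta)     # losses: steeper curve (loss aversion)

gain = value(100)    # motivation of a 100-unit gain (B in Figure 2)
loss = value(-100)   # motivation of a 100-unit loss (A in Figure 2)
print(gain, loss)
# |value(-100)| > value(+100): the loss looms larger than the equal gain,
# which is why users gamble to avoid a guaranteed security cost.
assert abs(loss) > gain
```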

The author is tempted to undertake a detailed literature survey on the influence of human factors on the security of information systems in order to gain an insight into the entire scenario. However, in view of the limited scope of the present article, the author is restricting himself to presenting only a summary of the important available literature on users’ behaviour vis-à-vis information system security (Table 1), leaving it to the readers to probe the matter further.

Table 1: Summary of Research on Users’ Behaviour and Information System Security

1. Users’ Behaviour
   a) There is a relation between end-user security behaviour and a combination of situational factors. (Stanton et al., 2004)
   b) The various factors that are believed to influence security-related behaviour include the users’ perceptions of their own susceptibility and efficiency, and the possible benefits they are likely to derive from security. (Ng, Kankanhalli and Xu, 2009)
   c) It is extremely difficult to audit employee behaviour and the reasons thereof, as individuals react differently in each situation, depending upon organizational culture. (Vroom and von Solms, 2004)

2. Familiarity with information security aspects
   a) Shared knowledge about information security is important, as it contributes towards bringing about a change in individual behaviour and eventually in an organization’s behaviour. (Vroom and von Solms, 2004)
   b) The following three factors have been identified as barriers to information security:
      * General security awareness,
      * Users’ computer skills, and
      * Organizational budgets. (Shaw et al., 2009)

3. Awareness
   The following three levels of security awareness have been identified among users:
   * Perception of potential security risks,
   * Comprehension, the know-how to perceive and interpret risks, and
   * Projection, the user’s ability to predict future situational events. (Shaw et al., 2009)

4. Organizational Environment
   In a positive work environment, users understand their role in the complex information security system, which helps them improve their behaviour. An organization with a positive climate may influence the behaviour and commitment of users. (Shaw et al., 2009)

5. Work Conditions
   Unsatisfactory and negative work conditions can contribute negatively to work. Tiredness and fatigue may also lead to failure to follow policies and procedures among users, thereby resulting in their disregarding information security. (Kelloway et al., 2010)


Unfolding Criminology Theories to Understand Users’ Behaviour

The theoretical foundation for several research models designed for studying users’ behaviour has been provided by criminology theories. These theories have been categorized according to their focal concepts and aims, as enumerated in Table 2.10 As pointed out in the last column of Table 2, a number of researchers have tried to apply these criminology theories, in isolation or in combination with each other, to the information security system. These theories explain the behaviour of users as perceived by criminologists, most of whom have deep foundations in psychology.

Table 2: Criminology Theories, Concepts and Principles in Information Security (IS) Literature (after Theoharidou et al., 2005)

General Deterrence Theory (GDT) (Blumstein, 1978, 1986)
   A person commits a crime if the expected benefits outweigh the cost of sanction.
   (Goodhue and Straub, 1991; Straub and Welke, 1998)

Social Bond Theory (Hirschi, 1969)
   A person commits a crime if the social bonds of attachment, involvement and belief are weak.
   (Lee and Lee, 2002; Agnew, 1995; Hollinger, 1986; Lee et al., 2003)

Social Learning Theory (Sutherland, 1924, quoted in Akers, 2011)
   Focal concept: Motive. A person commits a crime if (s)he associates with delinquent peers, who transmit delinquent ideas, reinforce delinquency, and function as delinquent role models.
   (Lee and Lee, 2002; Skinner and Fream, 1997; Hollinger, 1993)

Theory of Planned Behaviour (TPB) (Ajzen and Fishbein, 2000)
   A person’s intention towards crime is a key factor in predicting his/her behaviour. Intentions are shaped by attitude, subjective norms and perceived behavioural control.
   (Lee and Lee, 2002; Leach, 2003)

Situational Crime Prevention (SCP)
   Focal concept: Opportunity. A crime occurs when there is both motive and opportunity. Crime is reduced when no opportunities exist.
   (Willison, 2000)


Models of User Behaviour

Researchers have used theories from general criminology, along with the literature pertaining to the interaction between humans and technology in information security systems, to develop theoretical and research models for understanding users’ behaviour. Figure 3 depicts an integrated model of this behaviour derived and designed by the present author from two research studies.11

Figure 3: An integrated model for users’ behaviour in information system security, developed by integrating the models proposed by Luciano et al., 2010 and Herath and Rao, 2009.

The findings of these studies can be summarized as follows12:

  1. A constructive organizational environment has a positive impact on the responsible behaviour of users towards information security.
  2. Stressful work conditions would negatively impact the responsible behaviour of users towards information security.
  3. The adoption of responsible behaviour by users in terms of adhering to information security policies and procedures would negatively impact the vulnerabilities of users to information security breaches.
  4. Familiarity with information security policies and procedures among users would:

    a) Positively impact their responsible behaviour towards information security;

    b) Negatively impact their vulnerability to information security breaches; and

    c) Positively impact their awareness of potential information security threats.

  5. Awareness of potential information security threats among users would:

    a) Positively impact their responsible behaviour towards information security; and

    b) Negatively impact their vulnerability to information security breaches.

  6. Some of the key elements that play a vital role in users’ behaviour include gender, work experience, age, and educational qualifications.
  7. The intentions of users to follow security policies are determined by both internal and external motivating factors.
  8. The security behaviour of users is positively affected by both standard prescriptive beliefs as well as peer influences.
  9. The security-related behavioural intentions of users are positively impacted if detection is certain.
  10. The security-related behavioural intentions of users are negatively impacted if the prospective penalty for neglecting security is expected to be severe.
  11. The perceptions of users regarding compliance by others with security behaviour also play an important role in determining their own behaviour towards security.
  12. The vulnerability of users to any breaches in information security is inversely related to their compliance with security procedures. This implies that the stronger the users’ intention to adhere to security behaviour, the lower would be their vulnerability to any security failures.
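
As a purely illustrative sketch (the weights and scale below are invented; the cited studies establish only the direction of each influence, not any numbers), the push and pull of these factors on a user's compliance intention might be expressed as:

```python
# Toy sketch of the integrated model's logic: factors found to act
# positively (supportive environment, familiarity, awareness) raise a
# user's expected compliance, while stressful work conditions degrade it.
# Weights are invented for illustration only.

def compliance_score(org_environment: float, work_stress: float,
                     familiarity: float, awareness: float) -> float:
    """Combine the model's factors (each rated 0..1) into a rough 0..1 score."""
    positive = 0.3 * org_environment + 0.35 * familiarity + 0.35 * awareness
    score = positive * (1.0 - 0.5 * work_stress)  # stress erodes the rest
    return round(min(max(score, 0.0), 1.0), 2)

# A supportive workplace with trained, aware users under little stress...
print(compliance_score(0.9, 0.1, 0.8, 0.8))
# ...versus an untrained, unaware workforce under heavy stress.
print(compliance_score(0.3, 0.9, 0.2, 0.2))
```

A strategy derived from the model would aim to raise the positively weighted inputs and lower the stress term, rather than treat compliance as a single lever.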

While the element of technology remains constant during human–computer interaction, it is the human element which remains highly dynamic, mainly due to the complexity of human behaviour. Suggestions for the relevant implications of human behavioural science in improving cyber security are as follows13:

  1. The implication of the ‘Identifiable Victim Effect’ (the tendency of an individual to offer greater help when an identifiable person is observed in hardship, as compared to a vaguely defined group in the same need) may lead a user to choose a stronger security system when possible negative outcomes are real and personal, rather than abstract.14
  2. The ‘Elaboration Likelihood Model’ describes how human attitudes form and persist. There are two main routes to attitude change, viz., the central route (the logical, conscious and thoughtful route, resulting in a permanent change in attitude) and the peripheral route (when people do not pay attention to persuasive arguments, and are instead influenced by superficial characteristics, so that the change in their attitude is consequently temporary). Efforts should thus be made to motivate users to take the central route while receiving cyber security training and education. Fear can also be used to compel users to pay attention to security, but this would be effective only when the fear levels are moderate and, simultaneously, a solution is also offered to the fear-inducing situation. The inducement of a strong fear, on the other hand, would lead to ‘fight or flight’ reactions from users.15
  3. Cognitive Dissonance (a feeling of discomfort due to conflicting thoughts) acts as a powerful motivator, making people react in the following three ways:

    a) Change in their behaviour;

    b) Justification of their behaviour through a rejection of any conflicting attitude; or

    c) Addition of new attitudes for justifying their behaviour.

  4. Cognitive dissonance is hence used to persuade users to change their attitude towards cyber security and then eventually adopt a behaviour that motivates them to choose better security.16
  5. Social Cognitive Theory stipulates that learning among people is based on two key elements—by watching others, or through the effect of their own personality. Thus, by incorporating the demographic elements of age, gender and ethnicity, one could initiate a cyber awareness campaign that would help reduce cyber risk by enabling the users to identify with their recognisable peers and thereby imitate the secure behaviour of the latter.17
  6. Status Quo Bias (the tendency of a person not to change an established behaviour without being offered a compelling incentive to do so) necessitates the introduction of strong incentives for users to change their cyber behaviour. This can be exploited positively by information system designers.18
  7. The Prospect Theory helps us shape user choices about cyber security by framing them as gains rather than losses.19
  8. Another factor to be considered is Optimism Bias, which leads users to underestimate the security risk, thereby making them perceive that they are immune to cyber-attacks. In order to enable users to overcome this attitude, the security system could be designed to incorporate the real experiences of users for effectively conveying the impact of the risk.20
  9. Control Bias (the belief among users that they have a strong control over, or capacity to determine, outcomes) hinders people from following security measures. This bias should be kept in mind while designing systems and training programmes for users.21
  10. Confirmation Bias (looking only for evidence that confirms an existing position) closes the users’ minds to new ideas. In order to overcome this bias, the system must provide evidence to change their current beliefs (for example, regular security digests may be e-mailed to them).22
  11. While trying to improve the cyber behaviour of users, the Endowment Effect, wherein people place a higher value on the objects they own as compared to the objects they do not own, could be used. Users may thus be persuaded to pay more for security when it allows them to safely keep something that they already have (for example, the privacy of data).23
  12. It is amply clear from the foregoing discussion that human–computer interaction is not a simple process but is instead a complex and dynamic mechanism, characterized by the interplay of a large number of technological, human and environmental factors with each other in space and time. Being humans, users do not have the biological capacity to handle these numerous factors simultaneously in space and time, which is why they behave the way they do, thus, unintentionally or accidentally (and sometimes maliciously) compromising the information system security. In this way, users themselves become the enemy of information security, and are therefore categorized as the weakest link in the information security chain.


The most important and dynamic aspect of the interaction between humans and computers is the behaviour of the user, which varies in space and time. It is also influenced by psychological, intrinsic and extrinsic factors, which, in turn, are governed by peer behaviour, normative beliefs, and social pressures, among other things. Therefore, the behaviour of the user is not solely dependent on the user himself; we could say that he might have little control over his own behaviour while interacting with the security of information systems. The integrated model discussed in this article may thus be used to devise a strategy for improving users’ behaviour by strengthening the factors that have a positive impact on the security of the information system, and reducing or even eliminating the factors that have a negative impact. However, this is a complex task and should not be considered as simple as, for instance, selling a non-durable consumer item like a soap!


1 E.M. Luciano, M.A. Mahmood and A.C.G. Maçada, ‘The Influence of Human Factors on Vulnerability to Information Security Breaches’, Proceedings of the Sixteenth Americas Conference on Information Systems, Lima, Peru, August 2010, p. 12.

2 A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999, pp. 40-6.

3A. Adams and A.M. Sasse, ‘Users Are Not the Enemy’, Communications of the ACM, vol. 42, no. 12, 1999.

4 C. Vroom and R. von Solms, ‘Towards Information Security Behavioural Compliance’, Computers & Security, vol. 23, no. 3, 2004, pp. 191-8.

5 J.M. Stanton et al., ‘Analysis of end user security behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp. 124-33.

6 J.M. Stanton et al., ‘Analysis of end user security behaviors’, Computers & Security, vol. 24, no. 2, 2005, pp. 124-33.

7 R. West, ‘The psychology of security’, Communications of the ACM, vol. 51, no. 4, 2008, pp. 34-40.

8 R. West, ‘The psychology of security’, Communications of the ACM, vol. 51, no. 4, 2008; R. West et al., ‘The Weakest Link: A Psychological Perspective on Why’, Social and Human Elements of Information Security: Emerging Trends, 2009.

9 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278.

10M. Theoharidou et al., ‘The insider threat to information systems and the effectiveness of ISO17799’, Computers & Security, vol. 24, no. 6, 2005, pp. 472-84.

11 D.L. Goodhue and D.W. Straub, ‘Security Concerns of System Users: A Study of Perceptions of the Adequacy of Security’, Information & Management, vol. 20, no. 1, pp. 13-27; E.M. Luciano, M.A. Mahmood and A.C.G. Maçada, op. cit.; T. Herath and R.H. Rao, ‘Protection Motivation and Deterrence: A Framework for Security Policy Compliance in Organisations’, European Journal of Information Systems, vol. 18, no. 2, 2009, pp. 106-25.

12 Ibid.

13 S.L. Pfleeger and D.D. Caputo, ‘Leveraging Behavioral Science to Mitigate Cyber Security Risk’, Computers & Security, vol. 31, no. 4, 2012, pp. 597-611.

14 K. Jenni and G. Loewenstein, ‘Explaining the Identifiable Victim Effect’, Journal of Risk and Uncertainty, vol. 14, no. 3, 1997, pp. 235-57.

15 R.E. Petty and J.T. Cacioppo, ‘The Elaboration Likelihood Model of Persuasion’.


17 A. Bandura, ‘Human Agency in Social Cognitive Theory’, American Psychologist, vol. 44, no. 9, 1989, p. 1175.

18 W. Samuelson and R. Zeckhauser, ‘Status Quo Bias in Decision Making’.

19 A. Tversky and D. Kahneman, ‘Rational Choice and the Framing of Decisions’, Journal of Business, 1986, pp. S251-S278.

20 D. Dunning, C. Heath and J.M. Suls, ‘Flawed Self-Assessment’.

21 J. Baron and J.C. Hershey, ‘Outcome Bias in Decision Evaluation’, Journal of Personality and Social Psychology, vol. 54, no. 4, 1988, p. 569.

22 M. Lewicka, ‘Confirmation Bias’, Personal Control in Action, Springer, 1998, pp. 233-58.

23 R. Thaler, ‘The Psychology of Choice and the Assumptions of Economics’, Laboratory Experimentation in Economics, p. 99.