How is online harassment a design problem?
As an increasing proportion of interpersonal interaction takes place entirely online, the design of social media platforms plays a decisive role in shaping how those interactions play out. Social media platform affordances shape when and how people communicate, and in the case of online harassment, seemingly innocuous features and affordances can amplify hate speech and assist “bad actors”.
If you forgot the technical definition of “affordances”...
The Interaction Design Foundation defines an affordance as “what a user can do with an object based on the user’s capabilities” (What Are Affordances?, 2019); in the social sciences the term can refer more broadly to how technologies “open up, or constrain particular actions and social practices” (McVeigh-Schultz & Baym, 2015, p. 2). It’s important to note that affordances are not equally perceptible or accessible to all users. An affordance is therefore not an inherent property of a platform, but a relation between the platform and a particular user.
To summarize...
Various affordances of social media platforms have contributed to the growth of online harassment, especially as more and more people make a living by posting publicly online. Online harassment is shaped by the affordances of a platform, which are in turn shaped by designers’ standpoints. Design processes may unintentionally solidify existing power imbalances by failing to investigate how user experiences are shaped by intersectional identities. To address online harassment, social media platforms could restructure and retool their design processes to prioritize collaboration and empathy with a more diverse set of users.
Online harassment disproportionately affects marginalized groups including women, LGBTQI+ people, and Black, Indigenous, and People of Colour, yet these groups are underrepresented in the design and development of platforms (Marwick, 2021; Vitak et al., 2017). As affordance perceivability is shaped by user and designer standpoints, this underrepresentation means that designers may overlook the ways that platform features can facilitate harassment.
There are countless examples of technology designed primarily for and by men that fails to address the needs of female and non-binary users.
For example, Apple’s Siri voice assistant responded to the prompt “My foot hurts” with emergency medical service information but did not understand the prompt “I was raped” (Miner et al., 2016, p. 12). Some scholars have coined the term “gendered affordances” to discuss platform affordances that facilitate behaviour patterns among users of a specified gender (Schwartz & Neff, 2019; Semenzin & Bainotti, 2020).
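To make the failure mode concrete, here is a deliberately simplified sketch of keyword-based intent routing in a voice assistant. The intent table, phrases, and responses are all hypothetical, not Siri’s implementation; the point is only that a crisis the design team never anticipated falls straight through to a generic fallback.

```python
# Hypothetical, minimal intent router; not Siri's actual implementation.
CRISIS_INTENTS = {
    "physical_injury": {
        "keywords": {"hurts", "broken", "bleeding"},
        "response": "Here is information for emergency medical services near you.",
    },
    # No intent was ever written for sexual violence, so those prompts
    # never match and fall through to the generic fallback below.
}

FALLBACK = "I don't know what you mean by that."


def respond(prompt: str) -> str:
    """Return the first matching intent's response, else the fallback."""
    words = set(prompt.lower().split())
    for intent in CRISIS_INTENTS.values():
        if words & intent["keywords"]:
            return intent["response"]
    return FALLBACK


print(respond("My foot hurts"))  # -> emergency medical information
print(respond("I was raped"))    # -> generic fallback, the gap Miner et al. observed
```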
Affordances that seem nondiscriminatory can still enable online harassment and abuse.
Researching Telegram, Semenzin and Bainotti (2020) found that platform affordances such as high group member capacity (a maximum of 200,000 members, compared to 256 members on WhatsApp) allowed for the widespread dissemination of illegal non-consensual pornography.
Understanding the ways in which platform affordances can facilitate harassment would help to address online harassment overall. Hiring a diverse team of designers is perhaps the best place to start, as a wide variety of viewpoints helps identify and address potential issues before a feature is deployed. Designers should also take a critical look at the standard resources of the design process itself.
One such resource is the user persona. Personas, among the most common tools in UX design, are fictional characters created to better understand a product’s users (Dam & Siang, 2021). Personas often feature a stock image portrait, an imaginary backstory, and the person’s goals, motivations, and pain points. In theory, personas are created using data and quotes collected during the user research phase. In practice, however, personas are often used in the user research phase itself, for example to gather stakeholder feedback before research begins, meaning that they are created beforehand based on a designer’s intuition and experience (Cutting & Hedenborg, 2019). This opens the door to harmful stereotyping, as many designers create unrealistic and homogenized personas when left to their own intuition and biases.
Real World Example: Biased Personas
In an investigation by Turner and Turner (2011), forty-two male designers were tasked with creating an iPhone app for women. When creating personas, “almost all described successful, busy, socially active and attractive female users in their mid-twenties” (Turner & Turner, 2011, p. 38). Though there is little research in the area, my own anecdotal experience studying design at university suggests that designers rarely, if ever, create personas of users who have been harmed by the product or its affordances. This is in stark contrast to the experiences of people who have been harassed online, or whose livelihoods have been impacted by moderation or shadow-banning. Realistic personas might include someone who lost their income after being banned from a platform unexpectedly, or a user forced to delete their account after intense online abuse. Yet, as personas are often used in meetings with stakeholders or investors, they usually ignore sensitive topics such as these.
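To make this concrete, here is a minimal sketch of what a more honest persona record might look like, assuming a simple in-house schema; the fields and the example persona below are invented for illustration, not drawn from any cited framework.

```python
from dataclasses import dataclass, field


@dataclass
class Persona:
    """Hypothetical persona record; all fields are illustrative only."""
    name: str
    backstory: str
    goals: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    # Rarely present in real templates: harm the product itself enabled.
    harms_experienced: list[str] = field(default_factory=list)


# An invented persona of the kind the text argues is usually missing.
dana = Persona(
    name="Dana",
    backstory="Earned her living on the platform until an unexplained ban.",
    goals=["Rebuild an audience", "Appeal moderation decisions"],
    pain_points=["No human contact during the appeals process"],
    harms_experienced=["Lost primary income after an unexpected ban"],
)
print(dana.harms_experienced)
```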
Platforms and third parties have also introduced features intended to mitigate harassment directly. One such feature is the Twitter blocklist, a third-party tool that compiles a list of abusive users so that subscribers can block them pre-emptively, before being affected by them (Jhaver et al., 2018). Blocklists can be curated by human moderators or generated algorithmically, though the latter tend to struggle with accuracy (Jhaver et al., 2018, p. 19). While blocklists somewhat lessened harassment, researchers concluded there was still a “gap between the needs of users and the affordances provided” (Jhaver et al., 2018, p. 31). Many users also responded negatively to blocklists and banning, perceiving them as unjust censorship or an infringement on free speech (Feng & Wohn, 2020).
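As a rough illustration of the mechanism, here is a minimal sketch of a subscription blocklist, assuming a simple mention-filtering model; the data shapes and function name are invented for illustration and do not reflect Twitter’s API or any real blocklist tool.

```python
# Published by a curator, human or algorithmic (Jhaver et al., 2018).
blocklist: set[str] = {"abuser_123", "sockpuppet_456"}


def visible_mentions(mentions: list[dict]) -> list[dict]:
    """Drop mentions from blocked accounts before the user ever sees them."""
    return [m for m in mentions if m["author_id"] not in blocklist]


incoming = [
    {"author_id": "friend_789", "text": "Great thread!"},
    {"author_id": "abuser_123", "text": "(a harassing message)"},
]
print(visible_mentions(incoming))  # only friend_789's mention remains
```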
Given these shortcomings, numerous studies investigate the viability of affordances that encourage bystander intervention and model positive interaction (Blackwell et al., 2017; DiFranzo et al., 2018; Phillips, 2018; Royen et al., 2021). In a bystander intervention field study, Munger (2017) used Twitter bots that responded to racist tweets with a suggestion to consider the emotional impact of their words. Each bot was made to appear as either a white or a Black user, with either a high or a low follower count. As predicted, users were noticeably less likely to tweet racist remarks after being “called out” by a bot, but only if the bot appeared to be a white man with a high follower count (Munger, 2017).
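Mechanically, the intervention is simple; the study’s power came from its experimental design rather than the bot’s code. Below is a hedged sketch under assumptions of my own: the watched terms, the reminder text, and the send_reply stub are all invented placeholders, and Munger’s actual tooling is not described here.

```python
# A hypothetical sketch of a call-out bot in the spirit of Munger (2017).
# WATCHED_TERMS, REMINDER, and send_reply are placeholders, not the
# study's actual search terms, message, or tooling.
WATCHED_TERMS = {"slur_placeholder"}
REMINDER = "Remember that real people are hurt by that kind of language."


def send_reply(author: str, text: str) -> None:
    """Stand-in for a real posting call (e.g., via a Twitter API client)."""
    print(f"@{author} {text}")


def maybe_intervene(tweet: dict) -> None:
    """Reply with a reminder when a tweet contains a watched term."""
    words = set(tweet["text"].lower().split())
    if words & WATCHED_TERMS:
        send_reply(tweet["author"], REMINDER)


maybe_intervene({"author": "user1", "text": "an example containing slur_placeholder"})
```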
Munger’s finding illustrates one of the key difficulties in addressing online harassment: while platform affordances enable it, the root cause lies in users’ existing racist, misogynistic, or otherwise bigoted beliefs. Creating fake white male profiles to nudge white men into acting less racist online only reinforces the pattern of respecting only those in a similar position of power. Thus, as in the offline world, progress should be made by actively supporting, centring, and addressing the needs of the marginalized groups most affected by harassment.
In response to the lack of harassment reduction strategies on popular social media platforms, recent years have seen a rise in alternative platforms and strategies. HeartMob, a platform designed and built by targets of online harassment, offers targets a place to ask for support and share their experiences (Blackwell et al., 2017). Schoenebeck et al. (2021) draw on justice theories to consider new methods of supporting and compensating targets, such as financial compensation or an apology from the harasser.
However, a consistent challenge runs through these alternative strategies. Blackwell et al. (2017) and Schoenebeck et al. (2021) both note the inefficacy of a “one size fits all” approach to online harassment, observing that users’ experiences and preferred remedies varied with their identities and standpoints (Schoenebeck et al., 2021, p. 2).
Herein lies the difficulty with designing platforms and policies: as the user base of a platform grows, so too does the diversity of user needs. Designing social media platforms is no frivolous job; seemingly innocuous features can become gendered affordances that facilitate widespread harassment, and current mitigation features fail to address the harassment that already exists. Common design processes are woefully unequipped for such shifting, nuanced issues.
The issue with user-centred design, the design process used at most technology companies, is right there in the name. Given the diversity of user needs, user-centred design by definition centres some users’ needs while pushing others to the margins. With design teams that remain overwhelmingly white and male, and with existing power imbalances, it is obvious whose needs will be centred. To address online harassment, we must move away from design processes that cater primarily to the needs of a single demographic.
Blackwell, L., Dimond, J., Schoenebeck, S., & Lampe, C. (2017). Classification and Its Consequences for Online Harassment: Design Insights from HeartMob. Proc. ACM Hum.-Comput. Interact., 1(CSCW). https://doi.org/10.1145/3134659
Costanza-Chock, S. (2020). Design Practices: “Nothing about Us without Us.” In Design Justice (1st ed.). https://design-justice.pubpub.org/pub/cfohnud7
Cutting, K., & Hedenborg, E. (2019). Can Personas Speak? Biopolitics in Design Processes. Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion, 153–157. https://doi.org/10.1145/3301019.3323911
Dam, R. F., & Siang, T. Y. (2021). Personas – A Simple Introduction. The Interaction Design Foundation. https://www.interaction-design.org/literature/article/personas-why-and-how-you-should-use-them
DiFranzo, D., Taylor, S. H., Kazerooni, F., Wherry, O. D., & Bazarova, N. N. (2018). Upstanding by Design: Bystander Intervention in Cyberbullying. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–12). Association for Computing Machinery. https://doi.org/10.1145/3173574.3173785
Feng, C., & Wohn, D. Y. (2020). Categorizing Online Harassment Interventions. 2020 IEEE International Symposium on Technology and Society (ISTAS), 255–265. https://doi.org/10.1109/ISTAS50296.2020.9462206
Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online Harassment and Content Moderation: The Case of Blocklists. ACM Trans. Comput.-Hum. Interact., 25(2). https://doi.org/10.1145/3185593
Marwick, A. E. (2021). Morally Motivated Networked Harassment as Normative Reinforcement. Social Media + Society, 7(2), 20563051211021378. https://doi.org/10.1177/20563051211021378
McVeigh-Schultz, J., & Baym, N. K. (2015). Thinking of You: Vernacular Affordance in the Context of the Microsocial Relationship App, Couple. Social Media + Society, 1(2), 2056305115604649. https://doi.org/10.1177/2056305115604649
Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health. JAMA Internal Medicine, 176(5), 619–625. https://doi.org/10.1001/jamainternmed.2016.0400
Munger, K. (2017). Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment. Political Behavior, 39(3), 629–649. https://doi.org/10.1007/s11109-016-9373-5
Phillips, A. L. (2018). Youth Perceptions of Online Harassment, Cyberbullying, and “just Drama”: Implications for Empathetic Design. In J. Golbeck (Ed.), Online Harassment (pp. 229–241). Springer International Publishing. https://doi.org/10.1007/978-3-319-78583-7_10
Royen, K. V., Poels, K., Vandebosch, H., & Zaman, B. (2021). Think Twice to be Nice? A User Experience Study on a Reflective Interface to Reduce Cyber Harassment on Social Networking Sites. International Journal of Bullying Prevention. https://doi.org/10.1007/s42380-021-00101-x
Schoenebeck, S., Haimson, O. L., & Nakamura, L. (2021). Drawing from justice theories to support targets of online harassment. New Media & Society, 23(5), 1278–1300. https://doi.org/10.1177/1461444820913122
Schwartz, B., & Neff, G. (2019). The gendered affordances of Craigslist “new-in-town girls wanted” ads. New Media & Society, 21(11–12), 2404–2421. https://doi.org/10.1177/1461444819849897
Semenzin, S., & Bainotti, L. (2020). The Use of Telegram for Non-Consensual Dissemination of Intimate Images: Gendered Affordances and the Construction of Masculinities. Social Media + Society, 6(4), 2056305120984453. https://doi.org/10.1177/2056305120984453
Turner, P., & Turner, S. (2011). Is stereotyping inevitable when designing with personas? Design Studies, 32(1), 30–44. https://doi.org/10.1016/j.destud.2010.06.002
Vitak, J., Chadha, K., Steiner, L., & Ashktorab, Z. (2017). Identifying Women’s Experiences With and Strategies for Mitigating Negative Effects of Online Harassment. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 1231–1245. https://doi.org/10.1145/2998181.2998337
What are Affordances? (2019). The Interaction Design Foundation. https://www.interaction-design.org/literature/topics/affordances