Volume 6, Issue 1, https://doi.org/10.1037/tmb0000149
Online platforms are increasingly investing significant resources into the systems used to report and remove unwanted content on their platforms. However, building these systems in ways that strengthen trust and are seen as fair by those who engage directly with them—either as a reporter of content or as an individual having content removed—remains a challenge for many platforms. Using two surveys—one sent to individuals who had recently reported content and an identical survey sent to those who had content removed from the platform—paired with logged platform data from the 6 months before and 3 months following the surveys, we explore the associations between people’s perceptions of the fairness of Nextdoor’s moderation system and later behaviors on the platform, including content removals, future reporting, and future visitations to the platform. We find that those who felt their moderation experience was relatively more fair are more likely to report content to the platform again and choose to visit the platform more frequently in the months that follow. These findings demonstrate the connection between fair moderation systems and engagement on the platform more broadly, pointing toward opportunities for platforms to build trust and legitimacy through better design of the systems used for reporting and removal.
Keywords: social media, procedural justice, content moderation
Acknowledgements: The authors thank individuals at Nextdoor who helped to make this work possible including Haris Tajani, Fay Johnson, and Laura Bisesto; the Justice Collaboratory, especially Tracey Meares, for their support; and finally, Vivian Zhao for her assistance in analysis of open-ended responses within the survey which helped inform the work presented here.
Funding: This work was supported by a grant from the Stavros Niarchos Foundation.
Disclosures: The authors report there are no competing interests to declare.
Data Availability: The data used in this study as well as the code used to produce the analyses presented here are available for replication use only and can be found on the Open Science Framework at https://osf.io/pf3u4/. Also included in the repository are additional online materials including the full survey text, additional figures for structural equation models presented, and results from the replicated analysis including outliers.
Open Science Disclosures: The data are available at https://osf.io/pf3u4/?view_only=b5677b81e7ef4037a886298bf1a59686. The experimental materials are available at https://osf.io/pf3u4/?view_only=b5677b81e7ef4037a886298bf1a59686.
Open Access License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.
Correspondence concerning this article should be addressed to Matthew Katsaros, Yale Law School, 127 Wall Street, New Haven, CT 06511, United States. Email: [email protected]
Online platforms spend considerable resources managing undesirable content, problematic behaviors, and the users of their platforms on a range of topics from harassment to hate speech. As just one example of this, the CEO of TikTok, Shou Zi Chew, stated in his written testimony in a hearing with the Senate Committee on the Judiciary focused on “the Online Child Sexual Exploitation Crisis” (United States Senate Committee on the Judiciary, 2024) that his company has:
More than 40,000 people globally work on trust and safety issues for TikTok. … This year we expect to invest more than two billion dollars in trust and safety efforts, with a significant portion of that investment in our US operations. (Chew, 2024, p. 5)
A major portion of platform resources on these issues is dedicated to supporting the content moderation apparatus—a set of human and computational systems through which reports of undesirable content are reviewed against the platform’s policies and rules, followed by a corresponding enforcement action (Gillespie, 2018).
In the past, platforms have relied heavily on command-and-control approaches in these content moderation systems, which motivate rule following by identifying and sanctioning rule violators with punitive actions ranging from temporary suspension of features to demonetization (Goldman, 2021). Increasingly, researchers and platforms have been searching for alternative governance models, sometimes motivated by a desire to provide justice for targets of unwanted online activity and other times by a platform’s desire to reduce recidivism in rule-breaking behavior (Badalich, 2023; Katsaros et al., 2022; Newton, 2023; Schoenebeck & Blackwell, 2020). One alternative approach is to build these content moderation systems using a self-regulatory model drawn from theories of social psychology, leveraging insights from the extensive implementation of such models within criminal legal settings (T. R. Tyler, 2006).
The self-regulatory model is based upon the capacity of legitimacy to motivate people to accept a platform’s rules as appropriate and reasonable and voluntarily follow these rules. Self-regulatory approaches are valuable because, for a majority of people who pass through a rule-enforcing system, authorities and institutions do not need to create and maintain credible models of sanctioning in order to manage user behavior. The key question is whether (and how) it is possible for platforms to be viewed as legitimate.
A key contribution of research in this field has been to identify procedural justice (PJ) as a central antecedent of legitimacy. This research suggests that platforms can motivate self-regulatory actions on the part of users through the design and architecture of their platform, specifically their content moderation system. Simply put, by designing procedures for enforcing rules which users experience as fair, platforms can lower recidivism in the future. Prior work has shown correlational and causal support for this argument (Katsaros et al., 2022; T. Tyler et al., 2021). This suggests the viability of a self-regulation approach.
Two gaps exist in the prior literature which we hope to address here. First, much of the prior research focuses on the experience of individuals who have broken rules. Undoubtedly, that is a critical perspective to consider; however, there also exist many individuals who are engaging with a platform’s moderation system in the form of reporting unwanted behaviors and users. Second, while research has explored how people perceive moderation experiences as fair or unfair and has even demonstrated the relationship between that perceived fairness during moderation and their likelihood to follow platform rules, less is known about how people’s experiences with a platform’s moderation process can impact a broader set of platform actions like visitations to the platform or general content creation on the platform. How might the way a platform treats individuals through these moderation experiences relate to a broader set of platform experiences? This distinguishes efforts to motivate rule following which focus on eliminating and punishing undesirable actions from efforts to create an online platform upon which people are positively motivated to engage.
Using surveys sent to individuals who have recently reported content or had content removed from the Nextdoor platform paired with logged platform data in the 6 months before and 3 months following the survey, we aimed to begin to address these gaps in the research. In this study, we explore correlational associations between people’s experience with the fairness of Nextdoor’s moderation system and later behaviors on the platform. We find that those who felt their moderation experience was relatively more fair are more likely to choose to visit the platform more frequently in the months that follow.
Since the earliest days of social online spaces, the sites, servers, and platforms hosting these social interactions have had to find ways to manage user-generated content and behaviors (Klonick, 2018; Seering, 2020; Zuckerman & Rajendra-Nicolucci, 2023). Today’s largest online platforms like Instagram or TikTok primarily manage these issues through a top-down form of governance leveraging complex content moderation systems. Across platforms, these systems vary greatly and are constantly changing, making it difficult to accurately and exhaustively describe them all; however, many of these systems share a set of common building blocks: a set of rules set forth by the platform articulating content and behaviors that are not allowed on the platform, a way for potentially violative content to be reported to the platform (either by individual users or through “proactive” algorithms deployed by the platform), a way for the platform to review reported content against its rules, and a way to enforce and punish those who break rules (Gillespie, 2018; Goldman, 2021; Grimmelmann, 2015; Jiang et al., 2020; Roberts, 2019).
In contrast to this top-down governance approach, another common platform governance structure is community-driven governance used by platforms like Reddit, Discord, Wikipedia, and even within Groups on Facebook (Zuckerman & Rajendra-Nicolucci, 2023). In this community-driven governance model, the specific strategies and structures of governance vary greatly; however, at the core are individuals who are part of a given online community (sometimes referred to as “mods”) actively participating in the governance of their community. Here, these individuals are screening members, creating the rules and norms, diffusing conflict, and enforcing rules (Fiesler et al., 2018; Seering, 2020; Seering et al., 2019, 2022; Seering & Kairam, 2023).
Much attention has been paid in recent years, across many scholarly disciplines, to the way that platforms approach content moderation and governance more broadly. Some of this work has looked at the efficacy of particular actions taken by platforms to enforce their rules, including down-ranking (Gillespie, 2022), removing entire communities from a platform (Chandrasekharan et al., 2022), or using automation to filter and remove offensive comments (Horta Ribeiro et al., 2023; Katsaros et al., 2023). Other scholars, like Rafael Jiménez-Durán, have focused on understanding the economics of content moderation, highlighting the tension platforms face when content that is highly engaging, and thus generates advertising revenue, is also toxic or violative of platform rules (Beknazar-Yuzbashev et al., 2022, 2024; Jiménez-Durán, 2023). Ma, You, et al. (2023) conducted a systematic literature review of research exploring how people experience content moderation across platforms. Among many findings in this work, the researchers highlight the way that “moderation tends to structure platforms and their users as opposing parties” (Ma, You, et al., 2023, p. 19). They highlight many opportunities for platforms to better engage with moderated users as stakeholders in the design process.
When it comes to enforcing rules, many platforms emphasize the importance of fairness within their content moderation systems (SafetyPhilosophy Twitter, 2021; Twitch, n.d.). Justice and fairness (terms we use interchangeably throughout this article) have been studied for decades across disciplines, spanning many domains (both online and offline) and cultural contexts. Here, we highlight some of the different theories of justice that have emerged from this literature.
Procedural justice is a theory that suggests that, when coming to conclusions about a decision’s fairness, people are concerned with the quality of the decision-making process, beyond any specific outcome it renders. Studies suggest that people consider four primary factors when evaluating a system of rules (T. R. Tyler, 2006). All are linked to lay conceptions of just ways for an authority to behave. Two factors involve how decisions are made—voice and neutrality. For voice, people want to have a chance to state their case, present their evidence, and tell their side of the story. For neutrality, people want to know that decision makers are unbiased, fact-based, and consistent. Next is dignity and respect—people feel that they are entitled to be treated with courtesy and dignity, that is, as members in good standing in the community who deserve just treatment. Last is trust—people want to feel that those exercising authority over them have benevolent and sincere motivations and are trying to do what is best for them and others like them.
Many scholars have examined the intersection of content moderation and procedural justice from both a theoretical and an empirical perspective. One aspect of procedural justice which has received outsized attention in the literature is transparency in content moderation. Here, we are not talking about “Transparency Reports,” which are increasingly common (and now required by recent regulation like the European Union’s Digital Services Act) and which report quarterly or yearly aggregate statistics of content moderation actions across a platform. Instead, we are talking about efforts by platforms to communicate on an individualized level with people who come in contact with a platform’s content moderation apparatus (Suzor et al., 2019). Studies have shown that explanations and transparency provided to individuals who have broken rules can shape perceptions of overall fairness (OF; Gonçalves et al., 2023; Jhaver, Appling, et al., 2019; Yurrita et al., 2023), can lead to a decrease in future rule violations (Jhaver, Bruckman, & Gilbert, 2019; Katsaros et al., 2022; T. Tyler et al., 2021), and can increase one’s ability to cope with resulting punishments (Ma, Li, & Kou, 2023). Other studies have focused on proactive communication and transparency of rules, demonstrating that attempts to show individuals the rules earlier on can translate to a decrease in norm violations (Kim et al., 2022; Matias, 2019). While transparency is clearly an important aspect of procedural fairness, other work has highlighted the ways in which other procedural elements like consistency, neutrality, and voice contribute toward people’s perceptions of fairness (Ma & Kou, 2022).
While procedural justice is one theory of justice, scholars have of course proposed other models of fairness. Distributive justice (DJ), for example, is a theory that focuses on the fairness of the outcomes delivered by decision makers. In a distributive justice model, individuals are more concerned with the outcomes themselves than with the way in which those outcomes are derived. Some studies have shown that outcomes can be a salient aspect in determining fairness in algorithmic decision making (Lee et al., 2019). Restorative justice is a justice theory which focuses on acknowledging and repairing harms caused. Many scholars looking at online content moderation systems have pointed to the way in which these systems are mainly a way for platforms to protect advertisers and offer little in the way of actually addressing or repairing harms caused on the platforms (Schoenebeck & Blackwell, 2020). In that vein, researchers have explored opportunities for moderation systems and platform governance more broadly to move toward a restorative approach (Cai et al., 2024; Gilbert, 2023; Schoenebeck et al., 2021; Scott et al., 2023; Xiao et al., 2022, 2023).
This study seeks to understand people’s experience with Nextdoor’s content moderation system from two different perspectives—those who are reporting unwanted content to the platform hoping to have some action taken and those who are having their content removed from the platform following someone else’s report. In our study, we surveyed these individuals separately to ask specific questions about the reporting and removal experience, and, as such, we analyze these two samples separately. While these reporting and removal experiences are separate experiences, it is less clear whether the populations of people who report content to platforms are distinct from the populations of people having content removed. The frame of “targets” and “perpetrators” is often used to theorize about unwanted online activity, but is this accurate or reductive? Is it the case that individuals who have content reported also act as reporters of unsafe content to platforms? And vice versa—do those who routinely alert platforms of unwanted content also end up authoring problematic content that gets removed? Knowing the answer to this question from the start will help us better contextualize our results later on. Leveraging logged platform data spanning many months observing reporting and content removal actions, we hope to answer the first research question:
Research Question 1: To what extent are individuals who report content and those that have content removed two distinct populations?
Next, we want to better understand the relationship between how people perceive the fairness of Nextdoor’s content moderation system and their subsequent behaviors on the platform. Given the prior research, we are interested in understanding how perceptions of both distributive and procedural justice elements relate to platform behaviors. We are interested in how these perceptions relate to two groups of platform behaviors—engagement specifically with the moderation system and general platform visitation. In measuring engagement with the moderation system, we are interested in two specific measures on a user level—reporting content and content removals. In measuring overall platform visitation, we observe how many “sessions” or visits to the Nextdoor platform a participant has in the months that follow. As such, we propose the following hypotheses:
Hypothesis 1: Increased procedural justice perception is correlated with more positive platform engagement. (a) For individuals surveyed about their recent content removal experience, increased perceptions of procedural justice are correlated with a decrease in future removals. (b) For individuals surveyed about their recent content reporting experience, increased perceptions of procedural justice are correlated with an increase in future reporting. (c) For all individuals, increased perceptions of procedural justice are correlated with an increase in overall platform visitation.
Hypothesis 2: Increased distributive justice perception is correlated with more positive platform engagement. (a) For individuals surveyed about their recent content removal experience, increased perceptions of distributive justice are correlated with a decrease in future removals. (b) For individuals surveyed about their recent content reporting experience, increased perceptions of distributive justice are correlated with an increase in future reporting. (c) For all individuals, increased perceptions of distributive justice are correlated with an increase in overall platform visitation.
In collaboration with the Nextdoor platform, we sent two, nearly identical, surveys out to individuals who had recently engaged with the moderation system. One survey was sent to individuals who had reported a piece of content in the 30 days prior to our survey collection period. The other survey was sent to individuals who had a piece of content removed in the 30 days prior to our survey collection period. Aside from the recent reporting/removal experience, the participant had to be using an account on Nextdoor with an address in the United States to be eligible for our survey. These inclusion criteria (recent engagement with moderation system and use of the platform with a U.S. address) were determined through logged platform data obtained by Nextdoor as part of their routine business process of logged data collection. Qualifying participants were sent emails from the Nextdoor platform recruiting them for the study where the final two inclusion criteria were met while taking the survey: The participant had to be over 18 years old and provide informed consent to participate in the study. The total number of individuals eligible for the reporter survey was 111,701 and 14,157 for the removal survey. Of those eligible, 19,217 were sent an email for the reporter survey and 14,157 for the removal survey. We received 2,536 starts for the reporter survey and 1,004 for the removal survey. The survey data were collected over a 12-day period from August 30, 2023, to September 11, 2023. No demographic details of the participants were collected during this study.
This project was reviewed by Yale’s institutional review board (No. 2000035588) and was approved on July 13, 2023. Participants provided informed consent before commencing with the research study.
Participants completed a survey through the online surveying software Qualtrics. The survey included many items asking about the participant’s recent experience with Nextdoor’s content moderation system.
One item asked about the overall fairness of their experience: “Overall, how fair was your most recent experience having your post or comment removed from Nextdoor?” (1 = very unfair; 5 = very fair). Ten items asked participants about procedural elements of their moderation experience like transparency, consistency, voice, and being treated with dignity and respect. Examples of these items include asking the participant to agree or disagree with the statements “During the process of having my post or comment removed from Nextdoor, I was treated with respect” and “Reported posts and comments are reviewed in a way that is unbiased and neutral.” Internal consistency of the 10 procedural fairness items was excellent, with Cronbach’s α = .90 and McDonald’s ω = .890. Other items asked them to agree or disagree with statements regarding the distributive fairness elements of their experience like “I received an outcome proportional to the harm caused.” Internal consistency of the two distributive fairness items was also good, with Cronbach’s α = .85 and Spearman’s correlation = 0.738. The procedural fairness and distributive fairness questions were answered on a scale from strongly disagree (1) to strongly agree (5). While there is not yet a standardized survey instrument to measure these constructs, the survey items asking about overall, procedural, and distributive fairness of the moderation experience were leveraged from prior studies exploring these topics (Gonçalves et al., 2023; Katsaros et al., 2022; T. Tyler et al., 2021; T. R. Tyler & Huo, 2002).
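For readers who want to compute comparable reliability estimates on similar item-level data, the following is a minimal R sketch using the psych package; the data frame survey_df and the item names pj1 through pj10, dj1, and dj2 are hypothetical placeholders rather than the study’s actual variable names.

```r
# Reliability of the fairness scales: a minimal sketch with hypothetical
# variable names (survey_df, pj1..pj10, dj1, dj2).
library(psych)

pj_items <- survey_df[, paste0("pj", 1:10)]  # 10 procedural fairness items (1-5 Likert)
dj_items <- survey_df[, c("dj1", "dj2")]     # 2 distributive fairness items (1-5 Likert)

psych::alpha(pj_items)   # Cronbach's alpha for the procedural fairness scale
psych::omega(pj_items)   # McDonald's omega (total) for the procedural fairness scale
psych::alpha(dj_items)   # Cronbach's alpha for the two distributive fairness items

# Spearman correlation between the two distributive fairness items
cor(dj_items$dj1, dj_items$dj2, method = "spearman", use = "pairwise.complete.obs")
```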
Two items asked the participant to indicate how important it was to “Feel safe on Nextdoor” and “Speak your mind freely on Nextdoor” (1 = not at all important; 5 = extremely important). One item asked about their trust in the platform asking participants to agree or disagree with the statement “I trust Nextdoor to make decisions that are best for my neighbors and my neighborhood” (1 = strongly disagree; 5 = strongly agree).
The exact text for all question items was altered to ask about either their recent experience reporting content to Nextdoor or having content removed from Nextdoor depending on the survey the participant qualified for and received. The full questionnaire text for the two surveys used is available in the additional online materials accessible through the Open Science Framework link at https://osf.io/pf3u4/ provided in the Data Availability statement.
As mentioned, the survey data collected were paired with logged platform data in the months preceding and following the survey collection period. These logged platform data were collected by Nextdoor as part of their routine business data collection processes and were provided to the researchers along with a way of connecting individual survey responses to logged platform behavioral data for analysis. The logged platform data collected include the following: prior 6 months of content reporting, prior 6 months of content removals, following 3 months of content reporting, following 3 months of content removals, prior 3 months of platform visitations, and following 3 months of platform visitations. The logged platform data were nonnormally distributed with extreme outliers. To handle the extreme nonnormality of the data set, we identified outliers for the prior and subsequent platform behavior being measured in each of the following models using modified Z scores calculated with the median absolute deviation, consistent with Iglewicz and Hoaglin (1993). Presented in the Results section are analyses where identified outliers have been removed. However, we replicated these analyses while including these outliers, which produced findings consistent with the models presented below. Results from these supplemental analyses including outliers can be found in the public repository of additional online materials (Open Science Framework: https://osf.io/pf3u4/) linked to in the Data Availability statement.
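As an illustration of this outlier rule, the sketch below flags values whose modified Z scores exceed the |M| > 3.5 cutoff commonly recommended by Iglewicz and Hoaglin (1993); the cutoff and the column name presurvey_visits are assumptions for illustration, since the exact threshold used is not stated above.

```r
# Modified Z-score outlier flagging based on the median absolute deviation:
# a minimal sketch. The 3.5 cutoff is the conventional Iglewicz-Hoaglin
# recommendation and is assumed here for illustration.
modified_z <- function(x) {
  med <- median(x, na.rm = TRUE)
  mad_raw <- median(abs(x - med), na.rm = TRUE)  # unscaled median absolute deviation
  0.6745 * (x - med) / mad_raw
}

flag_outliers <- function(x, cutoff = 3.5) {
  abs(modified_z(x)) > cutoff
}

# Example usage with a hypothetical column of logged platform visits:
# analysis_df <- df[!flag_outliers(df$presurvey_visits), ]
```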
To answer the questions and test the hypotheses outlined above, we collaborated with the local social media platform Nextdoor to launch a set of surveys measuring people’s attitudes toward the platform’s content moderation system while pairing participants’ responses with logged platform data in the months preceding and following the survey. Using these data, we are able to build correlational models helping to understand how people’s attitudes about their enforcement experience correlate to particular outcomes and actions taken on the platform in the months that follow.
Nextdoor is a local social media platform whose stated mission is that by bringing “neighbors and organizations together, we can cultivate a kinder world where everyone has a neighborhood they can rely on” (https://about.nextdoor.com/). The platform requires users to sign up using real names and addresses and is built with privacy boundaries aimed at reflecting offline neighborhoods such that users can only see and interact with users who live physically nearby. Importantly for this study, the platform has a unique approach toward content moderation, which we briefly describe here. Content that is reported by users is sometimes reviewed by the Nextdoor platform itself, though more often these reports are reviewed by other users in their neighborhood who volunteer to participate in moderation decisions. Content sent to these volunteer reviewers is voted on (volunteer reviewers are given options to “Remove,” “Maybe Remove,” or “Keep”), and, through a proprietary and opaque algorithm, a determination is made on the content once enough votes have been cast (https://help.nextdoor.com/s/article/Community-Reviewers-and-Moderation?language=enUS).
As described in the Participants section, individuals using the platform with a U.S. address who had either reported content or had content removed were sent a survey. Logged platform data were collected 6 months before and 3 months following the survey collection period.
An analysis was conducted in R (Version 4.3.2) using the combined survey and logged platform data. The data used in this study as well as the code used to produce the analyses presented here are available for replication use only and can be found in the link provided in the Data Availability statement.
The hypotheses outlined above were explored using three structural equation models (SEMs). One model focuses on the content removal experience using data from the survey asking about content removals. The next model focuses on the content reporting experience using data from the survey asking about reporting. And the final model looks at visitations to the platform using data from both surveys together. Each of these models uses a similar structure.
First, a latent construct for procedural justice is created using the 10 individual procedural justice survey items. A latent construct for distributive justice is created using the two individual distributive justice survey items. A confirmatory factor analysis was conducted across the full set of survey respondents using these 12 items loaded on two factors (10 for procedural justice, two for distributive justice), which had a comparative fit index (CFI) = 0.854, Tucker–Lewis index = 0.818, and standardized root-mean-square residual = 0.061. The factor correlation between the procedural justice and distributive justice factors was 0.773 indicating these are distinct but related latent constructs with all individual survey items having factor loadings >0.5. A latent construct of overall fairness was created using the procedural and distributive justice constructs along with two individual survey items—one asking about overall fairness of their moderation experience (“Overall, how fair was your most recent experience having your post or comment removed from Nextdoor?”) and another asking generally about trust in Nextdoor (“I trust Nextdoor to make decisions that are best for my neighbors and my neighborhood”). In the model looking at the removal experience, the number of content removals in the 3 months after the survey was predicted using the overall fairness construct, the number of content removals 6 months prior to the survey, and the two survey items asking about the importance of speaking freely and feeling safe. The same structure was used in the model looking at the reporting experience replacing the platform variables of prior and subsequent removals with reports. The same structure was used in the final model looking at platform visitations, except prior removals and prior reports were added alongside prior platform visitations as predictors of subsequent platform visitations.
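To make this structure concrete, a specification of the removal model in the syntax of the lavaan package (introduced in the next paragraph) might look like the following sketch; all observed variable names are hypothetical placeholders, and the reporter and visitation models would substitute the corresponding report and visit variables as described above.

```r
# Removal SEM specification: a minimal sketch following the structure described
# above. Observed variable names (pj1..pj10, dj1, dj2, fair_item, trust_item,
# pre_removals, post_removals, safety, speech) are hypothetical placeholders.
removal_model <- '
  # Latent fairness constructs
  PJ =~ pj1 + pj2 + pj3 + pj4 + pj5 + pj6 + pj7 + pj8 + pj9 + pj10
  DJ =~ dj1 + dj2

  # Higher-order overall fairness construct with two additional observed indicators
  OF =~ PJ + DJ + fair_item + trust_item

  # Structural regression: postsurvey removals on overall fairness, prior
  # removals, and the two importance items (feeling safe, speaking freely)
  post_removals ~ OF + pre_removals + safety + speech
'
```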
To construct these SEMs, we use the lavaan R package Version 0.6-17 (Rosseel, 2012) using maximum likelihood estimation with robust (Huber–White) standard errors and full information maximum likelihood estimation. Full descriptive statistics for all of the variables for the data used in these three models are included in Table 1, Table 2, and Table 3.
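A fitting call consistent with these estimation choices could look like the sketch below (the data frame name is again a placeholder); the descriptive statistics referenced above follow in Tables 1 through 3.

```r
library(lavaan)  # Version 0.6-17 was used for the analyses reported here

# Fit the removal model sketched above; estimator = "MLR" requests maximum
# likelihood with robust (Huber-White) standard errors, and missing = "fiml"
# requests full information maximum likelihood for missing data.
fit_removal <- sem(removal_model,
                   data = removal_df,   # hypothetical data frame of survey + logged data
                   estimator = "MLR",
                   missing = "fiml")

fitMeasures(fit_removal, c("cfi", "tli", "srmr"))  # fit indices of the kind reported below
standardizedSolution(fit_removal)                  # standardized loadings and paths
summary(fit_removal, standardized = TRUE, fit.measures = TRUE)
```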
Table 1
Descriptive Statistics for Removal SEM Used for Hypotheses 1a and 2a
Characteristic | M (SD) | Mdn | Min.–Max. |
---|---|---|---|
Trust Nextdoor | 2.50 (1.25) | 2.00 | 1.00–5.00 |
Values safety | 3.76 (1.28) | 4.00 | 1.00–5.00 |
Values speech | 4.23 (0.97) | 5.00 | 1.00–5.00 |
Overall fairness item | 1.94 (1.30) | 1.00 | 1.00–5.00 |
PJ Item 1 | 2.07 (1.43) | 1.00 | 1.00–5.00 |
PJ Item 2 | 1.80 (1.23) | 1.00 | 1.00–5.00 |
PJ Item 3 | 2.55 (1.39) | 3.00 | 1.00–5.00 |
PJ Item 4 | 1.93 (1.32) | 1.00 | 1.00–5.00 |
PJ Item 5 | 2.16 (1.29) | 2.00 | 1.00–5.00 |
PJ Item 6 | 1.91 (1.19) | 1.00 | 1.00–5.00 |
PJ Item 7 | 2.14 (1.46) | 1.00 | 1.00–5.00 |
PJ Item 8 | 2.64 (1.50) | 2.00 | 1.00–5.00 |
PJ Item 9 | 3.66 (1.33) | 4.00 | 1.00–5.00 |
PJ Item 10 | 2.69 (1.45) | 3.00 | 1.00–5.00 |
DJ Item 1 | 1.88 (1.31) | 1.00 | 1.00–5.00 |
DJ Item 2 | 1.87 (1.22) | 1.00 | 1.00–5.00 |
Presurvey content removals | 20 (17) | 15 | 1–72 |
Postsurvey content removals | 6 (7) | 3 | 0–24 |
Note. SEM = structural equation model; Min. = minimum; Max. = maximum; PJ = procedural justice; DJ = distributive justice. |
Table 2
Descriptive Statistics for Reporter SEM Used for Hypotheses 1b and 2b
Characteristic | M (SD) | Mdn | Min.–Max. |
---|---|---|---|
Trust Nextdoor | 2.99 (1.15) | 3 | 1.00–5.00 |
Values safety | 4.14 (1.05) | 4 | 1.00–5.00 |
Values speech | 3.60 (1.07) | 4 | 1.00–5.00 |
Overall fairness item | 3.79 (1.29) | 4 | 1.00–5.00 |
PJ Item 1 | 3.77 (1.33) | 4 | 1.00–5.00 |
PJ Item 2 | 3.35 (1.24) | 3 | 1.00–5.00 |
PJ Item 3 | 3.74 (1.15) | 4 | 1.00–5.00 |
PJ Item 4 | 2.65 (1.45) | 3 | 1.00–5.00 |
PJ Item 5 | 3.15 (1.37) | 3 | 1.00–5.00 |
PJ Item 6 | 3.06 (1.31) | 3 | 1.00–5.00 |
PJ Item 7 | 2.90 (1.47) | 3 | 1.00–5.00 |
PJ Item 8 | 3.43 (1.42) | 4 | 1.00–5.00 |
PJ Item 9 | 4.12 (1.06) | 4 | 1.00–5.00 |
PJ Item 10 | 3.41 (1.39) | 4 | 1.00–5.00 |
DJ Item 1 | 3.41 (1.27) | 3 | 1.00–5.00 |
DJ Item 2 | 3.02 (1.17) | 3 | 1.00–5.00 |
Presurvey content reports | 15 (16) | 8 | 1–70 |
Postsurvey content reports | 8 (10) | 4 | 0–43 |
Note. SEM = structural equation model; Min. = minimum; Max. = maximum; PJ = procedural justice; DJ = distributive justice. |
Table 3
Descriptive Statistics for Platform Visits SEM Used for Hypotheses 1c and 2c
Characteristic | M (SD) | Mdn | Min.–Max. |
---|---|---|---|
Trust Nextdoor | 2.86 (1.20) | 3 | 1.00–5.00 |
Values safety | 4.07 (1.11) | 4 | 1.00–5.00 |
Values speech | 3.71 (1.09) | 4 | 1.00–5.00 |
Overall fairness item | 3.40 (1.51) | 4 | 1.00–5.00 |
PJ Item 1 | 3.48 (1.53) | 4 | 1.00–5.00 |
PJ Item 2 | 3.01 (1.39) | 3 | 1.00–5.00 |
PJ Item 3 | 3.48 (1.31) | 3 | 1.00–5.00 |
PJ Item 4 | 2.51 (1.45) | 2 | 1.00–5.00 |
PJ Item 5 | 2.90 (1.44) | 3 | 1.00–5.00 |
PJ Item 6 | 2.76 (1.39) | 3 | 1.00–5.00 |
PJ Item 7 | 2.78 (1.54) | 3 | 1.00–5.00 |
PJ Item 8 | 3.35 (1.48) | 4 | 1.00–5.00 |
PJ Item 9 | 4.07 (1.15) | 4 | 1.00–5.00 |
PJ Item 10 | 3.32 (1.45) | 4 | 1.00–5.00 |
DJ Item 1 | 3.10 (1.44) | 3 | 1.00–5.00 |
DJ Item 2 | 2.79 (1.29) | 3 | 1.00–5.00 |
Presurvey content removals | 15 (42) | 1 | 0–574 |
Presurvey content reports | 42 (110) | 10 | 0–1,734 |
Presurvey platform visits | 427 (304) | 346 | 4–1,434 |
Postsurvey platform visits | 389 (306) | 301 | 1–1,359 |
Note. SEM = structural equation model; Min. = minimum; Max. = maximum; PJ = procedural justice; DJ = distributive justice. |
In our study, all participants took one of two surveys. These two surveys were identically structured; however, changes to question phrasing were made to ask about either people’s experience reporting content or their experience having content removed (see the Data Availability statement for full survey text). To qualify for the reporter survey, participants had to have made one or more reports in the month prior to sending the survey out. To qualify for the removal survey, participants had to have had one or more pieces of content removed in the month prior to sending the survey out. Individuals who qualified for both were randomly sent one of the two surveys. Regardless of which survey was taken, we collected approximately 9 months of data (6 months prior to survey and 3 months following) logging how many reports and removals each individual had. From this, we can get an understanding of how often individuals who report content also have content removed and vice versa.
The reporter survey had 2,536 participants start the survey. Of those participants, we see that 1,212 (48%) participants had one or more pieces of content removed during the 9 months of collected data, while the remaining 1,324 (52%) participants only ever submitted reports. The removal survey had 1,004 participants start the survey. Of those participants, 640 (64%) also submitted one or more reports, while the remaining 364 (36%) participants only had content removals during the 9 months period. These results are shown in Figure 1.
Figure 1
Participants in Our Study Took a Survey Asking About Reporting or Removal Experiences
Note. Shown here, 64% of participants who took the removal survey and 48% of individuals who took the reporting survey had both content removed and reports submitted during the 9-month data collection period.
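The overlap figures above can be computed directly from the logged counts; the sketch below shows one way to do so in R, where the data frame participants and its columns survey, total_reports, and total_removals are hypothetical placeholders summarizing the 9-month window.

```r
# Reporter/removal overlap (Research Question 1): a minimal sketch assuming one
# row per participant with hypothetical columns for the survey taken and the
# total reports and removals logged over the 9-month window.
library(dplyr)

overlap_summary <- participants %>%
  group_by(survey) %>%   # "reporter" or "removal"
  summarise(
    n          = n(),
    n_both     = sum(total_reports > 0 & total_removals > 0),
    share_both = mean(total_reports > 0 & total_removals > 0)
  )

overlap_summary
```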
These results indicate that many of the individuals who are encountering content moderation systems are doing so in different ways. Two thirds of those who have content removed are also engaging with the moderation system by reporting content to the platform. And nearly half of those who are reporting content to the platform are also having content removed. In many of the remaining analyses, we look at the results of the two surveys separately as they are, in fact, asking about different experiences. However, it is important to consider that the populations from which these two survey samples were drawn are clearly overlapping—the individuals reporting and those having content removed are not necessarily mutually distinct groups of users.
To analyze the relationship between people’s experience with the moderation system and platform behaviors, we used both logged platform data and survey responses to construct three different SEMs as described in the Analysis section.
Many of the survey items used in these models are questions which ask about people’s most recent time having content removed or submitting a report. Before asking specific questions about their most recent experience with the moderation system, we asked participants a recall question phrased either as “Has a post or comment that you have made ever been removed from Nextdoor?” for the removal survey or as “Have you ever reported a post or a comment for review?” for the reporter survey, with answer options “Yes,” “No,” and “I’m not sure.” Those who answered “No” were sent to the end of the survey as it is of no use to ask questions about their experience with the moderation system if they cannot recall it (despite having had a logged reporting or removal event qualifying them for the survey). Those who answered “I’m not sure” continued taking the survey, which included a follow-up open-ended question asking them to describe this most recent experience. Upon inspection of these responses, those who answered “I’m not sure” were also not included in these models. In the reporter survey, nearly all participants recalled their prior reporting; 2,328 participants answered “Yes,” while only 120 answered “No” and 20 answered “I’m not sure.” However, in the removal survey, only 651 participants answered “Yes” with 213 participants answering “No” and another 82 “I’m not sure.” Approximately one third of participants in our removal survey did not recall ever having had a piece of content removed.
To answer Hypotheses 1a and 2a, we look at our removal model (n = 459; CFI = 0.847) shown in Figure 2. In this model, we do observe that both our procedural justice and distributive justice latent variables are significantly positively correlated with our overall fairness latent variable. However, there is no significant relationship between the overall fairness latent variable and the total removals in the 3 months following the survey (standardized = −0.032; SE = 0.480; p = .532). As such, Hypotheses 1a and 2a are not supported. Here, we observe that, as expected, the prior 6-month content removals are significantly positively correlated with the subsequent 3 months of removals (standardized = 0.370; SE = 0.018; p < .001). A regression table for this model is shown in Table 4. A factor loading table for this model is shown in Appendix Table A1.
Figure 2
Removal SEM
Note. SEM = structural equation model; PJ = procedural justice; DJ = distributive justice.
*** p < .01.
Table 4
Regression Table for Postsurvey Content Removals in the Removal SEM Used for Hypotheses 1a and 2a

Parameter | Estimate | SE | z | p | Standardized (LV) | Standardized (all)
---|---|---|---|---|---|---
Postsurvey content removals ~ | ||||||
OF | −0.300 | 0.480 | −0.625 | .532 | −0.223 | −0.032 |
Presurvey content removals | 0.148 | 0.018 | 8.274 | <.001 | 0.148 | 0.370 |
Values safety item | −0.127 | 0.248 | −0.513 | .608 | −0.127 | −0.023 |
Values speech item | 0.555 | 0.297 | 1.872 | .061 | 0.555 | 0.078 |
Note. SEM = structural equation model; SE = standard error; LV = latent variables; OF = overall fairness. |
In order to answer Hypotheses 1b and 2b, we look at our reporter model (n = 1,750; CFI = 0.833) shown in Figure 3. As in the prior model, we do observe that both our procedural justice and distributive justice latent variables are significantly positively correlated with our overall fairness latent variable. Unlike the prior model for removals, in the reporter model, there is a significant positive relationship between the overall fairness latent variable and the total reports in the 3 months following the survey (standardized = 0.068; SE = 0.231; p = .002). In this case, Hypotheses 1b and 2b are supported—increases in the fairness of people’s reporting experience are associated with an increase in reporting new content over the next 3 months. And furthermore, our fairness construct in the model is shaped in relatively equal parts by procedural and distributive elements. We also see that, like our prior model, the prior 6 months of report variable is significantly positively correlated with subsequent reporting (standardized = 0.585; SE = 0.015; p < .001). A regression table for this model is shown in Table 5. A factor loading table for this model is shown in Appendix Table A2.
Figure 3
Reporter SEM
Note. SEM = structural equation model; PJ = procedural justice; DJ = distributive justice.
** p < .05. *** p < .01.
Table 5
Regression Table for Postsurvey Content Reports in the Reporter SEM Used for Hypotheses 1b and 2b

Parameter | Estimate | SE | z | p | Standardized (LV) | Standardized (all)
---|---|---|---|---|---|---
Postsurvey content reports ∼ | ||||||
OF | 0.726 | 0.231 | 3.148 | .002 | 0.667 | 0.068 |
Presurvey content reports | 0.352 | 0.015 | 23.158 | <.001 | 0.352 | 0.585 |
Values safety item | 0.232 | 0.166 | 1.393 | .164 | 0.232 | 0.025 |
Values speech item | −0.048 | 0.174 | −0.275 | .783 | −0.048 | −0.005 |
Note. SEM = structural equation model; SE = standard error; LV = latent variables; OF = overall fairness. |
Finally, to answer Hypotheses 1c and 2c, we look at our platform visitation model (n = 2,710; CFI = 0.853) shown in Figure 4. As in the prior two models, we observe that both our procedural justice and distributive justice latent variables are significantly positively correlated with our overall fairness latent variable. And as in the reporter model, there is a significant positive relationship between the overall fairness latent variable and the total number of platform visits in the 3 months following the survey (standardized = 0.047; SE = 3.933; p = .002). In this case, Hypotheses 1c and 2c are supported—increases in the fairness of people’s reporting or removal experience are associated with an increase in visiting the platform over the next 3 months. Furthermore, our fairness construct in this model is shaped in relatively equal parts by procedural and distributive elements. We also see that the total prior 6 months of platform visits is significantly positively correlated with subsequent platform visits (standardized = 0.824; SE = 0.015; p < .001). A regression table for this model is shown in Table 6. A factor loading table for this model is shown in Appendix Table A3.
Figure 4
Platform Visitation SEM
Note. SEM = structural equation model; PJ = procedural justice; DJ = distributive justice.
** p < .05. *** p < .01.
Table 6
Regression Table for Postsurvey Platform Visits SEM Used for Hypotheses 1c and 2c

Parameter | Estimate | SE | z | p | Standardized (LV) | Standardized (all)
---|---|---|---|---|---|---
Postsurvey platform visits ∼ | ||||||
Presurvey platform visits | 0.824 | 0.015 | 56.286 | <.001 | 0.824 | 0.820 |
Presurvey content reports | −0.222 | 0.137 | −1.613 | .107 | −0.222 | −0.040 |
Presurvey content removals | −0.679 | 0.259 | −2.622 | .009 | −0.679 | −0.046 |
OF | 12.472 | 3.933 | 3.172 | .002 | 14.450 | 0.047 |
Values safety item | 0.869 | 3.268 | 0.266 | .790 | 0.869 | 0.003 |
Values speech item | −8.466 | 3.284 | −2.578 | .010 | −8.466 | −0.030 |
Note. SEM = structural equation model; SE = standard error; LV = latent variables; OF = overall fairness. |
In this study, we examine how people experience Nextdoor’s content moderation system—both from the perspective of reporting content for review and from the perspective of having content removed from the platform. Many prior studies on this topic operate in a laboratory setting—recruiting (through widely varying methods) and surveying individuals about self-reported or hypothetical content moderation experiences. One of the significant strengths and contributions of this study is testing and extending these ideas with field validity. We are able to target individuals who we know actually had recent content moderation experiences and ask questions specifically about those recent experiences. And importantly, rather than relying on self-reported behaviors, we are able to pair logged platform data over an extensive 9-month data collection period with our survey measures, allowing us to gather insights about these experiences in a unique and robust way. In this way, we are able to provide greater evidence with field validity to a growing body of research trying to better understand how fairness within online content moderation can shape behaviors on a platform.
One of the first things that becomes clear from the results of Research Question 1 is that the population of people who report content and those who have content removed are not distinct and mutually exclusive. These two groups significantly overlap, with about two thirds of those having content removed also having made a report of content and about half of those who report content for review also having a post or comment removed during our 9-month data collection period. We believe this is an important finding that can be leveraged to inform the design of content moderation systems. It demonstrates that the various experiences people have with moderation systems, like reporting a harassing comment, are not single interactions but can be considered part of a broader user journey that can include multiple touchpoints with a moderation system in different contexts. Today, to the extent that platforms provide any transparency to individuals engaging with their moderation systems, they tend to be transactional about a single report submitted or a single piece of content being removed. With insights like this, platforms might also consider providing greater education and transparency into the moderation system as a whole—for example, not just saying whether a report was actioned but also providing information about what happens to the account that posted the violative content as a result of the report. Similarly, when removing content, platforms increasingly provide transparency into the specific rules that led to violations; however, platforms may also want to further empower these individuals with information on how to report violations, given that they now have a greater understanding of platform rules. In this way, platforms can rethink how they engage with individuals who break rules—rather than treating them as bad actors who need careful managing, platforms can engage positively with them, enabling them to become stewards of safety on the platform.
A clear limitation of this study is the observational nature of the study design. In this study, we are surveying members of Nextdoor and pairing those data with logged platform actions in the months before and after the survey. As we have presented, we are able to draw correlational conclusions from this study. For example, we can observe that those who feel more fairly treated in their experience with the content moderation system have a relatively higher number of platform visits in the months following that moderation experience. Studies like this one can demonstrate the potential benefit that can be realized from increasing the fairness of moderation, but they are limited in being unable to demonstrate causal relationships.
These correlational conclusions have now been replicated across a number of different platforms including Facebook (T. Tyler et al., 2021), Twitter (Katsaros et al., 2022), Reddit (Jhaver, Appling, et al., 2019), and now here with Nextdoor. What is desperately needed in future research to further demonstrate and articulate the impact—and limits—of the relationships between fair enforcement procedures and users’ subsequent behaviors on the platform are causal studies—namely, platform experiments. In recent years, platforms certainly have made changes to their enforcement experiences that align with ideas in research like this—providing more transparency into the rules on a given platform and improving the communication and rationale given to individuals who have content removed. Some platforms have been explicit about reimagining their enforcement experiences based on procedural and restorative justice principles (Newton, 2023). And furthermore, we even see breadcrumbs indicating that some of these changes have been tested experimentally. In one example of this, Meta issued a routine response to the Oversight Board’s recommendation to provide better information to users whose posts are removed, in which Meta said that:
From September through November 2021, we ran an experiment for a small set of people in which we informed them whether automation or human review led to their content being taken down. We analyzed how this affected people’s experiences (such as whether the process was fair) and behaviors (such as whether the number of appeals or subsequent violations decreased). (Meta, 2021, p. 17)
However, the results of this study have, as far as we know, not been publicly released. There is much to be learned about the effectiveness of these ideas that can really only be tested through platform experimentation.
Correlational research with field validity, like the study presented here, provides the theoretical ground for future platform experiments. We hope that researchers and practitioners can draw from the insights that we present here to develop novel interventions that can be experimentally tested. As one broad area for consideration, as large language models (LLMs) are built into the fabric of many online platforms and specifically integrated into content moderation systems (Willner & Chakrabarti, 2024), platforms will have to contend with insights from this research—that trust and legitimacy in content moderation are important factors in shaping future platform behaviors. Platforms will have to understand how they can leverage the benefits of LLMs in making moderation decisions on content (like reducing the psychological impact of this work on moderators or making moderation decisions more quickly at scale) while rendering decisions to platform users that are seen as fair and trustworthy. Beyond just using LLMs to actually make decisions on content, a direction which many in the field are aggressively pursuing at the moment, this research points to a very different way to leverage LLMs within content moderation systems—as a means of building trust and legitimacy. For example, LLMs could be leveraged to change the nature of moderation away from a purely transactional system, where users report content and platforms take an action, toward a more dialogic system, where a platform user can converse directly with an LLM to get more information about the platform’s rules, who was involved in making any decisions, and how rules get applied to others, or even to help structure their appeal before submitting it to the platform for review. In this way, we envision LLMs as agents that build trust with users through their moderation experience in an attempt to improve one’s sense of fairness.
Another limitation of this study is that the SEMs used to test Hypotheses 1 and 2 have relatively poor model fit indices (CFI between 0.833 and 0.853). Of course, in modeling something as complex as visitations to a platform like Nextdoor, there are sure to be many exogenous factors that are not captured in our models and can contribute to poor model fit. Given the low CFI of these models, one should be cautious in relying heavily on the conclusions.
In our study, we observed a different response rate between the reporter survey and removal survey, with the reporter survey having a higher response rate than the removal survey. Of course, it is unlikely that we would have seen an equal response rate among the two surveys, but it is worth drawing attention to the difference observed here. Prior moderation research focusing on the removal population has found decreases in platform activity and engagement levels or even increases in leaving the platform altogether (Jhaver, Bruckman, & Gilbert, 2019; Thomas et al., 2021; Vaccaro et al., 2020). This work might suggest that the removal population contains more individuals who are less engaged with the Nextdoor platform and, thus, less motivated to respond to a survey like ours. By contrast, the reporter population is, by definition, actively engaged in improving the platform through their reporting, which is likely associated with an increased willingness to participate in our survey. Regardless of the reasons, it is important to take note of the differential response rates and contextualize this as another limitation of our study design. As we mentioned earlier, we hope to see future research leveraging causal experimental designs; such studies should seek to measure impacts across entire populations to avoid the nonresponse bias that is inevitable in so many survey-based study designs.
There are trade-offs made in designing a study for any sufficiently complicated environment like the one we have here evaluating behaviors on a social media platform. In our case, our study design tries to understand platform behaviors before and after someone engages with the moderation system on Nextdoor while also including self-reported survey measures about people’s experience with the moderation system. However, one practical limitation is that we were only able to use a survey sent out at a single period for all participants, timed close to when they engaged with the moderation system. As such, in our analysis, we analyze platform behaviors before and after the survey-taking period for all participants rather than being able to more precisely measure those behaviors before and after the actual event of engaging with the moderation system (which would differ slightly for each participant). While it is a limitation that we are only able to measure activities before and after an approximate date of engaging with the moderation system, we also believe that the extensive period of time over which we collected data before (6 months) and after (3 months) significantly mitigates this risk by ensuring that the majority of the data collected are guaranteed to have occurred before the targeted moderation system engagement event.
The lack of support in our results for Hypotheses 1a and 2a was relatively surprising given the support in prior studies testing very similar hypotheses. There are certainly many factors that one could point to as reasons for why we did not see the same correlation between fair removal experiences and subsequent removals. However, a valuable, albeit obvious, takeaway in the lack of support seen in this study is that context matters. Unlike major platforms like Twitter and Facebook where moderation decisions are made by the platform itself, or even like Discord and Reddit, where moderation decisions are made by a small set of “mods,” Nextdoor has a unique moderation structure whereby moderation decisions are made mostly by regular users of the platform. While we cannot be sure that this is the reason why we see differing results, our findings here demonstrate the importance of testing ideas across various platforms.
Decades of research have shown support for the link between improved fairness in decision making and increased self-governance. This work spans many domains of human behavior, from improving compliance in police–citizen interactions to increasing patient adherence to a doctor’s medical advice. Recent research has demonstrated the portability of procedural justice into the context of online governance, showing support for this theory on platforms as varied in size and moderation structure as Facebook (T. Tyler et al., 2021), Twitter (Katsaros et al., 2022), and Reddit (Jhaver, Bruckman, & Gilbert, 2019) and even in other online contexts like online gaming (Ma, Li, & Kou, 2023) or online public discourse over political issues (Chang & Zhang, 2021). Here, we add to this growing research by studying a new platform—Nextdoor. In many ways, Nextdoor is designed to function just like many other online social platforms, with an engagement-based algorithmic newsfeed and many other conventional social media design features. As such, it stands to reason that this growing body of prior research results would be consistent on a platform like Nextdoor.

However, it is worth considering the ways that Nextdoor is unique to help contextualize our results. On Nextdoor, people only ever interact on the platform with individuals who actually live nearby. Relative to a platform like Twitter or Reddit, where interactions with others may be limited to those that occur within the confines of that online space, on Nextdoor it is possible that many people have much deeper relationships with those they interact with, relationships that exist both on and off the platform. In the context of content moderation on Nextdoor, the issue of who is making moderation decisions may be a relatively more important consideration (it is worth noting that one of our procedural survey items asks participants to agree or disagree with the statement “I have a clear understanding of who made the decision behind my report”). Additionally, the points of conflict and contention making their way through the content moderation system on a platform like Nextdoor might be of stronger salience to individuals on the platform—the local issues being actively debated on Nextdoor can often have serious and meaningful impact on the lives of the people discussing them. By contrast, other online platforms are more focused on entertainment and gaming, which perhaps results in less serious conflict. In this study, across our three SEMs, we see that the distributive justice construct has an almost equal, yet slightly stronger, correlation with the overall fairness construct compared to the procedural justice construct. Stated more plainly, in drawing conclusions about the fairness of their moderation experience on Nextdoor, people seem to be roughly equally concerned with the fairness of the outcome as with the fairness of the process. By contrast, a very similarly structured study conducted on Twitter (Katsaros et al., 2022) found that procedural elements were much more strongly correlated with the overall fairness construct than the distributive elements.
An ideal system of conflict management has multiple goals. The first is to lessen the future occurrence of rule breaking, which is frequently the primary, or even the exclusive, goal of a regulatory system. While prior studies have suggested that procedural justice heightens later rule following, that was not the case in this study, for the reasons detailed above. The findings nonetheless support the value of procedural justice approaches in other ways. First, if a person reported upon feels justly treated, the regulatory experience does not push them away from the platform; in fact, they engage more. At the same time, both reporters and those having content removed are motivated to visit the platform more frequently in the future if their case is fairly managed. Procedural justice is effective in achieving the goal of resolving a conflict in a way that leads both parties to engage more with the platform in the future.
Efforts to manage rule violations have been the primary focus of most content moderation programs created by online platforms. That is a desirable goal, but not the only one. It is also desirable to manage conflicts about online content in ways that do not drive away those who feel victimized by online posts or those who post content that others find objectionable. This is particularly true on platforms like the one studied here, where the distinction between the reporter and the person reported upon is very porous and most users are, at some point, members of both groups.
An ideal content moderation system achieves a variety of goals and is not simply focused upon the violator’s future rule following. In this study, the results show that reports can be managed in ways that lead both the reporter and the person reported upon to continue constructive engagement on the platform in the future. The way to do so is to handle the initial report through just procedures.
Supplemental materials: https://doi.org/10.1037/tmb0000149.supp
Table A1
Factor Loadings for Latent Variables in the Removal SEM Used for Hypotheses 1a and 2a

| LV | Estimate | SE | z | p | Standardized (LV) | Standardized (all) |
|---|---|---|---|---|---|---|
| PJ =~ | | | | | | |
| PJ Item 1 | 1.000 | | | | 0.876 | 0.609 |
| PJ Item 2 | 0.957 | 0.070 | 13.688 | <.001 | 0.838 | 0.680 |
| PJ Item 3 | 1.140 | 0.090 | 12.653 | <.001 | 0.999 | 0.719 |
| PJ Item 4 | 1.108 | 0.079 | 13.991 | <.001 | 0.970 | 0.732 |
| PJ Item 5 | 1.032 | 0.093 | 11.122 | <.001 | 0.904 | 0.699 |
| PJ Item 6 | 1.067 | 0.089 | 11.924 | <.001 | 0.934 | 0.785 |
| PJ Item 7 | 0.962 | 0.079 | 12.130 | <.001 | 0.842 | 0.579 |
| PJ Item 8 | 1.070 | 0.096 | 11.185 | <.001 | 0.937 | 0.624 |
| PJ Item 9 | 0.498 | 0.072 | 6.917 | <.001 | 0.436 | 0.328 |
| PJ Item 10 | 1.041 | 0.088 | 11.896 | <.001 | 0.912 | 0.630 |
| DJ =~ | | | | | | |
| DJ Item 1 | 1.000 | | | | 1.152 | 0.873 |
| DJ Item 2 | 0.869 | 0.046 | 18.900 | <.001 | 1.000 | 0.813 |
| OF =~ | | | | | | |
| PJ | 1.000 | | | | 0.847 | 0.847 |
| DJ | 1.449 | 0.145 | 9.964 | <.001 | 0.934 | 0.934 |
| Overall fairness item | 1.358 | 0.136 | 9.967 | <.001 | 1.007 | 0.770 |
| Trust Nextdoor item | 0.745 | 0.100 | 7.438 | <.001 | 0.553 | 0.441 |

Note. SEM = structural equation model; SE = standard error; LV = latent variables; PJ = procedural justice; DJ = distributive justice; OF = overall fairness.
Table A2
Factor Loadings for Latent Variables in the Reporter SEM Used for Hypotheses 1b and 2b

| LV | Estimate | SE | z | p | Standardized (LV) | Standardized (all) |
|---|---|---|---|---|---|---|
| PJ =~ | | | | | | |
| PJ Item 1 | 1.000 | | | | 0.812 | 0.611 |
| PJ Item 2 | 1.169 | 0.043 | 27.312 | <.001 | 0.949 | 0.767 |
| PJ Item 3 | 1.010 | 0.038 | 26.523 | <.001 | 0.820 | 0.714 |
| PJ Item 4 | 1.212 | 0.054 | 22.602 | <.001 | 0.984 | 0.680 |
| PJ Item 5 | 1.142 | 0.050 | 22.740 | <.001 | 0.927 | 0.679 |
| PJ Item 6 | 1.136 | 0.049 | 23.353 | <.001 | 0.923 | 0.708 |
| PJ Item 7 | 1.201 | 0.060 | 20.065 | <.001 | 0.976 | 0.664 |
| PJ Item 8 | 1.291 | 0.059 | 21.805 | <.001 | 1.049 | 0.739 |
| PJ Item 9 | 0.656 | 0.041 | 16.094 | <.001 | 0.532 | 0.501 |
| PJ Item 10 | 1.252 | 0.059 | 21.084 | <.001 | 1.017 | 0.731 |
| DJ =~ | | | | | | |
| DJ Item 1 | 1.000 | | | | 1.093 | 0.862 |
| DJ Item 2 | 0.845 | 0.027 | 31.603 | <.001 | 0.924 | 0.794 |
| OF =~ | | | | | | |
| PJ | 1.000 | | | | 0.839 | 0.839 |
| DJ | 0.708 | 0.038 | 18.508 | <.001 | 0.800 | 0.800 |
| Overall fairness item | 1.128 | 0.038 | 29.835 | <.001 | 1.035 | 0.800 |
| Trust Nextdoor item | 0.567 | 0.038 | 14.798 | <.001 | 0.520 | 0.451 |

Note. SEM = structural equation model; SE = standard error; LV = latent variables; PJ = procedural justice; DJ = distributive justice; OF = overall fairness.
Table A3
Factor Loadings for Latent Variables Used for Hypotheses 1c and 2c

| LV | Estimate | SE | z | p | Standardized (LV) | Standardized (all) |
|---|---|---|---|---|---|---|
| PJ =~ | | | | | | |
| PJ Item 1 | 1.000 | | | | 1.051 | 0.689 |
| PJ Item 2 | 1.068 | 0.025 | 43.158 | <.001 | 1.122 | 0.806 |
| PJ Item 3 | 0.946 | 0.023 | 41.330 | <.001 | 0.994 | 0.760 |
| PJ Item 4 | 0.956 | 0.028 | 34.420 | <.001 | 1.005 | 0.692 |
| PJ Item 5 | 0.985 | 0.029 | 34.366 | <.001 | 1.035 | 0.720 |
| PJ Item 6 | 0.999 | 0.027 | | <.001 | 1.050 | 0.759 |
| PJ Item 7 | 0.932 | 0.031 | 29.704 | <.001 | 0.979 | 0.638 |
| PJ Item 8 | 1.001 | 0.030 | 33.537 | <.001 | 1.053 | 0.710 |
| PJ Item 9 | 0.514 | 0.024 | 21.137 | <.001 | 0.541 | 0.471 |
| PJ Item 10 | 0.976 | 0.030 | 32.405 | <.001 | 1.026 | 0.706 |
| DJ =~ | | | | | | |
| DJ Item 1 | 1.000 | | | | 1.289 | 0.897 |
| DJ Item 2 | 0.822 | 0.016 | 50.678 | <.001 | 1.060 | 0.826 |
| OF =~ | | | | | | |
| PJ | 1.000 | | | | 0.899 | 0.899 |
| DJ | 0.780 | 0.024 | 32.575 | <.001 | 0.860 | 0.860 |
| Overall fairness item | 1.114 | 0.020 | 55.509 | <.001 | 1.291 | 0.856 |
| Trust Nextdoor item | 0.492 | 0.022 | 22.274 | <.001 | 0.570 | 0.475 |

Note. SEM = structural equation model; SE = standard error; LV = latent variables; PJ = procedural justice; DJ = distributive justice; OF = overall fairness.