
Predictors of Willingness to Participate in Survey Interviews Conducted by Live Video

Special Collection: Technology in a Time of Social Distancing. Volume 4, Issue 2. DOI: 10.1037/tmb0000100

Published on May 03, 2023

Abstract

As people increasingly communicate via live video—even more since the COVID-19 pandemic—how willing will they be to participate via video in large-scale standardized surveys that inform public policy, many of which have historically been carried out in-person? This registered report tests three potential predictors of willingness to participate in a live video interview: How (a) easy, (b) useful, and (c) enjoyable respondents find live video to use in other contexts. A potential survey-specific moderator of these effects is also tested: The extent to which respondents report that they would be uncomfortable answering a particular question on a sensitive topic via live video relative to other survey modes. In the study, 598 online U.S. respondents rated their willingness to take part in a hypothetical live video survey that might ask about personal information, in the context of also rating their willingness to take part in four other survey modes, two interviewer-administered (in-person and telephone) and two self-administered (a text-only web survey and a “prerecorded video” web survey in which respondents play videos of interviewers reading questions and then enter answers). Findings demonstrate that willingness to participate in a live video interview is significantly predicted by the extent to which respondents perceive live video as useful and enjoyable in other contexts and by their relative discomfort disclosing in live video versus other modes.

Keywords: survey interview mode, live video, technology acceptance, participation, sensitive questions

Acknowledgments: The authors thank Kerby Shedden, University of Michigan Statistics, for consultation on modeling approaches.

Funding: Michael F. Schober and Frederick G. Conrad received funding from Grants SES-1825194 and SES-1825113, respectively, from the National Science Foundation (U.S.).

Disclosures: The authors declare no conflicts of interest to disclose.

Data Availability: Data for this study are publicly available here.

Open Science Disclosures: The data are available here. The preregistered design and analysis plan (transparent changes notation) is accessible here.

Correspondence concerning this article should be addressed to Michael F. Schober, Department of Psychology, New School for Social Research, 80 Fifth Avenue, New York, NY 10011, United States. Email: [email protected].


A number of important social indicators that quantify public opinion and behavior—and inform policy and decision-making on major societal issues—are collected through standardized survey interviews (see Schober & Conrad, 2015). Even before the COVID-19 pandemic, survey research that relies on in-person interviewing has been facing significant challenges (Schober, 2018): declining response rates, increasing costs, waning trust in survey organizations, and trends toward increasing remote and mediated interaction among the public. With the advent of COVID-19, survey researchers who have relied on in-person data collection quickly needed to consider alternatives that could be implemented safely during the pandemic and that will not disrupt important policy-relevant time series on, for example, trends in employment and health.1 The need for alternative data collection methods is likely to persist as public expectations and norms of interaction in a range of settings—work, medical, education, personal—evolve and perhaps change permanently as a result of the pandemic and major global transformations.

The increase in use of live video calls and meetings during and following the pandemic—at least among members of the public who have access to the technology and sufficient connectivity—makes it particularly relevant to consider whether live video interviews might plausibly substitute for in-person survey data collection today2 and moving forward. But little is known about whether and when people would be willing to participate in a live video survey interview, and what predicts their acceptance of live video as a technology for survey interviews in comparison with the modes in which current survey interviews are administered. The existing research on people’s survey mode preferences (e.g., Mulder & de Bruijne, 2019; Smyth, Olson, & Kasabian, 2014; Smyth, Olson, & Millar, 2014), which can vary substantially, and on predictors of willingness to participate in survey interviews in different modes (e.g., Revilla et al., 2018), has not yet included live video as a survey mode—which makes sense given how recently live video has become a more common and viable mode of interaction in other domains. It is unknown how people’s preferences for (or aversion to) video in communicative tasks that are not surveys (e.g., work meetings, telehealth visits, family conversations, education, etc.) will or will not extend to live video survey participation, particularly because the specific task structure of standardized survey interviews—providing personal data to a stranger—can lead to quite different interactive dynamics than other communicative situations (see e.g., Schaeffer, 2002; Schober & Conrad, 2002).

Beginning to understand the psychological factors predicting people’s willingness to participate in live video interviews (which is a different question than studying who exactly has the right connectivity and access to video) is the central goal of the present study. The focus is on willingness to participate rather than actual participation, as part of exploring the viability of live video surveys, based on evidence from other technologies that the behavioral intention to use a technology can directly affect actual usage behavior (Davis & Wiedenbeck, 2001). The point of the study is not to generalize to a full demographic analysis of U.S. preferences at the moment of data collection, nor to test the quality of survey responses in live video versus other modes in actual interviews (e.g., Conrad et al., 2022; Endres et al., 2023)—which are currently rare—but rather to test targeted hypotheses about what may affect people’s behavioral intention—their willingness to participate—as new norms of video usage emerge.

In the study, respondents rate their willingness to participate in a hypothetical survey interview about opinions and behaviors conducted via live video, in the context of also rating their willingness to participate in hypothetical surveys conducted in four other survey modes. Respondents are told that the hypothetical survey might ask about personal information because, as Tourangeau et al. (2000) note, almost every major survey includes questions that are likely to seem intrusive or personal for at least some respondents, for example, about income, age, or marital status. A number of surveys also include highly personal questions about sexual behaviors, illegal drug use, financial status, and so forth that are likely to feel intrusive to most respondents and can lead to socially desirable responding.

The four other modes are selected because they differ from live video in theoretically and practically important ways: Two are “live” interviewer-administered modes—in-person and phone—and two are “self-administered” modes, a traditional text-only web survey and a “prerecorded video” web survey in which respondents play embedded video recordings of interviewers reading questions (Fuchs, 2009; Fuchs & Funke, 2007; Haan et al., 2017; see Table 1). Three of the modes (in-person, phone, and web survey) are widely used in large-scale standardized surveys.3 The prerecorded video mode, which is rarely if ever used in typical surveys, is included because, even though the interviewer is not live, it shares with the live interviewer-administered modes features (audio and visuals of the interviewer that unfold over time) that have been shown in previous laboratory and field survey mode comparisons to affect data quality (e.g., Lind et al., 2013; Pickard & Roster, 2020), particularly the disclosure of sensitive information.

Table 1

Features of the Five Survey Modes Included in the Study

(Live video, in-person, and phone are live interviewer-administered modes; prerecorded video and web survey are self-administered.)

| Feature | Live video | In-person | Phone | Prerecorded video | Web survey |
|---|---|---|---|---|---|
| Interviewer and respondent physically copresent (breathing same air) | No | Yes | No | No | No |
| Full-body views | Possible but not typical | Yes | No | Possible but unlikely | No |
| Interviewer speaks question | Yes | Yes | Yes | Yes | No |
| Two-way spoken interaction (respondent answers in speech) | Yes | Yes | Yes | No | No |
| Facial representation of interviewer | Yes | Yes | No | Yes | No |
| Facial representation is live | Yes | Yes | N/A | No | N/A |
| Interview is self-administered | No | No | No | Yes | Yes |
| Questions persist beyond respondent’s first exposure to them | No | No | No | Depends on implementation | Yes |
| How questions are re-presented | Interviewer re-reads upon respondent’s verbal request | Interviewer re-reads upon respondent’s verbal request | Interviewer re-reads upon respondent’s verbal request | Respondent replays video | Respondent re-reads when needed |
| Interviewer has perceptual capability (can see and hear respondent) | Yes | Yes | Hearing only | No | No |
| Interviewer has evaluative capability (can judge respondent’s answers) | Yes | Yes | Yes | No | No |

Note. Adapted and expanded from “Video in Survey Interviews: Effects on Data Quality and Respondent Experience,” by F. G. Conrad, M. F. Schober, A. L. Hupp, B. T. West, K. M. Larsen, A. R. Ong, and T. Wang, 2022, Methods, Data, Analyses. Advance online publication (https://doi.org/10.12758/mda.2022.13). CC-BY.

Hypotheses

The study tests preregistered hypotheses about factors that may affect willingness to participate in live video survey interviews (https://osf.io/nswc3/)—three predictor factors and one potential moderating factor. The predictor factors are based on the technology acceptance model (TAM) literature, which for several decades has documented what makes people more and less likely to intend to use (“behavioral intention”) and actually use (“use behavior”) a range of new and potentially transformative information and communication technologies, from computers in the workplace and classrooms to software platforms to email to cell phones (Granić & Marangunić, 2019; Marangunić & Granić, 2015). Since live video surveys are not yet in widespread use, our investigation of willingness to participate in a hypothetical live video survey focuses on behavioral intention rather than use, following common practice in TAM studies.4

We include two core factors from the TAM literature that have been predictive of behavioral intention and rejection across many technologies (Venkatesh & Bala, 2008), and particularly communication technologies and platforms (Facebook, Rauniar et al., 2014; email, Serenko, 2008; fax, Straub, 1994): perceived ease of use and perceived usefulness. These factors are plausibly relevant to willingness to participate in a live video survey: People who find live video hard to use and do not find it useful in general will plausibly be less likely to be willing to participate. We include a third factor of perceived enjoyment that has been characterized as reflecting intrinsic motivation to use a technology (Davis et al., 1992) and which has been shown to be particularly important for users adopting mobile video (Zhou & Feng, 2017); Zhou and Feng’s evidence also suggests there may be video-specific effects, in that perceived ease of use predicted mobile video adoption in personal (“leisure”) but not in work contexts. These TAM factors do not assess what leads any individual to find a technology easy or hard to use, useful or nonuseful, or enjoyable or not, which for any individual can result from their particular experiences with this and other technologies, their social setting, and their intrinsic and extrinsic motivations (Venkatesh & Bala, 2008). Rather, they abstract to a level that empirically has been demonstrated to accurately predict technology adoption.

The potential moderating factor we include in our study is specific to the survey response task and not necessarily relevant to all technologies and modes of communication: relative discomfort answering a particular question on a sensitive topic in live video relative to other survey modes (relative discomfort disclosing via live video). We measure this by comparing respondents’ ratings of their discomfort answering the same intrusive question in different modes. We include this potentially moderating factor because, as Table 1 outlines, live video includes features that have been argued to be components of social presence across different theoretical perspectives (e.g., Brennan, 1998; Daft & Lengel, 1986; Gergle, 2017), and the evidence in the survey literature is that modes of survey administration that highlight the social presence of an interviewer can lead respondents to provide socially desirable—probably less truthful—answers at a higher rate (e.g., Kreuter et al., 2008; Lind et al., 2013; Tourangeau & Smith, 1996), even if they may be more engaged (as demonstrated by, e.g., greater response rate with in-person than telephone recruitment, Cannell et al., 1987; Groves & Kahn, 1979). Decreased disclosure has now also been documented in live video survey interviews compared with two self-administered modes (Conrad et al., 2022), which suggests that respondents may have particular concerns about being negatively evaluated by an interviewer and the pressure to reveal sensitive information (Tourangeau et al., 2000, Chapter 9) in live video interviews. If discomfort disclosing in live video surveys does indeed affect people’s willingness to participate, future research can test hypotheses about which of the features (or combinations of features) in live video that contribute to the feeling of social presence (e.g., the presence of a face; see Lind et al., 2013) may be involved.

The resulting hypotheses are as follows:

  1. Perceived ease of use: People who find using live video easier in general will be more likely to say they are willing to participate in a live video survey interview than people who find it more difficult.

  2. Perceived usefulness: People who find live video more useful in general will be more likely to say they are willing to participate in a live video survey interview than people who find it less useful.

  3. Perceived enjoyment: People who say they enjoy live video more in general will be more likely to say they are willing to participate in a live video survey interview than people who enjoy live video less.

  4. Relative discomfort disclosing sensitive information via live video, as a moderator: Relative discomfort with answering a sensitive question via live video will affect the extent to which people’s (a) perceived ease of use, (b) perceived usefulness, and (c) perceived enjoyment predict their willingness to participate in a live video interview.

Method

Design

The study is a cross-sectional observational study that measures the potential impact of respondent characteristics (how they perceive ease of use, usefulness and enjoyment of live video) on their willingness to participate in a hypothetical live video interview. Respondents answer about their willingness to participate in the context of also answering about their willingness to participate in four other survey modes (in-person, telephone, text-only web survey, and prerecorded video web survey) so as to ensure that their judgment is particular to the video mode. These comparisons do not include all existing or possible survey interview modes, but rather a set of modes that allow feature-based comparison that can be informative about other modes. For example, because web surveys share all features in Table 1 with paper-and-pencil mail surveys, any exploratory analyses demonstrating that willingness to participate in live video differs from willingness to participate in web surveys in a particular way could also be informative about willingness to participate in mail surveys versus live video, even though there are additional featural differences between web and mail surveys.

Participants

Six hundred online panelists were recruited through Prime Panels to match 2018 U.S. Current Population Survey distributions on age (<65, ≥65), gender (male, female), race (White, non-White), and education (≤high school, >high school). The sample size was selected based on analyses of pilot data from a sample of 300 respondents, which tested the usability of the interface, respondents’ ability to answer the questions, and statistical power.5

Table 2 presents the demographic composition of the actual sample, based on respondents’ self-reported characteristics at the end of the questionnaire. Participants who completed the survey received compensation in the amount they had agreed to with the platform from which Prime Panels recruited them into the study.

Table 2

Self-Reported Demographic Characteristics of Sample and August 2021 Percentages From the U.S. Current Population Survey (CPS)

| Demographic characteristic | n | % of sample | August 2021 CPS (%) |
|---|---|---|---|
| Age | | | |
| 18–24 | 50 | 8.4% | 11.3% |
| 25–34 | 147 | 24.6% | 17.7% |
| 35–44 | 113 | 18.9% | 16.6% |
| 45–54 | 80 | 13.4% | 15.6% |
| 55–64 | 91 | 15.2% | 16.6% |
| 65–74 | 93 | 15.6% | 13.3% |
| 75–84 | 21 | 3.5% | 6.6% |
| 85 and up | 3 | 0.5% | 2.3% |
| Gender | | | |
| Female | 312 | 52.2% | 51.1% |
| Male | 281 | 47.0% | 48.9% |
| Nonbinary | 1 | 0.2% | |
| Another | 2 | 0.3% | |
| Rather not say | 2 | 0.2% | |
| Hispanic, Latino, or Spanish origin | | | |
| Yes | 53 | 8.8% | 18.9% |
| Rather not say | 2 | | |
| Race | | | |
| Black or African American | 80 | 13.4% | 13.4% |
| Asian | 24 | 4.0% | 1.2% |
| White | 485 | 81.1% | 75.9% |
| Native American or Alaska Native | 12 | 2.0% | 1.2% |
| Native Hawaiian or other Pacific Islander | 3 | 0.5% | 0.4% |
| Another | 7 | 1.2% | |
| Rather not say | 3 | 0.5% | |
| More than one category | 16 | 2.6% | 2.8% |
| Education | | | |
| High school graduate (or GED) | 111 | 18.6% | 33.9% |
| Some college | 99 | 16.6% | 13.0% |
| Vocational or associate degree | 86 | 14.4% | 7.6% |
| Bachelor’s degree | 223 | 37.3% | 17.2% |
| Graduate degree | 68 | 11.4% | 9.9% |
| Something else | 7 | 1.2% | 18.4% |
| Rather not say | 4 | 0.7% | |

Procedure

Recruits were instructed to respond on their own device in a time and place of their choosing. They were first presented with consent language approved by The New School Human Research Protection Program (protocol 2020-124) and only proceeded to the study if they agreed. Data, which are publicly available (Schober & Conrad, 2022), were collected in August 2021.

Questionnaire

The online questionnaire, implemented on the Qualtrics platform, followed the flow presented in Figure 1. It began with a set of survey questions asking about respondents’ willingness to participate in five different interview modes: in-person, live video, prerecorded video, telephone, and web survey, presented in random order through the Qualtrics randomization feature. Each item about willingness to participate had an associated graphic to help clarify how the mode is defined (see Interactive Figure 1, and the Supplemental Materials for the full questionnaire).

Figure 1

Sections of Survey

Interactive Figure 1

Example of questionnaire with associated graphics.

Respondents then were asked a set of questions, also presented in random order, about how uncomfortable they would be if they were asked one particular sensitive question (the same across all modes) in each of those modes (“How many sex partners have you had in the last twelve months?”). Each of these mode-discomfort items was accompanied by a brief video (in the case of telephone, by an audio recording) designed to clarify what it’s like to be asked this question in this mode—for example, a video of a live interviewer asking the question while seated on a couch in a home, a common location for an in-person household interview; a video that zooms into a computer screen display of an interviewer asking a question via live video; and so on. See Interactive Figure 2 to view the five videos.

Interactive Figure 2

Example of questionnaire with associated brief videos.

The particular sensitive survey question “How many sex partners have you had in the last 12 months?” was chosen based on data from a norming study that collected ratings on a pool of sensitive questions taken from ongoing large-scale U.S. government and social scientific surveys (Conrad et al., 2022; see Fail et al., 2021; Feuer et al., 2018, for other examples using this technique that show the same pattern). The norming study asked online panelists to rate the extent to which most respondents would feel uncomfortable being asked each of a series of questions (without mentioning mode of communication), as well as the extent to which most respondents would feel uncomfortable providing different responses for each question. (The responses to be rated were chosen based on distributions of actual answers in ongoing surveys with representative samples.) We selected this particular question for the present study because the Conrad et al. (2022) norming data showed that not only is the topic of the question judged as uncomfortable for most people to be asked but also all potential answers to this question were rated as uncomfortable to provide (even reporting one partner was rated as uncomfortable by nearly 50% of raters, and all other answers were rated as highly uncomfortable); this is therefore a good exemplar of a sensitive question that will lead most respondents to an uncomfortable situation.6 Asking our respondents about the same survey question in all five modes allows us to control the content of the sensitive question so that we can test the effect of mode on discomfort; whatever a respondent’s starting level of comfort or discomfort with the question, we can measure effects of mode above and beyond that.7

Following these two blocks of items was a set of follow-up items presented only when respondents’ ratings of willingness or discomfort differed between live video and any other mode. In each case, respondents were asked to explain why they rated live video as they did in comparison to each other mode for which their ratings had differed: for the willingness comparisons, by selecting from a set of reasons (presented to each respondent in one of four different random orders) or adding their own; for the discomfort comparisons concerning the sex partners question, by typing into an open-ended text box. All respondents were then asked two questions about their frequency of using live video in daily life; 11 TAM rating items (presented in a randomized order for each respondent using the Qualtrics platform’s randomization feature); and six questions about their demographic characteristics. The online survey platform was programmed to automatically collect information about respondents’ device (mobile or not), operating system, and browser.

Constructing the Predictors

Following the analytic approach in previous TAM studies (e.g., Zhou & Feng, 2017), the TAM factors (perceived ease of use, perceived usefulness, perceived enjoyment) were constructed by summing the values of responses to the relevant 11 items (four items for perceived ease of use, four for perceived usefulness, and three items for perceived enjoyment). Since the ratings for each item were on a 7-point scale from strongly agree (1) to strongly disagree (7), scores could range from 4 to 28 for perceived ease of use and perceived usefulness, and from 3 to 21 for perceived enjoyment. Relative discomfort disclosing via video was calculated as a relative score ranging from −4 to +4, so that a respondent who was more comfortable disclosing in live video than in any of the other four modes received a score of +4, and a respondent who was more uncomfortable disclosing in live video than in all other four modes received a score of −4. A respondent who would be equally comfortable disclosing in live video as in all other modes would receive a score of 0, and one who was somewhat more comfortable disclosing in live video than in two other modes and equally comfortable in two others would receive a score of 2.
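To make this scoring concrete, here is a minimal sketch in R (the language used for the analyses reported below) of one way to compute the composites. All variable names (ease_1 through ease_4, useful_1 through useful_4, enjoy_1 through enjoy_3, the five discomfort ratings, and the file name) are hypothetical placeholders rather than the names in the public data set.

```r
# Hypothetical variable names; the public data set's actual names may differ.
raw <- read.csv("video_willingness.csv")

# TAM composites: sums of 7-point items (1 = strongly agree ... 7 = strongly disagree)
raw$ease_of_use <- rowSums(raw[, c("ease_1", "ease_2", "ease_3", "ease_4")])          # range 4-28
raw$usefulness  <- rowSums(raw[, c("useful_1", "useful_2", "useful_3", "useful_4")])  # range 4-28
raw$enjoyment   <- rowSums(raw[, c("enjoy_1", "enjoy_2", "enjoy_3")])                 # range 3-21

# Relative discomfort disclosing via live video (-4 to +4), following the article's
# coding: +1 for every mode the respondent rates as MORE uncomfortable than live video,
# -1 for every mode rated less uncomfortable, 0 for a tie (assuming higher discomfort
# ratings mean more discomfort), so +4 means live video is the most comfortable mode.
other_modes <- c("discomfort_inperson", "discomfort_phone",
                 "discomfort_prerecorded", "discomfort_web")
raw$rel_discomfort_video <- rowSums(sign(as.matrix(raw[, other_modes]) -
                                           raw$discomfort_video))
```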

Results

Tests of Hypotheses

We used a single multiple linear regression model, using R Version 4.1.0, to test the independent contributions of the three proposed TAM factors (perceived ease of use, perceived usefulness, and perceived enjoyment) and the potential moderating survey-specific factor relative discomfort disclosing sensitive information via live video on willingness to participate in a live video interview. (Testing for effects of the moderating factor includes, by necessity, testing for a direct effect of relative discomfort disclosing sensitive information via live video.) The regression was significant, F(7, 590) = 35.80, p < .001, with these factors and interactions accounting for R² = 29.8% of the variance in willingness to participate in a live video survey interview. Tests to see if the data met the assumption of collinearity indicated that multicollinearity was not a concern, with all variance inflation factors (VIFs) < 5.
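A minimal sketch of this model in R, assuming the composite scores from the earlier snippet plus a hypothetical willingness_video column holding the 5-point willingness rating; the formula expands to the seven predictors tested (three TAM factors, relative discomfort, and the three interactions):

```r
library(car)  # for vif(); assumes the car package is installed

fit <- lm(willingness_video ~ ease_of_use * rel_discomfort_video +
            usefulness * rel_discomfort_video +
            enjoyment  * rel_discomfort_video,
          data = raw)

summary(fit)  # coefficients, t values, p values, overall F and R-squared
vif(fit)      # VIF per term (recent car versions may suggest type = "predictor"
              # when interaction terms are present)
```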

As Table 3 shows, two of the three TAM factors significantly predicted willingness to participate in a live video survey interview. Hypothesis 1 was not supported: We did not find evidence that perceived ease of use of live video in other contexts predicted willingness to participate in a live video survey. Hypothesis 2 was supported: People who reported finding live video more useful in other contexts were also more likely to be willing to participate in a live video survey interview (β = 0.074, SE = 0.023, p = .001). Hypothesis 3 was also supported: People who reported enjoying live video more in other contexts were more likely to be willing to participate in a live video survey interview (β = 0.071, SE = 0.027, p = .008).

Table 3

Multivariate Analysis of Factors Related to Willingness to Participate in Live Video Interview

| Factor | Unstandardized β | SE | Standardized β | t value | p | VIF |
|---|---|---|---|---|---|---|
| (Intercept) | 1.028 | .156 | 3.176 | 6.60 | .001 | |
| H1: Perceived ease of use of live video in other contexts | .016 | .019 | .126 | .852 | .395 | 3.107 |
| H2: Perceived usefulness of live video in other contexts | .074 | .023 | .362 | 3.293 | .001 | 4.777 |
| H3: Perceived enjoyment of live video in other contexts | .071 | .027 | .369 | 2.648 | .008 | 4.504 |
| Relative discomfort disclosing via live video | −.201 | .088 | −.215 | −2.287 | .023 | 1.024 |
| H4a: Relative Discomfort × Ease of Use | −.009 | .009 | −.089 | −.963 | .336 | 2.525 |
| H4b: Relative Discomfort × Usefulness | .019 | .011 | .205 | 1.753 | .080 | 4.314 |
| H4c: Relative Discomfort × Enjoyment | −.007 | .013 | −.063 | −.557 | .577 | 3.825 |

Hypothesis 4, testing potential moderating effects, was not supported; we did not find evidence of a statistically significant moderating impact of relative discomfort disclosing via live video on perceived ease of use (H4a), perceived usefulness (H4b), or perceived enjoyment (H4c). Beyond what we had hypothesized, the model—which includes a test of direct effects of relative discomfort disclosing via live video in order to test for interactions—shows that relative discomfort disclosing via live video was a direct predictor of willingness to participate, independent of the TAM factors: people who find disclosing in live video particularly uncomfortable relative to other modes are significantly less willing to participate in a live video interview (β = −0.201, SE = 0.088, p = .023). Figure 2 summarizes these findings diagrammatically.

Figure 2

Results of Preregistered Hypothesis Tests (Significant Predictors Bolded) With the Additional Significant Predictor Added (Dotted Line)

* p ≤ .05. ** p ≤ .01. *** p ≤ .001.

Additional Analyses

Although the primary purpose of this study was to test the hypothesized predictors of willingness to participate, based on the TAM framework and survey-specific considerations, the additional information we collected from our respondents allowed us to carry out exploratory analyses that help to further contextualize the findings. For the rating scale analyses, we carry out categorical analyses (responses were on a 5-point scale with radio buttons, rather than a continuous slider) so as to avoid making any “assumptions about the ‘shape’ of the population from which the study data are drawn” (Sullivan & Artino, 2013, p. 541). This also allows us to focus on the variability and range of responses across our sample, and to be able to test where the significant differences in judgments really fall, using chi-square post hoc tests.

  1. How did respondents’ willingness to participate in a live video survey interview compare with their willingness to participate in the other survey modes?

As Figure 3 shows, respondents in this sample ranged in their willingness to participate in a live video survey interview, with about half giving a rating of 1, 2, or 3 on the 5-point scale. (The fact that members of our sample varied in their willingness to participate in a live video survey gives us confidence that our data set allows reasonable tests of our hypotheses about factors affecting willingness to participate.) The proportion of respondents giving ratings of 1, 2, or 3 on the 5-point scale was lower for participating in live video than for participating in the other two interviewer-administered modes: live video versus in-person, McNemar’s χ²(1) = 5.32, p = .021, OR = 0.649, 95% CI [0.446, 0.939], and live video versus phone, McNemar’s χ²(1) = 33.83, p < .0001, OR = 0.279, 95% CI [0.170, 0.443],8 although the effect sizes are small. The vast majority of respondents (94.2%) gave ratings of 1, 2, or 3 for being willing to participate in a web survey, which makes sense given that they were recruited through a web survey panel aggregator and so were well acquainted with this type of survey. Significantly fewer respondents gave ratings of 1, 2, or 3 for participating in the other self-administered mode, prerecorded video: web versus prerecorded video, McNemar’s χ²(1) = 48.40, p < .0001, OR = 8.111, 95% CI [4.047, 18.443], a large effect size.
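To illustrate how these paired comparisons can be computed, the sketch below runs McNemar’s test on whether each respondent gave a rating of 1, 2, or 3 to live video versus in-person; the column names are hypothetical, and the odds ratios and confidence intervals reported above would require an additional step not shown here.

```r
# Hypothetical columns: willingness_video and willingness_inperson (1-5 ratings,
# 1 = most willing). Dichotomize at 1-3 vs. 4-5, as in the analysis above.
willing_video    <- raw$willingness_video    <= 3
willing_inperson <- raw$willingness_inperson <= 3

# Paired 2 x 2 table of the two indicators, then McNemar's chi-square test
mcnemar.test(table(willing_video, willing_inperson))
```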

  2. Was there any evidence that respondents’ demographic characteristics predicted their willingness to participate in a live video survey interview?

Figure 3

Willingness to Participate in Each Mode

The fact that our respondents ranged in self-reported age, gender, race, Hispanic ethnicity, and education allows us to explore if these characteristics predicted their willingness to participate in a live video interview. We added these demographic characteristics to a subsequent regression model as control variables, with age as a continuous predictor and the other categories included as contrasts relative to a base category (base categories: gender = female; race = White; Hispanic ethnicity = no; education = high school or less). This regression was conducted on the 588 cases for which respondents had no missing values (“rather not say”) on responses to any questions about demographic characteristics. The regression was significant, F(17, 570) = 15.23, p < .001, with these factors and interactions accounting for 31.2% of the variance in willingness to participate in a live video survey interview. Tests to see if the data met the assumption of collinearity indicated that multicollinearity was not a concern, with all VIFs < 5.
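A sketch of this control-variable model in R, assuming hypothetical demographic columns in which “rather not say” responses have already been recoded as missing; relevel() sets the base categories named above, and update() adds the controls to the hypothesis-testing model fit earlier:

```r
# Set base (reference) categories to match the text
raw$gender    <- relevel(factor(raw$gender),    ref = "female")
raw$race      <- relevel(factor(raw$race),      ref = "White")
raw$hispanic  <- relevel(factor(raw$hispanic),  ref = "no")
raw$education <- relevel(factor(raw$education), ref = "high school or less")

# Keep only cases with complete demographic information (588 in the article)
demo_vars <- c("age", "gender", "race", "hispanic", "education")
complete  <- raw[complete.cases(raw[, demo_vars]), ]

# Add the demographic controls to the earlier model and refit on the reduced data
fit_controls <- update(fit, . ~ . + age + gender + race + hispanic + education,
                       data = complete)
summary(fit_controls)
```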

As Table 4 shows, the predictors observed in testing our hypotheses remain significant despite the addition of the demographic features: perceived enjoyment, perceived usefulness, and relative discomfort disclosing in live video all still significantly predict willingness to participate. We see no evidence that age, gender, education, race, or Hispanic ethnicity predict willingness to participate in a live video interview.

  3. How did respondents’ relative willingness to participate in live video differ in comparison with the four other modes?

Table 4

Regression Values Including Respondents’ Demographic Characteristics

| Factor | Unstandardized β | SE | Standardized β | t value | p | VIF |
|---|---|---|---|---|---|---|
| (Intercept) | 1.067 | 0.246 | 3.143 | 4.344 | 0.001 | |
| Perceived ease of use of live video in other contexts | 0.021 | 0.02 | 0.148 | 1.062 | 0.289 | 3.278 |
| Perceived enjoyment of live video in other contexts | 0.066 | 0.028 | 0.349 | 2.394 | 0.017 | 4.621 |
| Perceived usefulness of live video in other contexts | 0.083 | 0.023 | 0.404 | 3.596 | 0.001 | 4.848 |
| Relative discomfort disclosing via live video | −0.204 | 0.089 | −0.210 | −2.288 | 0.023 | 1.042 |
| Age | −0.004 | 0.004 | −0.065 | −1.048 | 0.295 | 1.271 |
| Gender (not female vs. female) | 0.009 | 0.112 | 0.009 | 0.077 | 0.939 | 1.040 |
| Education (some college vs. high school or less) | 0.111 | 0.185 | 0.111 | 0.599 | 0.549 | 1.171 |
| Education (vocational or associate degree vs. high school or less) | −0.011 | 0.198 | −0.011 | −0.057 | 0.955 | 1.171 |
| Education (bachelor’s degree vs. high school or less) | 0.007 | 0.158 | 0.007 | 0.045 | 0.964 | 1.171 |
| Education (graduate degree vs. high school or less) | −0.143 | 0.21 | −0.143 | −0.679 | 0.497 | 1.171 |
| Race (Black or African American vs. White) | −0.025 | 0.175 | −0.025 | −0.144 | 0.886 | 1.201 |
| Race (Asian vs. White) | 0.577 | 0.303 | 0.577 | 1.903 | 0.058 | 1.201 |
| Race (other and multiple responses vs. White) | −0.112 | 0.303 | −0.112 | −0.371 | 0.711 | 1.201 |
| Hispanic ethnicity | 0.038 | 0.204 | 0.038 | 0.188 | 0.851 | 1.107 |
| Relative Discomfort × Ease of Use | −0.008 | 0.009 | −0.008 | −0.891 | 0.373 | 2.514 |
| Relative Discomfort × Enjoyment | −0.010 | 0.013 | −0.010 | −0.724 | 0.469 | 3.843 |
| Relative Discomfort × Usefulness | 0.021 | 0.011 | 0.021 | 1.867 | 0.062 | 4.356 |

Based on respondents’ ratings of willingness to participate in each mode, at least some people preferred live video to each of the other modes (i.e., gave live video higher willingness ratings). As Figure 4 shows, the majority of respondents either preferred live video or had no preference for live video relative to the other interviewer-administered modes (77.5% for in-person and 71.1% for phone), but fewer preferred live video or had no preference relative to the self-administered modes (49.7% for prerecorded video and 36.8% for web). The greatest preference for live video was in comparison with in-person interviewing: significantly more respondents reported being more willing to participate in a live video interview than in an in-person interview (20.1%) than reported being more willing to participate in a live video than a telephone interview (12.7%), McNemar’s χ²(1) = 18.87, p < .001, OR = 2.630, 95% CI [1.667, 4.252], a medium effect size.

  4. What reasons did people give for being more and less willing to participate in live video than the other modes?

Figure 4

How Ratings of Willingness to Participate in Live Video Compared With Ratings of Willingness to Participate in the Other Four Modes

When respondents’ ratings of willingness to participate in a live video interview differed from their ratings of willingness to participate in the other modes, they were asked to select the most important reason from a menu or add an open-ended other reason. (Participants would thus contribute from 0 to 4 reasons depending on how many of their willingness ratings for live video differed from their willingness ratings for the other four modes.) The top three reasons respondents selected for being more willing to participate in live video than in other modes were convenience (29.3% of the 287 reasons given for preferring live video to the other modes), social connection with the interviewer (13.2%), and a three-way tie among comfort using the technology, health or safety, and my ability to hear or see (12.2%). Table 5 lists the top three reasons selected for each mode comparison, which are different for the different modes. Convenience was the most frequent reason for preferring live video to in-person and phone interviews, while social connection with the interviewer was the most frequent reason selected for preferring live to prerecorded video, and comfort using the technology was the most frequent reason for preferring live video to a web survey. Health or safety was listed as a top reason only for preferring live video to in-person.

Table 5

Most Frequently Selected Reason for Being More Willing to Participate in a Live Video Interview Than in the Other Modes

More willing to participate in a live video interview than …

| In-person interview (n = 120) | Phone interview (n = 76) | Prerecorded video survey (n = 55) | Web survey (n = 36) |
|---|---|---|---|
| Convenience (n = 51, 42.5%) | Convenience (n = 19, 25%) | Social connection with the interviewer (n = 11, 20%) | Comfort using the technology (n = 9, 25%) |
| Health or safety (n = 22, 18.3%) | My ability to hear or see (n = 11, 14.5%) | My ability to hear or see (n = 10, 18.2%) | Convenience (n = 6, 16.7%) |
| Social connection with the interviewer (n = 13, 10.8%) | Comfort using the technology (n = 11, 14.5%) | Convenience (n = 8, 14.5%) | Privacy (n = 6, 16.7%) |

The top three of the 987 reasons that respondents selected for being less willing to participate in live video than in the other modes were privacy (35.2% of the reasons given for preferring another mode), social connection with the interviewer (16.4%), and convenience (13.5%). Table 6 lists the top three reasons selected for each mode comparison. Privacy was the top reason selected for preferring all four other modes to live video.

Table 6

Most Frequently Selected Reason for Being Less Willing to Participate in a Live Video Interview Than in the Other Modes

Less willing to participate in a live video interview than …

| In-person interview (n = 135) | Phone interview (n = 173) | Prerecorded video survey (n = 301) | Web survey (n = 378) |
|---|---|---|---|
| Privacy (n = 36, 26.6%) | Privacy (n = 65, 37.6%) | Privacy (n = 113, 37.5%) | Privacy (n = 133, 35%) |
| Social connection with the interviewer (n = 31, 23%) | Convenience (n = 23, 13.3%) | Social connection with the interviewer (n = 50, 16.6%) | Social connection with the interviewer (n = 59, 15.6%) |
| Access to the technology (e.g., network connection, camera; n = 18, 13.3%) | Social connection with the interviewer (n = 22, 12.7%) | Convenience (n = 40, 13.2%) | Convenience (n = 57, 15%) |

As Supplemental Tables 1A and 1B document, every reason was selected at least once for a respondent’s being more willing or less willing to participate in a live video interview than in each other mode, but many of the selections were infrequent. The open-ended responses to the “something else” category provide more nuance (see Supplemental Tables 1C and 1D for the complete list). Open-ended reasons for being less willing to participate in a live video interview than in other modes explicitly referenced sensory abilities (e.g., “I am deaf”), temperament (e.g., “shyness”), concerns about providing strangers with technology access (“dislike giving others access to my laptop, camera, and microphone”), and being unwilling to schedule a video appointment (“having to schedule an event with a person is inconvenient”). Open-ended reasons for preferring live video included specific health and safety concerns (“COVID-19”), the interactive potential of having a live interviewer (“The live interviewer would be able to answer questions, repeat questions if needed, and I think it would be faster and go more smoothly vs. the recorded interviewer”), and more details about convenience (“Feels like a hassle to hold a phone for that long. So would be easier to just put on speaker, but if I’m going to put it on speaker, then might as well just do a video call.”).

  5. Did respondents’ frequency of experience using live video in other contexts affect their willingness to participate in a live video survey interview?

As Figure 5 shows, our sample has only a small percentage of respondents who use live video daily or more (44 of 598, 7.4%), and a majority (336 of 598, 56.2%) who use live video monthly or less. So our August 2021 respondents may be somewhat less experienced with live video than United States national estimates from the American Trends Panel for April 2021 (Pew Research Center, 2021), when 19% of the general population reported using live video daily or more, but not by much: in the American Trends Panel data, 50% of respondents reported using video “every few weeks” or less.

Figure 5

Frequency of Experience Using Live Video in Other Contexts

Our analyses show that frequency of experience using live video in other contexts does have a relationship with reported willingness to participate in a live video survey, χ²(12) = 50.01, p < .001, Cramér’s V = 0.167, in an omnibus test with five levels of willingness to participate and four categories of respondent experience—a moderate effect. Post hoc comparisons using Bonferroni correction with an α level of 0.0025 show three significant differences: significantly more respondents who use video less than once a month report being very unlikely (5 on the 5-point scale) to participate in a live video interview than those in all the other experience categories, χ²(1) = 40.66, p < .0001, Cramér’s V = 0.23, a moderate effect. They also report being very willing to participate in a live video interview (1 on the 5-point scale) significantly less often than those in all the other experience categories, χ²(1) = 16.33, Cramér’s V = .166, a moderate effect. The proportion of respondents who use video weekly who reported being very unlikely (5 on the 5-point scale) to participate in a live video interview was significantly lower than expected, relative to the other experience categories, χ²(1) = 17.03, p < .0001, Cramér’s V = .149, also a moderate effect.

  6. Did respondents’ change in live video experience over the past year affect their willingness to participate in a live video survey interview?

As Figure 6 shows, about a third of our respondents (193 of 598, 32.3%) reported that their live video usage at the time of data collection had increased over the previous year; about half (296, 49.5%) reported that their usage had not changed, and the remainder (109, 18.2%) reported that their usage had decreased. An omnibus chi-square test with five levels of willingness to participate and three levels of change in respondent video usage (less, the same, more) showed that change in video experience did significantly affect respondents’ willingness to participate in a video interview, χ²(8) = 37.47, p < .001, Cramér’s V = 0.177, a moderate effect.
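The sketch below illustrates this style of analysis in R for the change-in-usage question; the columns usage_change (with levels "less", "same", and "more") and willingness_video are hypothetical, and the Bonferroni-corrected α corresponds to dividing .05 by 15 comparisons (3 usage levels × 5 willingness ratings), which matches the .0033 used in the post hoc tests reported below.

```r
# Omnibus test: 3 levels of change in usage x 5 levels of willingness
tab <- table(raw$usage_change, raw$willingness_video)
omnibus <- chisq.test(tab)
omnibus  # chi-square statistic, df, p value

# Cramer's V for the omnibus table
n <- sum(tab)
cramers_v <- sqrt(unname(omnibus$statistic) / (n * (min(dim(tab)) - 1)))

# One post hoc comparison: are respondents whose usage increased more likely to
# give the most-willing rating (1)? (chisq.test applies Yates' correction to
# 2 x 2 tables by default; use correct = FALSE to turn it off.)
posthoc <- chisq.test(table(raw$usage_change == "more", raw$willingness_video == 1))
posthoc$p.value < .05 / 15  # Bonferroni-corrected alpha of roughly .0033
```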

Figure 6

Willingness to Participate in a Live Video Interview Based on Reported Change in Frequency of Live Video Use in Other Contexts Over the Past Year

Post hoc comparisons using Bonferroni correction with an α level of 0.0033 show four significant differences. Respondents who reported using video more often at the time of data collection than in the previous year were more likely to report being very willing to participate in a live video interview (1 on the 5-point scale) than respondents whose usage had not changed or had decreased, χ²(1) = 10.83, p = .0010, Cramér’s V = 0.135, a moderate effect. They were also less likely to report being very unwilling to participate in a live video interview (5 on the 5-point scale) than the other respondents, χ²(1) = 27.45, p < .0001, Cramér’s V = 0.189, a moderate effect. Conversely, respondents who reported that their usage had decreased were less likely to report being very willing to participate in a live video interview (1 on the 5-point scale) than respondents whose usage had not changed or had increased, χ²(1) = 8.64, p = .00328, Cramér’s V = 0.139, a moderate effect. They were also more likely to report being very unwilling to participate in a live video interview (5 on the 5-point scale) than the other respondents, χ²(1) = 11.59, p < .001, Cramér’s V = 0.14, a moderate effect.

  7. Did respondents’ most frequent context of live video use (work or education, personal, medical, or something else) affect their willingness to participate in a live video survey interview?

As Figure 7 shows, about half our sample (301 of 598, 50.3%) reported using live video most for personal reasons, and almost a third (171, 28.6%) reported using video most for work or education. A smaller percentage reported using it most for medical purposes (70, 11.7%), and a very few (20, 3.3%) listed other reasons (e.g., church, fraternity meetings). Thirty-six respondents (6.0%) reported that they do not use video. An omnibus chi-square test with five levels of willingness to participate and five most frequent contexts of live video use (work or education, personal, medical, something else, never use) showed that the most frequent context of use did significantly affect respondents’ willingness to participate in a live video interview, χ²(16) = 61.45, p < .001, Cramér’s V = 0.16, a moderate effect.

Figure 7

 Willingness to Participate in a Live Video Interview Based on Most Frequent Context of Live Video Use

Post hoc comparisons using Bonferroni correction with an α level of 0.002 show four significant differences. Respondents who reported using live video most for work or education were less likely than the other respondents to report being very unwilling (5 on the 5-point scale) to participate in a live video interview, χ²(1) = 16.42, p < .0001, Cramér’s V = 0.131, a moderate effect. Respondents who reported using live video most for personal reasons were more likely than the other respondents to report being very willing (1 on the 5-point scale) to participate in a live video interview, χ²(1) = 11.01, p < .0001, Cramér’s V = 0.122, a moderate effect. Consistent with the findings about frequency of video use, respondents who reported never using live video at all were more likely to report being very unwilling to participate in a live video interview (5 on the 5-point scale) than the other respondents, χ²(1) = 28.24, p < .0001, Cramér’s V = 0.171, a moderate effect. And these respondents were also less likely to report being very willing to participate in a live video interview (1 on the 5-point scale), χ²(1) = 10.66, p = .0011, Cramér’s V = 0.120, a moderate effect.

Discussion

As of August 2021 in our online sample, people ranged substantially in their willingness to take part in a live video survey interview: A majority were at least somewhat willing to participate in a live video interview, a few preferred live video to all other modes, and some were quite unwilling. Our findings show that their willingness can be predicted by two of three hypothesized factors from the TAM: The extent to which they perceive live video as useful in other contexts and the extent to which they enjoy live video in other contexts. We did not find evidence supporting the hypothesis that perceived ease of use of live video in other contexts would predict respondents’ willingness to participate in a live video interview. Given that Zhou and Feng (2017) found that perceived ease of use predicted intention to use mobile video in leisure but not in work contexts, a plausible interpretation of this pattern of findings is that our respondents see taking part in a live video survey more as work than leisure (even if enjoying video in other contexts does predict their willingness to participate).

More generally, the pattern of findings—including the fact that these factors predicted willingness to participate even when considering our respondents’ demographic characteristics (age, gender, race, Hispanic ethnicity, and education did not significantly predict willingness to participate)—demonstrates the utility of the TAM in the survey context: The kinds of factors investigated in studies of people’s acceptance of technologies (perhaps particularly in work-related settings) are likely to be relevant for understanding people’s willingness to participate in surveys using those technologies. As such, our findings expand existing approaches to studying survey mode preference (e.g., Smyth, Olson, & Millar, 2014).

Our other hypothesized survey-specific factor—the extent to which respondents would find answering a sensitive question in live video more uncomfortable than answering that question in other survey modes—turned out to be a direct predictor of willingness to participate in a live video survey, rather than a moderator of the TAM factors. To our knowledge, this is a first demonstration in the survey context of mode-specific reluctance to disclose (discomfort particular to a communication mode) affecting willingness to participate, although there is substantial evidence in other survey modes that mode of participation can affect data quality (including disclosure of sensitive information) among those who do agree to participate (e.g., Kreuter et al., 2008; Tourangeau & Smith, 1996, among many others), and that topic sensitivity is a strong predictor of willingness to participate in in-person surveys (Couper et al., 2008, 2010). We propose that our approach to measuring mode-specific reluctance allows detection of mode-specific self-presentation concerns more directly than the kinds of proxy measures used in previous studies on survey mode preference that did not include live video (e.g., Smyth, Olson, & Millar, 2014, who used a depression score as a proxy predictor). In any case, our finding raises questions about how mode-specific disclosure reluctance might contribute to observed mode differences (benefits or drawbacks) in data quality: the extent to which selection effects (people declining a particular mode because of mode-specific disclosure reluctance) amplify or dampen the mode’s effects on those who do agree to participate.

This finding of mode-specific discomfort as a predictor of willingness to participate also raises questions about what it is about live video that leads particular potential respondents to be uncomfortable. The current results only allow speculation, but we propose that closer examination of the features of live video that lead to the video-specific experience of social presence (views of an interviewer’s face, synchronous spoken dialog, the interviewer’s ability to see the respondent’s face and environment, etc.—see Table 1) will be critical. Based on the logic from Lind et al.’s (2013) study of disclosure of sensitive information in interviewer- and self-administered survey interviews, it may be that the presence of an interviewer’s face—even a cartoon-like animated recording of an interviewer—leads to enough social presence to generate discomfort for some potential participants.

The reasons that respondents gave for their preferences give more texture to the pattern of results. Health and safety was selected as a top reason for preferring a live video interview to an in-person interview, perhaps unsurprisingly in the pandemic era and consistent with our finding that more of our panelists reported (based on their relative ratings of willingness to participate) preferring a live video interview to an in-person interview than reported preferring live video to the other interviewer-administered mode (telephone) or the two self-administered modes. Convenience and social connection with the interviewer were other top reasons for preferring live video to other modes; clearly some respondents see live video as convenient and social connection with the interviewer as desirable. And for some respondents live video was preferable for improving their ability to hear or see. How health and safety concerns may evolve postpandemic is unknowable—the impact may extend into the future—but we assume that the accessibility issues, as well as convenience and social connection, will be important in the long term.

The reasons that respondents gave for preferring other modes to live video also bring texture to the findings. Privacy was the top reason selected for preferring all four other modes to live video, though privacy is likely to mean different things in the different comparisons, whether it is concern about surveillance and who else could have access to what happens in a live video stream (as opposed to what happens in a presumably unrecorded in-person interview), concern about providing external access to the device’s camera and microphone, or concern about a stranger’s seeing a respondent’s face or their home environment. Presumably those who report access to the technology and convenience as the reasons for preferring other modes to live video do not have good or easy video access. We speculate that people who report preferring an in-person interview to live video because of social connection with the interviewer see the greater social connection of in-person interviewing as desirable, while those who give that reason for preferring self-administered modes want less or no social connection with an interviewer.

We see this study as complementary to recent studies of data quality in live video surveys relative to other modes, as well as field tests exploring the viability of live video surveys that will allow longer term comparisons of data comparability with other modes (e.g., Guggenheim et al., 2021; Hanson & Ghirelli, 2021; Jonsdottir et al., 2021). The evidence thus far demonstrates both benefits and drawbacks to live video surveys among respondents who agree to participate. Respondents assigned to live video interviews in one study (Conrad et al., 2022), for example, provided more differentiated answers to a battery of questions (they “straightlined” less, presumably reflecting more thoughtful responding) and they reported more satisfaction with the interview experience than respondents assigned to two self-administered modes, but they also disclosed less sensitive information and they provided less precise (more rounded) numerical answers. In another study (Endres et al., 2023), respondents assigned to live video interviews produced survey data of similar quality relative to a previous web survey, by several measures, as respondents assigned to in-person interviews. Another study (West et al., 2022) investigated whether the kinds of interviewer effects (particular interviewers producing significantly different or outlier patterns of responses) that have been of concern in in-person surveys occur in live video surveys and found little to no evidence of there being a problem. In this context, understanding willingness to participate will be essential for understanding when and for which populations live video might be particularly effective to deploy: when adding it as a primary or optional data collection mode might lead to improved access for some populations—or selection biases from those unable or unwilling to participate.

This study is, of course, a first effort carried out at a particular moment, and it raises as many questions as it answers. Our particular sample of respondents was recruited through an online panel aggregator—a typical way to recruit participants to web surveys today—but whether and how research respondents recruited in this way represent the broader public, and how their findings compare with those from probability samples (Cornesse et al., 2020), remains an important question. Given that our sample consisted of recruits who regularly participate in self-administered surveys, it makes sense that more respondents reported being willing to participate in self-administered surveys (web, prerecorded video) and fewer in interviewer-administered surveys (in-person, live video, phone), but whether this would be the pattern in a representative sample is unknown. That said, findings from online panelists may be particularly useful given the feasibility of recruiting in this way; other plausible ways to recruit into live video research (e.g., address-based sampling, or offering panelists in a multiwave study who have been recruited in other modes the option to participate via video9) are likely to be unavailable or too costly for many researchers.

Another question is whether people’s reported willingness to participate in live video surveys—their behavioral intention—will end up predicting their actual participation in the ways it has been shown to with other technologies in other domains (e.g., Tao, 2009; Turner et al., 2010). There are also practical questions for survey researchers about how they can best apply these findings to their recruitment methods, when they are more likely to have access to demographic data about their pool of potential respondents—which in our data set did not predict willingness to participate—than to the factors measured here. And how evolving norms for invitations and scheduling video calls (Larsen et al., 2021; Schober et al., 2020) will affect recruitment options for survey researchers will also need attention.

Our findings are focused specifically on survey participation, but we see them as raising questions about willingness to participate in live video interactions in other contexts—work, education, medical, personal—that may be useful to consider. In particular, we see the predictive power of video-specific disclosure reluctance as compelling to explore further. We speculate that people who find live video particularly aversive as an arena for disclosing personal information to a stranger in the survey setting may bring some of the feelings and thoughts that contribute to that discomfort into other arenas, even if those interactions are less likely to concern highly sensitive topics. If this is the case, then understanding for whom participation in live video feels burdensome and potentially challenging may help inform when and for whom to best provide live video as an option. Perhaps the “protective barrier” that has been proposed to be therapeutically useful in video telepsychiatry relative to in-person therapy (Miller & Gibson, 2004) is beneficial only for some patients.

In any case, the need for members of the public to be willing to participate in surveys that inform policy remains critical, and this requires understanding the psychological, social and technological factors that affect participation choices and the quality of the data that people provide, even as modes of interaction and new communication technologies emerge. Survey modes continue to evolve—see for example, Höhne (2021) on the addition of two-way voice to web surveys—which will raise further questions about the effects of different survey modes and their specific features. Our findings suggest that some factors are likely to be general—extending from other contexts of interaction to the survey arena—but some may well be specific to the unusual kind of interaction in surveys of providing data to a stranger.

Supplemental Materials

https://doi.org/10.1037/tmb0000100.supp

Received September 29, 2020
Revision received November 2, 2022
Accepted November 4, 2022