Special Collection: Behavioral Addiction to Technology. Volume 4, Issue 3. DOI: 10.1037/tmb0000117
As digital media technology expands, so does the need for instruments capable of measuring how human psychology interacts with it. The Digital Media Overuse Scale (dMOS) is a new instrument designed to index problematic overuse of digital media while remaining capable of adapting to changes in the digital media landscape as they emerge. The dMOS is modularly structured: each domain contains highly similar language reflecting behaviors and attitudes often observed in individuals reporting clinical issues or disorders related to digital media. Here the dMOS was used to investigate clinically relevant behaviors and attitudes as they relate to five digital media domains: general smartphone use, internet video consumption, social media use, gaming, and pornography use. Latent class analysis of data from 800 college-aged dMOS respondents revealed two to three latent classes for each digital media domain, with notable differences across domains. There was little indication of an overall "digital media overuse" profile; instead, domain-specific patterns of overuse likely reflect differences in how technologies are designed to interact with human psychology. Initial indications are that the dMOS is a reliable, valid, and extendible clinical instrument capable of providing clinically relevant scores within and across digital media domains. A manual for interpreting subject-level data from the dMOS is being produced, and replicative extensions of the questionnaire into new domains are underway.
Keywords: digital media, psychometrics, social media, gaming, behavioral addiction
Disclosures: No conflicts of interest exist for the authors of this article at the time of writing and submission.
Correspondence concerning this article should be addressed to D. Hipp, Digital Media Treatment and Education Center, 2299 Pearl Street, #310, Boulder, CO 80302, United States. Email: [email protected]
The creation and evolution of digital media technology consistently outpaces scientific efforts to understand how these technologies impact human psychology and well-being. The past decade has seen entire digital media domains rise, fall, and alter their essential functionality, and this pace of change is not slowing down. The gaming industry has adopted a new profit model (selling within games) and has added whole platforms (e.g., Steam), facilitating and encouraging a rapid proliferation of new games by the users themselves. This development occurred alongside the addition of gaming disorder to the World Health Organization Eleventh Revision of the International Classification of Diseases, and a similar condition was added to Section III of the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, text revision, as a condition warranting further research. Pornography is also now much more mainstream and monetized (e.g., OnlyFans), online shopping requiring only single thumb swipes on a smartphone is now a reality, and smartphones generally have become much more deeply integrated into everyday life. As the science that measures our psychological response to these developments lags far behind, the need is greater than ever for a measurement tool capable of both measuring clinical issues around digital media overuse and quickly adapting to shifts in the digital media landscape as they emerge. The Digital Media Overuse Scale (dMOS) was created to fill this need for a new clinically oriented and flexible instrument in the toolkit of clinicians and researchers.
Psychology is certainly not devoid of tools capable of measuring digital media overuse, even if those tools are too static to adapt to novel domains. Internet gaming disorder was first introduced to the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, in 2013 (American Psychiatric Association, 2013). In the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, text revision, internet gaming disorder was identified in Section III as a condition warranting more clinical research and experience before it might be considered for inclusion in the main book as a formal disorder. Behavioral addictions of this type were first included in early diagnostic manuals starting with "pathological gambling," which was introduced in the Diagnostic and Statistical Manual of Mental Disorders, third edition in 1980. More recently, the World Health Organization accepted gaming disorder in its 11th revision of the International Classification of Diseases for Mortality and Morbidity Statistics as an officially recognized diagnosis as of January 2022. Gaming disorder falls within the Eleventh Revision of the International Classification of Diseases subcategory "disorders due to substance use or addictive behaviors" (World Health Organization, 2022). Compulsive sexual behavior disorder, a disorder that includes problematic use of pornography, was also added to the Eleventh Revision of the International Classification of Diseases in January 2022. There is a clear impetus to study and consider behavioral addictions as sources of psychological distress and dysfunction.
Measurement tools associated with internet gaming disorder (IGD; e.g., the Internet Gaming Disorder Test by Pontes et al., 2014; the Internet Addiction Test by Young, 1998; and the Internet Gaming Disorder Scale; Lemmens et al., 2015) represent critical advancements in clinicians’ ability to measure difficulties with technology overuse in specific domains. Although these instruments are reliable, useful, and reasonably well studied, by their design, these tests are limited in scope to one particular domain and static with respect to changing digital technologies. For example, many instruments exclusively refer to a wide swathe of digital media use as "online" or "the internet," which limits the scope of the instrument. Thus, these tools and others like them are not capable of expanding to accommodate new platforms or technologies without substantively altering the basic structure of the instrument. Many instruments also have gaps in their capacity to clearly distinguish normal everyday users from heavy or problematic users, lacking specific cutoff scores that compare normal cohorts to clinical/problematic cohorts (Lortie & Guitton, 2013). Further, few validated measures of digital media or internet overuse include embedded validity measures to detect distortion, including symptom exaggeration, careless responding, or inconsistent responding. The dMOS was constructed with these needs in mind.
IGD as a diagnostic category did lay a conceptual framework for understanding digital media generally, even if it was itself oriented only toward gaming. Of course, problematic digital media overuse is not restricted to gaming, and given the well-established mental health issues surrounding excessive preoccupation with social media, there is a need for an instrument that indexes more than gaming. Nonetheless, IGD classification in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition contains nine criteria that were useful in the construction of the dMOS. These criteria were summarized succinctly by the useful review of IGD’s cognitive effects by King and Delfabbro (2014). IGD requires (a) preoccupation with internet games; (b) withdrawal symptoms when internet gaming is taken away; (c) tolerance, or needing to spend greater and greater amounts of time engaged in internet gaming; (d) unsuccessful attempts to limit internet gaming; (e) loss of interest in other leisure activities as a result of, and with the exception of, internet gaming; (f) continuing excessive internet gaming despite knowledge of psychosocial problems; (g) deceiving family members, therapists, or others about the amount of internet gaming in which one engages; (h) use of internet gaming to escape or relieve a negative mood; and (i) loss of a significant relationship, job, educational, or career opportunity because of participation in internet games (American Psychiatric Association, 2013). These criteria, together with others gleaned from clinical insights, form the skeletal basis of the questions comprising the dMOS.
An example of one such clinical insight relates to what clinicians at the Digital Media Treatment and Education Center have come to call "information overload," defined as "the compulsive desire to search for and consume information on the internet, including 'binge watching' entertainment through feeds and streaming platforms (e.g., YouTube, Netflix), immersing oneself in discussion sites or forums (e.g., Reddit, 4chan, Quora), and/or obsessively engaging in non-job-related computer programming." These behaviors often lead to isolation, poor sleep, and disengagement from others. Table 1 lists the questions comprising the skeleton of the dMOS and the intended conceptual goal of each.
Table 1
The Indicator Questions of the dMOS, Listed by the Behaviors and Attitudes They Are Meant to Index

Behavior/attitude | Question description |
---|---|
Q1: Cessation | I have a lot of trouble stopping myself from using (domain) even when I know I should. |
Q2: Deception | When talking with friends and family, I tell them I use less (domain) than I actually do. |
Q3: Displacement | My (domain) use has diminished my school or work opportunities and success. |
Q4: Distress | I feel shame and guilt about how much time I spend on (domain). |
Q5: Escape | When life gets me down, I use (domain) to distract myself from my problems. |
Q6: Overload | I spend time learning new things on/about (domain) at the expense of my responsibilities. |
Q7: Persistence | I find it difficult to cut down the time I spend on (domain). |
Q8: Preoccupation (activities) | I find myself accessing (domain) even when I am engaged in other activities. |
Q9: Preoccupation (social) | I find myself thinking about (domain) even when I am socializing in person. |
Q10: Problems | I use (domain) even when I know it is hurting other areas of my life. |
Q11: Spending | I spend money on (domain) things I don’t really need. |
Q12: Substance | I use caffeine, alcohol, or other substances to enhance my time spent on (domain). |
Q13: Tolerance | I have noticed I spend more time on (domain) than I used to. |
Q14: Withdrawal | When I don’t have access to (domain), I feel restless, anxious, or depressed. |
Smartphone only | Question description |
Q15: FOMO | I feel anxious when I forget to bring my cell phone with me when I leave my house. |
Q16: Danger | I look at my cell/smartphone while driving. |
Q17: Proxy overuse | People tell me I’m on my cell phone/smartphone too much. |
Q18: Distress | When I don’t have access to my cell phone/smartphone, I feel restless, anxious, or depressed. |
Note. FOMO = Fear of Missing Out. Each digital media domain is indexed by each of these cross-domain items. |
IGD as a diagnostic category was novel in that previous attempts to measure internet gaming issues lacked a shared conceptual foundation and clinical framework. Even internet gaming has changed dramatically since 2014. TikTok was only created in 2018 and now rivals media giants such as Facebook and Twitter in scale of use. The dMOS was designed to address the need for a questionnaire that is (a) capable of measuring the same set of problematic behaviors and attitudes across digital media domains, (b) capable of revealing clinically relevant data points for individual participants, (c) adaptable in theory and structure to novel digital media domains as they emerge in society, and (d) capable of measuring various forms of self-report distortion, including inconsistent, careless, or under-/overresponding. These aims serve a primary goal: to establish the dMOS as a reliable questionnaire capable of measuring digital media overuse and its behavioral and psychological sequelae in college-age participants. Whether college students are an especially vulnerable population for "internet addiction" remains an open question. College students may be at heightened risk because of their increased freedom, greater social pressures, and unfettered exposure to the internet relative to same-aged individuals not attending college (Young, 2004). On the other hand, college students may be insulated against digital media overuse by protective factors that same-aged nonstudents often lack, such as access to health insurance, socioeconomic status, and other support structures. Either way, this is an example of an empirical question well suited for an investigation that could use the dMOS to index digital media overuse in each sample.
By asking similar questions about behaviors and attitudes across five digital media domains, the dMOS should reveal important similarities and differences in how these domains intersect with human psychology. These domains—video games, pornography, internet video sites, social media, and general smartphone use—were chosen not because they carve digital media into some deep and essential category structure; rather, they were selected due to their observed relevance in the clinic (i.e., students entering the clinic report using and having difficulty with these domains) and their general cultural ubiquity. Some authors (e.g., Widyanto & Griffiths, 2006) have claimed that instead of framing internet addiction per se as a unitary phenomenon and research target, researchers should focus on particular activities on the internet. Generally speaking, the argument goes, people do not become addicted to the medium of the internet itself but to the set of actual behavioral feedback loops with which they engage online. This logic underpins the modular structure of the dMOS. If there were indications that something essential and immutably important distinguished these five domains, we would not have constructed the dMOS with an extendible structure. They are simply five clinically relevant domains on which we are attempting to establish the dMOS as an instrument capable of measuring behaviors and attitudes relevant to overuse.
A secondary aim is to preliminarily establish the dMOS as externally valid and capable, in principle and in practice, of distinguishing clinical or vulnerable individuals from typical individuals. The intent is that the structure of the dMOS, paired with the subtlety of latent class analyses (LCAs), will help reveal nuanced relations within and across the five chosen domains of digital media usage. It is this structure that in principle allows the dMOS to be extended arbitrarily to novel digital media domains as they emerge, so success in this first step may indicate a useful direction for follow-up research.
After an initial page asking demographic questions and questions about which digital media platforms people used, the main body of the dMOS contains 81 questions (including five foils and two that were asked twice) indexing behaviors and attitudes related to digital media overuse. The skeletal structure of the instrument consisted of 14 questions for each digital media domain (see Table 1). Smartphone use was indexed by additional questions (18 total), as smartphones are hardware and thus have a physical presence in the world worth asking about. Each question had five possible responses: 0 (not at all), 1 (sometimes), 2 (often), 3 (always), and 4 (recoded as N/A in R). N/A responses were not included in any analyses.
The digital media domains indexed by the dMOS, motivated in part by the experience of therapists working in this area and by evaluating market share, include (a) video games, (b) pornography, (c) internet video sites, (d) social media, and (e) general smartphone use. When the questionnaire was first conceived and produced, these domains were agreed upon as the most likely problematic domains of digital media use in society at large. By no means is this meant as an exhaustive list; indeed, the dMOS is intended to be extendible to any digital media domain by importing its skeletal structure. Importantly, no explicit definition of each domain was provided for participants. This was to account for concerns that certain sites (e.g., YouTube) blurred the line between domains. For example, YouTube is clearly and primarily a video streaming site, but it contains elements of social media as well, and “softcore” pornography often slips through YouTube’s content filtration algorithms and ends up posted and viewable. Since exact patterns of use within each site were expected to vary between participants as a function of their intentions, it was left up to participants to decide how to categorize their use (although on rare occasions participants completing the questionnaire in person in the clinic did ask for and receive clarification from D. Hipp).
Respondents included 1,117 college students, 806 (74%) of whom were female, attending Binghamton University in New York State. The survey was administered anonymously using the Qualtrics survey site, and students could complete the survey only once. Respondents who answered the foil questions in a nonsensical manner failed the "foil" test (N = 68) and were eliminated. Others were dropped for failure to complete (N = 212), defined as leaving more than five questions blank or not being flagged as "finished" in Qualtrics. A small number of responses indicating nonbinary gender were provided (N = 7); as this was considered too small a sample, this category was not included in the gender analysis. A total of 800 subjects remained after these removals. The final sample was heavily weighted toward female respondents (reflecting the overall population of undergraduate psychology students from which the sample was drawn), and in part to address this imbalance, analyses on the main sample were conducted both overall and split by sex.
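The exclusion steps can be expressed as a simple filter. The record fields below are hypothetical stand-ins for the Qualtrics export, not actual dMOS variable names:

```python
# Hedged sketch of the sample-exclusion rules: drop respondents who fail
# the foil check, leave more than five questions blank, or are not flagged
# "finished" in Qualtrics. Field names are illustrative assumptions.

def keep(record):
    """Retain a respondent only if they pass all inclusion criteria."""
    passes_foils = record["foils_failed"] == 0
    complete = record["n_blank"] <= 5 and record["finished"]
    return passes_foils and complete

sample = [
    {"foils_failed": 0, "n_blank": 0, "finished": True},   # kept
    {"foils_failed": 2, "n_blank": 0, "finished": True},   # foil failure
    {"foils_failed": 0, "n_blank": 9, "finished": True},   # too many blanks
    {"foils_failed": 0, "n_blank": 1, "finished": False},  # not finished
]
kept = [r for r in sample if keep(r)]
print(len(kept))  # → 1
```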
A small clinical comparison sample (N = 28, Nfemale = 2) was obtained from patients seeking clinical services in Boulder, Colorado, and included only individuals seeking clinical help for digital media overuse or a suspected internet use disorder. These respondents were excluded from the LCA (see below), as the intention was to characterize a sample of putatively typical college students.
Prior to completing the survey, all respondents provided informed consent in a manner approved by the institutional review board at Binghamton University.
Smartphone devices used by participants were heavily weighted toward Apple products: 94% of the college sample used an iPhone as their primary smartphone, and 66% used a Mac as their primary computer.
Participants were asked to list which social media sites/applications they commonly viewed, as well as those to which they commonly contributed. Participants viewed between 0 and 7 social media sites/applications, with an average of 4 sites (SD = 1.3), and contributed to between 0 and 6 social media sites, with an average of 2.4 sites (SD = 1.2). The most commonly viewed social media sites were Instagram (96% of participants), Snapchat (93% of participants), Facebook (69%), Twitter (69%), and Pinterest (39%). The social media sites to which participants most commonly contributed were Snapchat (88%), Instagram (76%), Facebook (28%), Twitter (23%), and Dating Sites (14%).
Participants were also asked which platforms they commonly played digital games on. Participants played video games on between 0 and 4 different platforms, with a mean of 1.7 (SD = .95). Devices used were cell phone/smartphone (82% of participants), computers (48%), console (28%), handheld (7%), and virtual reality (2%). Eight percent of participants did not report playing any games.
Cronbach’s α was used to test internal reliability. For items asked on a Likert scale, this statistic is equivalent to the average of the reliability coefficients obtained from all possible split-half divisions of the item set (Gliem & Gliem, 2003). The current sample, across all 80 individual indicator questions (pooled across domains), produced a Cronbach’s α of 0.948.
Internal reliability was also indexed within each domain. Reliability scores obtained fell well within acceptable limits: (gaming: 0.916; pornography: 0.887; social media: 0.891; videos: 0.898; phone: 0.887).
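For readers who wish to check such values against their own data, Cronbach's α can be computed directly from an item-response matrix as k/(k - 1) × (1 - sum of item variances / variance of the total score). The following is a minimal, dependency-free sketch with made-up data; the reported alphas above come from the actual dMOS sample:

```python
# Illustrative computation of Cronbach's alpha from an item-response matrix.
# Rows are participants, columns are items; data below are invented.

def cronbach_alpha(rows):
    """rows: list of participants, each a list of item scores (no missing)."""
    k = len(rows[0])                      # number of items
    cols = list(zip(*rows))               # item-wise columns
    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var = sum(var(c) for c in cols)
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

data = [
    [0, 1, 1, 0],
    [2, 2, 3, 2],
    [1, 1, 2, 1],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(data), 3))  # → 0.957
```

Values above roughly 0.7-0.8 are conventionally taken as acceptable, so the per-domain alphas of 0.887-0.916 reported here fall comfortably within the usual limits.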
Several meaningful data points indexing various components of digital media overuse can be produced from the dMOS for each participant. First, an overall score—the summed total of all questions—can be calculated, in principle revealing a gross measure of total vulnerability to the properties measured by the battery of questions. Next, a set of subscores corresponding to each of the five primary digital media domains can be calculated. The frequency histogram of the domain-specific responses is presented below, in the LCA analysis (see Figures 1–5; see the Appendix for the overall score).
Contained within each domain were common question types, each sharing core language meant to reflect knowledge about how behavior and mentation are impacted by digital media overuse. The common skeletal structure of the questions was constructed based on both the characteristics of IGD enumerated in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition and behaviors related to digital media overuse observed at the Digital Media Treatment and Education Center and Collegiate Coaching Services, two clinics with extensive exposure to internet gaming disorder. The structure was then replicated within each digital media domain. These cross-domain scores are structured to indicate general behavioral and mental tendencies suggesting overuse that persist across digital media domains. The cross-domain traits (which constitute the individual survey items) indexed by the dMOS are listed in Table 1. Each of the cross-domain scores indexes the response to each skeletal question type across all domains of digital media. For each, a specific domain name appeared in the indicated location. During the administration of the questionnaire, the word "cell" was changed to "smart" as a descriptor of "phone" in these questions, for accuracy of description.
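Under this structure, each participant yields an overall score, five domain subscores, and 14 cross-domain scores, each a simple sum over the relevant items. A hedged Python sketch follows; the domain and question-type labels are illustrative shorthand, and the extra smartphone-only items are omitted for brevity:

```python
# Sketch of the three score types the dMOS yields per participant:
# an overall score, five domain subscores, and 14 cross-domain scores.
# Labels are illustrative; the 4 smartphone-only items are left out here.

DOMAINS = ["gaming", "pornography", "social_media", "videos", "phone"]
QUESTION_TYPES = ["cessation", "deception", "displacement", "distress",
                  "escape", "overload", "persistence", "preoccupation_act",
                  "preoccupation_soc", "problems", "spending", "substance",
                  "tolerance", "withdrawal"]

def scores(responses):
    """responses: dict mapping (domain, question_type) -> 0..3 item score."""
    domain_scores = {d: sum(responses[(d, q)] for q in QUESTION_TYPES)
                     for d in DOMAINS}
    cross_scores = {q: sum(responses[(d, q)] for d in DOMAINS)
                    for q in QUESTION_TYPES}
    overall = sum(responses.values())
    return overall, domain_scores, cross_scores

# Example: all-zero responses except one "often" (2) gaming escape item.
resp = {(d, q): 0 for d in DOMAINS for q in QUESTION_TYPES}
resp[("gaming", "escape")] = 2
overall, by_domain, by_type = scores(resp)
print(overall, by_domain["gaming"], by_type["escape"])  # → 2 2 2
```

Because every domain shares the same skeletal items, the domain subscores and cross-domain scores are simply row and column sums over the same response grid.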
Overall descriptive statistics for each domain are listed in Table 2. The same statistics for each cross-domain question type are presented in Table 3, and descriptive statistics broken out by gender for both domain and cross-domain question type are presented in Table 4.
Table 2
Descriptive Statistics for Digital Media Domains

Domain | Total possible | Clinical M (SD) | Clinical Mdn | Clinical Mode | Clinical Range | Control M (SD) | Control Mdn | Control Mode | Control Range |
---|---|---|---|---|---|---|---|---|---|
Gaming | 42 | 10.7 (7.2) | 9.5 | 14.0 | 0–22 | 2.9 (4.9) | 1.0 | 0 | 0–31 |
Phone | 48 | 13.0 (4.6) | 14.0 | 16.0 | 6–20 | 17.3 (8.0) | 17.0 | 14.0 | 0–42 |
Social media | 42 | 9.3 (5.4) | 10.0 | 13.0 | 2–18 | 10.8 (6.7) | 10.0 | 12.0 | 0–35 |
Pornography | 42 | 7.7 (7.6) | 5.5 | 0 | 0–19 | 1.3 (3.1) | 0 | 0 | 0–24 |
Videos | 42 | 12.5 (6.2) | 12.2 | 21.0 | 3–22 | 8.3 (6.6) | 7.0 | 0 | 0–33 |
Table 3
Descriptive Statistics for Cross-Domain Scores in the Clinical and Control Samples

Subscore | Total possible | Clinical M (SD) | Clinical Mdn | Clinical Mode | Clinical Range | Control M (SD) | Control Mdn | Control Mode | Control Range |
---|---|---|---|---|---|---|---|---|---|
Cessation | 24 | 3.8 (1.8) | 4.6 | 5.0 | 0–6 | 3.5 (2.3) | 3.0 | 3.0 | 0–14 |
Deception | 24 | 3.5 (1.7) | 4.0 | 5.0 | 0–5 | 2.0 (2.3) | 1.0 | 0 | 0–15 |
Displacement | 24 | 4.1 (2.4) | 5.0 | 5.0 | 0–8 | 2.5 (2.2) | 2.0 | 0 | 0–15 |
Distress | 24 | 3.3 (2.7) | 4.5 | 5.0 | 0–7 | 2.5 (2.4) | 2.0 | 0 | 0–15 |
Escape | 24 | 5.8 (2.0) | 5.5 | 4.0 | 4–10 | 4.0 (2.6) | 4.0 | 3.0 | 0–14 |
Overload | 24 | 4.7 (2.0) | 5.0 | 5.0 | 1–8 | 3.2 (2.0) | 3.0 | 3.0 | 0–9 |
Persistence | 24 | 4.1 (2.5) | 4.5 | 5.0 | 0–8 | 3.4 (2.4) | 3.0 | 3.0 | 0–15 |
Preoccupation (activities) | 24 | 3.4 (1.6) | 4.0 | 5.0 | 0–5 | 3.4 (1.8) | 3.0 | 2.0 | 0–11 |
Preoccupation (social) | 24 | 2.5 (1.8) | 2.0 | 2.0 | 0–5 | 2.7 (1.6) | 2.0 | 2.0 | 0–9 |
Problems | 24 | 4.0 (2.4) | 4.5 | 5.0 | 0–8 | 2.3 (2.3) | 2.0 | 0 | 0–15 |
Spending | 24 | 3.4 (1.7) | 4.0 | 5.0 | 1–5 | 1.8 (1.4) | 2.0 | 1.0 | 0–9 |
Substance | 24 | 1.3 (2.0) | 0 | 0 | 0–5 | 0.4 (1.2) | 0 | 0 | 0–11 |
Tolerance | 24 | 4.8 (2.5) | 5.0 | 5.0 | 1–9 | 3.9 (2.4) | 4.0 | 3.0 | 0–15 |
Withdrawal | 24 | 3.0 (2.0) | 3.0 | 1.0 | 1–6 | 2.2 (1.9) | 2.0 | 0 | 0–12 |
Table 4
Mean Score for Males and Females in Each Digital Media Domain

 | Gaming | | Smartphone | | Pornography | | Social media | | Video | |
Subscore | Male | Female | Male | Female | Male | Female | Male | Female | Male | Female |
---|---|---|---|---|---|---|---|---|---|---|
Cessation | .42 (.70) | .13 (.40) | 1.10 (.92) | 1.40 (.88) | .30 (.61) | .03 (.21) | .92 (.83) | 1.7 (.88) | .76 (.80) | .70 (.75) |
.21 (.53) | 1.30 (.91) | .11 (.41) | 1.16 (.88) | .72 (.77) | ||||||
Deception | .49 (.70) | .09 (.40) | .47 (.71) | .58 (.72) | .36 (.78) | .13 (.48) | .38 (.62) | .54 (.70) | .64 (.82) | .63 (.85) |
.19 (.50) | .55 (.72) | .19 (.59) | .49 (.68) | .63 (.84) | ||||||
Displacement | .43 (.66) | .16 (.40) | .84 (80) | .96 (.81) | .08 (.30) | .009 (.10) | .54 (.66) | .68 (.73) | .64(.74) | .58 (.72) |
.24 (.52) | .93 (.81) | .03 (.19) | .64 (.72) | .60 (.73) | ||||||
Distress | .33 (.62) | .09 (.33) | .80 (.81) | .92 (.80) | .29 (.61) | .05 (.25) | .60 (.71) | .75 (.79) | .65 (.82) | .61 (.73) |
.17 (.46) | .88 (.81) | .12 (.41) | .71 (.77) | .61 (.77) | ||||||
Escape | .71 (.83) | .28 (.58) | 1.01 (.88) | 1.34 (.81) | .46 (.65) | .13 (.39) | .89 (.83) | 1.35 (.85) | .90 (.82) | .97 (.82) |
.41 (.70) | 1.24 (.85) | .23 (.50) | 1.21 (.87) | .95 (.82) | ||||||
Overload | .35 (.58) | .09 (.35) | 1.10 (.79) | 1.13 (.76) | .16 (.38) | .03 (.19) | .73 (.71) | .95 (.69) | 1.07 (.81) | .90 (.76) |
.17 (.45) | 1.12 (.77) | .07 (.27) | .88 (.71) | .96 (.78) | ||||||
Persistence | .42 (.70) | .11 (.35) | 1.08 (.87) | 1.13 (.86) | .32 (.64) | .02 (.14) | .82 (.80) | 1.10 (.84) | .93 (.88) | .86 (.83) |
.20 (.51) | 1.22 (.89) | .11 (.39) | 1.02 (.84) | .88 (.85) | ||||||
Preoccupation (activities) | .28 (.56) | .10(.33) | 1.14 (.69) | 1.40 (.70) | .07 (.29) | .007 (.08) | 1.11 (.81) | 1.45 (.78) | .58 (.65) | .57 (.71) |
.16 (.42) | 1.32 (.71) | .03 (.18) | 1.35 (.80) | .57 (.69) | ||||||
Preoccupation (social) | .39 (.64) | .09 (.34) | 1.25 (.68) | 1.50(.71) | .17 (.40) | .04 (.23) | .31 (.53) | .63 (.74) | .41 (.59) | .42 (.63) |
.19 (.48) | 1.43 (.71) | .08 (.30) | .53 (.70) | .42 (.62) | ||||||
Problems | .38 (.67) | .11 (.37) | .70 (.81) | .78 (.77) | .25 (.54) | .01 (.13) | .53 (.68) | .69 (.75) | .67 (.82) | .58 (.73) |
.19 (.51) | .76 (.79) | .09 (.33) | .64 (.73) | .61 (.76) | ||||||
Spending | .85 (.73) | .19 (.45) | .92 (.80) | 1.26 (.80) | .02 (.13) | .002 (.04) | .10 (.32) | .15 (.43) | .22 (.55) | .11 (.39) |
.39 (.63) | 1.16 (.81) | .007 (.09) | .14 (.40) | .14 (.45) | ||||||
Substance | .16 (.44) | .02 (.17) | .11 (.37) | .08 (.31) | .10 (.35) | .02 (.17) | .16 (.46) | .15 (.44) | .15 (.39) | .07 (.32) |
.06 (.29) | .09 (.33) | .05 (.26) | .16 (.45) | .10 (.35) | ||||||
Tolerance | .36 (.66) | .23 (.51) | 1.10 (.90) | 1.50 (87) | .39 (.67) | .07 (.27) | 1.13 (.92) | 1.42 (.90) | .73 (.81) | .75 (.81) |
.27 (.57) | 1.37 (.90) | .17 (.46) | 1.33 (.91) | .74 (.81) | ||||||
Withdrawal | .16 (.41) | .03 (.18) | .90 (.90) | 1.24 (.98) | .10 (.31) | .01 (.12) | .40 (.63) | .68 (.74) | .32 (.59) | .33 (.59) |
.07 (.29) | 1.14 (.97) | .04 (.21) | .59 (.72) | .33 (.59) | ||||||
Overall | 5.6 (6.1) | 1.7 (3.4) | 14.8 (7.9) | 18.4 (7.6) | 3.0 (4.5) | .54 (1.8) | 8.6 (6.1) | 11.8 (6.7) | 8.7 (6.6) | 8.1 (6.5) |
2.91 | 17.32 | 1.29 | 10.83 | 8.25 | ||||||
Note. Question scores ranged from 0 to 3 (0 = never, 1 = sometimes, 2 = often, 3 = always). Parentheses contain standard deviations. For each subscore, the second row gives the overall mean, not separated by gender. Robust sex differences were apparent in behaviors around social media and gaming, with only social media showing higher scores for female participants. |
Correlations between overall scores on each of the domains included in the dMOS were calculated (see Table 5). Phone, social media, and video usage are highly correlated with one another. This may suggest that when one of these platforms contributes to media overuse, the other two tend to be used in conjunction. It may also suggest that phone, social media, and internet video overuse constitute one specific type of overuse within a normative population, potentially implying that there is not a profile of a general "overuser," but instead that individuals tend toward specific types or domains of overuse, with potentially different diagnostic criteria.
Table 5
Correlation Coefficients Between Domains, Split by Sex
Measure | Phone | Pornography | Social media | Video |
---|---|---|---|---|
Gaming | ||||
Total | .13 | .41 | .10 | .35 |
Male | .30 | .41 | .27 | .73 |
Female | .18 | .18 | .15 | .27 |
Measure | Gaming | Pornography | Social media | Video |
Phone | ||||
Total | .13 | .18 | .88 | .66 |
Male | .30 | .37 | .85 | .73 |
Female | .18 | .24 | .88 | .66 |
Measure | Gaming | Phone | Social media | Video |
Pornography | ||||
Total | .41 | .18 | .16 | .28 |
Male | .41 | .37 | .40 | .46 |
Female | .18 | .24 | .21 | .16 |
Measure | Gaming | Phone | Pornography | Video |
Social media | ||||
Total | .10 | .88 | .16 | .63 |
Male | .27 | .85 | .40 | .70 |
Female | .15 | .88 | .21 | .65 |
Measure | Gaming | Phone | Pornography | Social media |
Video | ||||
Total | .35 | .66 | .28 | .63 |
Male | .51 | .73 | .46 | .70 |
Female | .27 | .66 | .16 | .65 |
Note. All correlations are significant at p < .05. Bold correlations represent a high degree of correlation (between ±.50 and 1.00). Italics represent a medium degree of correlation (between ±.30 and .49; N = 800). |
Use of phones, social media, and video is also culturally normalized, especially within the college-aged population. Gaming and pornography overuse show a moderate correlation with each other while exhibiting low or nonexistent correlations with other types of platform use. This was more true for males than females, dovetailing with commonly held beliefs that these domains are used more by males, and it may hint at yet another type of overuse in the normative population. Gaming and pornography overuse is likely to be stigmatized, meaning that most users will not engage with these types of content on the phone (although gamers may also avoid phones for reasons stemming from display size and other interaction constraints). Pornography/gaming users and phone/social media/internet video users might thus represent different types of overuse, with interesting implications regarding acceptance in social situations. While the correlation between gaming and pornography is not extraordinarily high, the level is certainly suggestive of a relation. Others have reported data suggesting that these are separate domains of interest and possible overuse (Rozgonjuk et al., 2023), which makes sense in light of the sex differences within these domains. Regardless, these correlations suggest that different domain-specific profiles, rather than a single "digital media overuser" profile, are present in the data.
All individual measures, excluding substance use and problematic spending, are at least moderately correlated with each other (see Table 6). This indicates that the internal validity of the measurement is reasonable and that inferential analysis can employ techniques aimed at uncovering latent classes of participants based on these correlations. Note that substance use and problematic spending correlate only mildly with the other scales; this is likely an indication that these are more extreme indicators, which are anticipated to emerge to a greater degree in a clinical sample.
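For reference, the coefficients reported in these tables are ordinary Pearson correlations between subscores across participants. A self-contained sketch with made-up subscore vectors follows (the published values were computed on the full N = 800 sample):

```python
# Illustrative Pearson correlation between two cross-domain subscores.
# The two six-participant vectors below are invented for demonstration.

from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cessation   = [3, 1, 4, 2, 6, 0]   # hypothetical subscore per participant
persistence = [4, 2, 5, 2, 7, 1]
print(round(pearson(cessation, persistence), 2))  # → 0.98
```

Computing this for every pair of the 14 question-type subscores yields a symmetric 14 x 14 matrix of the kind shown in Table 6.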
Table 6. Correlation Coefficients Between Question Types
Measure | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1. Cessation | — | .51 | .66 | .69 | .60 | .61 | .79 | .60 | .52 | .74 | .24 | .21 | .61 | .53 |
2. Deception | .51 | — | .41 | .53 | .42 | .42 | .51 | .36 | .34 | .48 | .17 | .20 | .48 | .39 |
3. Displacement | .66 | .41 | — | .61 | .49 | .58 | .60 | .49 | .46 | .76 | .25 | .22 | .46 | .44 |
4. Distress | .69 | .53 | .61 | — | .54 | .54 | .75 | .47 | .44 | .72 | .15 | .17 | .56 | .50 |
5. Escape | .60 | .42 | .49 | .54 | — | .55 | .63 | .54 | .52 | .61 | .30 | .23 | .53 | .57 |
6. Overload | .61 | .42 | .58 | .54 | .55 | — | .58 | .48 | .45 | .62 | .26 | .23 | .49 | .43 |
7. Persistence | .79 | .51 | .60 | .75 | .63 | .58 | — | .57 | .50 | .72 | .21 | .15 | .63 | .55 |
8. Preoccupation | .60 | .36 | .49 | .47 | .54 | .48 | .57 | — | .67 | .57 | .29 | .20 | .52 | .51 |
9. Preoccupation (social) | .52 | .34 | .46 | .44 | .52 | .45 | .50 | .67 | — | .51 | .33 | .27 | .48 | .53 |
10. Problems | .74 | .48 | .76 | .72 | .61 | .62 | .72 | .57 | .51 | — | .22 | .24 | .52 | .50 |
11. Spending | .24 | .17 | .25 | .15 | .30 | .26 | .21 | .29 | .33 | .22 | — | .20 | .21 | .24 |
12. Substance | .21 | .20 | .22 | .17 | .23 | .23 | .15 | .20 | .27 | .24 | .20 | — | .19 | .24 |
13. Tolerance | .61 | .48 | .46 | .56 | .53 | .49 | .63 | .52 | .48 | .52 | .21 | .19 | — | .42 |
14. Withdrawal | .53 | .39 | .44 | .50 | .57 | .43 | .55 | .51 | .53 | .50 | .24 | .24 | .35 | — |
Note. All correlations are significant beyond the .05 α level. Bold values represent a high degree of correlation (between ±.50 and 1.00); italic values represent a medium degree of correlation (between ±.30 and .49).
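The magnitude bands named in the table note can be expressed as a small classification rule. The sketch below is purely illustrative; the function names are our own, and the example coefficients are taken from Table 6 (Cessation–Persistence, Spending–Distress, Escape–Spending).

```python
import numpy as np

# Magnitude bands from the Table 6 note: |r| >= .50 is "high",
# .30 <= |r| < .50 is "medium", anything smaller is "low".
def strength(r: float) -> str:
    r = abs(r)
    if r >= 0.50:
        return "high"
    if r >= 0.30:
        return "medium"
    return "low"

def band_matrix(corr: np.ndarray) -> list[list[str]]:
    """Label every off-diagonal coefficient of a correlation matrix."""
    n = corr.shape[0]
    return [[strength(corr[i, j]) if i != j else "-" for j in range(n)]
            for i in range(n)]

print(strength(0.79), strength(0.15), strength(0.30))  # high low medium
```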
Upon inspection of these data, specific distributional properties were consistently observed for specific questions and scores within and across domains. In addition to the more formal latent class analyses described below, basic knowledge of the distributional properties of the dMOS items and subscales offers insight into how a problematic digital media user may score on the scale compared to a typical user. Three types of distributions are generally obtained at various scales of analysis: normal distributions, staircase distributions, and what we are calling "clinical" distributions (which may more technically be described as Pareto-like). Brief descriptions of how these distribution types contextualize individual scores follow, and examples can be viewed in Figure 6.
Normally distributed scores were obtained for the majority of questions and for several subscales. A nonproblematic user will likely display a score at or near (or below) the median on these items or subscales, even if that score is higher in some domains than for other questions. A nonzero score can thus be achieved without falling into a problematic usage group. A pattern of high scores across normally distributed items may indicate problematic use, but a single high score by itself may not.
Scores distributed as a "staircase" indicate an item that is progressively less likely to be endorsed at each increasing step, or a subscale for which higher scores become less and less likely in a stepwise fashion. Questions falling into the staircase category are particularly interesting, as low scores may not be considered "typical," even if they describe a lack of problematic behavior. These questions may be indexing not just an individual's problematic behavior but also which problematic behaviors surrounding digital media content use are typical or expected within a normal population.
In a clinical distribution, the majority of participants have a score at or near zero, with very few participants (but not zero) scoring in the upper tail of the distribution. Unlike scores from questions with normal or staircase distributions, any score above zero on these questions may indicate clinically relevant behavior; the name "clinical distribution" is meant to capture exactly this fact. In Figure 6, social preoccupation represents a "staircase" distribution (i.e., it is more negatively skewed), and substance use represents an essentially zero-inflated clinical distribution.
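As a rough illustration of how these three shapes might be told apart in item-level data, the sketch below applies two simple heuristics. The 70% zero-share threshold and the strictly decreasing frequency check are our own illustrative choices, not part of the dMOS scoring rules.

```python
from collections import Counter

def classify_distribution(scores, zero_share_cutoff=0.7):
    """Heuristically label a set of item scores as 'clinical',
    'staircase', or 'normal'. Thresholds are illustrative only."""
    n = len(scores)
    counts = Counter(scores)
    levels = sorted(counts)
    freqs = [counts[v] / n for v in levels]
    # Clinical: mass piled up at zero with a sparse upper tail.
    if counts.get(0, 0) / n >= zero_share_cutoff:
        return "clinical"
    # Staircase: each higher score level is strictly less likely.
    if all(a > b for a, b in zip(freqs, freqs[1:])):
        return "staircase"
    return "normal"

print(classify_distribution([0] * 85 + [1] * 8 + [2] * 4 + [3] * 3))    # clinical
print(classify_distribution([0] * 40 + [1] * 30 + [2] * 20 + [3] * 10))  # staircase
print(classify_distribution([0] * 10 + [1] * 30 + [2] * 40 + [3] * 20))  # normal
```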
Latent class analysis (LCA; Koa-Wing et al., 2015; Linzer & Lewis, 2011; McCutcheon, 1985) detects latent categories in multivariate categorical data using a set of indicator variables. Probabilities of the most common patterns of observations across the manifest variables (14 subdomain questions per domain) can be grouped into a much smaller set of latent classes, which can then be used to discriminate between types of responses and, in the present study, types of digital media users. Exploratory LCA was applied to each of the domains (digital gaming, pornography use, social media, video, and smartphone use). A version constructed for polytomous variables in R (poLCA; Linzer & Lewis, 2011) was used, enabling analysis of the data without recoding to binary responses. Each of the 14 cross-domain questions was included as a manifest variable, and models were run in R (R Core Team, 2017) using the poLCA routine, with maximum iterations initially set to 1,000. The expectation that LCA would reveal at least two classes of respondent for each domain was confirmed. Models were tested for the number of classes that best accounted for the different patterns of the input predictors, starting with four classes and selecting the model with the lowest Bayesian information criterion (Schwarz, 1978). The analyses revealed that all five domains were effectively described by two or three classes (see below). The outcome of these analyses was used to classify each participant by type of digital media use in each of the five domains; these classifications were then compared against overall domain scores (see Figures 1–5 below). See the Appendix for graphs of the contribution of individual items to each of the classes in each of the five domains.
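The class-enumeration logic (fit models with differing numbers of classes and keep the one with the lowest BIC) can be sketched with a minimal EM implementation. This Python sketch is an illustrative stand-in for the poLCA routine actually used; the toy dataset, simulation parameters, and function names are our own.

```python
import numpy as np

def fit_lca(X, k, n_cat, n_iter=300, seed=0):
    """Minimal EM for a latent class model with polytomous items.
    X: (n, j) array of responses coded 0..n_cat-1. Illustrative only."""
    rng = np.random.default_rng(seed)
    n, j = X.shape
    pi = np.full(k, 1.0 / k)                       # class priors
    theta = rng.dirichlet(np.ones(n_cat), (k, j))  # P(response | class, item)
    for _ in range(n_iter):
        # E-step: log P(class, x_i) for every respondent.
        logp = np.log(pi)[None, :] + sum(
            np.log(theta[:, m, X[:, m]]).T for m in range(j))
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)    # class responsibilities
        # M-step: re-estimate priors and item-response probabilities.
        pi = np.clip(resp.mean(axis=0), 1e-6, None)
        pi /= pi.sum()
        for m in range(j):
            for c in range(n_cat):
                theta[:, m, c] = resp[X[:, m] == c].sum(axis=0)
        theta = np.clip(theta, 1e-6, None)
        theta /= theta.sum(axis=2, keepdims=True)
    # Log-likelihood and BIC for model comparison (Schwarz, 1978).
    logp = np.log(pi)[None, :] + sum(
        np.log(theta[:, m, X[:, m]]).T for m in range(j))
    mx = logp.max(axis=1, keepdims=True)
    ll = float((mx[:, 0] + np.log(np.exp(logp - mx).sum(axis=1))).sum())
    n_params = (k - 1) + k * j * (n_cat - 1)
    bic = -2.0 * ll + n_params * np.log(n)
    return pi, theta, bic

# Toy data: two well-separated classes answering 14 four-level items.
rng = np.random.default_rng(42)
n, j, n_cat = 400, 14, 4
high = np.array([0.05, 0.15, 0.30, 0.50])  # "problematic" response profile
low = high[::-1]                           # "typical" response profile
z = rng.random(n) < 0.3                    # 30% in the problematic class
X = np.where(z[:, None],
             rng.choice(n_cat, (n, j), p=high),
             rng.choice(n_cat, (n, j), p=low))

bics = {k: fit_lca(X, k, n_cat)[2] for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print(best_k)
```

With strongly separated simulated classes, the two-class model fits far better than the one-class (independence) model, and BIC's parameter penalty guards against adding superfluous classes, mirroring the enumeration strategy described above.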
The sample of college students described in the current work remains too restricted to confidently establish fully generalizable clinical thresholds robust to demographic variation. However, preliminary thresholds can be established from the current poLCA analyses and the samples evaluated here. Figures 1–5 indicate these thresholds graphically via the changing colors of the bars, with each color representing one of the two to three latent classes revealed in each domain by poLCA. A score falling at or above the onset of the red bars in each graph is considered in the "problematic" zone; that is to say, a participant whose responses fall into the "red" is expressing an improbably high value relative to other people of their age, indicating problematic behaviors and attitudes around that domain.
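One simple way to turn latent class assignments into a provisional numeric threshold is to take the lowest total domain score observed among members of the most severe class. This is an illustrative operationalization, not necessarily the exact rule used to place the color boundaries in Figures 1–5, and the example scores and labels are invented.

```python
def provisional_cutoff(domain_scores, class_labels, severe_class):
    """Lowest total domain score among members of the most severe
    latent class; scores at or above it fall in the 'red' zone."""
    severe = [s for s, c in zip(domain_scores, class_labels)
              if c == severe_class]
    if not severe:
        raise ValueError("no members of the severe class in this sample")
    return min(severe)

# Hypothetical total domain scores and LCA class labels (2 = most severe).
scores = [3, 5, 8, 14, 21, 26, 30]
labels = [0, 0, 0, 1, 1, 2, 2]
cutoff = provisional_cutoff(scores, labels, severe_class=2)
print(cutoff)  # 26
flagged = [s >= cutoff for s in scores]
```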
A preliminary comparison was conducted between "clinical" participants and "controls" to discern whether the dMOS is capable of identifying known overusers as such. External validity was assessed by comparing scores obtained from a small sample of participants seeking clinical services at the Digital Media Treatment and Education Center with the scores obtained from the primary sample of college undergraduates. Prior research indicates that anywhere from 1% to 18% of the population is likely to evince digital media overuse, regardless of whether they fit a specific clinical diagnostic category, although assessing a questionnaire in multiple sequentially collected samples is difficult in a domain as rapidly changing as digital media technology. Scores from clinical patients tended, as expected, toward the upper end of the score distributions. Specific patients often present not with general technology overuse issues but rather with difficulties surrounding one or a few particular digital media domains (e.g., social media, gaming). Figure 7 demonstrates this by displaying each clinical participant's highest single-domain score relative to the normalized cutoff for that domain, clearly indicating that clinical participants generally had at least one score at or above the clinical cutoff ranges tentatively established in the current work.
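The Figure 7 comparison can be described procedurally: for each clinical participant, find the domain where their score sits highest relative to that domain's provisional cutoff, and check whether it reaches the cutoff. In the sketch below, the domain names, scores, and cutoff values are invented for illustration only.

```python
def highest_relative_domain(participant_scores, cutoffs):
    """Return (domain, score - cutoff) for the participant's strongest
    domain, measured relative to that domain's provisional cutoff."""
    best = max(participant_scores,
               key=lambda d: participant_scores[d] - cutoffs[d])
    return best, participant_scores[best] - cutoffs[best]

# Invented per-domain cutoffs and one hypothetical clinical participant.
cutoffs = {"phone": 24, "video": 20, "social": 26, "gaming": 22, "porn": 10}
patient = {"phone": 18, "video": 9, "social": 21, "gaming": 29, "porn": 4}
domain, margin = highest_relative_domain(patient, cutoffs)
print(domain, margin)  # gaming 7
print(margin >= 0)     # True: at or above the cutoff in at least one domain
```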
The current work establishes the dMOS as a new tool in the toolkit of researchers and clinicians hoping to index problematic behaviors and attitudes around digital media use. The dMOS delivers meaningful scores in five primary domains of digital media usage: overall smartphone usage, social media, gaming, pornography, and internet video and streaming sites. This is achieved by asking questions that index a number of critical factors related to problematic overuse, including (1) cessation, (2) deception, (3) displacement, (4) distress, (5) escape, (6) information overload, (7) persistence, (8) preoccupation (activities), (9) preoccupation (social), (10) problems, (11) spending, (12) substance, (13) tolerance, and (14) withdrawal.
Data from the dMOS reveal telltale patterns for college-aged adults. First and foremost, the data reveal that the normative pattern of digital media use in college-aged adults involves many attitudes and behaviors that, had they derived from drug use or sex, would be deemed clinically problematic. This was especially true for social media, the domain that appeared to have the largest impact, but it was generally true for all domains. For example, the average social media cessation score for putatively normal, college-aged females was 1.7 out of a possible 3, with a significant portion of scores falling higher still. The two classes revealed by LCA for social media, unlike those for pornography and gaming, show that few respondents reported no problematic behaviors and attitudes within that domain. That the dMOS reflected this pattern is considered a strength of the instrument, given the overarching and obvious societal pattern with respect to social media use; however, this alone carries implications that should be considered within the typical population separately from any clinical sample. The fact that negative attitudes and behaviors derived from social media are fairly normal and universal does not indicate the absence of a problem. Rather, it indicates that many otherwise normal individuals are at risk for the behavioral and social consequences of social media overuse. While other domains revealed less severe response patterns on average, for the subsets of problematic users revealed by LCA in each domain the logic is the same: A significant proportion of college-aged individuals are having difficulty managing how they relate to digital media.
Rather than simply identifying overusers and normal users across all domains, domain-specific classes of dMOS respondents were empirically derived using LCA. These classes clustered respondents by the similarity of their overall pattern of responses across question types, revealing distinct classes of users for each domain. The same technique, applied across domains to one of the dMOS's skeletal question types (deception, for example), revealed categories of digital media deceivers. These techniques work best with substantial amounts of data; the sample considered here is not small, and recourse exists for the possibility that certain questions contribute more or less to the categories defined by the LCA. Examination of the LCA clustering of respondents in each class can shed light on this issue, but the best way to do so will be to release the dMOS instrument into the field and acquire data from a diverse set of normative and clinical samples. This approach may reveal third or fourth classes for some domains. In a similar vein, the external validity of the instrument requires further establishment through additional clinical comparisons, but the small sample of clinical responses we have obtained overlaps well with the most extreme LCA class in each domain (see Table 2 for descriptive statistics illustrating this point). This indicates not only the external validity of the dMOS instrument but also the existence of a latent class of college students with digital media overuse issues at a level similar to those seeking help in a professional psychology clinic (e.g., gaming, Lemmens et al., 2015; smartphone use, Fırat et al., 2018). Social media use reflects, to some extent, the same shifted perception of reasonable use.
This is remarkable in isolation: 15 years ago this was simply not an issue, and it is now considered normal behavior, rendering the assessment of atypical use in these categories more complex. The "screen use" applications that manufacturers once resisted adding to their devices, and which now give a weekly report of device usage, underscore this point. These data make clear that most people (78%) underestimate their weekly usage, with 65% underestimating it by an hour or more (Smartphone Usage and Screen Time Statistics, 2021). The emergence of only two classes in the LCA for social media may indicate that there are few if any users in a "risky" intermediate category; that is, users may either be at risk or not. Alternatively, it may be that most or all users in this domain are at risk or approaching at-risk levels of behavior. The latter seems more likely, especially given the way social media addicts its users with Skinnerian behavior-modification techniques (Lanier, 2018), the fact that most young people are online almost constantly (Anderson & Jiang, 2018), and the known mental health consequences of social media overuse (Allcott et al., 2020; Braghieri et al., 2022; Golin, 2022).
Three classes of response were evident in the gaming domain: a set of low-concern users (or nonusers); a set of high-concern or high-risk, but likely not clinically challenged, users; and a small subset reflecting a clinically significant level of problematic use of digital games. Pornography use differed in that only two classes emerged, and reports from clinical members of the research team indicate that, in line with the instrument's results, almost any level of responding above zero was cause for concern. This confirms the finding that only two groups exist in this domain: nonconcern (or nonusers) and users exhibiting problematic levels of use. The difference is speculatively due to the perceived social stigma surrounding pornography use; even as online culture normalizes many behaviors, some level of concern remains for most high-end users across most of the domains, and particularly for pornography use. This implies that heightened responding on almost any subset of questions in this domain may indicate an issue warranting follow-up.
The skeletal structure common to the questions embedded in each digital media domain makes the dMOS in principle extendible to novel digital media domains as they emerge. Work is currently underway to demonstrate this capacity within a newer digital media domain, TikTok-style "shorts": short looping videos distinct from (and now embedded within) longer form video applications like YouTube. Online sports betting, a rapidly proliferating and digitally mediated form of gambling, will also be targeted in work to follow. This aspect of the survey is of particular interest, as the set of online applications currently attracting the most attention (i.e., time) from users (TikTok and Instagram; Flynn, 2022) is not represented in the current instantiation of the dMOS. The structure of the instrument enables rapid reorientation, and the present findings predict similar outcomes in these new domains.
These findings include a general tendency to reveal markers suggesting anxiety surrounding phone use overall across a wide swath of the control sample. Additionally, as expected, the range of social media use was substantial, indicating that for this measure to indicate a clinical level of overuse, either the individual's level of use would need to be quite high or the use would need to connect to one of the additional markers shown to correlate with social media use.
Too few males (26% of the sample) were recruited to fully evaluate sex differences in digital media overuse using latent class analyses, which require a large sample and similarly sized comparison groups to support more granular claims. However, it was apparent simply from qualitative assessment of the descriptive statistics by sex that digital media domains impacted males and females differently, and in a manner that was fairly predictable a priori (see Tables 3 and 5). Social media had a greater impact on females, with the highest score for either sex in any domain being the cessation score for females (M = 1.7). Gaming and pornography, on the other hand, had a greater impact on males. For gaming, this was not driven by a difference in usage statistics, as 92% of our sample reported playing video games. Rather, while nearly all participants reported gaming to some extent, only 9% scored above a cutoff suggested by the LCA for highly problematic use, and this subgroup included far more males than females. The low base rates for substance use in our data suggest that anyone endorsing higher responses to these questions may be evincing behaviors of clinical concern, a useful data point for clinicians to be integrated into the eventual manual for interpreting individual dMOS data.
The dMOS is a highly promising new instrument capable of helping researchers and clinicians to understand the psychological dynamics around digital media overuse. The current work does have its limitations, of course. The five domains chosen were meant to be representative but obviously do not exhaust all internet-based applications. As the dMOS is extended into novel domains (e.g., TikTok, virtual reality), external and internal validity will likely need to be evaluated anew. Several editorial decisions in the construction of the instrument were debated, and it is perhaps the case that some of these decisions will end up being suboptimal. For instance, allowing users to self-define what constitutes social media, gaming, and our other domains does limit the certitude with which results can be presented, but the structure of the dMOS allows for rapid construction of a variant of the instrument capable of addressing this and similar questions. Future work will also more deeply assess validity in its various forms by comparing results from other questionnaires to those obtained by the dMOS, expanding the demographic diversity of our sample, and more concretely establishing the clinical thresholds necessary for delineating problematic behaviors and attitudes from normal ones.
Additional limitations deserve mention. First and foremost, this work does not speak to positively valenced experiences in any of the digital media domains. This is a feature, not a bug, but it does limit the scope of the instrument. The various domains included in the current work are not expected to be equivalent in their capacity for producing positive experiences in users. Also, the current instantiation of the dMOS does not define domains at the level of individual apps (Facebook, Twitter, TikTok, etc.) or games and gaming types (first-person shooters, massively multiplayer online role-playing games, etc.). Vast distinctions in content exist between applications, and while the current data cannot speak directly to these nuances, such differences likely matter a great deal at the individual level. If a research team were motivated to conduct that sort of research, adapting the dMOS to address one such platform more directly would be not only possible but relatively easy, with the same scoring system created for each current dimension applying to the adapted scale. Such adaptation is, in fact, one of the main reasons the dMOS was designed around a skeletal structure of core questions. A study using the dMOS to compare social media applications in terms of their negative outcomes, for example, may indeed be fruitful. The dMOS was also intended to present face-value questions to respondents, leading to potential self-report distortions if participants are incentivized to answer in a particular way. If such motivations are suspected, the dMOS might be more usefully administered in a dual fashion, to both a proxy and the main respondent. Another, perhaps complementary, approach would be to ask respondents questions that are less transparent.
Nonetheless, the dMOS is a useful and valid new instrument based on established criteria of concern and is also by design capable of adapting to novel digital media domains as they emerge. Future research to examine the criterion validity of the dMOS may include a comparison of responses to informant/collateral reports (parents, partners, peers), screen usage data (likely ScreenTime application data given the high proportion of Apple users), and cross validation with established measures of technology overuse as well as comorbid impairments (psychological measures of anxiety, depression, functional impairment, etc.).
Polytomous latent class analysis graphs for each of the five digital platforms examined are presented here, ranking the likelihood that participant answers to each of the 14 questions contribute to a pattern of scores that would classify an individual as belonging to a given class. These graphs are the basis for class differentiation scores calculated using the range and the distribution of actual scores in the sample.