
The Effect of Mere Presence of Smartphone on Cognitive Functions: A Four-Level Meta-Analysis

Volume 5, Issue 1. DOI: 10.1037/tmb0000123

Published on Jan 30, 2024

Abstract

As smartphones have become portable and immersive devices that afford social, informational, and recreational conveniences unbounded by physical restrictions, most daily activities have become closely intertwined with the presence of smartphones. This constant presence of smartphones in daily activities, however, may be concerning as some studies have suggested that smartphones—even their mere presence—can be distracting and can impair cognitive outcomes. However, such findings have not been consistently observed. To reconcile mixed findings, the current meta-analysis synthesized 166 effect sizes drawn from 53 samples and 33 studies including 4,368 participants on the effect of mere presence of smartphone on cognitive functions. It was found that the mere presence of smartphone had no significant effect on cognitive outcomes (d = −0.02, SE = 0.02, 95% CI [−0.06, 0.01], p = .246). Further, the effect of mere presence of smartphone was not moderated by demographics, trait smartphone dependency, or various methods for manipulating smartphone presence and assessing cognitive outcomes. These findings indicate that there is little reason at present to think that complete isolation from smartphones in a work environment would improve productivity and performance.

Keywords: smartphone, mere presence, cognitive performance, executive functions, meta-analysis

Funding: This research was supported by grants awarded to Andree Hartanto by Singapore Management University through research grants from the Ministry of Education Academy Research Fund Tier 1 (21-SOSS-SMU-023 and 22-SOSS-SMU-041) and Lee Kong Chian Fund for Research Excellence.

Disclosures: The authors declare that there is no conflict of interest.

Data Availability: All screening records and analytic codes have been made publicly available on ResearchBox (No. 463; https://researchbox.org/463).

Open Science Disclosures: The data are available at https://researchbox.org/463. The experimental materials are available at https://researchbox.org/463.

Correspondence concerning this article should be addressed to Andree Hartanto, School of Social Sciences, Singapore Management University, 10 Canning Rise, Level 4, 179873, Singapore. Email: [email protected]

Supported by rapidly evolving technology, phones of today no longer purely serve as communication tools. Commonly known as smartphones, they now possess the ability to serve a myriad of functions including navigation, assisting in financial transactions, facilitating work-related correspondence, and even playing a pivotal role in the COVID-19 pandemic by operating as a contact-tracing tool (Abbas & Michael, 2020). With everyday activities intertwined with the threads of smartphone technology, this constant interaction with smartphones has forged feelings of dependency (Gonçalves et al., 2020; Lapierre et al., 2019). For instance, studies have shown that the majority of users consult their smartphone at least once every hour and report that they cannot live without it (Hartanto & Yang, 2016; Kara et al., 2021; Orange, 2019; Rodríguez-García et al., 2020). Given that smartphones have undoubtedly permeated our daily lives, there has been growing interest among researchers to understand the cognitive impacts of these devices.

One potential impact of smartphones, which has fostered growing concern among the general public and researchers, is that they pose a severe distraction to our daily activities (Leynes et al., 2018; Mourra et al., 2020; Shelton et al., 2009). Smartphones have been commonly argued to cause exogenous interruptions due to their incessant alerts and notifications (Agrawal et al., 2017; L. D. Turner et al., 2015). The constant picking up and checking of the device can be a persistent source of interruptions and distractions from tasks that require decision making and deep thinking (Duke & Montag, 2017; Hartanto et al., 2022; Wajcman & Rose, 2011). In fact, these distractions caused by smartphones have been observed even when the smartphone user ignores the distracting stimuli and focuses on the ongoing task. For instance, studies have suggested that the sound of a smartphone’s notification alone may interrupt students’ ability to remain focused and impair academic performance (End et al., 2010; Shelton et al., 2009). Similarly, Stothart et al. (2015) found that distractions caused by smartphone notifications elicited task-irrelevant thoughts and impaired cognitive performance, even when the smartphone was not actively being used.

Beyond the distractions caused by smartphones’ alerts and notifications, the mere presence of smartphones—even without notifications—could serve as a distracting stimulus in the environment and impair cognitive performance. Given the multifunctionality of a smartphone and its increasing integration into users’ daily activities, smartphones tend to be associated with many rewarding experiences (Bayer et al., 2016). Individuals may slowly develop a habitual repertoire of looking for their smartphones due to their constant expectation of incoming notifications that may be accompanied by rewards. As a result, smartphones have become salient, high-priority stimuli in the environment that could exert a gravitational pull on the orientation of attention (Ward et al., 2017).

Furthermore, the saliency of the mere presence of smartphones may also elicit task-irrelevant thoughts and internal attention to smartphones in the form of smartphone vigilance (Johannes et al., 2018; Koessmeier & Büttner, 2022). For instance, the mere presence of smartphones may induce smartphone-related thoughts such as those related to using smartphones and their potential rewards. These are the possible reasons why the mere presence of smartphones may reduce attentional resources available for actual goal-relevant cognitive processes, leading to impaired task performance. In support of this, a number of studies have found that the mere presence of smartphones is sufficient to have a significant detrimental impact on task performance. For instance, in a seminal study by Thornton et al. (2014), participants in the experimental condition were exposed to an experimenter’s smartphone while completing a series of cognitive tasks. In contrast, participants in the control condition were exposed to a spiral notebook of similar size to the smartphone. Participants in the experimental group demonstrated poorer performance in more demanding tasks, such as an additive cancelation task and the Trail Making Test–Part B, compared to participants in the control group. Further, Thornton et al. (2014) replicated these findings in another study where participants in the experimental condition were instructed to place their own smartphone beside them on the table. Consistent with Thornton et al. (2014), subsequent studies by Ward et al. (2017) also found evidence that the mere presence of smartphones was sufficient to impair working memory capacity and fluid intelligence.

Despite aforementioned studies supporting the detrimental effect of mere presence of smartphones on cognitive performance, several other studies have failed to find any significant differences in task performance between those who were randomly assigned to smartphone presence conditions and smartphone absence conditions (e.g., Bianchi Bosch, 2018; Boila et al., 2020; Foreman-Tran et al., 2019). For instance, a replication study of Thornton et al. (2014) by Lyngs (2017) did not find any evidence that the presence of smartphones reduces performance in additive cancelation tasks. Similarly, another conceptual replication by Foreman-Tran et al. (2019) did not find a significant negative cognitive effect of smartphone presence when assessed by the 12 Cambridge Brain Science tasks, a battery of computerized tasks that measure planning, reasoning, attention, and working memory abilities (Hampshire et al., 2012). These mixed findings have thus raised questions about whether there truly exists an effect of mere presence of smartphones on cognitive performance.

Given the high prevalence as well as the frequency of worldwide smartphone usage (Cha & Seo, 2018; Sohn et al., 2019), there is growing concern among the general public and researchers that smartphones pose a severe distraction to our daily activities (Leynes et al., 2018; Mourra et al., 2020; Shelton et al., 2009). This concern became especially prominent after studies showed a deleterious effect of the mere presence of smartphones on cognitive functions (e.g., Thornton et al., 2014; Ward et al., 2017). For instance, several schools in France and the United States have started to ban smartphones in the classroom, motivated partly by research on the negative cognitive implications of the mere presence of smartphones (Boston Globe, 2022; Burns, 2019; Consumer News and Business Channel, 2019). Thus, there has been an increased effort in recent years to reconcile the mixed findings related to the negative implications of the mere presence of one’s smartphone on cognitive performance.

Although numerous recent studies have been conducted (e.g., Aguila, 2019; Johannes et al., 2019; Stone, 2020; Tanil & Yong, 2020; Tarantino, 2019), efforts to reconcile the equivocality have been hindered by heterogeneity in experimental procedures, sample characteristics, and cognitive tasks. For instance, there has been significant variability in the way studies have manipulated smartphone presence. Smartphones were required to be placed face up in some studies (e.g., Ward et al., 2017) but face down in others (e.g., Tanil & Yong, 2020). Moreover, some studies have operationalized smartphone absence or separation as participants keeping the device apart from themselves but still in the same room (e.g., Thornton et al., 2014), whereas other studies have required that participants’ smartphones be kept by researchers or left in a separate room altogether (e.g., Hartmann et al., 2020). Further, in inducing smartphone absence, there have been discrepancies regarding whether smartphones were switched off completely (e.g., Thornton et al., 2014), left on airplane mode (e.g., Hartmann et al., 2020), left on silent mode with vibration enabled (e.g., Tanil & Yong, 2020), or left on silent mode without vibration (e.g., Ward et al., 2017). In addition, some studies chose to introduce a cover story to reduce demand characteristics as a potential confound (e.g., Stahl, 2018; Thornton et al., 2014), while others did not (e.g., Canale et al., 2019; Johannes et al., 2019). This methodological variance in the manipulation of smartphone presence may affect the salience of smartphones as a momentary distractor of cognitive functions. For instance, leaving the ringtone and vibration enabled during the experiment increases the chance that participants’ smartphones ring or vibrate, which may distract participants and confound the cognitive effect of the mere presence of a smartphone (Johannes et al., 2018; Stothart et al., 2015).

Variance in sample characteristics and in the cognitive measures used in existing studies has also been observed. For example, the single-probe task (Rouder et al., 2011) was used in some studies (e.g., Canale et al., 2019), while the operation–reading span task (OSPAN; Unsworth et al., 2005) and Raven’s Standard Progressive Matrices (Raven et al., 1998) were used in others (e.g., Hartmann et al., 2020; Ward et al., 2017). In addition, individual differences such as the fear of missing out (FoMO) and smartphone dependency were controlled for in some studies (e.g., Stahl, 2018), but not in others (e.g., Boila et al., 2020). This is problematic given that the saliency of smartphones as a momentary distractor is likely to be higher for those with higher smartphone dependency and FoMO. Consistent with this, existing studies have shown that individuals with high FoMO and smartphone dependency are more likely to be distracted by their smartphones, for example, by engaging in smartphone-related bedtime procrastination (e.g., Scott & Woods, 2018; Zhang et al., 2023) and being involved in smartphone-related traffic accidents (e.g., Appel et al., 2019; S. R. Rosenthal et al., 2022). These variations create difficulties in making comparisons across studies, thereby impeding accurate interpretations of the literature as a whole.

Summary of Goals

As smartphones expand in function and sophistication and play an increasingly pivotal role in our lives, it is crucial that we acquire a more precise understanding of their impact on cognition. One potential source of smartphone-related cognitive interference, which has garnered some concerns from the general public and researchers, is simply the mere presence of smartphone that may attract the orientation of attention due to its growing saliency (Leynes et al., 2018; Mourra et al., 2020; Shelton et al., 2009; Ward et al., 2017). Despite numerous experimental studies having been conducted to examine the cognitive effect of mere presence of smartphone, findings from the existing studies have been mixed with heterogeneity in experimental procedures, sample characteristics, and cognitive tasks (e.g., Aguila, 2019; Johannes et al., 2019; Stone, 2020; Tanil & Yong, 2020; Tarantino, 2019). Hence, the primary goal of this research was to conduct a systematic review and quantitative integration of the current findings on the effect of smartphone presence on cognitive outcomes, estimating an overall effect size of this relationship. A significant negative effect size observed would support the notion that the mere presence of smartphones impairs cognitive function. The secondary goal was to examine the various methodological discrepancies in the current literature as moderators of said relationship. This may serve to explain the mixed findings present in the literature. Following the methodological issues illustrated earlier, we sought to examine several moderators: (a) placement of the smartphone in the smartphone present condition (face up or face down), (b) location of the smartphone in the smartphone absent condition (kept or not kept by participant), (c) smartphone mode (silent, sound on, or powered off), psychological moderators of (d) smartphone dependency and (e) FoMO, and (f) the type of cognitive tasks used (e.g., executive functioning, episodic memory). These moderator investigations would shed light on the boundary conditions in which individuals may or may not be affected by the mere presence of smartphones.

The current meta-analysis of smartphone presence and cognition is informative in three crucial ways. First, a meta-analysis aggregates effect sizes across various studies by considering variances in study sample sizes, thus yielding a more robust test with greater precision in estimating the overall effect of smartphone presence on cognitive functions. Second, the coding of methodological and sample characteristics across various studies allows for a systematic examination of whether method and sample factors potentially underpin differences in effect size and mixed patterns in current literature (Hartanto et al., 2021). Last, the comprehensive integration of existing studies on smartphone presence and cognitive function can serve as a useful basis for future empirical work in the field, highlighting critical gaps in the literature. The current meta-analysis aims to provide a clearer view of the field, thereby motivating continual discussions and better empirical work, and refining existing theories where necessary.

Method

Transparency and Openness

The design and analysis plan of the current meta-analysis were not preregistered. All screening records and analytic codes have been made publicly available on ResearchBox (No. 463; https://researchbox.org/463; Hartanto, Lua et al., 2023). Automatic deduplication of search records was performed using Mendeley Desktop Version 1.19.8 (Mendeley, n.d.). Analyses were conducted in R Version 3.6.3 (R Core Team, 2020) using metafor Version 3.0-2 (Viechtbauer, 2010) with restricted maximum likelihood estimation. The Practical Meta-Analysis Effect Size Calculator (https://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD9.php) was used to calculate the standardized mean difference d and its variance using the logit method.

Search Strategy

Searches were conducted across various sources for all records available by June 7, 2023, in order to maximize our reach. No filters were applied to any searches.

The main portion of the literature search was conducted in five key databases (i.e., EBSCOhost ERIC, EBSCOhost APA PsycInfo, Pubmed, Scopus, and Web of Science) using the following keywords:

(“smartphone” OR “phone” OR “handphone” OR “cellphone” OR “nomophobia” OR “mobile device”) AND (“availability” OR “presence” OR “visibility” OR “location” OR “separation”) AND (“cognition” OR “memory” OR “cognitive flexibility” OR “wisconsin Card” OR “stroop task” OR “cognitive control” OR “executive function” OR “trail making test” OR “pre-potent response” OR “prepotent response” OR “flanker task” OR “intelligence” OR “sustained attention to response task” OR “cognitive performance” OR “simon task” OR “complex span” OR “operation span” OR “ospan task” OR “digit span” OR “inhibitory control” OR “self-control” OR “interference control” OR “executive control” OR “task switching” OR “task-switching” OR “cognitive function”).

The five databases, respectively, resulted in a total of 7, 116, 322, 2,805, and 454 retrieved records.

To supplement the main search, manual searches were conducted directly in relevant peer-reviewed journals, through forward searches of two relevant journal articles, and in two additional databases. Specifically, we first searched six technology-related journals (Computers in Human Behavior; Cyberpsychology: Journal of Psychosocial Research on Cyberspace; Cyberpsychology, Behavior, and Social Networking; Behaviour & Information Technology; Technology, Mind, and Behavior; and Computers & Education) using the keywords “presence of smartphone,” resulting in a total of six retrieved records. Second, forward searches from Thornton et al. (2014) and Ward et al. (2017) were conducted, due to their seminal status in examining the effects of smartphone presence on cognition. All records resulting from the forward search were retrieved. Last, to reduce publication bias, a manual search was conducted for unpublished literature in ProQuest Theses & Dissertations and Google Scholar using the keywords “presence of smartphone.” Eighteen records were retrieved from the search.

Inclusion and Exclusion Criteria

The literature search yielded 4,368 potentially eligible records (see Figure 1 for Preferred Reporting Items for Systematic Reviews and Meta-Analysis flowchart; Moher et al., 2009). Following the computerized removal of duplicates, 3,671 unique records were split and screened by four screeners (Manmeet Kaur, K. T. A. Sandeeshwara Kasturiratna, Jonathan L. Chia, and Paye Shin Koh) for potential inclusion, with each of the four screeners assessing approximately 25% of the unique records. First, titles and abstracts were evaluated based on a preliminary set of criteria, which looked at whether each record (a) was published in English, Chinese, Malay, or Bahasa Indonesia, (b) was an empirical and quantitative study, (c) mentioned smartphones, and (d) was experimental. Based on the abstract screening, 2,885 records were removed.

Figure 1

PRISMA Flowchart
Note. m = number of studies; n = number of samples; k = number of effect sizes; PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analysis.

The remaining 786 full-text records were then each screened for inclusion by two independent screeners (K. T. A. Sandeeshwara Kasturiratna and Paye Shin Koh) based on the following criteria:

  1. Studies were written in English, Chinese, Malay, or Bahasa Indonesia.

  2. Studies manipulated the mere presence of smartphones. Smartphones had to be visually present in the smartphone presence condition.

  3. Studies reported at least one objective measure of cognitive performance. Acceptable measures of cognitive performance included but were not limited to the Stroop task, sustained attention to response task, and visual search task.

  4. Studies were experimental or quasi-experimental in nature. All randomized controlled trials and studies with quasirandomized methods of treatment allocation (e.g., alternate allocation) were eligible for inclusion.

  5. Studies had at least one control group that isolated the variable of interest (i.e., mere presence of smartphones).

  6. Studies were included if they examined humans. No restrictions were placed on any sample characteristics such as age or health.

  7. No restrictions were placed on the peer review status of studies (i.e., studies were included whether or not they were peer reviewed).

  8. Studies reported appropriate statistics or other quantitative information that allowed us to compute effect sizes. If a study did not report the necessary information, data were requested from the relevant authors via email, ResearchGate, or other online communication channels. In total, out of the 29 authors contacted, 18 provided the requested data (62.07% response rate). The remaining 11 authors did not respond after two email requests were sent. As five of the records written by those 11 authors did not report enough information to compute effect sizes, the records were excluded.

Upon further examination of the 786 potentially eligible records, 29 records (contributing 33 unique studies with 53 samples) met all inclusion criteria,1 of which 19 were journal articles (Boila et al., 2020; Canale et al., 2019; Hartmann et al., 2020; Ito & Kawahara, 2017; Johannes et al., 2018, 2019; Kaminske et al., 2022; Koessmeier & Büttner, 2022; Linares & Sellier, 2021; Liu, Dempo, Kimura, et al., 2022; Liu, Dempo, & Shinohara, 2022; Nakagawa et al., 2022; Niu et al., 2022; Pellowe et al., 2015; Ruiz Pardo & Minda, 2022; Skowronek et al., 2023; Tanil & Yong, 2020; Thornton et al., 2014; Ward et al., 2017), nine were unpublished theses or dissertations (Aguila, 2019; Bailey, 2018; Beijer, 2020; Bianchi Bosch, 2018; de Werd, 2020; Ruiz Pardo, 2022; Stahl, 2018; Stone, 2020; Tarantino, 2019), and one was a conference poster (Lyngs, 2017).

Coding of Variables

All information was directly extracted from the Method and/or Results sections of the respective studies or authors who responded to email requests. Unless otherwise stated, information was coded independently by either K. T. A. Sandeeshwara Kasturiratna and Manmeet Kaur or K. T. A. Sandeeshwara Kasturiratna and Paye Shin Koh, who then discussed and resolved discrepancies after the initial coding process was completed. The interrater agreement for all variables was generally excellent, with 95.73% agreement on average (range = 76.36%–100%) between K. T. A. Sandeeshwara Kasturiratna and Manmeet Kaur, and 92.67% agreement on average (range = 80.00%–100%) between K. T. A. Sandeeshwara Kasturiratna and Paye Shin Koh.

Critical Information

To compute effect sizes, we coded three critical pieces of information—the number of participants, the mean score on the cognition task, and the standard deviation of scores on the cognition task—for each condition (smartphone present/smartphone absent). Where information was not reported directly in the record, the relevant authors were emailed to obtain them.

Cognitive tasks included tasks assessing various aspects of executive functioning (e.g., Stroop task), intelligence (e.g., academic scores), sustained attention (e.g., Sustained Attention to Response task), and decision making (e.g., Iowa Gambling task). For tasks where multiple dependent outcomes were reported (e.g., interference score vs. reaction time for the Stroop task), the outcome that past literature deemed the most representative measurement of the task (e.g., interference score for the Stroop task) was coded. We defined executive function as a multifaceted construct of higher order cognitive processes responsible for controlling and regulating thoughts and actions to achieve a goal (Diamond, 2013; Hartanto & Yang, 2020; Miyake et al., 2000). Based on the unity and diversity framework for individual differences in executive functions (Friedman & Miyake, 2017; Miyake & Friedman, 2012), any cognitive task that requires at least one of the three core processes that make up executive functions—inhibitory control, task-switching, or updating working memory—was considered a task that demands executive functioning. Cognitive tasks that have traditionally not been considered executive function tasks, or that are primarily used to measure basic cognitive processes of perception, response execution, and episodic memory, such as simple reaction time, lexical decision, semantic decision, reading, sentence comprehension, and episodic memory tasks, were coded as nonexecutive functioning. Our decision to include these nonexecutive functioning tasks rests on three key reasons. First, while these cognitive tasks may not directly test executive functions in the same way as, for instance, a Stroop task or operation span task, they still necessitate a degree of cognitive control, attentional orientation, and goal maintenance—all of which could be susceptible to external distractions such as the presence of a smartphone. Second, including a broader spectrum of cognitive abilities allowed our meta-analysis to examine the ecological validity and practical relevance of the effect of mere presence of smartphone. Third, the inclusion of nonexecutive function tasks provided the opportunity to perform an additional moderation analysis examining the differential effect of mere presence of smartphones across a wide variety of cognitive tasks—both executive function and nonexecutive function tasks.

Record and Study Characteristics

For each record, we coded its publication source (journal article, dissertation/thesis, and conference paper). Theses, dissertations, and conference posters were considered as unpublished studies. Only studies published in peer-reviewed journal articles were considered as published studies. The inclusion of both published and unpublished studies is important to minimize and examine publication bias (Ahmed et al., 2012; McLeod & Weisz, 2004; Wilson, 2009). Research has consistently shown that studies with a significant effect are more likely to be published and cited by other authors than studies without a significant effect (Franco et al., 2014; Koletsi et al., 2009; Mimouni et al., 2015; R. Rosenthal, 1979).

For each study, we coded the country where the study was conducted, and various methodological characteristics (placement of the smartphone in the smartphone present condition [face up vs. face down], location of the smartphone in the smartphone absent condition [kept by the participant vs. not kept by the participant], and whether the smartphones were on silent, sound on, or powered off). Additionally, we reviewed every included study and coded for whether or not a cover story was used during the procedure of each study, as well as the level of cognitive demand (executive functioning vs. nonexecutive functioning) of each cognitive task used.

Participant Characteristics

We coded the sample mean age, sample gender proportion, sample mean smartphone use duration, sample mean amount of time spent on social media platforms, and sample means on two psychological moderators (trait smartphone dependency and trait FoMO) where available. Both moderators were assessed via various self-reports. Specifically, FoMO was assessed through the Fear of Missing Out Scale (Bowman & Clark-Gordon, 2019), while smartphone dependency was assessed using various measures such as the Possession Attachment Scale (Weller et al., 2013), Smartphone Addiction Inventory (Y.-H. Lin et al., 2014), and Smartphone Addiction Scale (Kwon et al., 2013). Given that each measure of smartphone dependency was scored differently and hence not comparable, all scores were transformed such that the minimum possible score became 0 and the maximum possible score became 1 for all measures. Following the Percent of Maximum Possible Score procedure by P. Cohen et al. (1999), the lowest possible score on the measure was subtracted from the mean score obtained by the sample, and the resulting number was then divided by the difference between the highest and lowest possible scores on the measure.
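To make this rescaling concrete, the following is a minimal R sketch of the transformation described above; the function name and the example scale range are illustrative and not taken from the authors' materials.

```r
# Percent of Maximum Possible (POMP) rescaling to a common 0-1 metric
# (P. Cohen et al., 1999). Names and example values are illustrative only.
pomp <- function(sample_mean, scale_min, scale_max) {
  # Subtract the scale minimum and divide by the scale range so that
  # every smartphone dependency measure shares the same 0-1 metric.
  (sample_mean - scale_min) / (scale_max - scale_min)
}

# Hypothetical example: a sample mean of 33 on a measure scored from 10 to 60
pomp(33, 10, 60)  # returns 0.46
```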

Calculation of Effect Sizes

We computed all effect sizes in terms of standardized mean differences, where negative values denoted that smartphone presence groups showed decreased cognitive performance levels compared to smartphone absence groups. Conversely, positive effect sizes denoted that smartphone presence groups outperformed smartphone absence groups. Depending on whether the study employed a between-subject research design or a within-subject research design, two possible calculations of the effect size were used.

First, for studies employing a between-subject research design, J. Cohen’s (1988) standardized d was used as the effect size index, where d = \frac{M_{\text{present}} - M_{\text{absent}}}{SD_{\text{pooled}}}. The pooled standard deviation was calculated as per the formula provided by J. Cohen (1988), SD_{\text{pooled}} = \sqrt{\frac{(n_{\text{present}} - 1)SD_{\text{present}}^{2} + (n_{\text{absent}} - 1)SD_{\text{absent}}^{2}}{n_{\text{present}} + n_{\text{absent}} - 2}}. Second, for studies employing a within-subject research design, Becker’s (1988) standardized d was used as the effect size index. Becker’s d, an estimate of the population effect size comparable to the previously described Cohen’s d effect size, was calculated as d = \frac{M_{\text{present}} - M_{\text{absent}}}{SD_{\text{absent}}}. Sampling variances for both between-subject and within-subject studies were calculated using two formulas as per Morris and DeShon (2002) that enabled us to calculate design-specific estimates of sampling variance corrected to be comparable across both between-subject and within-subject designs. Between-subject sampling variance was calculated using the formula v = \left(\frac{1}{\tilde{n}}\right)\left(\frac{N - 2}{N - 4}\right)\left(1 + \tilde{n}d^{2}\right) - \frac{d^{2}}{\left[c(N - 2)\right]^{2}}, where N refers to the combined number of observations in both groups, \tilde{n} = (n_{\text{present}} \times n_{\text{absent}})/(n_{\text{present}} + n_{\text{absent}}), and c(N - 2) = 1 - \frac{3}{4(N - 2) - 1} is the small-sample bias correction, with N - 2 = n_{\text{present}} + n_{\text{absent}} - 2. Within-subject sampling variance was calculated using the formula v = \left(\frac{1}{n}\right)\left(\frac{n - 1}{n - 3}\right)\left(1 + nd^{2}\right) - \frac{d^{2}}{\left[c(n - 1)\right]^{2}}, where n refers to the number of paired observations in a within-subject design and c(n - 1) = 1 - \frac{3}{4(n - 1) - 1}.
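As a rough illustration of these formulas, the R sketch below computes d and its sampling variance for each design; the function and argument names are illustrative and this is not the authors' analysis code.

```r
# Between-subject: Cohen's (1988) d with pooled SD, and the Morris & DeShon
# (2002) independent-groups sampling variance as reconstructed above.
cohen_d_between <- function(m_present, m_absent, sd_present, sd_absent,
                            n_present, n_absent) {
  sd_pooled <- sqrt(((n_present - 1) * sd_present^2 +
                     (n_absent  - 1) * sd_absent^2) /
                    (n_present + n_absent - 2))
  d       <- (m_present - m_absent) / sd_pooled
  N       <- n_present + n_absent
  n_tilde <- (n_present * n_absent) / N
  c_df    <- 1 - 3 / (4 * (N - 2) - 1)   # small-sample bias correction c(N - 2)
  v       <- (1 / n_tilde) * ((N - 2) / (N - 4)) * (1 + n_tilde * d^2) - d^2 / c_df^2
  c(d = d, v = v)
}

# Within-subject: Becker's (1988) d standardized on the smartphone-absent SD,
# with the corresponding repeated-measures sampling variance.
becker_d_within <- function(m_present, m_absent, sd_absent, n_pairs) {
  d    <- (m_present - m_absent) / sd_absent
  c_df <- 1 - 3 / (4 * (n_pairs - 1) - 1)   # bias correction c(n - 1)
  v    <- (1 / n_pairs) * ((n_pairs - 1) / (n_pairs - 3)) * (1 + n_pairs * d^2) -
          d^2 / c_df^2
  c(d = d, v = v)
}
```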

In one case (Hartmann et al., 2020), performance on cognition tasks was measured in terms of hit/miss counts in each condition. As such, directly calculating a standardized mean difference from means and standard deviations was not appropriate. Instead, we used the 2 (hit vs. miss) × 2 (smartphone present vs. smartphone absent) frequencies to calculate d and v using the logit method.
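For context, one standard logit-based conversion from a 2 × 2 frequency table to d and v is sketched below; whether this matches the calculator's exact implementation is an assumption, and the cell counts are hypothetical rather than data from Hartmann et al. (2020).

```r
# Hedged sketch: convert a 2 x 2 table (hit/miss by smartphone present/absent)
# to d and v via the logit transformation of the log odds ratio.
logit_d <- function(hit_present, miss_present, hit_absent, miss_absent) {
  log_or   <- log((hit_present * miss_absent) / (miss_present * hit_absent))
  v_log_or <- 1 / hit_present + 1 / miss_present + 1 / hit_absent + 1 / miss_absent
  d <- log_or * sqrt(3) / pi   # rescale the log odds ratio to the d metric
  v <- v_log_or * 3 / pi^2     # rescale its variance accordingly
  c(d = d, v = v)
}

# Hypothetical counts for illustration only
logit_d(hit_present = 40, miss_present = 20, hit_absent = 45, miss_absent = 15)
```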

Meta-Analytic Approach

The assumption that effect sizes were independent of each other was violated since some samples provided multiple effect sizes by completing more than one measure (e.g., Aguila, 2019; Stone, 2020). Additionally, in some studies, control groups were compared with multiple experimental groups (e.g., Canale et al., 2019). As such, the assumption that samples were independent was also violated. Therefore, in accordance with Fernández-Castilla et al. (2020), four-level meta-analyses were conducted with effect sizes nested within samples and samples nested within studies.

In order to ascertain the validity of the current meta-analysis, we tested for publication bias using Egger’s regression test (J. A. Sterne & Egger, 2001), where sampling variances of the effect sizes were included as a moderator of the effect size. A significant slope estimate would imply that bias was present. The methodology has been supported as a valid method to detect publication bias (L. Lin et al., 2018; Rodgers & Pustejovsky, 2021). In addition, to ascertain if publication status was a significant factor in predicting the effect size of the relationship between smartphone presence and cognition, dummy-coded publication status (peer reviewed = 1, nonpeer reviewed = 0) was entered as a predictor of the magnitude of the effect size. Following that, we examined whether various moderators would impact the magnitude of group differences by conducting metaregressions with groups dummy-coded. The methodological moderators included use of cover story (no = 0, yes = 1), smartphone placement in smartphone presence condition (face down = 0, face up = 1), type of smartphone separation in smartphone absent condition (kept by participant = 0, not kept by participant = 1, and kept by participant = 0, both locations = 1), smartphone sound in smartphone presence condition (complete silence = 0, possible sound = 1), smartphone sound in smartphone absent condition (complete silence = 0, possible sound = 1), and cognitive task demand (nonexecutive functioning task = 0, executive functioning task = 1). Sample moderators included age mean, gender proportion, location of sample (non-United States = 0, United States = 1), trait smartphone dependency, and trait FoMO. Since meta-analyses violate the assumption that all studies come from a single population (Schwarzer et al., 2015), all analyses were conducted using random- and mixed-effects models.
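A minimal sketch of how such a four-level model, the Egger-type test, and a metaregression can be specified with metafor's rma.mv() is shown below; the data frame dat and its columns (study_id, sample_id, es_id, d, v, cover_story) are placeholders, not the authors' actual variable names.

```r
library(metafor)

# Four-level random-effects model: sampling error at level 1, effect sizes
# (es_id) nested within samples (sample_id) nested within studies (study_id),
# estimated with restricted maximum likelihood (REML).
overall <- rma.mv(yi = d, V = v,
                  random = ~ 1 | study_id / sample_id / es_id,
                  data = dat, method = "REML")
summary(overall)

# Egger-type test: sampling variance entered as a moderator of effect size;
# a significant slope would suggest small-study/publication bias.
egger <- rma.mv(yi = d, V = v, mods = ~ v,
                random = ~ 1 | study_id / sample_id / es_id,
                data = dat, method = "REML")

# Example metaregression with a dummy-coded moderator (cover story: yes = 1)
mod_cover <- rma.mv(yi = d, V = v, mods = ~ cover_story,
                    random = ~ 1 | study_id / sample_id / es_id,
                    data = dat, method = "REML")
```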

Results

Descriptives of Eligible Records

All eligible records were made available from 2014 to 2023 across 12 countries (Australia, Canada, China, France, Germany, Italy, Japan, Malaysia, the Netherlands, Switzerland, the United Kingdom, the United States) in four continents (Australia, Asia, Europe, North America), with 30.30% of studies being conducted in the United States. It is noteworthy that 24 records employed a between-subject research design (Aguila, 2019; Bailey, 2018; Beijer, 2020, Boila et al., 2020; Canale et al., 2019; de Werd, 2020; Hartmann et al., 2020; Ito & Kawahara, 2017; Johannes et al., 2018, 2019; Koessmeier & Büttner, 2022; Linares & Sellier, 2021; Lyngs, 2017; Niu et al., 2022; Pellowe et al., 2015; Ruiz Pardo, 2022; Ruiz Pardo & Minda, 2022; Skowronek et al., 2023; Stahl, 2018; Stone, 2020; Tanil & Yong, 2020; Tarantino, 2019; Thornton et al., 2014; Ward et al., 2017), while only five records (Bianchi Bosch, 2018; Kaminske et al., 2022; Liu, Dempo, Kimura, et al., 2022; Liu, Dempo, & Shinohara, 2022; Nakagawa et al., 2022) employed a within-subject research design. The total sample size was 5,866 unique individuals (Mdn = 91, range = 25–520).2

The mean ages of the samples ranged from 18.54 to 27.59 years (Mdn = 21.16), and the female proportion of the samples ranged from 36% to 94% female (Mdn = 64%). The mean smartphone dependency scores across samples (where available, n = 29) varied from .20 to .70 (Mdn = .44), and the mean FoMO scores varied from 2.28 to 2.71 (Mdn = 2.29).

Overall Meta-Analytic Effect Size

Overall, based on 166 effect sizes drawn from 53 samples from 33 studies, we found that the meta-analytic effect of smartphone presence on cognition was nonsignificant (d = −0.02, SE = 0.02, 95% CI [−0.06, 0.01], p = .246; Figure 2). Cochran’s Q test for heterogeneity, Q(165) = 76.42, p ≥ .999, and the estimates of between-study variance (T^{2}_{\text{Level 2}} = 0.00, T^{2}_{\text{Level 3}} = 0.00, T^{2}_{\text{Level 4}} = 0.00) both indicated that there was no significant heterogeneity within the data. However, we retained our original models given that they were informed by the data structure of the current meta-analysis.

Figure 2

Summary of Included Effect Sizes
Note. The diamond shape denotes the overall meta-analytic effect size. The position of each square represents the effect size derived from each sample for each task. The size of each square indicates the sample size. The whiskers denote 95% confidence intervals. OSPAN = operation span task; SPT = single-probe task; RSPM = Raven’s Standard Progressive Matrices; CI = confidence interval; LCDT = Luminance Change Detection Task; SST = Stop-Signal Task; RAT = Remote Associates Test; CBS-PA = Cambridge Brain Sciences Task–Paired Associates; CBS-SS = Cambridge Brain Sciences Task–Spatial Span; CBS-FM = Cambridge Brain Sciences Task–Feature Match; DCT = Digit Cancellation Task; ACT = Additive Cancellation Task; VST = Visual Search Task; CBS-R = Cambridge Brain Sciences Task–Rotations; WST = Word Search Task; TMT-B = Trail Making Test Part B; ST = Stroop task; DCT(S) = Digit Cancellation Task Simple Version; HB = Heuristic-and-Biases Tasks; CBS-DS = Cambridge Brain Sciences Task–Digit Span; CBS-GR = Cambridge Brain Sciences Task–Grammatical Reasoning; LRT = Letter Recognition Task; TMT-A = Trail Making Test Part A; CRT = Cognitive Reflection Test; CBS-ML = Cambridge Brain Sciences Task–Monkey Ladder; GNG = Go/No-Go Task; CBS-OOO = Cambridge Brain Sciences Task–Odd One Out; CBS-P = Cambridge Brain Sciences Task–Polygon; PMT = Prospective Memory Task; CBS-TS = Cambridge Brain Sciences Task–Token Search; CBS-SP = Cambridge Brain Sciences Task–Spatial Planning; DDT = delay discounting task; d2-R = d2-R Concentration and Attention Test; CSTMT = Complex Short-Term Memory Task; WRAT-4 = Wide Range Achievement Test 4X; SC = sentence comprehension; IGT = Iowa Gambling Task; MA = mathematics; SP = spelling; UCMRT = University of California Matrix Reasoning Task; RE = random effect.

Tests of Bias

We sought to ascertain if the current meta-analysis faced any threats to validity in the form of bias. Two separate sets of analyses were conducted, one containing all records included in the meta-analysis (166 effect sizes, 53 samples, 33 studies), and one with only the published records (i.e., peer-reviewed journal articles) consisting of 95 effect sizes (38 samples, 22 studies). We plotted a funnel plot to visually observe the distribution of effect sizes in relation to design-specific estimates of sampling variance to reflect the precision of the effect size estimates (Morris & DeShon, 2002; Figure 3, top left and right panels). Thereafter, we used Egger’s test (J. A. C. Sterne & Egger, 2005), which accounts for the nested structure of the data, to statistically test for funnel plot asymmetry. The analysis including only published journal articles indicated that there was evidence of significant bias among published works (b = −0.61, SE = 0.29, 95% CI [−1.18, −0.05], p = .034). However, the analysis including all records included in the meta-analysis suggested no evidence for significant publication bias (b = −0.44, SE = 0.23, 95% CI [−0.90, 0.02], p = .062), indicating that the current meta-analysis, as a whole, shows no evidence of bias.

Figure 3

Plots Assessing Evidence of Publication Bias
Note. The top left panel denotes the funnel plot for all records, while the top right panel denotes the funnel plot for only published records. Black dots indicate effect sizes. In the funnel plot (top left and right panels), gray regions represent nonsignificance (.05 ≤ p < 1.00), while white regions represent statistical significance (p < .05).

In addition, we conducted a metaregression of 166 effect sizes (53 samples, 33 studies) with publication status dummy-coded as 1 and 0, respectively, for peer-reviewed journal articles and records that were not peer reviewed (i.e., conference posters, theses, and dissertations). Results indicated that the effect of smartphone presence on cognition did not differ across the two types of records (b = −0.06, SE = 0.04, 95% CI [−0.13, 0.02], p = .137).

Last, we conducted a metaregression of all 166 effect sizes (53 samples, 33 studies) to examine whether the reported effect of smartphone presence on cognition changed over time (operationalized by year of publication/completion). We found that the magnitude of the effect did not show a significant change over time (b = 0.01, SE = 0.01, 95% CI [−0.01, 0.02], p = .423; Figure 3, bottom).

Methodological Moderation Analyses

We conducted metaregression analyses to explore whether each methodological moderator had an effect on the relationship between smartphone presence and cognition (Table 1). None of the methodological moderators (use of cover story, smartphone placement in presence condition, type of smartphone separation in absence condition, smartphone sound in presence condition, smartphone sound in absence condition, cognitive task demand) showed statistical significance.

Table 1

Metaregressions of Methodological Moderators

| Moderator | m | n | k | Q | df | b | SE | 95% CI | p |
|---|---|---|---|---|---|---|---|---|---|
| Cover story (use vs. nonuse) | 33 | 53 | 166 | 1.01 | 1 | −0.04 | 0.04 | [−0.13, 0.04] | .315 |
| Smartphone placement in presence condition (face up vs. face down) | 27 | 47 | 148 | 1.09 | 1 | −0.04 | 0.04 | [−0.13, 0.04] | .296 |
| Type of smartphone separation in absence condition | 33 | 53 | 166 | 0.21 | 2 | | | | .902 |
| Not kept by the participant versus kept by the participant | | | | | | 0.02 | 0.05 | [−0.07, 0.11] | .652 |
| Both locations versus kept by the participant | | | | | | 0.01 | 0.04 | [−0.08, 0.09] | .820 |
| Smartphone sound in presence condition (possible sound vs. complete silence) | 27 | 45 | 143 | 0.24 | 1 | −0.04 | 0.08 | [−0.18, 0.11] | .624 |
| Smartphone sound in absence condition (possible sound vs. complete silence) | 26 | 46 | 154 | 2.41 | 1 | −0.10 | 0.06 | [−0.22, 0.02] | .120 |
| Cognitive task demand (EF task vs. non-EF task) | 33 | 53 | 166 | 0.82 | 1 | 0.03 | 0.04 | [−0.04, 0.11] | .366 |

Note. m = number of studies; n = number of samples; k = number of effect sizes; b = metaregression slope coefficient; SE = standard error of b; 95% CI = 95% confidence interval of b; EF = executive functions.

Due to the distinct outcomes measured by the various cognitive tasks included in the meta-analysis, metaregression analyses were also conducted on cognitive outcomes that were used by at least five studies in the current meta-analysis. Only one task, the OSPAN task, had sufficient data for such analyses. Results for the OSPAN task (22 effect sizes from 20 samples, eight studies) indicated that smartphone presence had a significant negative effect on cognition (d = −0.13, SE = 0.06, 95% CI [−0.25, −0.02], p = .023). However, follow-up probing of the data based only on OSPAN tasks revealed that the funnel plot was asymmetrical and Egger’s test was statistically significant (b = −1.38, SE = 0.69, 95% CI [−2.73, −0.03], p = .045), suggesting significant bias in the OSPAN data. The bias-adjusted effect size based on Egger’s test was smaller and statistically nonsignificant (d = −0.04, SE = 0.06, 95% CI [−0.17, 0.10], p = .580).

Sample Moderation Analyses

We conducted metaregressions to examine various sample characteristics as moderators. Sample mean levels of trait smartphone dependency, sample mean levels of trait FoMO, sample mean age, sample gender proportions, and sample location did not moderate the effect of smartphone presence on cognition (Table 2). Given that data on participants’ average daily hours of screentime were only available for four unique samples (k = 8), we did not conduct a metaregression with screentime as a moderating variable.

Table 2

Metaregressions of Sample Moderators

| Moderator | m | n | k | b | SE | 95% CI LL | 95% CI UL | p |
|---|---|---|---|---|---|---|---|---|
| Smartphone dependency | 20 | 30 | 107 | −0.00 | 0.00 | −0.003 | 0.001 | .415 |
| Fear of missing out | 5 | 5 | 10 | −0.60 | 0.67 | −1.91 | 0.72 | .377 |
| Age | 31 | 51 | 159 | −0.01 | 0.01 | −0.03 | 0.01 | .282 |
| Female proportion | 32 | 52 | 162 | 0.36 | 0.20 | −0.03 | 0.75 | .071 |
| Location (United States vs. non-United States) | 33 | 53 | 166 | −0.01 | 0.04 | −0.10 | 0.08 | .799 |

Note. m = number of studies; n = number of samples; k = number of effect sizes; b = metaregression slope coefficient; SE = standard error of b; 95% CI = 95% confidence interval of b; LL = lower limit; UL = upper limit.

Discussion

Research findings on the effect of mere presence of smartphone on cognitive functions have been mixed, with several recent studies failing to replicate the adverse cognitive effects of smartphone presence. To reconcile the mixed findings, we conducted the first meta-analysis to quantitatively examine the effect of smartphone presence on cognitive outcomes assessed by measures of executive functioning, intelligence, sustained attention, and decision making. Overall, the meta-analytic effect of smartphone presence on cognitive outcomes—synthesized from 166 effect sizes drawn from 53 samples from 33 studies—did not reach statistical significance (d = −0.02, SE = 0.02, 95% CI [−0.06, 0.01], p = .246; Figure 2). In addition, we found no evidence that publication bias or record type (i.e., peer-reviewed journal articles vs. conference posters, unpublished theses, and dissertations) affected the magnitude of the reported effect of smartphone presence on cognition. In clarifying more nuanced features of this association, while taking methodological and other exogenous variations (e.g., time, sample characteristics) into account, several key findings warrant further discussion.

First, considering that various methods for manipulating smartphone presence and assessing cognitive outcomes were utilized across studies, we conducted subgroup analyses to explore whether these methodological inconsistencies influenced the pooled results. Our results showed that none of the manipulation-related differences, including use of cover story, smartphone placement in the presence condition, type of smartphone separation in the absence condition, smartphone sound in the presence condition, and smartphone sound in the absence condition, significantly moderated the reported effect of smartphone presence on cognition. In line with this, additional subgroup analyses based on methodology found that none of the subgroups exhibited significant effect sizes, suggesting that pooled estimates of the different methodological subgroups were consistent with the overall, nonsignificant results. While nascent research hints that the physical accessibility of smartphones facilitates their usage, which can impair cognitive learning and performance (Briskin et al., 2018; Sorrels, 2018), this result suggests that methodological variance in the manipulation of smartphone presence (e.g., whether the smartphone was in the same room or not) has a negligible effect on the salience of smartphones as a potential distractor from goal-relevant cognitive tasks.

Second, given that the saliency of smartphones as a potential distractor could also be modulated by numerous demographic factors and smartphone use patterns, we examined whether the effect of smartphone presence on cognition was moderated by sample characteristics such as sample mean age, sample gender proportions, and trait smartphone dependency. We found that none of these sample characteristics significantly moderated the effect of smartphone presence on cognitive outcomes. Even in studies whose samples reported high smartphone dependency, the mere presence of a smartphone did not reliably impair cognitive outcomes. In light of these null findings, methodological variance, cognitive task demand, and sample characteristics are unlikely to be the underlying reasons for the mixed findings in the existing literature on the effect of mere presence of smartphone on cognitive outcomes.

The null meta-analytic effect and the absence of moderating effects of methodological variance, cognitive task demand, and sample characteristics in the current meta-analysis may suggest that significant findings in earlier studies could be driven by expectancy effects that cause experimenter bias during data collection (Innes & Fraser, 1971; D. Rosenthal, 1963). This is plausible since a double-blind procedure was not implemented in the majority of the existing studies. Thus, future studies should consider implementing a double-blind procedure when examining the effect of mere presence of smartphone on cognitive functions. Moreover, given that most of the existing studies that found a significant effect of mere presence of smartphone on cognitive functions employed relatively small samples (Ns < 100), it is also plausible that the significant findings in earlier studies were driven by Type I error inflated by small sample sizes (Loken & Gelman, 2017; B. O. Turner et al., 2018). In addition, it is important to note that the null findings observed in the current meta-analysis could be due to the possibility that smartphones may not be salient enough to exert a gravitational pull on the orientation of attention that could affect cognitive performance. Indeed, a recent eye-tracking study by Koessmeier and Büttner (2022) found that people rarely looked at their smartphones when engaging with cognitive tasks. They also found that smartphone presence increased smartphone vigilance but did not negatively impact task performance. Thus, it is plausible that the prevailing theoretical rationale for why the mere presence of smartphone affects cognitive functions may not be as robust as previously assumed.

It is also noteworthy that findings from the present study should not be generalized to existing studies on the effect of mere presence of smartphones on social relationships and communication (Misra et al., 2016; Przybylski & Weinstein, 2013), since the theoretical mechanisms underlying the effect of mere presence of smartphones on cognitive functions are likely to be distinct from those underlying its effect on social relationships and communication. However, given that several recent studies have failed to replicate the effect of mere presence of smartphones on social relationships and communication (Crowley et al., 2018; Linares & Sellier, 2021; Roaché et al., 2020), it is also important to conduct more replications of the effect of mere presence of smartphone on social outcomes.

Last, in order to examine time-related changes in effect sizes, we conducted additional metaregression analyses to explore whether the reported effect of smartphone presence on cognitive outcomes differed across time, indexed by year of publication. Overall, we did not find any evidence that the magnitude of effects increased over time. While smartphones have become increasingly present and disruptive in various aspects of our daily life, such as sleep, academic activities, socializing, and even driving or walking (Busch & McCarthy, 2021; Grant et al., 2019; Yang et al., 2020), our subgroup analyses may rule out the possibility of a heightened susceptibility to cognitive disruption from the mere presence of smartphones over time.

Throughout our meta-analysis, we also observed that almost all experimental studies on mere presence of smartphone assessed executive functions using scores from singular tasks (e.g., OSPAN, Stroop task). This approach to operationalize cognitive functioning is problematic as studies have consistently reported low intercorrelations among tasks measuring executive functions, driven by the involvement of nonexecutive abilities in most executive function tasks (Friedman & Miyake, 2004; Hartanto, Chua et al., 2023; Miyake et al., 2000). For example, performance on the Stroop task is not purely driven by the ability to resist the distraction from the word name but also involves phonological recoding, associative facilitation, visual recognition, and color discrimination abilities (MacLeod, 1991; Mahon et al., 2012; Naish, 1980). The inherent task impurity problem in most executive function tasks may obscure or inflate the true effect of mere presence of smartphone on executive functions. Thus, future studies examining the effect of mere presence of smartphone on cognitive outcomes should address the task impurity problem by extracting the common variance among multiple tasks assessing similar domains of executive functions to exclude idiosyncratic nonexecutive function processes (Miyake et al., 2000). In addition, due to the task impurity problem and the heterogeneity of the available tasks, we did not conduct a metaregression analysis by different domains of cognitive functions in this meta-analysis, which is a recognized limitation. To facilitate more precise analyses of the effect of the mere presence of smartphones across cognitive domains in future meta-analyses, we encourage future research to standardize cognitive tasks for more effective cross-study comparisons.

In conclusion, the current meta-analysis did not find strong evidence that the mere presence of smartphones impairs cognitive outcomes. Our findings suggest that it is still premature to conclude that complete isolation from smartphones could improve academic and work productivity (Chadi et al., 2022). While the distracting effect of smartphones’ notifications has been well documented (Agrawal et al., 2017; Stothart et al., 2015; L. D. Turner et al., 2015), there is little evidence that the mere presence of smartphones affects cognitive processing.

Supplemental Material

https://doi.org/10.1037/tmb0000123.supp


Received October 12, 2022
Revision received September 11, 2023
Accepted September 24, 2023