1 Introduction

For a long time, research on everyday technology use was grounded in the core belief that human behavior is “predominantly planned, and performed intentionally” (Clements and Boyle 2018:34). In other words, researchers operated on the (more or less implicit) assumption that human beings use digital technologies in order to accomplish certain premeditated goals: “In interacting with a computer, a user has specific goals and subgoals in mind. The user initiates the interaction by giving the computer commands that are directed toward accomplishing those goals” (Proctor and Vu 2008:44). In the past few years, however, we have seen an increasing awareness that everyday technology use is often impulsive and unthinking, and that it sometimes draws attention away from other activities. In public discourse, there has been a strong push to understand such technology use in terms of addiction. The purpose of this article, however, is to displace this concept with the notion of habit. In making this argument, the article discusses three contemporary approaches to technology use: neurobehaviorism, dual-systems theory, and phenomenology. The first approach is concerned with addiction; the latter two are concerned with habits. Each section of the article introduces one of these approaches, provides an account of its key assumptions, and proceeds to discuss its implications. To enhance readability, I here provide a schematic preview of the end result (see Table 1):

Table 1 A summary of the presently discussed approaches to technology use

| Approach | View of distraction | Key concept | Proposed remedy |
|---|---|---|---|
| Neurobehaviorism | Biological (dopamine-driven) | Addiction | Digital detox |
| Dual-systems theory | Mental (System 1 overpowering System 2) | Habit vs. self-regulation | Increased conscious control |
| Phenomenology | Embodied (sedimented bodily habits) | Habit (good and bad) | Cultivation of habits; restructuring of the environment |

2 Neurobehaviorism: Distraction as biological

In the past five years or so, a host of publications have debated the damaging effects of technology on the human brain. While most of these publications proceed with scant regard for theoretical reflexivity (or even cohesion), they rely on an understanding of technology use that is arguably best characterized as neurobehaviorism. One of the best-known examples of this discourse is Adam Alter’s bestseller Irresistible (2017), but many countries now have a prominent medical spokesperson who warns the public about the neurobiological dangers of technology: The UK has Susan Greenfield, Germany has Manfred Spitzer, France has Michel Desmurget, and here in Denmark we have Imran Rashid. While the claims made by these authors tend to circulate in mainstream media and popular science rather than in peer-reviewed research (Bell et al. 2015), they have steadily gained enough traction to affect most aspects of contemporary society. As Laidlaw et al. (2019) note, examples of this negative discourse can now be found circulating in academic conferences, in teacher professional development settings, and even among stakeholders in ministries of education. It is therefore high time to take a critical look at this strand of thought. To properly assess its arguments, however, we must first understand its theoretical underpinnings.

Neurobehaviorist publications almost always refer to the radical behaviorism of B.F. Skinner, who famously argued that an organism’s behavior is determined by the consequences of its actions: If a given behavior triggers a rewarding outcome, known as a ‘reinforcement’ in behaviorist terminology, this raises the probability of the behavior in question being repeated (Skinner 1963). Skinner called this process of learning an association between a behavior and its outcome ‘operant conditioning’, and his proof of concept was the operant conditioning chamber, later known as a Skinner box (Iversen 1992), in which he could shape the behavior of lab rats. Whenever these lab rats pressed a lever, the chamber dispensed reinforcers in the form of food pellets. The lab rats quickly learned to associate this behavior (pressing the lever) with a pleasurable outcome (the release of food pellets), which increased the response rate of that behavior. Skinner thereby demonstrated that organismic behavior can be shaped through the systematic use of reinforcers, and because such behavior was ultimately elicited by what he called the contingencies of reinforcement, Skinner saw no need to consider the intentionality of the organism itself (Skinner 1969). In fact, he was quite eager to exorcise the ‘specter of teleology’ from psychology (Skinner 1963). While adopting the basic tenets of this approach, neurobehaviorism adds two twists to the behaviorist framework.

The first twist is that neurobehaviorism shifts the focus from overt organismic behavior to the underlying neurobiological drivers of such behavior. As Robert Lustig argues in The Hacking of the American Mind (2017), “All our behaviors are manifestations of the biochemistry that drives them” (p. 14). This brings us to another cornerstone of the neurobehaviorist literature: James Olds and Peter Milner’s (1954) classic study, which showed that lab rats with electrodes implanted in specific areas of the brain would press a lever to self-administer electrical stimulation more than 7500 times in the course of 12 hours. This remarkable response rate led the authors to speculate that they had identified a “system within the brain whose peculiar function is to produce a rewarding effect on behavior” (p. 426). In this case, the reinforcer was not an external entity like food pellets, but an internal entity originating in the so-called pleasure centers of the brain (Olds 1956). Building on this idea, neurobehaviorists tend to focus on the neurotransmitter dopamine, which is said to produce an “intense flush of pleasure” upon being released in the brain (Alter 2017:71). As a result, dopamine acts as a powerful reinforcer that drives our frequent technology use: Seeing that we have received a ‘like’ on Facebook, it is argued, triggers a burst of dopamine in the brain, which is why it feels so good. Over time, these ecstatic jolts establish an operant conditioning in which we continuously come back for more. We thereby get caught up in short-term, dopamine-driven feedback loops, or ‘dopamine loops’ for short.
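
Before turning to the second twist, it may help to render this feedback-loop logic schematically. The following toy simulation is my own gloss on the neurobehaviorist story, not a model drawn from its literature; the function name, entities, and parameters are purely illustrative assumptions:

```python
import random

def dopamine_loop(opportunities=200, learning_rate=0.1):
    """Toy rendering of the neurobehaviorist 'dopamine loop': each time a
    check of the phone is 'rewarded' (e.g., by a new like), the probability
    of checking again rises. All numbers are illustrative, not empirical."""
    check_probability = 0.05   # near-random baseline behavior
    reward_probability = 0.5   # chance that a check yields a 'like'
    checks = 0
    for _ in range(opportunities):
        if random.random() < check_probability:       # the user checks the phone
            checks += 1
            if random.random() < reward_probability:  # a like triggers 'dopamine'
                # Reinforcement strengthens the behavior, with diminishing
                # returns as the probability approaches 1
                check_probability += learning_rate * (1 - check_probability)
    return checks, check_probability

checks, final_probability = dopamine_loop()
print(f"Checks: {checks}; final response probability: {final_probability:.2f}")
```

Note that the user’s intentions appear nowhere in this sketch: on the neurobehaviorist account, the rising response rate is fully determined by the contingencies of reinforcement, which is precisely why its proponents see no need to consider them.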

The second twist is that neurobehaviorism replaces the behaviorist concept of operant conditioning with the psychopathological concept of addiction. According to this approach, frequent technology use is not just a learned behavior, it is a behavioral addiction. How exactly neurobehaviorists understand tech addiction, however, remains remarkably unclear. In some passages, Alter (2017) describes technology use as a balm for psychological distress: “The substance or behavior itself isn’t addictive until we learn to use it as a salve for our psychological troubles” (p. 73). Here, technology use is best understood as a coping mechanism that offers temporary relief from negative life conditions such as loneliness, depression, or anxiety. In other passages, however, we see a peculiar inversion of causality in which technology use goes from being a symptom of preexisting mental health issues to being a cause of problems in an otherwise healthy population. This idea builds on the aforementioned notion of dopamine loops and holds that modern technologies are designed to addict (all of) us. In fact, this assumption is emblematic of neurobehaviorism, which claims that digital technologies are akin to drugs in their effects on the human brain. Writing in the New York Post, Nicholas Kardaras (2016) has compared screens to ‘digital heroin’ that turns kids into junkies compulsively chasing the next high. Here, technology use is both equated with and expressed in the idiom of drug use: Technology use triggers a ‘hit’ or a ‘rush’ of dopamine, which ultimately makes us addicted.

What can be done about tech addiction? Extending the drug terminology, it has become increasingly popular to advocate digital detoxes. To find out exactly what this advice entails, Syvertsen and Enli (2019) have recently analyzed 20 texts that promote the practice. Based on this analysis, they argue that these texts rest on a binary opposition between mediated life and authentic existence: Technology use is perceived as dangerous and unhealthy, while abstention is seen as a pathway to freedom and self-control. Indeed, many of these publications have titles that recommend (or even instruct) readers to ‘unplug’, ‘log off’, and so forth. What is remarkable about the digital detox literature, however, is that while the concept of detox implies a removal of toxic substances from a living organism (i.e., a detoxification), digital detox seldom means permanent removal, but is mostly recommended as “a short time ‘cleansing’ – a parallel to a juice fast or a colon cleanse” (n.p.). If digital technologies are toxic and addictive, however, why not agitate for a complete rejection of these devices? This inconsistency becomes especially jarring when the discussion turns to children. Kardaras (2016) advocates limiting children’s access to screens, but if he is correct in comparing screens to ‘digital heroin’, parents surely should not settle for even limited access to screens (which would be akin to saying, “Just don’t let the kids get too much heroin”).

The neurobehaviorist literature is equally inconsistent regarding the role it ascribes to human agency. Recall that operant conditioning occurs in a closed environment that has been meticulously designed to foster certain behaviors. As a result, the organism’s behavior is ultimately controlled by outside forces. In the case of technology use, such outside forces are said to be the tech companies of Silicon Valley. “We’re the rats, and Facebook likes are the reward”, as Alexis Madrigal (2013) succinctly puts it. And just as the lab rats’ behavior was ultimately determined by scientists, our behavior is thought to be determined by software developers and data scientists. This leads to concerned talk of our minds being ‘hacked’ (Lustig 2017) or ‘hijacked’ (Harris 2016), along with equally dramatic verbs that imply hostile takeover. Before giving in to such moral panic, however, we may want to interrogate the adequacy of caged rats as a metaphor for human-technology relations. In Olds and Milner’s experiment, the neurobiological response was directly triggered by physical stimulation of the brain. Extrapolating this logic to human-technology relations (“technology use causes dopamine release in the brain”) means ignoring all questions of human intentionality, interpretation, and appropriation. This is not just an abstract philosophical complaint, but points to a tension in the neurobehaviorist literature itself: On the one hand, it portrays humans as helpless victims of addictive technology; on the other hand, it constantly offers advice on how to shape our technology use. Neurobehaviorism, in other words, continues to be haunted by the ‘specter of teleology’.

The main shortcoming of neurobehaviorism is not its inconsistency, however, but its pathologizing rhetoric: Neurobehaviorism deliberately employs the psychopathological concept of addiction as a generic description that applies to the general population. Against the objection that the label of addiction cannot apply to a majority of the population, Alter (2017) argues that, “A symptom affecting so many people is no less real or acceptable simply because it becomes a new norm” (p. 23). But while addressing our collective technology use in terms of addiction is certainly an impactful way of problematizing our uncannily close relationship to digital technologies, it is also a direct way of pathologizing widespread and evidently ‘normal’ human behaviors (Billieux et al. 2015). Comparing a majority of the population to drug addicts is therefore cause for alarm: Not only does it blow the problem of technology-related behaviors in the general population out of proportion; it also trivializes the suffering of actual addicts. As a recent UNICEF (2017) report put it: “Careless use of addiction terminology downplays the very real consequences of the behaviour for those who are seriously affected, while overstating the risk of harm for those who at times engage in somewhat excessive, but ultimately not harmful, use of digital technology” (p. 115). A group of prominent addiction researchers has therefore suggested that we should be more careful about employing the label of addiction: It is imperative to reserve this term for the small minority of people who experience significant functional impairments due to technology use (Kardefelt-Winther et al. 2017).

Fortunately, there may be a surprisingly straightforward solution to all these problems: Simply replace the concept of addiction with that of habit. The rest of this article will argue that nothing would be lost (and much would be gained) by making this terminological switch. Indeed, the two concepts are already conflated in much of the neurobehaviorist literature. As an example, Alter (2017) writes that, “It’s far easier to prevent people from developing addictions in the first place than it is to correct existing bad habits” (p. 258, my emphasis). Such rhetorical slippage is highly revealing. It is also telling that a book devoted to addiction sets aside a full chapter to discuss “Habits and Architecture”, references Charles Duhigg’s The Power of Habit (2012), explores habit-breaking in some detail, and uses nail-biting as an example of this process. Fully and explicitly embracing the concept of habit would allow us to move beyond the pathologizing rhetoric of addiction. But how are we to discuss such habits in a theoretically helpful way?

3 Intermezzo: The fallacy of defining habits as secretly purposeful

When technology researchers include the concept of habit in their studies, they often treat habits simply as faster versions of conscious choices. A striking example of such intellectualism can be found in Clements and Boyle’s (2018) study of compulsive technology use, which they define as use that is “unintentional, uncontrollable, effortless, and efficient” (p. 36). In contrast to such compulsive use, the authors define habitual technology use as use that is “intentional, planned and reasoned” (p. 36). J.L. Austin (1961) once remarked that ordinary language should have, if not the last word, then at least the first word in scientific studies (p. 185). With that in mind, it is noteworthy that Clements and Boyle’s (2018) conceptualization of habit sounds a lot like (is indeed defined as) a planned action, while their definition of compulsive use is remarkably similar to the everyday definition of habit. Another example of this intellectualist fallacy can be found in Seo and Ray’s (2019) study of habit and addiction in the use of social networking sites (SNS). These authors define habit as an automatic activation of goal-directed behavior that has a “positive relationship with a goal-congruent outcome” (p. 114). If we insist that habits have goal-congruent outcomes, however, we become unable to account for bad habits, which by definition have goal-incongruent outcomes. The consequence of defining habits as secretly purposeful (i.e., planned, goal-directed), in other words, is that we effectively exclude bad habits from our analyses. This is a problem. We need to acknowledge that habits may be distinct from (and not just faster than) conscious choices.

4 Dual-systems theory: Distraction as mental

Fortunately, there are other theoretical approaches that acknowledge the distinctive nature of habits. As an example, Turel and Qahri-Saremi (2016) argue that problematic SNS use cannot automatically be ascribed to addiction, because even if a problematic behavior is repeated, this does not necessarily make it an addiction: It may just be a bad habit. To explain problematic SNS use without lapsing into the pathologizing rhetoric of tech addiction, these authors draw on dual-systems theory, which is increasingly used in the study of everyday technology use (e.g., Osatuyi and Turel 2018; Schnauber-Stockmann et al. 2018; Soror et al. 2015; Turel and Qahri-Saremi 2016, 2018). While the roots of dual-systems theory can be traced all the way back to Plato, Freud, and William James, its modern proponents include Jonathan Evans, Keith Stanovich, and Daniel Kahneman, who famously gave the idea its popular breakthrough in Thinking, Fast and Slow (2011). The main assumption of dual-systems theory is that human behavior is guided by two structurally different cognitive systems that operate largely independently of one another and constantly compete for behavioral control: One is automatic, unconscious, and fast, while the other is controlled, conscious, and slow. Following Stanovich (1999), these two systems are often called System 1 and System 2.

Briefly put, System 1 consists of our habits, impulses, and desires. This system is also known as the reflexive system, because it contains behaviors that are automatic, impulsive, and directly activated by stimuli operating outside of conscious awareness (Osatuyi and Turel 2018). Phylogenetically speaking, System 1 is said to be the older of the two systems (Soror et al. 2015). The system reflects our tendency to repeat behaviors that have led to desirable outcomes in the past and operates quickly, instinctively, and effortlessly. This means that the processes of System 1 place minimal demands on cognitive processing. It also means that these processes are not characterized by a sense of voluntary control. In contrast, System 2 consists of more deliberate and effortful behaviors. This system is also known as the reflective system, because it is characterized as being conscious and controlled. System 2 is slower and more rational than System 1, but it is also more demanding and consumes more mental resources. It is responsible for setting higher-order goals and for monitoring, evaluating, and regulating current behavior in accordance with those goals (Soror et al. 2015). Accordingly, System 2 functions as a faculty of self-regulation that inhibits and overrides the automatic responses of System 1 (Turel and Qahri-Saremi 2018). The distinction between the ‘automatic happenings’ of System 1 and the ‘controlled doings’ of System 2 constitutes the backbone of dual-systems theory.

According to dual-systems theory, whether one engages in (or avoids) a given behavior is determined by a cognitive tug-of-war between these two systems, which in the case of technology use takes the shape of a battle between the habits of System 1 and the self-regulation of System 2 (Soror et al. 2015; Osatuyi and Turel 2018). In this portrayal, problematic technology use is generated by System 1 and inhibited (if not blocked outright) by System 2. Any negative consequences of technology use are therefore ascribed to a power imbalance between the two systems in favor of System 1 (e.g., Osatuyi and Turel 2018; Turel and Qahri-Saremi 2016, 2018). Turel and Qahri-Saremi (2018) state the issue in clear terms: “When system 1 is ‘strong’ and system 2 is ‘weak’, people engage in unplanned and often disadvantageous behaviors” (p. 3052). This statement perfectly encapsulates the guiding assumption of dual-systems theory as currently used in research on everyday technology use, namely that the habits of System 1 should be kept in check by the self-regulation of System 2 to the largest possible extent. As a result of this guiding assumption, dual-systems theory has a tendency to descend into dichotomized discussions of the bad influence of System 1 (the metaphorical devil on one shoulder) versus the good influence of System 2 (the angel on the other).
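
The quoted claim lends itself to a schematic rendering. The following sketch is my own gloss on the tug-of-war, not a model taken from Turel and Qahri-Saremi; the function name, parameters, and thresholds are illustrative assumptions:

```python
def dual_systems_choice(habit_strength: float, self_regulation: float,
                        cue_present: bool = True) -> str:
    """Toy rendering of the dual-systems tug-of-war: System 1 pushes an
    impulsive response whenever a cue is present; System 2 inhibits it.
    Whichever system is 'stronger' wins (all values are illustrative)."""
    system1 = habit_strength if cue_present else 0.0  # fast, automatic, cue-driven
    system2 = self_regulation                         # slow, controlled, goal-driven
    return "check Facebook" if system1 > system2 else "stay on task"

# A 'strong' System 1 paired with a 'weak' System 2 yields the
# unplanned, disadvantageous behavior the theory predicts:
print(dual_systems_choice(habit_strength=0.8, self_regulation=0.3))  # check Facebook
print(dual_systems_choice(habit_strength=0.8, self_regulation=0.9))  # stay on task
```

Notice that the outcome is settled entirely by how the two parameters are set in advance, a feature that foreshadows the circularity discussed below.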

Unfortunately, these assumptions often lead researchers into tautology. Let us look at an instructive example. Turel and Qahri-Saremi (2016) set out to explore problematic SNS use, which they define as “unplanned, typically impulsive SNS use instances that are less advantageous to users” (p. 1088). Based on dual-systems theory, they hypothesize that (1) problematic SNS use, such as using Facebook during class time, is negatively associated with academic performance, (2) problematic SNS use is positively associated with System 1, and (3) problematic SNS use is negatively associated with System 2. While there is empirical support for all three hypotheses, the hypotheses are effectively circular. The first expresses the idea that using Facebook during class time is distracting and impairs one’s performance. But we already knew that: Being distracted literally means having one’s attention drawn away (dis-tracted) from a task, which logically impairs one’s performance on that task (for an extended version of this argument, see Aagaard 2019). Regarding the other two hypotheses, to the extent that problematic SNS use is defined as impulsive, it is hardly surprising that such use is positively associated with the system responsible for generating impulsive behavior (hypothesis two) or negatively associated with the system responsible for inhibiting impulsive behavior (hypothesis three). Once again, these relationships are not empirical but conceptual, and they follow directly from the study’s premises: If our data somehow indicated that these variables were unrelated, we would not have discovered something new, we would simply have done something wrong.

The main shortcoming of dual-systems theory is not its circular reasoning, however, but its mentalist implications: Dual-systems theory operates on the unspoken assumption that human behavior ought to be consciously controlled at all times and that problematic behavior is caused by critical lapses in such reflectivity. This is why it attributes problematic technology use to ‘dysfunctional’ inhibitory control and ‘deficient’ self-regulation (see e.g., Soror et al. 2015; Osatuyi and Turel 2018). This is also why it talks about technology users ‘succumbing’ to habit (Zhou et al. 2018). Conscious control is celebrated, while habits are demonized. In a paradoxical inversion of our earlier critique, dual-systems theory thereby becomes unable to account for good habits. The practical implication of this mentalism is that dual-systems theorists end up advocating interventions that increase conscious control. Against this idea, phenomenologists have argued that increased conscious control can in fact diminish one’s sense of freedom by disturbing the flow of unreflective action (De Haan et al. 2015). Most everyday comportment is and should remain habitual. As A.N. Whitehead (1911) once argued: “It is a profoundly erroneous truism, repeated by all copy-books and by eminent people making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of operations which we can perform without thinking about them” (p. 45f). Accordingly, the most relevant distinction may not be ‘habits versus conscious control’, but ‘good habits versus bad habits’.

5 Phenomenology: Distraction as embodied

We need an approach that lets us discuss bad habits without condemning habits tout court. A more helpful way of discussing habits can be found in the phenomenological tradition, where Maurice Merleau-Ponty (2002) showed that much intelligent human behavior is unmediated by thought and occurs on a prereflective bodily level: “Consciousness is in the first place not a matter of ‘I think that’ but of ‘I can’”, as he put it (p. 159). Merleau-Ponty discussed such behavior in terms of habit, which can be understood as an immediate and prereflective inclination to act in certain ways due to familiarity with that type of situation. Through practice, our bodies become so familiar with performing certain actions that this performance eventually happens outside of conscious awareness (aptly described by Gail Weiss (2002) as ‘going on autopilot’). Merleau-Ponty used the geological concept of sedimentation to describe how, over time, such repeated actions take root in the body: The more we perform certain actions, the more they become part of what we just do. The point is not to deny that we sometimes make conscious deliberations about what to do, but to emphasize that most of our waking moments are spent in the prereflective mode of habit. We are creatures of habit. This certainly includes technology use, and Merleau-Ponty claimed that learning to use new technologies literally changes our existence (p. 166). He gave various examples of this phenomenon: a woman automatically dodging doorframes when wearing a feathered hat, a blind man skillfully using his stick to navigate, and an experienced driver effortlessly parking his car.
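
At the risk of flattening Merleau-Ponty’s point into a form he would himself resist, the bare accumulation logic of sedimentation can also be sketched schematically; the function name, rate, and threshold below are my own illustrative assumptions, not anything found in the phenomenological literature:

```python
def sediment(habit_strength: float, repetitions: int, rate: float = 0.05) -> float:
    """Toy model of sedimentation: each repetition deposits a further
    'layer' of familiarity, with diminishing returns, until the action
    sinks below the threshold of conscious awareness."""
    for _ in range(repetitions):
        habit_strength += rate * (1 - habit_strength)
    return habit_strength

strength = sediment(habit_strength=0.1, repetitions=60)
print(f"Habit strength after 60 repetitions: {strength:.2f}")
print("prereflective ('going on autopilot')" if strength > 0.9 else "still deliberate")
```

What the sketch cannot capture, of course, is the phenomenological point that such sedimentation is lived through the body rather than computed over it.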

In the phenomenological tradition that followed Merleau-Ponty, the concept of habit has often been replaced by the notion of skill. This terminological shift is largely attributable to Hubert Dreyfus, who explains that he deliberately substitutes ‘habit’ with ‘skill’ to avoid any connotations of rigid behavior (Dreyfus 2004). Dreyfus famously co-developed the five-stage model of skill acquisition, which argues that the more proficient a person becomes at an activity like playing chess or driving a car, the less they need to rely on conscious deliberation (Dreyfus and Dreyfus 1986). In the foreword to David Sudnow’s Ways of the Hand, Dreyfus (2001) argues that the book’s detailed account of learning to improvise jazz on the piano is a paradigm case of such skill acquisition: The novice starts by slowly, painstakingly, and consciously hunting for notes on the keyboard, locating each individual note by sight. With accumulating experience, however, this situation gradually changes as the piano’s configuration of keys becomes embodied in the pianist’s hands. Finally, after years of practice, the pianist reaches an intuitive stage in which there is “no longer an I that plans, not even a mind that aims ahead, but a jazz hand that knows at each moment how to reach for the music” (p. x). At this level of expertise, the pianist’s activity is governed by an anonymous, prepersonal agency, and conscious decision-making becomes irrelevant and even harmful to performance. Dreyfus calls this phenomenon ‘egoless agency’.

Inspired by the Dreyfus model of skill acquisition, phenomenologists have since gone on to discuss the egoless agency of skills like those involved in sports, music, and dancing. Indeed, contemporary phenomenology is rife with knowing bodies and thinking hands (see e.g., Radman 2012, 2013). As a result of this development, phenomenologists tend to celebrate the self-transcendence of egoless agency (or flow, as it is also called). Although this Dreyfusian skill-story certainly constitutes an eloquent and forceful argument against the mentalism of cognitive psychology, uncritically replacing habit with skill risks glossing over a crucial distinction: While it is undoubtedly true that technology use requires skills, it also seems true that our intuitive and skillful use of digital devices sometimes makes us do things that we do not intend to do (Aagaard 2020). The concept of skill involves elements of training and mastery that make it ill-equipped to address this phenomenon. Indeed, John Dewey (2007), another great philosopher of habit, specifically warned us against discussing habits in terms of skills, because by doing so we risk envisioning habits as mere technical abilities that we call into action at will. Dewey preferred to discuss bad habits, because in this case it becomes more obvious that habits grip us and transcend our conscious decision-making. “A bad habit”, he argued, “suggests an inherent tendency to action and also a hold, command over us. It makes us do things we are ashamed of, things which we tell ourselves we prefer not to do” (p. 24).

To avoid romanticizing egoless agency, it may therefore be helpful to return to Merleau-Ponty’s original terminology of habit. Like skills, however, such tech habits have to be acquired. In the context of technology use, Robert Rosenberger (2009) has described the process of learning to use a computer: The novice is forced to concentrate on each individual keystroke, while the experienced user barely notices the computer itself and instead focuses on whatever it is being used to do. Furthermore, if one routinely uses the computer for specific purposes like accessing certain websites, more fine-grained habits develop. This is relevant in the context of impulsive technology use, where students describe being drawn to distraction: They often experience ‘habitual distraction’ in the form of a prereflective attraction towards frequently visited, but educationally irrelevant websites like Facebook (Aagaard 2015). “It’s just F, A, and Enter”, as one student put it. Due to deeply sedimented tech habits that have been built, maintained, and solidified in the course of their everyday lives, the action of logging onto Facebook has become embodied in these students’ hands and fingers and now occurs habitually. While nothing in the laptop determines that it be used for distractive purposes, students are habitually inclined to do so. Succumbing to this temptation is frustratingly easy since it occurs independently of conscious decision-making, and students describe how they sometimes close their laptops to resist this magnetically attractive affordance (Aagaard 2018).

Phenomenology shows us that tech habits are not bad per se, but need to be cultivated. While this argument does not tell us how such cultivation is to be done, it does suggest helpful ways of framing and understanding the issue: In the case of habitual distraction, students curtail the problem neither through digital detoxes (i.e., abandoning laptops) nor through conscious control (i.e., reflective inhibition), but through intelligently restructuring their environment in a way that minimizes temptation (i.e., closing laptops). This mirrors research on ‘situational strategies for self-control’, which shows that it is helpful to choose or change situations in ways that minimize the need for conscious control (Duckworth et al. 2016). Phenomenology thereby helps us embrace the ‘specter of teleology’: Humans are intentional beings, and although tech habits can be performed with a remarkable degree of automaticity, they ultimately spring from purposeful and meaningful activity that cannot be reduced to neurobiological effects in the brain. Including this intentional dimension in our analyses, however, does not absolve tech companies of responsibility: We must not become so focused on individual experiences that we neglect the fact that we live in an age of surveillance capitalism, where human attention has become a highly marketized, financialized, and sought-after commodity (Zuboff 2019). As an astute reviewer noted, habit-talk should not overshadow the fact that technology companies design systems in ways that potentiate certain user responses. In other words, we still need to discuss that (and how) technologies are designed to be habit-forming (Eyal 2014). Even in this context, however, the terminology of habit adds more nuance to the debate than simplistic neurobehaviorist notions of addiction.

6 Conclusion

The neurobehaviorist rhetoric of tech addiction is no longer confined to popular science (Guitton 2020; Melo et al. 2020). Aaron and Lipton (2018) have recently claimed that “students’ attraction to device use is driven in much the same way an addict is driven to their drug of choice […] Students need their ‘fix’” (p. 374). Even skeptics like Panova and Carbonell (2018), who criticize frivolous use of the term addiction, begrudgingly acknowledge that “there is no other accepted term for a behavior that manifests similar problems with a lack of self-control, attachment, high use, and problematic consequences” (p. 256). The whole purpose of this paper, however, has been to argue that such a term does in fact exist: habit. The phenomenological idea of tech habits sketched here manifests many of the same experiential qualities that the concept of tech addiction tries to capture: Habits are prereflective and occur ‘below’ the level of conscious decision-making, so to speak; they can be difficult to change; and some of our habits (our bad habits or vices, as they are also called) go against our best intentions. At the same time, the concept of habit lessens the implied severity of the consequences: Bad habits may be irritating, but they are not debilitating. Finally, the concept of habit helpfully severs all linguistic ties to pushers, fixes, junkies, and detoxes. By creating a conceptual space for discussing the downsides of technology use without resorting to pathologization, the concept of habit replaces moral panic with a more grounded discussion of our collective technology use.

Svend Brinkmann (2014) has argued that the vocabularies we use to make sense of distressful experiences (our languages of suffering, as he calls them) do not just copy the world, but allow us to cope with the world in different ways. Based on this idea, he criticizes how a diagnostic language is slowly crowding out other religious, existential, moral, and political understandings and action possibilities. While not one of Brinkmann’s examples (in fact, he approvingly cites Alter’s book elsewhere [Brinkmann 2019]), tech addiction is a perfect example of such pathologization: By extending the diagnostic language to everyday technology use, we end up excluding other useful understandings and action possibilities. In the case of addiction, the neurobehaviorist story builds on a peculiarly chemical notion of freedom and slavery: Whenever something is pleasant, it is also inherently dangerous. This applies to everything from technology to Argentinean tango and Harry Potter books (Panova and Carbonell 2018). The only way to remain free (of addiction) is therefore to steer clear of all the world’s pleasantries. Indeed, the latest Silicon Valley trend is supposedly a practice called ‘dopamine fasting’, in which people deliberately avoid anything pleasurable (Cocozza 2019). In the case of habits, on the other hand, freedom does not consist in the absence of such external influences, but in the practice of coping with them. Whereas tech addiction is inherently bad and must be eliminated, tech habits can be trained and cultivated (Vallor 2016). The phenomenological concept of habit thus opens up a space for the formation of good technology habits, or what we might call digital Bildung.

Saying that a given technology is addictive or that one is addicted to it may be more convenient than saying that it is habit-forming or that one has developed a bad habit of using it at inappropriate times, but these linguistic advantages cannot offset the drawbacks listed here. In conclusion, I therefore suggest that we curb the medicalized concept of addiction and start discussing the experienced lack of self-control in some cases of technology use in terms of habits.