Gaia Bernstein is a law professor, co-director of the Institute for Privacy Protection, and co-director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. Gaia's research has been featured in media outlets including the New York Times, Forbes, ABC News, and Psychology Today.
Below, Gaia shares 5 key insights from her new book, Unwired: Gaining Control Over Addictive Technologies. Listen to the audio version—read by Gaia herself—in the Next Big Idea App.
1. The illusion of control.
We know we spend a huge amount of time on our screens, but most of us, if asked whether we want to spend five hours a day on our phones, would probably say no. So how did this happen? Sometime around 2009, things started changing. Smartphones and social networks became popular, and we felt in control because we were making many small choices, like texting on the go or choosing one app over another. Each of these small choices became a habit, and once we got going, we started spending much more time on our devices than we ever meant to.
In a way, we were like the frog in the famous fable about the frog and the boiling water. According to the fable, if you throw a frog into boiling water, it will jump out. However, if you put it into tepid water and then slowly bring the water to boil, it won’t realize the danger and will be cooked to death.
When I got my first smartphone, I started using my work commute to text my children’s babysitters and check work emails. It took only a month to get to a point where I rarely took my eyes off my phone throughout the commute. I was often surprised to find out as I got off the train that a student or a colleague had been in the same car throughout the ride and I hadn’t seen them.
Like me, many others around that time gradually changed how they used their devices. We made each decision separately and we never thought about the whole picture. Around 2009, we voluntarily stepped into tepid water under the illusion that we were the choosers, that we were in control, and we adopted a very different way of living. We endorsed a choice many of us would have rejected had we reflected on it. By the time the water was boiling and we’d realized how much time we were spending on screens, it was too late to jump out; our lives at that point revolved around screens.
It was no coincidence that we ended up spending so much time online: we were lured in by the tech companies, which covertly influenced our decision-making. They used manipulative designs, tempting us through likes, autoplay, and unexpected rewards to stay on longer. We were under the illusion of control because we thought we were the choosers, but we never made the autonomous decision to dedicate so much of our time to our screens.
2. Our self-blame history is repeating itself.
Around 2015, I started noticing how screens changed everyday life. The moment I realized that something was seriously wrong was when I went with my family to a friend’s house for dinner. They had an 11-year-old son who was playing with his parents’ iPad. When we came in, his parents told him to stop playing and give them the iPad, but the boy got very angry and refused. He then tried to snatch it back from his mom, and regressed to toddler-type wailing to get it back. Throughout the evening he used every manipulation in his power to get the iPad back. His parents felt increasingly desperate, and suddenly I remembered a time some years earlier in my parents’ house. My father, a heavy smoker, was diagnosed with emphysema. We hoped he would realize he had to stop smoking, but he refused. We tried to convince him, we took his cigarettes away, but exactly like my friends’ son, my father was uncharacteristically angry, and did everything he could to get his cigarette pack back.
“They shift responsibility away from themselves by arguing that we are choosing to consume their products, so we are responsible for the consequences.”
I then spent years studying what the battles against tobacco and the battle to protect privacy can teach us about how to stop technology addiction. Looking into the past helped me understand an important paradox: despite all we know about how tech companies try to get us addicted, why do we still feel personally responsible and keep blaming ourselves? We do this because the tech industry is using an old strategy, one used by other powerful industries before it. They shift responsibility away from themselves by arguing that we are choosing to consume their products, so we are responsible for the consequences.
For example, when smokers and their families sued cigarette companies over lung cancer, they kept losing in court. They lost because the tobacco industry convinced the courts that smokers choose to smoke and are therefore responsible for their own lung cancer. The tech industry is already doing just that, and more: beyond arguing that users choose to use their products, it also gives users tools like parental controls or Apple's Screen Time, which tell us how much time we spend online, to convince us that we are the choosers and therefore responsible if we end up spending more time online.
What is important is that we can also learn from the past about where this argument breaks down. One weak spot is evidence of intent to addict. When we learned that the tobacco companies knew nicotine was addictive and used it to hook smokers, courts finally held the tobacco industry responsible, and smokers started winning cases. That evidence is already available for tech addiction: whistleblowers have reported that tech companies purposefully made their users addicted to prolong their time online. Another vulnerable spot in the self-responsibility argument is children. We do not see children as responsible choice-makers, so we are more likely to take legal action to protect them. For example, to fight obesity, some schools are mandated to weigh children and send their BMI to their parents. Children already are, and will continue to be, the starting point for action against technology addiction.
3. Self-help is a trap, not the solution.
When we realized how many hours we spend online, we naturally turned to psychologists for solutions, and many psychologists were alarmed by what they saw. They suggested solutions involving self-control and self-help. There are many self-help techniques out there, including taking a digital detox, becoming intentional about how we spend our time online, or simply taking screen breaks. Despite all this effort, however, things are not changing. If anything, they are getting worse, and our screen time keeps creeping up.
So why do we fail? The first reason is that these self-help measures place the responsibility on us to fight an unwinnable battle. We are up against technology companies that hire experienced psychologists to help them design technologies that keep us online longer. For example, Twitter, Facebook, and Instagram use Infinite Scroll, a design in which the user never reaches the end of the page and therefore never stops. Netflix and YouTube use autoplay: when one video ends, the next one starts immediately. Again, there is no natural place to stop; they have taken away our stopping cues.
The second reason we fail is that technology companies have not only created designs that get us hooked, they have also created self-help tools designed not to reduce our time online but to redirect responsibility to us. They call these self-help tools digital well-being tools. Many of them, like Apple's Screen Time, let us set time limits on certain apps, but we can easily override those limits. They also give us the option to make our phones less irresistible by turning the screen gray. But tech companies never designed these tools to succeed, because their business model relies on maximizing the time users spend online.
None of these features change the phone's built-in default settings in a way that effectively limits our time or dims the device's glowing allure. Nor do they target the products' most addictive features, such as Infinite Scroll or swipe-to-refresh, which entices us with bursts of dopamine as we keep swiping to check for more likes or comments. Focusing on self-help is a trap: it leads us to fight the battle in the wrong place, within ourselves and inside our homes, instead of taking it to the public sphere.
4. The ground is burning.
So, is excessive screen time harmful? The science wars on this topic are ongoing, not because of a lack of evidence, but because the monetary stakes are high. Evidence of the harm threatens the tech industry’s business model, which relies on extending our time online to increase advertising revenues.
A large body of findings, especially from psychology studies and brain imaging research, warns of a public health crisis for children, pointing to impacts on cognitive development, mental health, and attention span. Whistleblowers have also reported that for years tech companies had internal research indicating that their products could harm children, but chose to ignore it in order to protect revenue growth.
“Faced with all we know now about the harms of excessive screen time for children, it is no longer justifiable to wait.”
Despite all of this evidence, education policy still promotes maximizing the incorporation of technology into the classroom (a laptop for every child), a trend that intensified during the pandemic. Teachers now incorporate addictive games like Minecraft and Roblox into the curriculum and post their lectures on social networks like TikTok.
As adults, we have a moral obligation to act and not to abandon a whole generation of children to face the technology industry's abusive designs alone. This is a generation that has already spent more than a decade, including an entire pandemic, in front of screens.
To move forward, we first need to end the science wars. The tech science wars are not the first of their kind: there were fierce scientific debates about whether cigarettes and junk food harm consumers. Historically, science wars ended when respected professional establishments or governmental organizations endorsed a stand.
History can teach us a lot about how long science wars can drag on. Despite all we know now about the harms of smoking, it is remarkable that the tobacco science wars lasted decades. The first major scientific studies on the harms of smoking came out in the 1950s, but it was only in 1964 that the Surgeon General announced that smoking is a major health hazard.
In the past, ending the science wars expedited the adoption of necessary legal protections, and it will likely do so again. Faced with all we know now about the harms of excessive screen time for children, it is no longer justifiable to wait. We have waited this long because we worried that a hasty determination could unnecessarily inhibit technological innovation. But right now, waiting is the riskier option, one that creates an irreversible future for our children.
5. From internal battles to collective action.
We have already spent years fighting with ourselves and with our children, blaming ourselves and them for failing to reduce screen time. But we have another option: to fight collectively against the real choice-makers, the technology industry. This does not mean going back to a screenless, unconnected world, but we can definitely move toward a better online-offline balance.
“We need to start a movement to battle technology overuse; we cannot rely on lawyers alone.”
We need to pressure technology companies to redesign their products, and a legal movement to regain control over our time is already underway. Parents and school systems are suing social media companies and game makers for getting children addicted and causing them mental harm. Legislators keep introducing bills to limit tech companies' ability to use addictive features; many of these bills focus on protecting children.
But redesigning technology isn't the only way. It's also about changing how we use technology in the spaces we occupy. In New York City, all three airports have iPads on every table. When my children and I wait for a flight or sit down for a meal, we cannot have a conversation because four iPads separate us; it's impossible to avoid scrolling or playing games. These airport spaces are designed for technology overuse. This is where all of us come in: we need to start a movement to battle technology overuse; we cannot rely on lawyers alone.
There are many examples of what we can all do to change how we use technology in shared spaces. Parents can push their children's schools to be more discriminating about teaching with technology or allowing cell phones in school, including during recess. Business owners can influence how much people use screens on their premises; restaurant owners, for example, can decide not to replace menus with QR codes, reducing the likelihood that diners will take out their phones during a meal. Online start-ups can opt for a different business model, not one based on advertising and user time but perhaps a pay-as-you-go model. Technology designers can question whether to build a feature whose main goal is to keep users online longer. We have many options for making a collective impact, and it is possible to change norms and businesses. Before the 1990s, we could never have imagined bars without cigarettes, yet that became our reality. The same could happen for a better-balanced tech future, once we decide to shift from internal battles to collective action.