SINCE broadband began its inexorable spread at the start of this millennium, Internet use has expanded at a staggering rate. Last year, the number of Internet users topped 2.4 billion — more than a third of all the humans on the planet. Time spent in front of a screen averaged 16 hours per week globally — double that in high-use countries — and much of it went to social media. We have changed how we interact. Are we also changing what we are?

We put that question to three people who have written extensively on the subject, and brought them together to discuss it with Serge Schmemann, the editor of this magazine. The participants: Susan Greenfield, professor of synaptic pharmacology at Oxford. She has written and spoken widely on the impact of new technology on users’ brains. Maria Popova, the curator behind Brain Pickings, a Web site of “eclectic interestingness.” She is also an M.I.T. Futures of Entertainment Fellow and writes for Wired and The Atlantic. Evgeny Morozov, the author of The Net Delusion: The Dark Side of Internet Freedom. He is a contributing editor to The New Republic.

Serge Schmemann: The question we are asking is: Are we being turned into cyborgs? Are new digital technologies changing us in a more profound and perhaps troubling way than any previous technological breakthrough?

Let me start with Baroness Greenfield. Susan, you’ve said some very scary things about the impact of the Internet not only on how we think, but on our brains. You have said that new technologies are invasive in a way that the printing press, say, or the electric light or television were not. What is so different?

Susan Greenfield: Can I first qualify this issue of “scary”? What I’m really trying to do is stimulate the debate and try to keep extreme black-or-white value judgments out of it. Whether people find it scary or not is a separate question.

 


Very broadly, I’d like to suggest that technologies up until now have been a means to an end. The printing press enabled you to read fiction and fact alike, giving you insight into the real world. A fridge enabled you to keep your food fresh longer. A car or a plane enabled you to travel farther and faster.

What concerns me is that the current technologies have been converted from being means to being ends. Instead of complementing or supplementing or enriching life in three dimensions, an alternative life in just two dimensions — stimulating only hearing and vision — seems to have become an end in and of itself. That’s the first difference.

The second is the sheer pervasiveness of these technologies compared with earlier ones. Whilst it’s one thing for someone like my mum, who’s 85 and a widow, to go onto Facebook for the first time — not that she’s done this, but I’d love for her to do it — to actually widen her circle and stimulate her brain, there are statistics coming out showing, for example, that over 50 percent of kids between 13 and 17 spend 30-plus hours a week recreationally in front of a screen.

So what concerns me is not the technology in itself, but the degree to which it has become a lifestyle in and of itself rather than a means to improving your life.

Schmemann: Maria, I’ve seen some amazing statistics on the time you spend online, on your tablet, and also on reading books and exercise. You seem to have about 30 hours to your day. Yet you’ve argued that the information diet works like any good diet: You shouldn’t think about denying yourself information, but rather about consuming more of the right stuff and developing healthy habits.

Has this worked for you? How do you filter what is good for you?

Maria Popova: Well, I don’t claim to have any sort of universal litmus test for what is valuable for culture at large; I can only speak for myself. It’s sort of odd to me that this personal journey of learning — which is what my site and my writing have been — has amassed quite a number of people who have tagged along for the ride. And a little caveat to those statistics: A large portion of that time is spent with analog stuff — mostly books, and a lot of them old, out-of-print books.

Which brings me to the cyborg question. My concern is really not — to Baroness Greenfield’s point — the degree to which technology is being used, but the way in which we use it.

 


The Web by and large is really well designed to help people find more of what they already know they’re looking for, and really poorly designed to help us discover that which we don’t yet know will interest us and hopefully even change the way we understand the world.

One reason for this is the enormous chronology bias in how the Web is organized. Think about any content management system or blogging platform — which, by the way, many mainstream media use as their online presence — be it WordPress or Tumblr, or even the Twitter and Facebook timelines: they’re wired for chronology, so that the latest floats to the top, and we infer that the latest is the most meaningful, most relevant, most significant. The older things, which may be timeless and still timely, get buried.
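
To make the bias concrete, here is a minimal sketch in Python (the posts and their fields are invented for illustration) of the default feed logic these platforms share: sort by timestamp, newest first, so that recency alone decides prominence.

    from datetime import datetime

    # Hypothetical posts: "published" is the timestamp; "title" hints at lasting value.
    posts = [
        {"title": "A 1945 essay, still relevant", "published": datetime(1945, 7, 1)},
        {"title": "This morning's update", "published": datetime(2013, 3, 1)},
    ]

    # The default ordering on most platforms: reverse chronology.
    # Recency, not significance, decides what floats to the top.
    feed = sorted(posts, key=lambda p: p["published"], reverse=True)
    for post in feed:
        print(post["published"].date(), post["title"])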

So a lot of what I do is to try to resurface these old things. Actually, in thinking about our conversation today, I came across a beautiful 1945 essay, “As We May Think,” published in The Atlantic by Vannevar Bush, who was the director of the Office of Scientific Research and Development. He talks about information overload and all these issues that, by the way, are not at all unique to our time. He envisions a device called the Memex, from “memory” and “index”; he talks about the compression of knowledge, how all of the Encyclopaedia Britannica could be put in the Memex, and how we would use what we would now call metadata and hyperlinks to retrieve different bits of information.

His point is that at the end of the day, all of these associative relations between different pieces of information, how they link to one another, are really in the mind of the user of the Memex, and can never be automated. While we can compress the information, that’s not enough, because you need to be able to consult it.

That’s something I think about a lot, this tendency to conflate information and knowledge. Ultimately, knowledge is an understanding of how different bits of information fit together. There’s an element of correlation and interpretation. While we can automate the retrieval of information, I don’t think we can ever automate the moral end of it: making sense of that information, and making sense of ourselves.

Schmemann: Evgeny, in your book you paint a fairly ominous picture of the Internet as something almost out of a Brave New World — a breeding ground, you say, not of activists but of “slacktivists” — people who think that clicking on a Facebook petition, for example, counts as a political act.

Do you think that technology has taken a dangerous turn?

Evgeny Morozov: I don’t think that any of the trends I’ve been writing about are the product of some inherent logic of technology, of the Internet itself. To a large extent they are the product of the political economy and the market conditions in which these platforms operate.

It just happens that sites like Facebook do want to have you clicking on new headlines and new photos and new news from your friends, in part because the more you click the more they get to learn about you; and the more they get to learn about you the better advertising they can sell.

In that sense, the Internet could be arranged very differently. It doesn’t have to be arranged this way. The combination of public/private funding and platforms we have at the moment makes it more likely that we’ll be clicking rather than, say, reading or getting deeper within one particular link.

As for the political aspect, I didn’t mean to paint a picture that is so dark. As a platform, as a combination of various technologies, the Internet does hold huge promise. Even Facebook can be used by activists for smart and strategic action.

 


The question is whether it will displace other forms of activism, and whether people will think they’re campaigning for something very important when they are in fact joining online groups that have very little relevance in the political world — and which their governments are actually very happy with. Many authoritarian governments I document in the book are perfectly O.K. with young people expressing discontent online, so long as it doesn’t spill out into the streets.

What I am campaigning against is people who think that somehow social media and Internet platforms can replace that whole process of creating and adjusting a strategy. They cannot. We have to be realistic about what these platforms can deliver, and once we are, I think we can use them to our advantage.

Schmemann: You have all spoken of the risk of misusing the new technology. Is not such apprehension about new technology as old as technology itself?

Popova: I think one of the most human tendencies is to want to have a concrete answer and a quantifiable measure of everything. And when we deal with degrees of abstraction, which is what any new technology in essence compels us to do, it can be very uncomfortable.

Not to cite historical materials too much, but it reminds me of another old essay, this one by Abraham Flexner from 1939, called “The Usefulness of Useless Knowledge.” He says, basically, that curiosity is what has driven the most important discoveries of science and inventions of technology. That is something very different from the notion of practical or useful knowledge, which is what we crave. We want a concrete answer to the question, but at the same time it is this sort of boundless curiosity that has driven most of the great scientists and inventors.

Morozov: It’s true that virtually all new technologies do trigger what sociologists would call moral panics, that there are a lot of people concerned with their possible political and social consequences, and that this has been true throughout the ages. So in that sense we are not living through unique or exceptional times.

That said, I don’t think you should take this too far. Surrounded by all of this advanced technology now, we tend to romanticize the past; we tend to say, “Well, a century ago or even 50 years ago, our life was completely technologically unmediated; we didn’t use technology to get things done and we were living in this nice environment where we had to do everything by ourselves.”

This is not true. If you trace the history of mankind, our evolution has been mediated by technology, and without technology it’s not really obvious where we would be. So I think we have always been cyborgs in this sense.

You know, anyone who wears glasses, in one sense or another, is a cyborg. And anyone who relies on technology in daily life to extend their human capacity is a cyborg as well. So I don’t think that there is anything to be feared from the very category of cyborg. We have always been cyborgs and always will be.

The question is, what are some of the areas of our life and of our existence that should not be technologically mediated? Our friendships and our sense of connectedness to other people — perhaps they can be mediated, but they have to be mediated in a very thoughtful and careful manner, because human relations are at stake. Perhaps we do have to be more critical of Facebook, but we have to be very careful not to criticize the whole idea of technological mediation. We only have to set limits on how far this mediation should go, and how exactly it should proceed.

Greenfield: I don’t fear the power of the technology and all the wonderful things it can do — these are irrefutable — but rather how it is being used by people. The human mind — and this is where I do part company with Evgeny — is not one that we could say has always been a cyborg. There is no evidence for that statement. Niels Bohr, the famous physicist, once admonished a student: “You’re not thinking; you’re just being logical.” I think it actually demeans human cognition to reduce it to computational approaches and mechanistic operations.

I’m worried about how that mind might be sidetracked, corrupted, underdeveloped — whatever word you want to use — by technology.

Human brains are exquisitely evolved to adapt to the environment in which they’re placed. It follows that if the environment is changing in an unprecedented way, then the changes in the brain will be unprecedented too. Every hour you spend sitting in front of a screen is an hour not talking to someone, not giving someone a hug, not having the sun on your face. So the fear I have is not of the technology per se, but of the way it’s used by the native mind.

Morozov: There are many things I could say in response. The choice to view everything through the perspective of the human brain is a normative choice that we could debate. I’m not sure that’s the right unit of analysis; it reflects a cultural tendency to reduce everything to neuroscience. Why, for example, should we be thinking about these technologies from the perspective of the user and not of the designer?

Greenfield: The user constitutes the bulk of our society. That’s why. They’re the consumers and they’re the people who…

Morozov: I know, but, for example, perhaps I want to spend more time thinking about how we should inspire designers to build better technologies. I don’t want to end up with ugly and dysfunctional technologies and shift the responsibility to the user…

Greenfield: But Evgeny, the current situation is constituted by the current users…

Morozov: …but it shouldn’t be left up to individuals to hide from all the ugly designs and dysfunctional links that Facebook and other platforms are throwing at them, right? It’s not just a matter of not visiting certain Web sites. It’s also trying to alert people in Silicon Valley and designers and…

Greenfield: Yes, they’ve got minds as well, so I wouldn’t disenfranchise them. Everything starts with the people. It’s about people, and how we’re living together and how we’re using the technology.

Popova: To return to the point about cyborgs — I think both of you touch on something really important here, which is this notion of what the human mind is supposed to do, or what it does. The notion of a cyborg is essentially an enhanced human. And I think a large portion of the cyborgism of today is algorithms.

So much of the fear is that rather than enhancing human cognition, they’re beginning to displace or replace meaningful human interactions.

With Google Street View’s new “neural network” artificial intelligence technology, for example, they’re able to tell whether an object is a house or a number. That’s something a human previously would have had to sort through the data to do.

That’s orders of magnitude more efficient than what we used to have. But the thing to remember is that these are concrete criteria. It’s a binary decision: Is this a house? Is this a number? As soon as it begins to bleed into the abstract — is this a beautiful house, is this a beautiful number? — we can’t trust an algorithm, or even hope that an algorithm would be able to do that.
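
As a toy caricature of that distinction (the function names and the digit test are invented for illustration, not how Street View’s system actually works), a concrete criterion is computable, while an abstract one gives an algorithm nothing well defined to evaluate:

    def looks_like_a_number(text: str) -> bool:
        # A concrete, binary criterion: the string either contains digits or it doesn't.
        return any(ch.isdigit() for ch in text)

    def is_beautiful(text: str) -> bool:
        # An abstract judgment: there is no agreed-upon feature to test for,
        # so there is nothing well defined here for an algorithm to compute.
        raise NotImplementedError("no concrete criterion for beauty")

    print(looks_like_a_number("221B"))          # True: a decidable question
    print(looks_like_a_number("Baker Street"))  # False: still decidable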

The fear that certain portions of the human mind will be replaced or displaced is very misguided. You guys have been talking a lot about this notion of choice. The future is choice, both for us as individuals — what we choose to engage with, what careers we take, whether we want to hire the designers in Silicon Valley to build better algorithms — and at a governmental and state level, where the choice is what kind of research gets funded.

My concern is that many of the biases in the way knowledge and information are organized on the Web are not necessarily in humanity’s best interest. When you think about so-called social curation — algorithms that recommend what to read based on what your friends are reading — there’s an obvious danger. Eli Pariser called it “The Filter Bubble” of information, and it’s not really broadening your horizons.

I think the role of whatever we want to call these people, information filters or curators or editors or something else, is to broaden the horizons of the human mind. The algorithmic Web can’t do that, because an algorithm can only work with existing data. It can only tell you what you might like, based on what you have liked.
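
In its simplest form, such a recommender scores unseen items purely by their similarity to what you have already liked. A toy sketch (the items and feature vectors are invented) shows why it can never reach outside the neighborhood of your existing tastes:

    # Toy content-based recommender: items are feature vectors, and a user's
    # "taste" is simply the average of the items they have already liked.
    liked = [(1.0, 0.0, 0.2), (0.9, 0.1, 0.0)]  # past likes (hypothetical features)
    candidates = {
        "more of the same": (0.95, 0.05, 0.1),
        "genuinely new": (0.0, 1.0, 0.9),
    }

    taste = tuple(sum(xs) / len(liked) for xs in zip(*liked))

    def similarity(item):
        # Dot product with the averaged taste vector: familiarity wins by design.
        return sum(a * b for a, b in zip(item, taste))

    # The unfamiliar item always scores lowest: a filter bubble in miniature.
    ranking = sorted(candidates, key=lambda name: similarity(candidates[name]), reverse=True)
    print(ranking)  # ['more of the same', 'genuinely new']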

Greenfield: Maria, you mentioned differentiating information from knowledge. Whilst we can easily define information, knowledge is a little bit more elusive. My own definition of knowledge, or true understanding, is seeing one thing in terms of other things. For example, Shakespeare’s “Out, out, brief candle” — you can only really understand that if you see the extinction of a candle in terms of the extinction of life.

In order to have knowledge, you need some kind of conceptual framework. You need a means for joining up the dots with the information or the facts that you’ve encountered throughout your life, not someone else’s life. Only when you can embed a fact or a person or an event within an ever wider framework do you understand it more deeply.

Speaking of Google, there’s a wonderful quote from its chairman, Eric Schmidt: “I still believe that sitting down and reading a book is the best way to really learn something. And I worry that we’re losing that.” So whilst we shouldn’t be too awed by the power of information, we should never, never confuse it with insight.

Popova: I completely agree. This conflation of information and insight is something I constantly worry about. Algorithms can help access information, but the insight we extract from it is really the fabric of our individual, lived human experience. This can never be replaced or automated.

Schmemann: Let me relate what you say to my own craft: journalism. We in what is now condescendingly called “the legacy media” live in terror of the Internet and of the sense that it is creating a kind of information anarchy. Our purpose in life has always been to apply what you have called experience, knowledge, judgment and order to what we call news.

Now the Internet and Facebook not only have assumed this function, but they create communities of people who share the same prejudice, the same ideology. To me, this may be a greater danger than shifting newspapers to a different platform.

Morozov: If it’s really happening, it is a danger. But I’m not convinced that it’s actually happening. The groups that are hanging out in bubbles — whether it’s the liberals in their bubble or the conservatives in their bubble — tend to venture out into sources that are the exact opposite of their ideological positions.

You actually see liberals checking Fox News, if only to know what the conservatives are thinking. And you’re seeing conservatives who venture into liberal sources, just to know what The New York Times is thinking. I think there is a danger in trying to imagine that those platforms — the Internet, television, newspapers — all exist in their own little worlds and don’t overlap.

Greenfield: I think a related issue, if you compare conventional print and broadcast media with the Internet, is speed. When you read a paper or a book, you have time for reflection. You think about it; you put it down to stare at the wall. Now what concerns me is the way people are instantly tweeting. As soon as they’re having some experience, some input, they’re tweeting, for fear that they may lose their identity if they don’t make some kind of instant response.

This is a concern for me, quite apart from the obvious problems of slander, unsubstantiated lies that people spread around and the want of regulation: that people no longer have the time for reflection.

Popova: If I may slightly counter that, I would argue that there’s actually an enormous surge of interest in a sort of time-shifted reading — delayed and immersive reading that leaves room for deeper processing. We’ve seen this with the rise of apps like Instapaper and Read It Later and long-form ventures like The Atavist and Byliner, which are essentially the opposite of the experience of the Web — an experience of constant stimulation and flux.

These tools allow you to save content and engage with it later in an environment that is controlled, that is ad-free, that is essentially stimulus-free, other than the actual stimulus in front of you.

Greenfield: I’ll just add one more thing, and that is the alarming increase in prescriptions for drugs used for attentional disorders in most Western countries over the last decade or two. Of course, it could be that doctors are prescribing more liberally, or that attentional illnesses are now being medicalized in a way they weren’t before. But my own view, especially for the younger brain, is this: The human brain has an evolutionary mandate to adapt to the environment in which it is placed. If you place such a brain in an environment that is fast-paced, loud and sensory-laden, then it will adapt to that. Why would the other, three-dimensional world be able to compete?

And whilst the apps that Maria mentions are fine for the more mature person, younger kids could be handling them in a very different way. My concern is that we are heading toward a short attention span and a premium on sensationalism rather than on abstract thought and deeper reflection.

Schmemann: Susan, having described all these dangers you perceive, do you think this is something that we as people or we as governments or we as institutions need to work on? Does this require regulation, or do you think the human spirit will sort it out?

Greenfield: My emphasis would be away from regulation, toward education. You can regulate ’til you’re blue in the face; it doesn’t make things any better. Although I sit in the House of Lords, as you know, and although we have had debates on all the various regulations by which we might ensure a more benign and beneficial society, what we really should be doing is thinking proactively about how, for the first time, we can shape an environment that stretches individuals to their true potential.

Schmemann: Picking up a bit where Susan left off: Evgeny, in your book you talk a lot about the political uses and misuses of the Internet. You talk about cyber-utopianism and Internet-centrism, and you call for cyber-realism. What does that mean?

Morozov : For me, Internet-centrism is a very negative term. By that I mean that many of our debates about important issues essentially start revolving around the question of the Internet, and we lose sight of the analytical depths that we need to be plumbing.

The problem in our cultural debate over the last decade or so is that a lot of people think the Internet has an answer to the problems that it generates. People use phrases like “This won’t work on the Internet,” or “This will break the Internet,” or “This is not how the Internet works.” I think this is a very dangerous attitude, because it tends to oversimplify things. Regulation is great when it comes to protecting our liberties and our freedoms — things like privacy or freedom of expression — or to curbing hate speech. No one is going to cancel those values just because we’re transitioning online.

But when it comes to things like curation, or whether we should have e-readers distributed in schools, this is not something that regulation can handle. This is where we will have to make normative choices and decisions about how we want to live.

Popova: I think for the most part I agree with Evgeny. I think much, if not all, of it comes down to how we choose to engage with these technologies. Immanuel Kant had three criteria for defining a human being. One was the technical predisposition, for manipulating things. The second was the pragmatic predisposition, to use other human beings and objects for one’s own purposes. I think these two can, to some degree, be automated, and we can use the tools of the so-called digital age to maximize and optimize them.

His third criterion was what he called moral predisposition, which is this idea of man treating himself and others according to principles of freedom and justice. I think that is where a lot of fear comes with the Digital Age — we begin to worry that perhaps we’re losing the moral predisposition or that it’s mutating or that it’s becoming outsourced to things outside of ourselves.

I don’t actually think this is a reasonable fear, because you can program an algorithm to give you news and information, and to analyze data in ways that are much more efficient than a human could. But I don’t believe you could ever program an algorithm for morality.

 

 
