The Infinite Retina

The Four Paradigms

What do we mean by "paradigm"? The evolution of the user experience is the easiest way to understand this. The First Paradigm covers the beginning of personal computing, when the interface was text-based. The Second Paradigm added graphics and color capabilities.

Movement and mobility were added with the Third Paradigm in the form of mobile phones. With Spatial Computing, the Fourth Paradigm, computing escapes from small screens and can be all around us. We define Spatial Computing as computing that humans, virtual beings, or robots move through. It includes what others define as ambient computing, ubiquitous computing, or mixed reality. We see this age as also inclusive of things like autonomous vehicles, delivery robots, and the component technologies that support this new kind of computing, whether laser sensors, Augmented Reality, Machine Learning, Computer Vision, or new kinds of displays that let people view and manipulate 3D-centric computing.

Seven industries will be dramatically upended by the 3D thinking that Spatial Computing brings, and we will go into more detail on the impacts on those industries in later chapters. Soon, all businesses will need to think in 3D, and this shift will be difficult for many. Why can we say that with confidence? Because the shift has already started.

The Oculus Quest, which shipped in May 2019, is an important product for one reason: it proves that a small, self-contained computer worn on the face can bring you fantastic visual worlds that you can move around in and interact with. It is proof that the world of Spatial Computing has arrived in a device that many can afford and that is easy to use. It's based on a Qualcomm chipset that is already years old as you read this, all for $400, and with controllers that can move freely in space and control it all! Now that our bodies can move within a digital landscape, the Spatial Computing age, that is, "the Fourth Paradigm of Personal Computing," has arrived, and it will only expand as new devices arrive this year and the next. What we are seeing now are the Spatial Computing counterparts of products like the Apple II or the IBM PC of earlier paradigms.

The Quest isn't alone as evidence that a total shift is underway. Mobile phones now have 3D sensors on both the front and the back, and brands are spending big bucks bringing Augmented Reality experiences to hundreds of millions of phones, seeing higher engagement and sales as a result.

Look at mobile phones and the popularity of Augmented Reality games like Minecraft Earth and Niantic's Harry Potter: Wizards Unite last summer. You play these while walking around the real world. A ton of new technologies are working in the background to make these 3D games possible.

How is this fourth paradigm of computing different? It is additive, including the technologies that came before it, even as it brings qualitatively different kinds of technology to our eyes and our world. Spatial Computing promises to do what the previous three paradigms failed to do: map computing to humans in a deep way. Elon Musk says it will increase our bandwidth; what he means is that our computers will be able to communicate with each of us, and we with our technology, far more efficiently than ever before. Paradigm one kicked it off by enabling us to converse with our own computer, one that was in our homes for the first time, through a keyboard. That brought a revolution, and the next paradigm promises a huge amount of change, but to understand the magnitude of that change, we should look back to the world that Steve Wozniak and Steve Jobs brought us way back in 1977.

Paradigm One – The Arrival of the Personal Computer

The Apple II is as important as the Oculus Quest, even though most people alive have never used one. The Quest brought a new kind of Spatial Computing device to the market, one biased toward Virtual Reality, where you can see only a virtual world while the real world is hidden from view. This $400 device was the first from Facebook's Oculus division that didn't require sensors to be placed around you and didn't require a wire from the headset to a PC. It was all self-contained and it powered on instantly, which dramatically increased usage numbers.

Where the Quest let everyday people think about owning a VR headset for the first time, the Apple II did the same for the personal computer back in the late 1970s. As the 1980s began, a lot of people, not just governments or big businesses, had access to computers. Four decades later, we are seeing the same trend with Spatial Computing.

The Apple II, and later the IBM PC, which ran Microsoft's DOS (Disk Operating System), brought the microprocessor into a consumer-focused product and enabled an entire industry to be built up around it. By the end of the 1980s, a personal computer sat on virtually every desk inside some corporations, and the names of those who made them, such as Michael Dell, are still well-known today.

We expect that Spatial Computing will similarly lift previously unknown people up to wealth and household name status.

Photo credit: Robert Scoble. Steve Wozniak stands next to an Apple II at the Computer History Museum. He is the last person to have held the design of an entire computer, from the processor to the memory to the display drivers, in his head. After him, teams took on the engineering of all those parts.

Speaking of Dell, he may not get the credit for starting the personal computing age, but he did make personal computing more accessible to the masses. He did this by licensing Microsoft's operating system and putting it in lower-cost machines, which let companies like his make many sales. We foresee that a similar pattern will probably play out in Spatial Computing. We expect Apple to enter the market in late 2020, but believe that its first products will be very controlling of privacy, tethered wirelessly to the iPhone, and much more expensive than, say, competitive products from Google and Facebook, not to mention those from the Chinese.

It is hard to see how early computers that could display only black-and-white text and numbers on a screen could be relevant to Spatial Computing, but the concepts they introduced, such as files, printing, and saving, remain relevant even today. The thing is that back in the beginning, there were no graphics; computers were way too big to hold (much less dream about putting in your pocket) and were much harder to use. Those of us who learned computing way back then remember having to look up commands in manuals to do something simple like print, and then having to type those commands onto the screen.

Spatial Computing, or computing you can move through, is joined by much improved voice technology. Companies like Otter.ai are building systems that understand our voices, and assistants like Apple's Siri, Amazon's Alexa, and Google's Assistant are waiting for us to speak to them.

We imagine that, soon, you will be able to just say something like "Hey Siri, can you print out my report?" and it will be so. This new world of convenience that is being ushered in is, in our opinion, far preferable to the days of code and command lines that we saw during the first days of personal computing!
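
To make that concrete, here is a toy Python sketch of the sort of intent matching an assistant might perform after speech has already been turned into text. The pattern, the handle() dispatcher, and the print_document() helper are all hypothetical and purely illustrative; this is not how Siri actually works.

```python
import re

# Toy intent matcher for a spoken request that has been transcribed to text.
# The regular expression, function names, and behavior are illustrative only.
PRINT_PATTERN = re.compile(r"\bprint(?: out)?(?: my)? (?P<doc>[\w ]+)", re.IGNORECASE)

def print_document(name: str) -> None:
    # Stand-in for handing the named document to a real print service.
    print(f"Sending '{name}' to the default printer...")

def handle(utterance: str) -> None:
    match = PRINT_PATTERN.search(utterance)
    if match:
        print_document(match.group("doc").strip())
    else:
        print("Sorry, I didn't understand that.")

handle("Hey Siri, can you print out my report?")
# Output: Sending 'report' to the default printer...
```

Real assistants replace the regular expression with learned language models, but the shape of the pipeline, utterance in, intent out, action performed, is the same.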

The first personal computers arrived with little excitement or fanfare. They had only a few applications to choose from: a basic recipe database and a couple of games. Plus, Apple founders Steve Wozniak and Steve Jobs were barely out of high school, and the first machines were popular mostly with engineers and technical people who had dreamed of owning their own personal computers. Those days remind us a lot of the current Virtual Reality market. At the time of writing this book, only a few million VR machines have been sold. In its first year, only a few thousand Apple IIs had sold. Sales were held back because the machines were fairly expensive (in today's dollars, they cost more than $10,000) and because they were hard to use; the first people using them had to memorize lots of text commands to type in.

It's funny to hear complaints that "there's not enough to do on an Oculus Quest," which we heard frequently last summer. Hello, you can play basketball with your friends in Rec Room! Try going back to 1977, when the first personal computers basically didn't do anything and, worse, you had to load the handful of apps that existed back then from tape, a process that took minutes and frequently didn't work at all. Wozniak told us his then-wife lost her entire thesis project on an Apple II, and even he couldn't figure out how to save it. Those problems aren't nearly as frequent in these days of automatic saving on cloud computing servers.

Why is the Apple II, along with its competitors and precursors, so important? What was really significant was that people like Dan Bricklin and Bob Frankston bought one. Bricklin, then a student at Harvard Business School, and Frankston saw the potential in the machine to do new things; in their case, they saw how it could help businesspeople. You might already know the rest of the story: they invented the digital spreadsheet.

The app they developed, VisiCalc, changed businesses forever and became the reason many people bought Apple IIs. "VisiCalc took 20 hours of work per week for some people, turned it out in 15 minutes, and let them become much more creative," Bricklin says.

The Apple II ended up selling fewer than six million units. The Apple II and the competitors that soon followed, such as the IBM PC, did something important, though: they provided the scale the industry needed to start shrinking components, making them much faster, and reducing their cost, which is what computing needed to do to sell not just a few million machines, but billions, as it did in the next two paradigms.

Photo credit: Robert Scoble. The label on Tim Berners-Lee's NeXT computer, on which he invented the World Wide Web while working at CERN. The NeXT operating system still survives in today's Macintosh and iOS, and is perhaps the best example of concepts invented at Xerox's Palo Alto Research Center in the 1970s.

Paradigm Two – Graphical Interfaces and Thinking

In 1984, the launch of the Apple Macintosh computer brought us graphical computing. Instead of typing text commands to, say, print a document, you clicked an icon. That made computing much easier but also enabled the launch of a new series of desktop publishing apps. By the time Windows 95 arrived, finally bringing the idea to the mass market, the entire technology stack was built around graphics. This greatly increased the size of tech companies and led to profitable new lines of software for Microsoft and Adobe, setting the stage for Spatial Computing.

This graphical approach was much easier than learning to type commands to copy files and print. Now, you could just click on a printer icon or a save icon. This increased accessibility brought many new users into computing. The thing to take away here is that with this shift, and with each of the paradigm shifts that followed, computing made a massive move toward working more like humans do, and handling computer tasks became much easier. Spatial Computing will complete this move (Google's Tilt Brush in VR still uses many of the icons developed in this era to do things like choose brushes or save and delete files).

It was a massive increase in the number of computer users (many stores had long lines of people waiting to buy Windows 95) that gave Microsoft, in particular, the resources to invest in R&D labs that led directly to the development of HoloLens 25 years after Windows 95's huge release.

Also, it took many graphic designers off typesetting machines and brought them into computing, a shift that accelerated with the popularity of the web, which made its mass-market debut on Windows 95 and the Macintosh. When Tim O'Reilly and Dale Dougherty popularized the term Web 2.0 in 2004, even Bill Gates didn't understand how important having people interacting on web pages would be.

Weblogs were springing up by the millions, and e-commerce sites like Amazon and eBay were early adopters of techniques that let parts of web pages change without being completely refreshed. Today, WordPress is used by about 20 percent of the web, but back then, Gates and his lieutenant Steven Sinofsky didn't see the business value in Web 2.0, refusing to consider a purchase when the coauthor of this book, Robert Scoble, suggested one while working as a strategist at Microsoft. He is now Chief Strategy Officer at Infinite Retina.

The web was starting to "come alive," and desktops and laptops were too, with new video gaming technology. Nvidia, born in the 1990s, was now seeing rapid growth as computing got cheaper and better. Where a megabyte of RAM cost $450 in the late 1980s, by 2005, a gigabyte was running at $120. While that was happening, internet connections escaped the very slow modem age, reaching speeds that allowed new video services like YouTube and social networks, including LinkedIn, Twitter, and Facebook, to appear. Technology continues to get cheaper and smaller to this day. Today, 21 billion transistors fit into a chip the size of your fingernail (like the one that does self-driving in a Tesla), and memory now costs about $15 for 64-GB chips. It is this decrease in cost and increase in capabilities that is bringing us Spatial Computing.
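
Those figures are worth a quick back-of-the-envelope check. The short Python sketch below uses only the prices quoted above (the breakdown into cost per megabyte is our own arithmetic) and shows roughly a two-million-fold drop in the cost of a megabyte of memory:

```python
# Cost per megabyte of memory, computed from the prices cited in the text:
# $450 for 1 MB (late 1980s), $120 for 1 GB (2005), $15 for 64 GB (today).
prices_per_mb = {
    "late 1980s": 450.0,              # $450 for 1 MB
    "2005": 120.0 / 1024,             # 1 GB = 1,024 MB
    "today": 15.0 / (64 * 1024),      # 64 GB = 65,536 MB
}

for era, price in prices_per_mb.items():
    print(f"{era:>10}: ${price:,.5f} per MB")

drop = prices_per_mb["late 1980s"] / prices_per_mb["today"]
print(f"Roughly a {drop:,.0f}x drop in cost per megabyte.")
```

Run it and the last line reports a decline of about 1,966,080 times, the kind of collapse in component cost that makes devices like the Quest possible.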

This new user interface paradigm, while easier than typing text commands, was still difficult for many to use. More than one call to Microsoft's support lines demonstrated that people had difficulty figuring out how a mouse worked, and even dragging icons around the screen was daunting. Those of us who grew up around computers saw it as easy, but many didn't. While the visual metaphors were there, the disconnect between moving your hand on a mouse on a desk while controlling a cursor on the screen meant that computers still didn't work the way we did. While most figured it out, computing had another problem: it wasn't designed to fit in your hand or pocket, which is part of the vision that Douglas Engelbart, among other pioneers, had for all of us. Engelbart was the genius who, back in the late 1960s, showed the world technology that it wouldn't get until the Macintosh arrived in 1984. Before he died, he told us he had an unfinished dream: making computing even more personal, where you could communicate with computers with your hands and eyes. He predicted not only the move to mobile, but the move to truly Spatial Computing.

This brings us to Paradigm Three: Mobile.

Paradigm Three – Mobile

Humans aren't happy being tied to desks to do their work, and that frustration enabled a new industry and a new paradigm of personal computing to arrive, one that brought computing off of desks and laps and into your hand. This paradigm shift enabled billions to get on the internet for the first time (we've seen very poor people in China and other places riding bikes while talking on their smartphones) and became the platform that many new companies would build upon, thanks to new sensors, ubiquitous data networks, and new kinds of software designed for these devices we all now hold.

This third technology shift started in places like Waterloo, Ontario (RIM's BlackBerry) and Helsinki (Nokia). For years, these two companies, along with Palm, with its Treo, and a few others, set a new direction for the technology industry. They produced products that fit in your hand and didn't seem to be very powerful computers at the time. Mostly, they were aimed at helping you take a few notes (Treo), make a call and maybe take a photo (Nokia), or send a few text messages to coworkers (RIM's BlackBerry). This turned into quite an important industry. Nokia alone, at its peak in 2000, accounted for four percent of Finland's GDP and 70 percent of the Helsinki Stock Exchange's market capitalization.

Photo credit: Robert Scoble. A Microsoft Windows Mobile phone, circa 2006, sits next to the original mouse, circa 1968, on Douglas Engelbart's coffee table.

What they didn't count on was that Steve Jobs would return to Apple and, with the help of Microsoft, which poured capital into the failing Apple of the late 1990s, bring it back: first by rejuvenating the Macintosh line to keep the faithful happy, then by introducing the iPod. Now, the iPod didn't seem to be a device that would help Apple compete with the BlackBerrys, Treos, and Nokias, but it helped Jobs build teams that could design small, hand-held devices and figure out how to market them. That effort crushed other portable audio players and gave Jobs the confidence, and the cash, to invest in other devices, like phones.

A few of the engineers who built the iPod were asked by Jobs to come up with ideas for a new device that would merge what they learned with the iPod and add in phone capabilities. The early prototypes looked much more like an iPod than the product we all know today.

That team all carried Treos and studied the other devices, Andy Grignon told us; he was one of the dozen people who worked on the first prototypes. They saw that while the early devices were useful because they could be carried around, they were hard to use for most things other than making phone calls. Many didn't have keyboards that were easy to type on, for instance, and even the ones that did, like the RIM devices, were hard to use for surfing the web or editing photos.

He told us that Jobs forbade him from hiring anyone who had worked on one of these competitive products, or even from hiring anyone who had worked in the telecom industry. Jobs wanted new thinking.

On January 9, 2007, Steve Jobs introduced the iPhone. That day, we were at the big Consumer Electronics Show getting reactions from BlackBerry and Nokia execs. They demonstrated the hubris that often comes from being on top: "Cupertino doesn't know how to build phones," one told us. They totally missed that there was an unserved market, one that not only wanted to use devices while walking around, but also wanted to do far more than just make a call or take a photo once in a while. Their devices were too hard to use for other tasks, and their arrogance kept them from coming up with a device that made those tasks easy.

Around that time, the web had become something that everyone was using for all sorts of things that Tim Berners-Lee, the inventor of the web, could never have imagined. With iPhones, and later Android phones, we could easily use the full web on our mobile devices while walking around, all by using our fingers to zoom into articles on the New York Times, just as Steve Jobs had demoed months earlier from a stage in San Francisco.

It was this combination of an easy-to-use device and sensors that could add location-based context to apps that formed the basis of many new companies, from Uber to Instagram, born within a few years of the iPhone's launch. Something significant had happened to the world of technology, and it set up the conditions for the next battle over where the tech industry would go: Spatial Computing.

Paradigm Four – Spatial Computing

You might notice a theme here. Each paradigm builds upon the one that came before, bringing real breakthroughs in user experience. With our mobile phones, tablets, and computers, there's still one glaring problem: they don't work like humans do. Paradigm Four is bringing a perfect storm of usability breakthroughs.

Even a young child knows how to pick up a cup and put it in the dishwasher or fill it with milk. But this same child is forced to use computing that doesn't work like that. Instead of grabbing with her hand, she has to touch a screen, or use a mouse, to manipulate objects on a flat display.

In Spatial Computing, she will just grab a virtual cup the way she would a real one. This will bring many new people into computing, make all of our tasks easier, and introduce new ways of living.
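
To make the idea concrete, here is a minimal Python sketch of what such a "natural grab" interaction might look like under the hood: when a tracked hand pinches within reach of a virtual object, the object attaches to the hand. The class names, the 10-centimeter grab radius, and the update_grab() function are our own illustrative assumptions, not any particular headset's API.

```python
import math
from dataclasses import dataclass

# Illustrative hand-tracking grab logic; all names and values are assumptions.
@dataclass
class Hand:
    position: tuple[float, float, float]  # meters, in world space
    is_pinching: bool                     # reported by hand tracking

@dataclass
class VirtualObject:
    name: str
    position: tuple[float, float, float]
    held: bool = False

GRAB_RADIUS = 0.10  # meters; how close the hand must be to grab the object

def update_grab(hand: Hand, obj: VirtualObject) -> None:
    # Grab when a pinching hand is within reach; release when the pinch ends.
    if hand.is_pinching and math.dist(hand.position, obj.position) < GRAB_RADIUS:
        obj.held = True
    elif not hand.is_pinching:
        obj.held = False
    if obj.held:
        obj.position = hand.position  # the object follows the hand while held

cup = VirtualObject("cup", position=(0.30, 1.05, -0.40))
hand = Hand(position=(0.32, 1.04, -0.42), is_pinching=True)
update_grab(hand, cup)
print(cup.held, cup.position)  # True (0.32, 1.04, -0.42)
```

The point of the sketch is the absence of any learned interface: no icon, no command, just the same reach-and-close gesture the child already uses on a real cup.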

This move to a 3D world isn't just happening in computing, either. We experienced the off-Broadway play Sleep No More, an immersive retelling of Shakespeare's Macbeth in which you walk through the sets with the action happening all around you. Even New York theater is starting to move from something confined to a rectangular stage to something that surrounds the audience in 360 degrees. It's a powerful move, one that totally changes entertainment and the audience's expectations of it.

Photo credit: Robert Scoble. Qualcomm shows off Spatial Computing/AR Glasses of the future at an event in 2017.

If Spatial Computing introduced only new 3D thinking, that would be absolutely huge on its own. But it's joined by new kinds of voice interfaces. Those of us who have an Amazon Echo or a Google Home device already know that you can talk to computers now and ask them to do things. Within a year or two, you will be having entire conversations with computers. Also arriving at the same time are powerful new AIs that can "see" things in your environment. Computer Vision will make getting information about objects, plants, and people much easier.
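
As one small, hedged illustration of that last point, the Python sketch below labels a photo with an off-the-shelf pretrained image classifier from the openly available torchvision library. The file name plant.jpg is just a placeholder; real Spatial Computing devices would run far more capable models against a live camera feed.

```python
# Minimal image-labeling sketch using a pretrained ImageNet classifier.
# Requires: pip install torch torchvision pillow (torchvision >= 0.13).
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT            # pretrained ImageNet weights
model = resnet18(weights=weights).eval()      # inference mode
preprocess = weights.transforms()             # matching resize/normalize steps

image = Image.open("plant.jpg")               # placeholder input photo
batch = preprocess(image).unsqueeze(0)        # shape: [1, 3, H, W]

with torch.no_grad():
    scores = model(batch).squeeze(0).softmax(dim=0)

top = scores.argmax().item()
print(weights.meta["categories"][top], f"{scores[top].item():.1%}")
```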

A perfect storm is arriving, one that will make computing more personal, easier to use, and more powerful. This new form of computing will disrupt at least seven industries, which we will go into deeply in the rest of this book. In the next section, we'll look at how this storm of change will impact technology itself.