Or "Five reasons why Mike Elgan, and Google, are wrong about Google Glass".
I first met Mike Elgan at I/O 2012. We had interacted before then on Google+, but this was the first time we had met in person.
We spent about 30 minutes chatting about a wide variety of things: the sessions we had been to, his perspective as a member of the press, my telecommuting, his digitally nomadic life, and so on. We were standing across from the Google Glass booth, still packed with people signing up, so eventually our conversation turned toward it. We had both, of course, watched people jump out of a blimp and into our keynote session. We had both seen the video describing Glass as a personal device like nothing that had come before.
Both of us were interested. We saw the potential, and were curious where it would go. Within the year, both of us were Glass Explorers.
Which is why I'm somewhat confused about why, in a recent opinion article for Computerworld, he seems to agree with Google that Glass should be restricted to the business domain. While he raises some good issues, I think he misses some important points about Glass and the state of Google's technologies today. (And it sure sounds like Google is missing these points too.)
1. Glass is more socially acceptable than Silicon Valley thinks
[Photo: #throughglass]
In my experience, there are exactly two places where wearing Glass has been a problem:
- Movie theaters
- San Francisco
Everywhere else? People understand it and accept it without question, regardless of age. I've had people on the New York City subway excited to hear how I was using it, and eager to share that theirs was on order. Truckers in Missouri interested in being able to send a text by voice alone. Students in Louisiana excited that they could keep in touch with friends without having to hide behind a screen. Airport security screeners and border agents curious about what it actually does.
Many people start by assuming it is AR, and when I explain that it isn't, they get even more excited. More and more, people understand everything it can do and compare it to technology that has come along since Glass. Sometimes they relate it to the smart watch they're wearing. Other times they understand that it is the Google Assistant in a different form factor. But Glass is no longer a strange device with unclear features.
2. Google is already building solutions to problems, consumer and business, that Glass would benefit from
The notion that there are "consumer" problems that are somehow different than "enterprise" problems has confused me for a while. One thing that PCs, mobile devices, and the web have shown us is that general purpose technology crosses over from business to personal quite smoothly.
The headline feature of Glass EE is that it can stream video - which was one of the features people most wanted to see return in the "consumer" XE version. Consumers stream video all the time - YouTube and Facebook depend on it. Two years before Glass, consumers were experimenting with live-streaming in Hangouts. Glass would simply let us do the same thing while still paying attention to our surroundings.
Mike noted that the most used feature of Glass Classic was taking pictures - and this was true. What I'm not sure is true is that "Picture-taking is not something consumers really need help with." If not, why is the camera still the feature that phone makers tout as improved year after year? When I used Glass at I/O 2013, I was one of the only people taking pictures of the slides, and I would do the same anywhere I wanted to remember something - now, people routinely do the same at conferences, in schools, at supermarkets, and so on. Glass does this faster, more easily, and less intrusively.
What Mike failed to mention, however, was that the next two most popular features for Glass were navigation and messaging - features that aren't "business" features at all. And features that Google is working hard on for other products.
[Photo: #throughglass]
Navigation seemed like a solved problem - yet Google recently demonstrated a map view that uses the phone's camera to help determine your location and the direction you're facing, delivering better directions than GPS alone could. Although touted as "AR navigation," the augmentation is fairly minimal - the significant feature is being able to pin down exactly where you are and suggest exactly where you should turn, based solely on what the camera can see. Yet it is extremely awkward to hold your phone up at eye level and swing it around to figure out where you are. Wouldn't it make more sense to have that in something like, say, a head-mounted camera?
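To make that concrete, here is a toy sketch - entirely my own illustration, with made-up function names and numbers, not Google's implementation - of why the camera matters: GPS gives you a rough position, while a visual match against street imagery can tighten the position and, more importantly, supply the direction you're facing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    lat: float
    lon: float
    accuracy_m: float                     # estimated horizontal error
    heading_deg: Optional[float] = None   # GPS heading is unreliable on foot

def visual_match(frame: bytes, rough: Pose) -> Pose:
    """Hypothetical stand-in for matching a camera frame against street
    imagery; a real system would return a refined position and a confident
    heading. Here it just returns canned values for illustration."""
    return Pose(rough.lat, rough.lon, accuracy_m=3.0, heading_deg=172.0)

def fuse(gps: Pose, visual: Pose) -> Pose:
    """Keep whichever position estimate is tighter, but always take the
    camera-derived heading - that is what 'turn left here' depends on."""
    best = visual if visual.accuracy_m < gps.accuracy_m else gps
    return Pose(best.lat, best.lon, best.accuracy_m, visual.heading_deg)

gps_fix = Pose(lat=40.7580, lon=-73.9855, accuracy_m=20.0)  # a noisy urban GPS fix
print(fuse(gps_fix, visual_match(b"<camera frame>", gps_fix)))
```

A head-mounted camera already points wherever you are looking, which is exactly the input this kind of localization wants.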
Google has also been making a strong push on other hands-free features through the voice-controlled Google Assistant. The Assistant can handle such non-business tasks as controlling the aforementioned navigation and sending messages. Glass did these and more - notifications could be hands-free and voice-controlled as well, and you could even have your notifications and messages read to you if you wished. These sorts of voice-first operations are very much a consumer push at Google right now - so much so that it took two years for the Assistant to gain access to G Suite calendars.
Having Glass be a channel to the (consumer) Assistant makes a tremendous amount of sense. In many ways, the Assistant is the true heir of Glass.
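As a rough illustration of how thin that channel could be, here is a toy intent router - purely hypothetical, not the Assistant's actual API - in the spirit of Glass's old voice menu: the wearable only needs to hand over a transcript and display whatever card comes back.

```python
import re
from typing import Callable, List, Optional, Tuple

# Registered (pattern, handler) pairs; anything unmatched would be forwarded
# to the Assistant as a generic query.
HANDLERS: List[Tuple[re.Pattern, Callable[[re.Match], str]]] = []

def on(pattern: str):
    """Register a handler for transcripts matching a voice-command pattern."""
    def register(fn: Callable[[re.Match], str]):
        HANDLERS.append((re.compile(pattern, re.IGNORECASE), fn))
        return fn
    return register

@on(r"navigate to (?P<place>.+)")
def navigate(m: re.Match) -> str:
    return f"Navigation card: heading to {m.group('place')}"

@on(r"send a message to (?P<who>\w+) saying (?P<text>.+)")
def message(m: re.Match) -> str:
    return f"Message card: to {m.group('who')}: \"{m.group('text')}\""

def handle(transcript: str) -> Optional[str]:
    """Return card text for a recognized command, or None to fall through."""
    for pattern, fn in HANDLERS:
        match = pattern.match(transcript)
        if match:
            return fn(match)
    return None

print(handle("navigate to the Moscone Center"))
print(handle("send a message to Alex saying running ten minutes late"))
```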
3. Glass isn't AR, and tethering doesn't require USB-C
[Photo: #throughglass]
Fortunately, Glass doesn't need that kind of high-bandwidth USB-C tether. (And neither do most people, even for business.) It works well over the more limited bandwidth that Bluetooth provides, because it isn't trying to paint high-bandwidth, 60 fps animated overlays on top of reality.
This is even more true when you think about the work Google is doing with having the Assistant on devices ranging from low end, relying on connectivity to do the voice processing, to high end, where the Next Generation Assistant will be able to handle all processing locally. Where would something like Glass fall into that mix? Somewhere in between, but the bandwidth requirements for it don't seem to demand a USB tether.
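Some back-of-the-envelope numbers make the gap obvious. Every figure below is an assumption of mine for illustration, not a measured spec, but the orders of magnitude are the point:

```python
# Rough bandwidth comparison: a Glass-style notification card versus an
# uncompressed 60 fps overlay (AR latency budgets leave little room for
# heavy video encoding). All numbers are assumptions, not specs.

CARD_BYTES = 4 * 1024               # assumed: text card plus a small thumbnail
CARDS_PER_MINUTE = 4                # assumed: a busy stream of notifications

OVERLAY_W, OVERLAY_H, FPS = 640, 360, 60   # assumed overlay resolution
BITS_PER_PIXEL = 24                 # uncompressed RGB

BLUETOOTH_MBPS = 2.0                # rough practical Bluetooth Classic throughput

cards_mbps = CARD_BYTES * 8 * CARDS_PER_MINUTE / 60 / 1e6
overlay_mbps = OVERLAY_W * OVERLAY_H * FPS * BITS_PER_PIXEL / 1e6

print(f"Notification cards: ~{cards_mbps:.3f} Mbps")    # a rounding error for Bluetooth
print(f"60 fps overlay:     ~{overlay_mbps:.0f} Mbps")  # hence the talk of USB-C tethers
print(f"Bluetooth budget:   ~{BLUETOOTH_MBPS:.0f} Mbps")
```

Even with aggressive compression, an always-on overlay is fighting for every bit of a Bluetooth link; the card traffic barely registers.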
Glass' motto was "there when you need it, out of the way when you don't" and, as much as possible, it tried to stay out of your way and let you experience the world around you without interference. Even things like navigation were minimally intrusive - intentionally so. Glass isn't cool because of flashy graphics - it is cool because of the value it provides while you're using it, and that value doesn't require a lot of bandwidth.
4. Similarly, battery use isn't that bad
I know a bunch of Glass Explorers are giving me a strange look at that statement, but the way Glass was typically used tended to sip power from the battery. Yes, streaming video could drain the battery pretty quickly - it does on a phone, too. But most everyday uses of Glass would give you about as long a life as most smartwatches.
Which isn't to say it's a solved problem. It isn't. But if you're not doing AR or streaming video, and are just using it for Assistant-like notifications and replies, the battery life is pretty decent. Using Glass at I/O, for example, it was not unusual for me to get through 12 hours of heavy picture-taking without a recharge (which is more than I could say for either my laptop or my phone).
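For a sense of scale, here is some rough battery math. The capacity is the figure commonly reported for Glass XE; the draw numbers are my own guesses, not measurements:

```python
# Back-of-the-envelope runtime estimates for a Glass-like device.
BATTERY_MAH = 570                  # commonly reported Glass XE capacity

IDLE_DRAW_MA = 25                  # assumed: display off, radios connected
GLANCE_DRAW_MA = 180               # assumed: display on for a card or a photo
STREAM_DRAW_MA = 450               # assumed: camera + display + radio streaming

# An "Assistant-style" day: the display is only lit a few minutes per hour.
glance_minutes_per_hour = 6
avg_draw_ma = (GLANCE_DRAW_MA * glance_minutes_per_hour
               + IDLE_DRAW_MA * (60 - glance_minutes_per_hour)) / 60

print(f"Glanceable use:  ~{BATTERY_MAH / avg_draw_ma:.0f} hours")
print(f"Video streaming: ~{BATTERY_MAH / STREAM_DRAW_MA:.1f} hours")
```

Those assumptions land in the same neighborhood as my experience: an all-day device for cards and photos, and a little over an hour if you insist on streaming.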
5. Price is a factor, but a complex one
As Mike noted, Glass got a (roughly) 33% price cut in the latest edition. That seems good, but $1000 is still pretty steep for an accessory.
At least... until you consider that Samsung tried to sell a phone whose dubious headline feature was a foldable screen for roughly twice that price. Or that wrist wearables, with half the features of Glass, go for roughly half the price.
It becomes more interesting when you look at the strategy that North, whom Mike also mentioned, has taken. Their glasses start at $600, target both consumers and the enterprise, and are available at their permanent stores in Brooklyn and Toronto (nice, hip, trendy cities) as well as at pop-up shops across the continent (from Seattle to Charlotte). Oh... and they support Alexa out of the box, with everything that integration brings to the consumer.
By limiting Glass to the enterprise only, Google is eliminating any possible economies of scale in production. Calling a deployment of 440 units "at scale" seems laughable. But I have to assume someone inside Google has figured out that if they don't make very many, they can charge a higher price than if they were selling into consumer demand, and still pocket a large profit margin.
Certainly they have lower support costs if they only need to support a handful of "partners," who are responsible for actually supporting the customers. Unfortunately, they don't seem very interested in adding partners who want to support a consumer-level device - none of the listed partners appear to offer anything consumer-facing, and Google hasn't responded to my partner request.
I continue to speak to people who are interested in buying Glass. Last year at I/O, I ran into someone who had just purchased Glass and knew nothing about the history of the device. You can still find them for sale on eBay, at prices ranging from a couple hundred dollars to over a thousand.
Speaking just for myself, I would be willing to put down $1000 right now for a brand new Glass if it:
- had full support for the Google Assistant and third party Actions
- supported modern navigation, messaging, photos, and video
- came in the classic blue color
And I don't think I'm the only consumer out there who would see value in it.
What about it, Google?
"came in the classic blue color" -> This is mandatory.
ReplyDeleteI believe we have already talked about this. But all the tools that Apple and Google are presenting for AR, or assisted reality don´t make sense holding a phone during long period of times in front of you to give you directions or give you extended info of what you are looking at (Lens). Also using the Assistant and having results like the Nest Hub. I see a lot of consumer/business use cases.
Hope one day we see it