March 12, 2014

A week with Google Glass


Unless you’ve been living under a rock, you will by now have heard about the hype surrounding Google’s recently released Google Glass. To much excitement, we recently received our device in the Odecee office, and we asked one of our senior developers to spend a week getting to know the technology. Below are his thoughts on the device.

Hardware

Glass is an Android-based wearable device shaped to be worn like a pair of glasses. It consists of a display designed to sit slightly above your normal line of sight; a touchpad located on the side of your head; a bone conduction speaker that is unusual in that it vibrates your skull rather than sending vibrations through the air like normal headphones; and a microphone for voice recognition. Other specs include: OMAP 4430 CPU (couldn’t find a speed rating); 1 GB RAM (682 MB usable); 16 GB flash (12 GB usable); Wi-Fi b/g, Bluetooth, micro USB; 640 x 360 pixel display; 5 MP photos; and 720p video.

Glass comes in a beautifully presented box that is heavier than the actual Glass headset! It smacks of Apple, but in this case that’s a good thing. Accessories include a tiny USB charger with US plugs (luckily rated up to 240 V), clip-on sun shades, extra nose pieces hidden away in another Apple-like mini cardboard pack / instruction pamphlet, a non-twist USB cable, a single-ear headphone (which never needs to be used) and a grey felt carrying pouch to protect the rather delicate-looking Glass. The pouch only has a central pocket for the Glass itself: dumping the accessories in there is messy, and their sharp edges seem like they could easily scratch or even snap off the even more delicate-looking prism display. Take off marks for that design oversight, but keep in mind that this product, even at version 2, still has a feel of beta about it, and Google is deliberately restricting it by making it available via invite only.

Glass is designed to fit over any pair of glasses you may already be wearing, so the nose bridge pieces are long and adjustable. The unit has a titanium band, so it’s very light and flexible; however, the chunkier parts holding the CPU and display look to be made of softer, more fragile plastics. The adjustable display prism looks like it could snap off easily if the user is not careful. The non-replaceable battery is placed in a separate pod behind the ear to keep it cooler, as the unit gets quite noticeably hot under sustained operation.

Related to this heat, battery life is around 3-4 hours when showing off the unit and playing games with the display continuously on. Watching the Google videos explaining the design and usage of Glass, this is not the intended way to interact with it. Instead, interaction is meant to be short and intense, e.g. finding the next turn on a navigation map, after which the display turns off while the user does something else, before another short burst of interaction. Under this interaction model we could expect better battery life, but other Glass blogs have complained that the battery does not last a whole day. On standby, the unit completely discharged after two days of no interaction (but turned on). Glass can be turned off (taking only a few seconds), but turning it back on requires waiting about 20 seconds, which feels like forever and more akin to a desktop PC experience.

Built-in Apps

Out of the box, Glass comes with the following functions:

  • Search Google
  • A simple web browser
  • Take a picture
  • Record a video
  • Get directions
  • Send a message
  • Make a call
  • Make a video call
  • Take a note

It doesn’t sound like much – after all, any smartphone these days comes bundled with a couple of screenfuls of apps – however the way you are meant to interact with Glass is different to that of other devices. All these apps can be activated via voice, and for some, voice is the only way to interact with them. For instance, to get directions or take a note, there is no keyboard to enter street names or a reminder to buy milk, so you speak to Glass. The voice recognition is pretty accurate and seemed to handle the various accents this unit was tried with. Results may vary, of course.

The other way to interact with Glass is via the touchpad on the right-hand side of the unit. As this is the outer surface of the central pod containing the CPU, it’s quite chunky and long, so it’s easy to swipe forwards and backwards, resulting in scrolling left / right or up / down, depending on the application in question. Tapping on the unit is the equivalent of a single click, i.e. OK. Swiping down is the equivalent of hitting the back button. With a built-in compass and tilt sensor, the unit can also be adjusted (via Settings) to recognise head gestures: tilt your head up to turn Glass on (if you’re in the middle of a task and can’t tap the side of the unit), wink to take a photo, and so on.

For a unit that is supposed to be so voice-controlled, one annoying thing is that the left / right / down swipes and taps aren’t replicated as voice commands, as far as I could tell from experimenting. This means you have to have a free right hand to navigate some apps, and certainly the home screen’s timeline. This seems like an incomplete design, considering that from the home screen you can launch any app via voice, and app-specific menus are voice-triggerable.

Initially I could only wear it for short periods, as I got a bit dizzy trying to focus on the screen, then back to reality, and back again, but eventually my eyes got used to it and the nausea went away. (I should add that I also get quite nauseous playing first-person shooters, and even with repeated tries at those games, that problem has never gone away.)

On the surface it doesn’t seem like you get much functionality with Glass and its list of built-in apps, but it really grows on you the more time you spend with it.

Initial operation

Once Glass has been turned on, the display goes blank after a few seconds. You can tilt your head up or tap on the side of the unit to wake it up. This “home” screen shows the current time and “Ok Glass”, which is the prompt for you to say “Ok Glass”! A list of actions then appears, and you can scroll up and down by tilting your head. You then say the words to activate a function, like “send a message to..”, which will launch the messaging app. Any new app (called Glassware) that you install can add a phrase to this initial list so you can launch it via a voice command such as “record recipe”.

Back on the home screen, if you swipe forwards, it goes back through your Timeline, which is like your browser history, but expanded to cover any action you’ve taken in the past, be it a message, a Google search, etc. Any photos or videos taken form part of this history, so be careful – anyone borrowing your Glass can see *exactly* what you’ve been up to, unless you remember to delete each and every item you’d rather keep private!


Demo Games

The motion / tilt sensors are shown off in some demo games, which are hilarious to watch, by the way: some use a combination of tilt and voice control, resulting in the Glass player looking off into the distance, head tilting crazily, and shouting “bang bang bang” to fire a gun! Luckily, in normal operation only the head-tilt-up to turn the display on, and possibly turning around for compass-based navigation, are used.

Tethering Requirements

Glass is a unit that needs to be tethered to a phone for full functionality, similar to smartwatch accessories like the Samsung Gear. Glass does not have its own GPS sensor as far as I could tell (conflicting rumours on the interwebs say standalone directions are coming?), as it was unable to determine a location until paired with a smartphone with its GPS sensor enabled. Once tethered via Bluetooth (the unit also has Wi-Fi), it shows maps and turn-by-turn directions. Similarly, it relies on the tethered phone for messaging and calls.

You also need a companion app called My Glass to download apps (Glassware) and perform other setup functions, as the unit has no ability to do so natively. For example, the unit initially has no Wi-Fi connection, and must be set up via My Glass or via the web. Instead of entering the Wi-Fi network’s name and password on the device, you enter them in the app / website, which generates a QR code. Glass scans this QR code to complete Wi-Fi setup.
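For the curious, the general mechanism can be sketched in a few lines of Python. This uses the common ZXing-style “WIFI:” payload that Android QR scanners understand; I haven’t verified the exact payload My Glass encodes for Glass, so treat this as an illustration of the idea rather than Glass’s actual format, and the function name is mine.

```python
def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build a Wi-Fi network payload in the common ZXing 'WIFI:' QR format.

    Note: this is the de-facto Android format; the payload My Glass actually
    encodes for Glass is not publicly documented and may differ.
    """
    def esc(s: str) -> str:
        # Escape the characters that are special in the WIFI: format.
        for ch in ('\\', ';', ',', ':', '"'):
            s = s.replace(ch, '\\' + ch)
        return s

    return f"WIFI:S:{esc(ssid)};T:{auth};P:{esc(password)};;"

payload = wifi_qr_payload("OdeceeGuest", "s3cret;pass")
# Any QR library (e.g. the 'qrcode' package) could then render this
# payload as an image for the Glass camera to scan.
```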

The My Glass companion app is not officially available in non-launch countries, i.e. Australia, so you are left to your own devices to install these apps.

The other way to install apps is via a development / debug connection, using standard Android development tools like the Android Developer Tools (ADT) / SDK.

Native Apps – use Glass Development Kit (GDK)

Google Glass uses a set of libraries branched off Android SDK 15 / 4.0.4 / Ice Cream Sandwich, called the GDK (Glass Development Kit), allowing you to develop native apps that run directly on the device. This gives access to the hardware: motion sensors, touchpad, camera and display. The GDK is in “Preview” status and is nowhere near finalised yet. See the Augmented Reality screenshots below for some examples of GDK apps.

Web based Apps – use Mirror API

The other category of Glassware is web-based, using Google’s Mirror API: the registered Google user account that the device is linked to authorises access to the device via the OAuth2 protocol. This allows bidirectional communication between a third-party website and the Glass device, programmed entirely as RESTful web services. “Push” calls from the third-party website go via Google’s servers and result in static “cards” on the user’s timeline. They appear like a popup message but can contain text and images. Since the display is fairly small, cards can be linked together via swipes / clicks to navigate through multiple pages. These cards also appear in the user’s timeline, so they can be recalled at any time afterwards with a swipe from the “Ok Glass” screen. The user can also authorise a given app to receive location updates, resulting in the device sending periodic location updates to Google. The third-party website registers for location updates from Google, and Google will periodically notify the website that the user is at a new location, allowing the website to send location-based information such as special offers applicable only at that location.
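As a concrete illustration of the push flow just described, here is a minimal Python sketch that builds a static timeline card and POSTs it to the Mirror API’s timeline endpoint. The helper names are mine; the endpoint and field names follow the Mirror API timeline reference as I understand it, and the OAuth2 bearer token is assumed to have been obtained already via the opt-in consent step.

```python
import json
import urllib.request

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_timeline_card(text, notify=True):
    """Build the JSON body for a static timeline card."""
    item = {"text": text}
    if notify:
        # A DEFAULT notification makes Glass chime and light up on arrival.
        item["notification"] = {"level": "DEFAULT"}
    return item

def push_card(access_token, card):
    """POST the card to the user's timeline via the Mirror API.

    'access_token' is an OAuth2 bearer token previously granted by the
    user (the opt-in authorisation described above).
    """
    req = urllib.request.Request(
        MIRROR_TIMELINE_URL,
        data=json.dumps(card).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)  # network call; not executed here
```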

Example of a map-based image, easily created using GPS coordinates and text strings, and sent to the device using the Mirror API.
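To give a feel for how such a map card is built, here is a hypothetical Python helper that assembles the card’s HTML around a “glass://map” image source, which asks Glass to render a map from GPS coordinates. The helper name is mine, and the exact query parameters are my reading of the Mirror API docs, so double-check them against the current reference.

```python
def map_card_html(lat, lon, caption, w=240, h=360):
    """Build timeline-card HTML embedding a rendered map.

    The 'glass://map' image source asks Glass to render a map for the
    given marker; parameter names here are assumptions to be verified
    against the Mirror API reference.
    """
    src = f"glass://map?w={w}&h={h}&marker=0;{lat},{lon}"
    return (f'<article><figure><img src="{src}"></figure>'
            f'<section><p>{caption}</p></section></article>')
```

The resulting HTML string would be sent as the `html` field of a timeline item, in place of the plain `text` field.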

Privacy concerns

This of course opens up all sorts of privacy concerns, but it is an opt-in authorisation, and the privileges being asked for are displayed when logging in via OAuth2. As there is no keyboard to enter private data, the user needs to be wary of speaking such information aloud with people nearby. This includes addresses, which you need to speak in order to get directions, create a note, etc. Imagine a theoretical banking app whereby authorising payments requires speaking a private PIN out loud: this would be a disaster, so Glass’s interaction limitations obviously need to be at the forefront of Glass app design.

Augmented Reality learning – phone based AR and Glass AR are different

Before I finish this episode, one area we decided to focus on initially was the use of Glass for Augmented Reality. Here is an initial finding:

Augmented Reality on smartphones works very well using the built-in rear camera and touchscreen display, as the phone’s frame around the video feed makes it look like a window you’re looking into, with the AR animation superimposed on that video. AR through Glass requires a different approach. With Glass you are already looking at reality, so I found that if you duplicate this “reality” by showing the video feed on the Glass display, it detracts from the AR experience rather than enhancing it, as it appears to be a slightly different version of reality duplicated in one corner of your eye. It’s a better experience for the Glass display to show only the animated parts (e.g. icons showing the nearest bank branch, on a transparent background) with no attempt to duplicate the video feed.

Approximate view as seen by person:


Display in the corner of the user’s field of vision using the Wikitude AR library, replicating the user’s view in a slightly distracting way. (Imagine this superimposed on the top right-hand corner of the picture above.)

Another AR view, not attempting to replicate the view as seen by the person, which seems to work better. (Imagine this superimposed on the top right-hand corner of the picture above.)

Summary

I’m always excited to get to play with the latest kit, and given the hype around wearable technology at the moment, it’s clear that these devices are going to be both extremely useful and pervasive. There is a fine line around user privacy that will need to be explored. Overall, however, I am confident that Glass is going to be an amazing addition to people’s lives.

Glass was fun to use, though I think there are still areas which need improvement. I found I was using my hands to control it far too much for a voice controlled device.

As Glass is still in beta it obviously lacks applications at the moment, but after playing with the SDK I’m excited about the future of Glass and its potential once it is more accessible to developers. I look forward to seeing what developers come up with!


Author

Gary Chang was Odecee’s first employee! He’s a Java Enterprise developer with over 20 years of development experience. He’s recently turned his attention to Android and has worked on several of our clients’ Android applications.



Categorised in: Technology

This post was written by Gary Chang