5 Reasons To Pay Attention To Google Glass

We’ve been paying close attention to Google Glass: a unique wearable device that offers plenty for learning designers to be excited about.

Five reasons we think Glass is going to matter:

  • Ubiquitous – always on, always there. We left desktop computers behind when we left home or work. We carried laptop and notebook computers around and called them portable, but we didn’t really use them everywhere. Then came tablets and smartphone-like devices; we carry them around, but we aren’t always looking at them or using them. Glass changes this in a marked way. A computer you have to carry, turn on and look at when required is cumbersome; Glass overcomes that by being always on and, as something you wear, always there. Google’s demo videos make it apparent that the wearable nature of the device removes many of the physical limitations of one that must be carried and turned on/off (or woken/sent to sleep), and they give a great sense of what ubiquitous, always-on devices integrated with back-end systems are capable of.
  • Capable of Continuous Capture – with a built-in camera for photos and video, Glass can capture the user’s field of view. This is a unique feature; until now, devices with capture capabilities have been cumbersome. Smartphones changed the game for photos and video, but they don’t really offer a first-person view unless one makes a deliberate effort. Glass changes that: because of its wearable design, the camera’s field of view is close to what the user is actually seeing. ‘How-to’ videos have been plentiful for a while, but Glass removes many of the perceived barriers to first-person video (device not on person, or carried but put away; having to turn it on or wake it; then working through menus to reach the camera, and so on). The video duration Glass can record is currently limited, but an SDK would let a developer work around that (see the capture sketch after this list). Imagine a future where you can choose to record, and broadcast or stream, video on the go for any type of performance. That will eventually lead to video content generated on the fly and indexed, documenting job performances ranging from the mundane to highly complex ones that require well-honed skills.
  • Truly Location Aware – desktop computers weren’t location aware, and neither were portable computers; the advent of phones brought approximate location awareness to our devices. Embedding GPS, which is far more accurate than network/cell-based triangulation, in a wearable device gives a fine location that is far more useful than a coarse approximation: Glass will always know where it is. One could imagine many use cases for location in a wearable device, but nothing is as persuasive as being given information just in time, based on where you are. If you are in a museum, get information about what you are looking at; have detailed information presented as you navigate a new workplace, or as you attempt to use a particular piece of equipment. The possibilities are enormous (see the location sketch after this list). For a good idea of what Google Glass is actually about, the moment in hands-on review videos where the reviewer asks for directions to Oriole Park shows what location awareness does for the device and its applications.
  • Ability to Augment Reality – augmented reality has so far centred on smartphones, simply because AR depends on the ability to determine a device’s location and orientation, something only smartphones equipped with sensors and GPS have allowed until now. Glass is perhaps the first piece of wearable kit that combines those sensors with a display, one that always sits in the user’s field of view. Looked at differently, Glass is nothing but a device meant to augment reality, something smartphones were never designed to do. This ability to augment reality with information or graphics points to a future of learning applications that are context-sensitive and actually useful (see the orientation sketch after this list). A lovely example is a Google Glass application being used to augment a baseball game.
  • Truly Hands-free – computers and smartphones need to be ‘manipulated’ by hand to be productive; their input devices demand physical dexterity and hand-eye coordination. Glass doesn’t use a hardware input device at all: it depends on voice recognition to interpret commands and act accordingly, which frees up the user’s hands (see the voice sketch after this list). You could be using your computing device while your hands ‘work’ on something else. The simplest example I can think of is a Glass-like device in use while you fix something wrong with a car: the display overlays what you see with technical information and repair instructions pertinent to your situation, all while you continue to work with physical tools. Performance support like we’ve never known it.
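
To make the capture point concrete: Glass is an Android device, so the stock Android image-capture intent is one plausible way to get at the camera even before a Glass-specific SDK arrives. This is a minimal sketch under that assumption; the activity name and request code are mine, not anything Glass-official.

    import android.app.Activity;
    import android.content.Intent;
    import android.provider.MediaStore;

    // Minimal sketch: fire the stock Android image-capture intent from an
    // activity. Glass runs Android, so this is a reasonable starting point
    // until a Glass-specific SDK lands.
    public class CaptureActivity extends Activity {

        private static final int CAPTURE_REQUEST = 1; // arbitrary request code

        private void capturePhoto() {
            startActivityForResult(new Intent(MediaStore.ACTION_IMAGE_CAPTURE), CAPTURE_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == CAPTURE_REQUEST && resultCode == RESULT_OK) {
                // The result references the captured image; hand it off
                // here for indexing, tagging or upload.
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
    }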
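
On the location point, the standard Android LocationManager API is enough to sketch the idea of serving content based on where the wearer is standing. Class names here are illustrative, and on Glass the actual fix may well come from a paired phone rather than on-board GPS.

    import android.app.Activity;
    import android.content.Context;
    import android.location.Criteria;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;

    // Sketch: subscribe to fine-grained location updates so content can be
    // matched to the wearer's position.
    public class LocationAwareActivity extends Activity implements LocationListener {

        @Override
        protected void onResume() {
            super.onResume();
            LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
            Criteria criteria = new Criteria();
            criteria.setAccuracy(Criteria.ACCURACY_FINE); // prefer GPS-quality fixes
            for (String provider : lm.getProviders(criteria, true)) {
                lm.requestLocationUpdates(provider, 5000, 10, this); // every 5 s / 10 m
            }
        }

        @Override
        public void onLocationChanged(Location location) {
            // Look up content for this position, e.g. the museum exhibit or
            // piece of equipment nearest to (getLatitude(), getLongitude()).
        }

        @Override public void onProviderEnabled(String provider) {}
        @Override public void onProviderDisabled(String provider) {}
        @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    }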
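
For the augmented-reality point, the raw ingredient on top of location is orientation: knowing which way the wearer is facing. Here is a bare-bones sketch using Android’s standard rotation-vector sensor; how the resulting azimuth drives an overlay is left as an assumption.

    import android.app.Activity;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Sketch: derive the wearer's heading from the rotation-vector sensor,
    // the basic input for deciding which AR overlay to draw.
    public class HeadingActivity extends Activity implements SensorEventListener {

        @Override
        protected void onResume() {
            super.onResume();
            SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
            Sensor rotation = sm.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
            sm.registerListener(this, rotation, SensorManager.SENSOR_DELAY_UI);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float[] rotationMatrix = new float[9];
            float[] orientation = new float[3];
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            SensorManager.getOrientation(rotationMatrix, orientation);
            float azimuth = (float) Math.toDegrees(orientation[0]);
            // azimuth approximates the compass bearing of the wearer's gaze;
            // combine it with location to pick what to annotate on screen.
        }

        @Override public void onAccuracyChanged(Sensor sensor, int accuracy) {}
    }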
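
And on hands-free use: Glass has its own voice-trigger mechanism, but stock Android’s speech recogniser is enough to illustrate a voice command loop without any Glass-specific SDK. The class name and the ‘next step’ command are hypothetical.

    import android.app.Activity;
    import android.content.Intent;
    import android.speech.RecognizerIntent;
    import java.util.List;

    // Sketch: capture a spoken command and act on it, leaving the user's
    // hands free for the actual task.
    public class VoiceCommandActivity extends Activity {

        private static final int SPEECH_REQUEST = 0; // arbitrary request code

        private void listenForCommand() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            startActivityForResult(intent, SPEECH_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
                List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                String command = results.get(0);
                // e.g. "next step" advances the repair instructions while
                // the user's hands stay on the tools.
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
    }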

One argument I often hear is that a comprehensive SDK isn’t in place yet, and that making something meaningful from a learning perspective won’t happen until a reasonably easy-to-use and cheap SDK exists. At this point in time, development on Glass is restricted to what you can dream up within the boundaries of a fairly restrictive API. Since Glass is an Android device you could possibly use the Android SDK for development, but beyond trying out some ideas I don’t see that being worth sustained development effort.
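
For context, that restrictive API is Google’s Mirror API: a REST service through which your server pushes ‘timeline cards’ to a user’s Glass. Below is a bare-bones sketch in plain Java, with a placeholder access token and hypothetical card text; a real application would obtain the token via Google’s OAuth 2.0 flow.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch: insert a simple text card into a Glass user's timeline via
    // the Mirror API's REST endpoint.
    public class TimelineCardDemo {

        static final String ACCESS_TOKEN = "ya29.EXAMPLE"; // placeholder OAuth 2.0 token

        public static void main(String[] args) throws Exception {
            URL url = new URL("https://www.googleapis.com/mirror/v1/timeline");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + ACCESS_TOKEN);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            String card = "{\"text\": \"Step 3: torque the bolts to 30 Nm\"}"; // hypothetical content
            try (OutputStream out = conn.getOutputStream()) {
                out.write(card.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Mirror API responded: " + conn.getResponseCode());
        }
    }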

To create something meaningful, robust and use-worthy, we need a Glass-specific SDK, and that just might be happening shortly: Google seems to be readying the Glass Development Kit for launch.

The age of context is upon us. Hardware and software are now driving towards providing computing capabilities in the context of use, whether that is work, play or learning. Performance support applications will change form and become context-driven. I believe Glass is just the first wave of devices that will provide context, and this will fundamentally change how we leverage learning technology in our day-to-day lives.
