I don’t have access to one (yet), so I’m extrapolating a bit from information found at http://www.slideshare.net/mdurwin/future-m-glass-preso, http://www.hongkiat.com/blog/google-glass/ and http://www.techradar.com/news/video/google-glass-what-you-need-to-know-1078114:
- It can communicate with Web sites/apps either through a tethered smartphone/tablet (over Bluetooth) or directly over Wi-Fi, so it is not dependent on any phone/tablet app, although such an app can add functionality.
- It’s not locked to Android, since the smartphone/tablet is fundamentally used only as a tethering device.
- There will be Android apps adding capabilities to Google Glass, such as location (there is no integrated GPS). Will there be SDKs for other mobile platforms? (A rough sketch of such a companion app follows this list.)
- It uses voice for input. There’s no indication of eye movement detection. Could that be supported via apps?
- Wi-Fi and the camera draw a lot of power, so “a day of normal use” sounds like a stretch, considering how small the battery is.
- It seems to support the features of Google Goggles and more: e.g. it can read and translate QR codes, and probably also do picture searches.
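To make the companion-app idea above concrete, here is a minimal sketch of how an Android phone app could feed GPS fixes to a paired headset over a Bluetooth RFCOMM channel. Everything Glass-specific is an assumption on my part: the device address, service UUID and line-based wire format are made up for illustration; only the standard Android location and Bluetooth APIs are real.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothSocket;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

import java.io.IOException;
import java.io.OutputStream;
import java.util.UUID;

/** Hypothetical phone-side feeder; not based on any published Glass SDK. */
public class GlassLocationFeeder implements LocationListener {

    // Placeholder values -- a real headset would publish its own
    // address and service UUID.
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
    private static final String GLASS_ADDRESS = "00:11:22:33:44:55";

    private final OutputStream glassStream;

    public GlassLocationFeeder(Context context) throws IOException {
        // Open an RFCOMM channel to the already-paired headset.
        // In a real app connect() must run off the main thread.
        BluetoothSocket socket = BluetoothAdapter.getDefaultAdapter()
                .getRemoteDevice(GLASS_ADDRESS)
                .createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();
        glassStream = socket.getOutputStream();

        // Ask the phone's GPS for fixes every 5 s / 10 m; requires the
        // ACCESS_FINE_LOCATION and BLUETOOTH permissions in the manifest.
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, this);
    }

    @Override
    public void onLocationChanged(Location loc) {
        // Forward the fix in an invented, line-based format.
        String msg = loc.getLatitude() + "," + loc.getLongitude() + "\n";
        try {
            glassStream.write(msg.getBytes());
        } catch (IOException ignored) {
            // A real app would reconnect or report the failure.
        }
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}
```

Whatever SDK Google ends up shipping would presumably hide this plumbing behind a higher-level API, but the division of labour (the phone supplies the sensors and the data link, Glass supplies the display) would stay the same.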
What’s unclear is how information will be presented, and to what degree the user can interact with it, e.g. by scrolling through pages:
- For example, what happens when a QR code is read? Will a full-screen content view be shown? Hopefully not, as that would obstruct the wearer’s view. (A minimal decoding sketch follows this list.)
- An AR solution should point out where shops and the like are (if you’ve used e.g. Layar, you know what I mean); that is one of the use cases where eye movement would be a natural way to navigate, but I didn’t find any such example.
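On the QR code point, the decoding step itself is well understood; the open question is really the presentation. As a point of reference, this is roughly what decoding a captured camera frame looks like with the open-source ZXing library on the desktop (the file name is just a placeholder, and whether Glass uses ZXing at all is purely my guess):

```java
import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class QrDecodeSketch {
    public static void main(String[] args) throws Exception {
        // A camera frame saved to disk; the path is hypothetical.
        BufferedImage frame = ImageIO.read(new File("camera-frame.png"));

        // Wrap the pixels for ZXing and run the multi-format decoder.
        BinaryBitmap bitmap = new BinaryBitmap(
                new HybridBinarizer(new BufferedImageLuminanceSource(frame)));
        Result result = new MultiFormatReader().decode(bitmap);

        // The payload is often a URL, which is exactly where the
        // presentation question above starts.
        System.out.println(result.getBarcodeFormat() + ": " + result.getText());
    }
}
```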