The Impact of New Technology on Search
With the recent CES show in Las Vegas highlighting new technology that may – potentially – be part of our everyday lives in the next few years, what does the future of search look like in light of new gadgets?
You may have heard that 2012 was the ‘year of mobile’. As, in fact, has pretty much every year for the last decade. What looks increasingly likely is that, despite the bursts of popularity created around specific devices and new functionality, the uptake of mobile will reach a quiet, gradual tipping point beyond which it becomes impossible to ignore. For many businesses and industries this has already happened, with over half of site traffic coming from mobile search. Although the gap between the desktop and smartphone experience is narrowing, with newer displays and touchscreen refinements, there are still gains to be had by optimising design and search architecture for mobile devices.
From books, to laptops, to smartphones… where next?
Where this is really going to take off is when our need for information matches our lifestyle and behaviour. Years ago I would often keep a copy of Halliwell’s Film Guide near the television, as my film-buff household would turn any film into a Q&A session about what other films an actor had been in, who had directed, and when. This evolved into a laptop on standby which, although it was used for other work, frequently ended up on Google, Wikipedia or IMDB for the same purpose. Now the same gets done on a smartphone.
Note that this circumvents the previous ‘innovations’ of having contextual content appear on DVDs or Blu-rays. Although I would often wish for an internet TV that would highlight the contextual information I needed, the attempts to solve this on proprietary technologies were intrusive and disruptive to the viewing experience (you had to stop playback to watch the featurette). Instead, I ended up using existing devices to supplement one experience with another.
Speaking in 2007, futurist and early internet visionary William Gibson said this about the ‘early days’ of cyberspace when he was writing Neuromancer:
Back in the eighties, if you were lucky enough to have the equipment to visit cyberspace, you usually didn’t spend very long in there. These days though, cyberspace has everted and is now a part of people’s lives every day.
Mobile devices have evolved from portable communication tools to devices that tag us geographically and offer information based on that location. They also help create and view the ‘digital footprint’ of ourselves and others we are connected with, or may want to connect to. It’s already possible, for example, to interrupt one of my favourite authors, as I know from his tweets and uploaded blog status posts where and when he’s writing in the pub. (I wouldn’t though. One person’s flexible boundaries are another’s restraining order).
Innovative adaptations vs brand new concepts
Although there are new devices on the horizon, such as Google Glass, which offers the chance to overlay digital information on the world right in front of our eyes, this is going to be another adoption jump in the same way as many tablet choices have been. For every iPad success there is an iLiad or Archos also-ran.
In the interim, devices that piggy-back on an existing technology to add greater functionality are going to be winners, and we’re seeing this with apps that turn your mobile into a quite capable sat-nav. Convergence is still very attractive, as we only want to carry new gadgets around if they’re popular and fashionable (and we can afford the price tag that usually applies), or if they replace the functionality of several separate pieces of technology.
One advantage of the Project Glass approach is using the complexity of sight to layer more information (and functionality) for the user. Another is voice search: improvements in voice recognition technology will change the way we search for (and eventually consume) retrieved information. Gibson also noted that, compared to a previous visit to the UK ten years earlier, London had changed:
The London of [Ezra] Pound has gone. The faces like petals on black, wet boughs. Everyone’s talking into their hands.
With solitary conversations now explained by Bluetooth headsets rather than clinical insanity, the types of input into search may well be increasingly vocal, and for the future even thought-powered. CES demonstrated some control devices using brainwaves, which, if fine-tuned, could prove to have a wider attraction than current applications in medical mobility and communication.
What does this mean for SEO?
What this means for search marketers is sketchy: the big search engines have already talked at length about personalised search experiences, and about how integrating technology that moves and responds as the wearer/carrier moves about is going to become more of an influencing factor.
The holders of these technology patents are going to be able to acquire more information about their users and, as well as attributing more information to specific individuals on the web, I believe they are going to make that information more difficult for marketers to access. Already vast swathes of Google Analytics data are withheld for logged-in searches and via certain devices; imagine having to detect the search intent of mobile users who are moving in and out of a brand’s online influence via a heads-up display!
Although manufacturer shows like CES present a scattergun blast of gadgets, from forks that warn you that you are eating too fast to keyboards you can use in a recliner, I think the ones that serve our behavioural and informational needs will be the ones that last. Although wearable and self-powered computers are a way off, Google Glass may be the first step… and will open up a whole new area of what the wearable, wandering web is searching for.