The 80s called. They want their mobile phones back.

Size, it seems, matters.

At the end of last week Samsung announced the latest in their range of amusingly large mobile handsets: the 6.3-inch Galaxy Mega. 6.3 inches…so finally we’ve returned to the era in technology when phones were larger than Smurfs. Oh, the humanity…or smurfanity.

I’ve seen the Samsung Galaxy Note 2 out in the wild a few times and have always thought that its 5.5-inch screen looked pretty ridiculous. Now it has a bigger brother. Where will it end? Are we only a flexible glass display away from a 21st-century version of the Nintendo Power Glove?

The future…breathe it in.


Now of course I realise that a screen that size can be very useful for watching videos, typing long words, and converting into a makeshift bivouac should you find yourself lost in the forest as the darkness closes in. As a mobile device that you hold up to your face in public to make phone calls, though, it seems less than optimal.

There’s the fact that at the end of your two-year contract you’ll resemble one of the creatures from the last levels of Resident Evil. One arm will be enormous and muscled due to the repeated task of lifting an electronic surfboard up to your face for extended periods of time, while the other appendage will seem listless and withered in comparison.

You may find yourself worshipped by easily impressed people who see a halo-like ethereal light around your face thanks to your wide-girthed phone of choice. Of course, some would say this could be a plus.

Probably the most serious problem though is that you’ll just look very, very silly talking into a glowing cereal box while walking down the street.

It’s a strange thing that as we increase the size of devices that are meant to, theoretically, fit in our pockets, we’re actually shrinking the dimensions of devices that we do some proper work on. The iPad has started a trend towards tablets as many people’s computer of choice, and now comes in a smaller version which is selling rather well. Microsoft seems to view the future in terms of PC/tablet hybrids, its Surface division deciding that the 10-inch mark is enough for most people to be productive. Of course, if you’re older like me, and find your eyes a little less eagle-like than they used to be, then you’re going to be doing a lot of squinting in the technological years ahead.



It does feel curious that when the mobile phone first appeared it was a huge device that looked, well, stupid, and was a bugger to put in your pocket. Now, a few decades later, we’re in need of spacious trousers if we want to keep up with the times.

Of course I’m somewhat biased, as I have an iPhone 4S, which is fast becoming the Smart Car of mobile phones. Frankly it’s a wonder that I haven’t lost it under a speck of dust or had it slip through the stitching in my pocket. The thing is, it’s actually a very nice size. I can operate it with one hand – which is something I do a lot with a phone as I’m usually moving around while using it – and it reaches from my ear to my mouth, so phone calls are quite achievable. Granted, after using an HTC One X for a while recently I could see how YouTube was much better on the larger screen, but other than that I’m at a loss as to why we need something as big as the Samsung Mega.

Time will tell if the age of gargantuphones will be a long one, or whether we’ll be talking to our watches and glasses in the next few years. Either way I think I’ll stick with something small, as I’m already struggling enough to fit into my trousers these days.

What do you think? Am I missing the point? Is a huge phone actually really useful? I’m genuinely intrigued and open to persuasion, as I just don’t get this whole trend. Let me know your experiences and how these devices can be a good or bad thing in the comments below. 

The Future of Computer Interfaces: Voice Control

The last couple of years have seen a huge emphasis put on voice control interfaces. From Apple’s Siri to Google Now and the upcoming Google Glass project, it seems that the future is definitely going to be a louder one. There’s no doubt that as these technologies mature they will become a central part of our interaction with devices, but they still have a fair way to go in terms of accuracy. Siri can be a frustrating system to use, especially if you have a heavy accent or even a cold. The anguish it can induce is wonderfully highlighted in the NSFW YouTube video ‘Apple Scotland – iPhone commercial for Siri’, which features a Scotsman trying in vain to ask Siri for eating advice. The main challenge for the voice-deciphering code is that it has to contend with many different factors while interpreting a user’s input. One company that has been working on overcoming these challenges on the desktop is Nuance, whose Dragon Dictate software is one of the most advanced in the industry. I spoke with them recently to discover just what it takes to write a voice interface that we can actually use.

‘Speech recognition is an extraordinarily hard computational problem’ explains Nuance’s Neil Grant. ‘Effectively you’ve got an astronomical search space. An example would be if you had a seventeen-word phrase – which is an average-length sentence – within a fifty-thousand-word vocabulary. It’s the equivalent of finding the correct phrase out of seven point six times ten to the seventy-nine possibilities – roughly the number of atoms in the observable universe. Now to put that into context, when Google does a search to find a webpage for you, it’s searching somewhere around one times ten to the twelve web pages, so significantly less.
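Neil’s figure is easy to check. A quick back-of-the-envelope sketch in Python, using only the numbers quoted above (a seventeen-word phrase from a fifty-thousand-word vocabulary):

```python
# Number of possible seventeen-word phrases drawn from a
# fifty-thousand-word vocabulary: 50,000 ** 17.
vocabulary_size = 50_000
phrase_length = 17

possibilities = vocabulary_size ** phrase_length
print(f"{possibilities:.1e}")  # → 7.6e+79
```

Sure enough, that comes out at roughly 7.6 × 10⁷⁹ – the figure Neil quotes.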

‘If you’re typing something on a keyboard it’s very simple, it’s binary – you either hit the keystroke or you don’t. With speech there’s far more variability in terms of accents, tonality, environmental conditions, background noise, and microphone quality. One of the ways we tighten that with the desktop speech recognition is that a user has a profile attached to them so the computer understands the nuances of the way they speak. The software can apply this data to achieve higher levels of accuracy, and the more you use it and make corrections, the more it learns and then applies those learnings to your profile.’


This dedicated usage is a significant factor that gives Nuance software its famed levels of accuracy. It also highlights one of the challenges ahead for the mobile software that many of us currently use.

‘Something like Siri is effectively speaker-independent speech recognition’ says Neil. ‘Now that means it’s not training a profile for you, certainly not in any great depth. You might use it on your phone, then another family member might use it, so it’s dealing with potentially multiple speakers from the same device. It’s a much harder process and means it can’t set itself up in advance for a particular accent.’

Advances in noise-cancelling microphones and the continued refinement of voice control software are driving rapid improvements in all areas of the technology. Nuance itself now offers iPad and iPhone versions of its software, and the continued updates to Siri and Google Voice Search will no doubt push the software even further in the years ahead. Manufacturers are also beginning to incorporate the technology into newer laptops in response to the ever-encroaching influence of tablets.

‘One of the key specifications set by Intel on the new ultrabooks is embedded speech recognition’ says Neil. ‘So this is something that is absolutely coming through and what we will see is speech on these devices becoming more and more ubiquitous.’

One of the eye-catching elements of Siri that Apple aggressively markets is its system-wide integration of commands. Rather than being a stand-alone app, Siri is able to control calendar entries, send emails and tweets, update Facebook, and play specific music for you, all from the same interface. For voice control to really make an impact on everyday computers, it needs to offer a similar level of depth.

‘We can get very, very deep,’ Neil continues, ‘not only dictation capabilities but real command and control of applications like MS Office. For example, a chap called Stuart Mangan, a rugby player, was involved in a tackle and broke his neck, leaving him paralysed from the neck down. We effectively voice-enabled his entire PC, to give him not only his email and documents but, through Nokia PC Suite, he was able to send text messages and make phone calls. He came back to us saying that we’d given him his independence and privacy back.’

The concept of voice control has been a staple of science fiction for decades, and the representation of conversational computers such as HAL in 2001: A Space Odyssey, or even Holly from Red Dwarf, has been a constant reminder of the convenience and ease with which the interface could work – so long as the computer in question will acquiesce to opening the pod bay doors when you ask. There’s no doubt that this kind of interface is now more of a possibility, but as the way we interact with our technology changes, what impact will this have on the systems of the future?

Jetsons Voice

‘A mouse and a keyboard are not a natural way of interfacing with something’ states Neil. ‘They’re a solution to a problem, and they’ve been a very successful solution, but the keyboard layout was designed to slow us down. Stephen Fry came out with a very good quote a couple of years ago in which he stated it took less time to get your private pilot’s licence than it did to learn to type at sixty words per minute. So we’ve got these interfaces we’re stuck with at the moment – the keyboard and the mouse – which are fine for certain things, but for others there are certainly improvements that can be made. You’re starting to see prototypes coming through, the Google Glass project for one, looking at ultra-mobility – wearable computing – and there is a necessity to change the interface. As your devices become more and more mobile you’re not going to be able to carry a keyboard around. Obviously voice is the natural step for that.’