Is it Google’s plan to index the world’s information, or to curate it?

I just heard (via @montymunford) that Google will start ranking mobile websites lower in search results when they use a “download our app” popup on the page. Read about it here.

One of Google’s justifications is that pop-up banners like these are ‘disruptive’ to the user experience.

Is it Google’s job to play User Experience police to the whole internet?

It’s one thing to deprioritise sites with poor or duplicate content. But to de-rank sites based on the interaction choices their developers have made? Isn’t that taking it a bit too far?

Some argue that it’s a good thing… that it helps us find better content. Maybe that’s true… but where would it end? What if Google started de-ranking sites because the navigation was unclear? Or because there was no ‘about’ page?

It’s a slippery slope.

Google already controls access to a huge proportion of the internet. They are the gatekeepers… the ones who decide what we get to see, and what we don’t. To me, consolidating all of this power in one gate puts the freedom and openness of the internet at risk.

What Stalin learned about incentives (and how most companies are still doing it wrong)

In the 1930s, in the early days of Stalin’s rule over the Soviet Union, the communist leadership knew it had a problem.

The process of Stalinist industrialisation forced much of the Soviet population, most of whom had lived in the countryside, into newly constructed settlements built around factories and industry. People were dispossessed of their land and belongings (which then became property of the state) and were put to work in the new industries for the glory of the Soviet state and economy.

Although the reallocation of resources to industry and the introduction of new tools and processes to factories created many efficiency gains, Stalin found that economic growth beyond what was generated by this forced reallocation of labour was essentially non-existent.

Why?

Stalin had uncovered the critical flaw of the communist system: incentives. His entire population had been dispossessed of their land and property and put to work for a meagre stipend and a subsistence diet, with the entire profit going to the state. Where was the motivation to work hard?

Stalin had an incentive problem.

As early as 1931, Stalin realised that the dream of a society of citizens intrinsically motivated to work purely for the glory of the Party would never become a reality, and he gave up on the idea of creating “socialist men and women” who would work hard without incentives. So Stalin introduced two kinds of incentives:

1. Fear – of being imprisoned, tortured, sent to a gulag in Siberia or shot; and
2. Monetary incentives.

FEAR
Keeping people working was enforced by the absenteeism law, which defined absenteeism as any twenty minutes of unauthorised absence or idling on the job. Even giving the perception of idling was sufficient. But even Stalin appreciated that fear only gets someone to show up and do the bare minimum. You can’t scare someone into being extra productive, much less innovative.

And it turns out fear didn’t even do that job so well. 36 million people – about one third of the adult population of the Soviet Union – were found guilty of absenteeism at least once between 1940 and 1955. Of these, 15 million were sent to prison and 250,000 were shot.

It seems that fear will only take you so far.

MONETARY INCENTIVES
Stalin also experimented with various monetary incentives. For example, he introduced monthly bonus payments to individuals and companies who exceeded their production output target, and penalties for coming in under. (Sound familiar?) It seemed like the perfect way to motivate workers to produce more.

So what happened? Stalin saw that, while output targets were exceeded in some cases, it happened relatively seldom – and at the same time levels of innovation dropped. Why?

One problem was that the monthly targets were always based on the previous month’s achievement – so although people may have been incentivised to exceed their target, they certainly weren’t interested in exceeding it too much, or their next month would only be tougher.
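
To make that ratchet concrete, here is a little toy simulation I sketched out. The numbers are entirely made up – this is an illustration of the ratchet effect, not a reconstruction of the actual Soviet bonus scheme. A worker decides how far to beat each month’s quota; beating it earns a bonus, missing it costs a penalty, and the quota ratchets up to whatever was actually produced. With these toy numbers, the cautious worker who barely beats the quota tends to come out ahead of the over-achiever whose quota quickly ratchets up to the limit of what they can produce.

    import random

    # A toy model with made-up numbers (my own sketch, not the actual Soviet scheme):
    # a worker decides how far to beat each month's quota. Beating it earns a bonus,
    # missing it costs a penalty, and next month's quota ratchets up to whatever was
    # actually produced - so big overshoots make every later month harder.

    def yearly_payoff(overshoot, months=12, quota=100.0,
                      bonus_rate=1.0, penalty_rate=2.0, seed=42):
        """Net bonuses minus penalties over a year for a worker who always
        tries to beat the current quota by `overshoot` units."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(months):
            capacity = rng.uniform(110, 130)          # what the worker can realistically manage
            output = min(quota + overshoot, capacity)
            total += bonus_rate * max(output - quota, 0.0)
            total -= penalty_rate * max(quota - output, 0.0)
            quota = max(quota, output)                # the ratchet: the target only ever rises
        return total

    for overshoot in (2, 10, 25):
        print(f"beat the quota by {overshoot:>2} units/month -> net payoff {yearly_payoff(overshoot):8.1f}")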

Innovation requires time, effort and resources… resources that would necessarily have to be taken away from producing output for the monthly target. As a result, little extra effort was invested in generating creative new ideas. Furthermore, the monthly targets kept people focussed very much on the present, whereas innovation necessarily requires investing today in things that will not pay off until tomorrow or next year.

The point is – we’ve known for years that the stick (fear) doesn’t do a great job of incentivising people. The conventional wisdom is to use the carrot. The problem, as this example shows, is that the carrot is broken too: money is a poor motivator – a fact which countless studies have also shown. (Read the book ‘Drive’ by Daniel Pink.)

So why do we keep getting it wrong?

Companies that motivate through fear (fear of a bad performance evaluation, fear of not getting that promotion, fear of losing your job) or through poorly constructed monetary incentives will at best achieve only a short-term increase in production. These motivators are not, as Stalin has shown us, sustainable over the long term.

It’s time for a new incentive structure. How about: everyone believes in what you are trying to achieve, and is motivated intrinsically by the challenge, the vision, and the passion to win? (Which is, incidentally, exactly the first thing Lenin, Stalin, Kim Jong-il and countless others took away from their people.)

Device fatigue and the next connected device form factor

A pile of devices
Photo: Wikimedia

I suffer from device fatigue.

Not just the kind where I cannot deal with the sheer number of connected devices, gadgets and gizmos being released every day – but the kind where I am overwhelmed with the number of devices that I actually already own.

I have a MacBook Air, a Sony Vaio running Windows 8, a Surface Pro tablet, a Nokia Lumia 920, a first-generation iPad and a Kindle, and in my living room I also have an Xbox 360.

And that’s not counting devices that I have temporarily for testing or benchmarking… the iPads, the Galaxies, the Kindle Fires…

Now, I like devices – I work for a device company and my job is building device software, so I’m trying to build them into my life – but I just cannot deal with having so many different devices. The basket under the bookshelf where I put old devices is overflowing with dead, partially working or even fully functional devices that I just can’t find a good reason to carry anymore.

They all have their specific use cases and particular strong points: the MacBook’s power and good-quality physical keyboard; the tablet’s big screen and relative portability; the smartphone’s ultra-portability and LTE connection… But the real problem is that there is maybe 80% crossover in the use cases and usage contexts of the different form factors, and this is frustrating and tiring.

I want one device that does everything – but I don’t want to trade away the specific benefits of particular form factors, like the portability of my Lumia 920 and its amazing camera, or the stylus/drawing input of the tablet, or the physical keyboard and relative horsepower of my MacBook.

One of the greatest challenges now facing connected device manufacturers, I think, is the next form factor: the form factor that truly converges the fragmented connected device space.

For the last five years or so, since tablets started their meteoric blast into consumers’ living rooms, the focus has been on device divergence – building devices of every conceivable form factor, with increasing household incomes (in first-world markets) driving a huge increase in multi-device ownership.

The next 5 years will be about device convergence. The search for the next form factor that unites your devices into a single, adaptable and flexible touchpoint.

Quick low-light comparison: Nokia Lumia 920 vs the iPad mini

While playing around with an iPad mini I tried out the camera, and was not too impressed with the result.

The photos below were taken at the same time of day in a relatively dark room with no flash and no additional lighting. There was no post-processing of the photos at all – this is exactly how they came out of the respective devices.

The road to Robocop: how connected devices and sensors are the bionic enhancements that are evolving us

Robocop Statue

Detroit is about to erect a gigantic statue of its three-time hero Robocop – the part-man, part-machine crime-fighting cyborg. As fantastic as the story is, every day there is more and more science in the fiction.

Peter Weller’s character had to die before being packed into the giant metal suit and coming back to life as the bionically enhanced supercop; but in the real world, it turns out we are all becoming more bionically enhanced every day.

Take the well-known artificial cardiac pacemaker, a device implanted into the body that uses electrical impulses to regulate the beating of the human heart.

Part man, part machine?

The first experiments in artificially regulating the heartbeat were conducted in 1899, and the first working prototype was assembled by some Aussies in 1926. Since then this man-made bionic improvement has helped save the lives of thousands of people.

Luke Skywalker bionic hand

Or take the humble hearing aid, which has improved or returned hearing to millions of people since the first one was invented in the 17th century. Then there are bionic limbs, helping people like Oscar Pistorius do what they do. Medical scientists are actually getting closer and closer to having real Luke Skywalker-type bionic limbs.

Although these bionic technologies have been around for a long time, they have focussed primarily on restoring or correcting impairments (hearing loss, amputations and so on). Now, the internet and the variety of connected devices we carry with us every day are opening up a new world of bionic enhancements, accessible to everyone.

We already use our smartphones to replace our memories. Who knows anyone’s telephone number off the top of their head anymore? To-do applications like Wunderlist and note-taking applications like Evernote are becoming our long-term memory, and turn-by-turn navigation has not only replaced the paper street directory but most of our sense of spatial recollection as well (unless you’re a London cabbie).

With our smartphones constantly on the internet, the answer to any question is just a few taps away. Who was the last king of the Tudor dynasty? When did the Boer War end? Google or Wikipedia is with you – on the couch, on the bus, or in an exam.

Is this a form of bionic memory enhancement?

While our smartphones, tablets and PCs, and our always-on connection to the cloud, replace our memories and increasingly become our primary interface to the world, new forms of wearable technology will enhance us and our bodies even further.

Sports sensors like the Fitbit or the Nike+ SportWatch track our movements and give us feedback on performance. They can even be programmed to tell you when you haven’t walked far enough or drunk enough water today.

Star Trek Tricorder

A $10 million X Prize is fuelling a race to make the famous ‘Tricorder’ from Star Trek – a hand-held scanner that can detect any known human illness – a reality. The leading entries focus on using complex sensors to generate gigabytes of data about the body in a single scan, data that can be used to diagnose illnesses like cancer or to detect a heart attack before it strikes.

Is using sensor data to monitor your body a form of bionic enhancement?

The watch-phone, or ‘smart watch’, like Samsung’s planned device or Apple’s rumoured ‘iWatch’, brings our smartphone – and, by extension, the whole power of the internet – closer to us (in us?) than ever before. Now we can communicate with each other wirelessly, just by telling our wristwatch to call somebody, without having to get out our phone or hold it to our ear. It’s almost as good as telepathy.


Augmented reality headsets like Google Glass give us the power to retrieve information from the web at a glance, and layer it over our field of vision in a constant heads-up display. Smart contextual algorithms will decide what to show us at any given moment. We can be notified of upcoming appointments or changes in the weather, or the built-in camera could even identify the person we are talking to and overlay important information about them in our field of view.

Building on visual heads-up displays like Glass are technologies like Augmented Reality Audio (ARA). Using binaural headphones, ARA headsets can blend additional audio sources with what you’re hearing in the real world, based on your location, the time of day or even how you’re holding or moving your head. These headsets also have built-in microphones that can not only provide noise cancellation, but potentially give the wearer vampire-like super-hearing, or even let you select a single audio source from the environment (a barking dog, for example) and simply blend it out.

An ARA headset can not only help improve our abilities, but can actually start to change our perception of the environment around us. This is not only a true augmentation of reality, but a significant enhancement of ourselves.

We’re entering an age now where our bodies and our perception of reality will become continually enhanced by the devices and sensors that we wear or that we are connected to. I hope that we are able to retain our humanity and our humility as we continue to defy Darwin and evolve ourselves into the future.

Windows 8.1 will evolve… and respond to consumer feedback

The Financial Times reported today:

“Microsoft is preparing to reverse course over key elements of its Windows 8 operating system, marking one of the most prominent admissions of failure for a new mass-market consumer product since Coca-Cola’s New Coke fiasco nearly 30 years ago.”

One of the most prominent admissions of failure since New Coke? What gratuitous hyperbole.

Of course key elements will be changed in the upcoming release. That’s what new releases are for, in any software development effort: to evolve, to respond to consumer and market feedback, and to innovate.

The claim comes from a Financial Times interview with Tami Reller, head of marketing and finance for the Windows business. The only actual quote from the interview they included was this:

“The learning curve is definitely real.”

Apparently this statement means Microsoft will be making a U-turn on its strategy of making touch a key input paradigm for both tablet and laptop/desktop form factors, and of bringing these form factors together into a consistent user experience.

And if Microsoft moves either to simplify the user experience to lessen the learning curve, or to provide support that makes learning easier – does this really represent a massive “admission of failure”?

Even if Microsoft replaced the ‘Start’ button, would that really represent such a massive admission of failure? Really?

The tech media seems to be enjoying dumping on Redmond lately, even going so far as to blame Microsoft alone for the recent slump in PC sales, although that slump also coincides with a broader economic downturn.

We knew there were lots of problem areas with Windows 8, particularly the awkward relationship between the Metro-style interface and the old desktop. If Windows 8.1 (Blue) addresses this challenge and makes the relationship somehow clearer or easier to understand, it can only be a positive evolution of the current strategy.

Although the current leaked developer preview of Windows Blue doesn’t reveal much other than a few customisation options, I think it’s far too early to herald the downfall of Windows.

A FlashBuild – the Flash Mob for product development

Customer involvement and user feedback are at the core of building great software experiences that people want and love.

It’s a bit old now, but I just stumbled on this impressive way to build an app with customer feedback at the core. The Nordstrom Innovation Lab built an iPad app in just one week to help people pick sunglasses. To better involve customers and user feedback in the development process, they built the app in a sunglasses store!

Check out the vid:

This is how you involve customer feedback in your development cycle!

Whose line is it anyway? (Don’t block your colleagues)

In improvisation theatre (yes, I was a theatre nerd in high school) there is a concept called blocking. Essentially, whenever you interact with someone on stage in improv, your contribution should allow the next person to pick up the thread and run with it to continue the scene. In other words, you need to provide a hook for the next person to continue the story.

If you don’t provide a thread for the next actor to continue the scene, it’s called ‘blocking’, as you’ve essentially blocked the scene from proceeding.

A quick example:

Actor A: “Do you want to go for a walk with me?”
Actor B: “No, I don’t feel well.”

B has blocked A’s offer to go for a walk, without providing an alternative option for the storyline, leaving A with the responsibility for creating a new storyline for the scene.

In development teams I see people blocking each other all the time.

Another example:

Developer A: “Can we add an extra parameter to this function?”
Developer B: “No, it doesn’t work like that.”

Developer B has blocked Developer A from solving the problem, without giving A another thread to follow. In other words, B has shunted the topic back to A, leaving A with the responsibility to try to find another way to attack the problem, but with no extra cues from B on what might work better next time.

Saying ‘no’ is not the problem here. But when you say ‘no’, you should always try to give a thread to allow the scene to continue.

Let’s try the example again:

Developer A: “Can we add an extra parameter to this function?”
Developer B: “No, it doesn’t work like that, but if you look at the sample requests you might get an idea of what’s working already.”

Now Developer A has a thread; she has an idea where to go to continue the search for the solution.

Not blocking someone can be as simple as giving a cue to let the scene continue. So when interacting with people in your team, or with other teams, remember: everybody wants the scene to continue, so avoid blocking.

The show must go on!

The users lose

When the giants of the tech world play the game of thrones, it’s the users who pay the blood price.

About two weeks ago, Twitter removed the inline preview of Instagram photos, meaning Twitter users can no longer see the Instagram photos their friends have posted directly in the Twitter stream: they now need to click the Instagram link and open the Instagram site in another browser tab to view the photo.

Why? Due to hostilities between Twitter and the now Facebook-owned Instagram that can most likely be traced back to bad vibes stemming from some sneaky dealings around Instagram’s acquisition.

This is just the latest example of the user experience suffering as successful and well-loved products start to feel investor pressure to focus on monetisation and revenue. LinkedIn users felt a similar blow when tweets stopped appearing on people’s profiles as Twitter tightened up access to its API back in June.

The very open philosophy of APIs and data exchange that helped to build companies like Twitter is slowly falling by the wayside in the search for sustainable monetisation strategies for “Web 2.0” products.

Where does this leave users?

Application experiences are increasingly taking place behind walled gardens – meaning that all, or the majority, of users’ interaction with the service happens within the proprietary application interfaces (twitter.com and the official Twitter apps, in Twitter’s case). This will lead to less choice and fewer options for users in terms of where and how they consume content and interact with the service.

Moreover, the products and services created by third-party developers leveraging APIs such as Twitter’s have heavily driven innovation in the core products and the surrounding ecosystems.

When the first web mashup was born seven and a half years ago – Paul Rademacher reverse-engineered Google Maps to put craigslist rentals on a map – it set a precedent that influenced, maybe more than anything else, how the web would develop over the following years. The social web as we know it today, led heavily by product companies such as Twitter, Facebook, Tumblr, Foursquare, WordPress and many others, has been built on a philosophy of openness, hacking, and mashing up diverse data assets into new and compelling experiences.

As more and more of the power on the web drifts toward closed, walled-up product ecosystems like Facebook, Google+ and others, we need to call on these companies to remember the philosophy of openness that built the web and allowed them to succeed. Data should be becoming more, not less, available and shareable, and the pillars of the modern social web are now in a position to set the precedent for the next seven years of innovation on the social web.

The stand-in Lumia 920

Can’t wait for your Nokia Lumia 920? Although shipments have started, not everyone has been able to get their hands on one yet.

But here’s an inventive solution constructed by my friend Geoff to “get a feel for the Lumia”.

Almost as good as the real thing!