A Short Tale Of Retail

I have been following technology for years, having spent many of those working in retail environments with high-end equipment. Customers ranged from film production companies to members of the public with no knowledge of the current state of technology; the equipment ranged from cinema cameras to smartphones. The most recent company I have worked with is Google, hence this article. I am also a professional photographer, so I can observe this subject from both sides.

From the customer service side, every new encounter is a new puzzle to solve. How do you explain this technology / theory to someone with no knowledge of it or its sub technologies? If they are at a professional level, can you switch to using as much technical info as necessary? How many different ways can you explain the same thing? These translations of the features and benefits of a product require a genuine understanding of the technologies involved. There is no waffling your way through it. (That doesn’t mean other people working in my field don’t try to but that’s another story).

Oh, and I’ll come back to the translation model later.

It has been frustrating to witness the endless cycle, over the years, of companies trying to impress consumers with buzzwords about their new gadget. They mention all of the options you are going to get and all of the customisability. For the more technical among us, this style of product design and communication tends to work. We want more power, more options, more freedom. More options to play with in the settings, a thicker manual to read… Okay, maybe scrap that last one. The problem is that the vast majority of consumers have no interest at all in such things. They just want it to work.

Smartphone

Smartphones are one such sub-industry. They may have smaller sensors and fewer options (and, generally, a fixed lens), but their growing marketplace makes the shrinking market for DSLRs and mirrorless digital cameras look pathetic in comparison. In 2016, global shipments of digital cameras were in the region of 25 million units (source). In contrast, global smartphone shipments approached 1.5 billion units (source). With this in mind it is no surprise that in 2017, over 85% of all images were taken and shared with smartphones (source). So much for a ‘sub-industry’!

This massive consumer market (60x larger, in fact: 1,500 million versus 25 million units) naturally attracts more investment and fiercer competition. Smartphone manufacturers may feel pressed to add more and more options. But is that what the consumer wants? Most consumers have no idea what the different settings actually do. In fact, the settings rather scare them. I think Google may have recognised this.

Google Wants More For Less

So it can be argued that this ‘legacy’ way of developing (and marketing) products is based on an incorrect assumption. It isn’t that consumers don’t want more out of their devices; they just don’t want to put more work in to get it. In my experience, people are inspired to capture a given moment because of its emotional impact. Google seems to get this: it has trained a neural network on images of people precisely so its camera can recognise the humans in a photograph. A posh camera, or one with fifteen pages of options, does not inspire moments.

Most consumers don’t know or care how to use cameras, and most of them don’t want to read a book to use their device. They just want a camera to take photos of their newborn / family / holiday. They would purchase a £1000+ interchangeable-lens full-frame camera and leave it in ‘Auto’. Why? Because they think it will get the best out of the camera without them having to put the work in. That may be fine for standard-looking photos of a city scene in the middle of the day, but later in the evening, those same settings will struggle to capture the moment when your child is doing a Tasmanian Devil impression. The user won’t know why the photo is blurry; they’ll just be unhappy with their device. Not the best experience for a consumer to have. Sub-optimal experiences increase usage resistance.
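To make that blur concrete, here is a back-of-the-envelope sketch of the arithmetic an auto mode is up against. This is my own illustration, not any manufacturer’s actual auto-exposure logic (real cameras also juggle ISO and aperture), but the principle holds: at a fixed aperture and ISO, the shutter must stay open far longer in a dim room than in daylight, and anything moving during that time smears.

```python
# Back-of-the-envelope auto-exposure arithmetic (illustrative only).
# The exposure-value relation at ISO 100 is EV = log2(N^2 / t),
# so shutter time t = N^2 / 2^EV for f-number N.

def shutter_time(f_number: float, ev: float) -> float:
    """Shutter time in seconds for a given f-number and scene EV (ISO 100)."""
    return f_number ** 2 / 2 ** ev

scenes = {
    "sunny street (EV 15)": 15,   # rule-of-thumb daylight brightness
    "dim room (EV 5)": 5,         # evening indoors
}

for label, ev in scenes.items():
    t = shutter_time(f_number=4.0, ev=ev)
    desc = f"1/{round(1 / t)} s" if t < 1 else f"{t:.1f} s"
    print(f"{label}: ~{desc}")
# sunny street (EV 15): ~1/2048 s  -> motion frozen
# dim room (EV 5):      ~1/2 s     -> handheld and subject blur guaranteed
```

At 1/2048 s a child mid-spin is frozen; at half a second, the same ‘Auto’ setting hands you a smear, and the user never learns why.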

Smarterphone

We have established that the smartphone market is seeing the most sales and the fiercest fight to produce the most technologically advanced products. In this race (as in the digital camera space) you have hardware and software. The user interacts with the software, and the software tells the hardware what to do. The hardware then collects data and sends it back to the software, which processes it and presents the result to the user. In larger, dedicated cameras the emphasis has traditionally been on hardware: ‘better’ sensors, faster shutters, faster burst modes, battery life and so on. This is because that market is populated with tech-heads who constantly go on about that stuff and generally have an underlying knowledge of the subject. The software takes a back seat.
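As a rough sketch of that round trip (every name and type here is mine, purely illustrative, not a real camera API):

```python
# Minimal sketch of the loop described above: the user drives the
# software, the software drives the hardware, and the processed
# result flows back up to the user.

from dataclasses import dataclass

@dataclass
class SensorData:
    raw: bytes  # what the hardware hands back

class Hardware:
    def capture(self, settings: dict) -> SensorData:
        # Stand-in for the sensor exposing and reading out a frame.
        return SensorData(raw=b"\x00" * 1024)

class Software:
    def __init__(self, hw: Hardware) -> None:
        self.hw = hw

    def on_shutter_press(self, settings: dict) -> str:
        data = self.hw.capture(settings)  # software tells hardware what to do
        return self._process(data)        # hardware's data comes back for processing

    def _process(self, data: SensorData) -> str:
        return f"finished image ({len(data.raw)} bytes of raw data processed)"

# The user's only touchpoint is the software layer:
print(Software(Hardware()).on_shutter_press({"iso": 100, "shutter": "auto"}))
```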

Lightbulb Moment

But like I keep saying, the mass-market consumer just wants it to be easy to use. Enter the humble lightbulb. Everyone knows how to use one: plug it in, turn it on. Done. I bet most people have no idea who invented it or how many failed attempts came before it. They probably don’t really know how lightbulbs work either. But that isn’t why people buy them. They buy lightbulbs because lightbulbs produce light, and consumers use the light to do other things. You could call the bulb itself a conduit technology, in the sense that it merely facilitates another process, and its success came because the usage resistance dropped so low it became ubiquitous. It gives you the result you want with minimum effort. Plug it in, turn it on. Done.

In my opinion, the same can be argued for smartphones. Phones began by providing the means to communicate, then they expanded to give you the power to consume content; the next step is to give you the power to create. But with all of these advancements in processing power, display quality, ergonomics and so on, the weakest link is no longer the device but its user.

I suppose it was only a matter of time until smartphones added that to their repertoire.

Google Pixel Visual Core

Let’s keep it simple. The above-mentioned processor in the Pixel 2 takes over the role the user traditionally had: choosing the correct settings to get the best capture of a moment (within the capabilities of the hardware). It is a custom-built machine-learning processor that excels at crunching lots of numbers quickly and efficiently. No, it probably wouldn’t make the best general-purpose processor, but the phone has the Snapdragon 835 for that. When it comes to processing image data, though, the Visual Core is a whole new beast. When your average consumer puts the Pixel 2 into portrait mode, takes a photo and goes ‘oooh, that’s nice’, this is (pretty much) what happens; a toy code sketch of the idea follows the list.

Press Shutter Button:

  • Capture a burst of y images
  • Process y × 12 million pixels of RAW data from the sensor
  • Align the pixels from the different shots in the burst, discarding frames that aren’t good enough
  • Average the data in a larger luma/chroma volume
  • For each pixel, choose the value from that range best suited to the final image
  • Receive phase-detection data from the sensor
  • Use it to produce a depth map
  • Refer to machine-learning analysis (a model trained on around a million images)
  • Recognise continuity in contrast, lines, shapes and faces
  • Selectively blur the areas of the photo that should be out of focus
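Here is that toy sketch: a heavily simplified illustration of the general technique (align, merge, estimate depth, blur) in Python/NumPy. To be clear, this is my own toy, not Google’s actual HDR+ or portrait-mode code; the real pipeline aligns at tile level, merges in a luma/chroma space, and uses a trained segmentation network rather than my fake depth map.

```python
# Toy version of a burst pipeline: merge a noisy burst, then blur by depth.
import numpy as np

def merge_burst(frames: np.ndarray) -> np.ndarray:
    """Average an aligned burst; noise falls by roughly sqrt(N) frames."""
    return frames.mean(axis=0)

def fake_depth_map(h: int, w: int) -> np.ndarray:
    """Stand-in for the phase-detection + ML depth estimate:
    pretend the subject sits at the centre of the frame."""
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    return dist / dist.max()  # 0 = subject, 1 = far background

def portrait_blur(image: np.ndarray, depth: np.ndarray,
                  max_radius: int = 4) -> np.ndarray:
    """Blur each pixel more the 'deeper' it is (crude box blur)."""
    h, w = image.shape
    out = image.copy()
    for y in range(h):
        for x in range(w):
            r = int(depth[y, x] * max_radius)
            if r:
                patch = image[max(0, y - r):y + r + 1,
                              max(0, x - r):x + r + 1]
                out[y, x] = patch.mean()
    return out

# Simulate a burst of 8 noisy captures of a tiny 12x12 'sensor'.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (12, 12))
burst = scene + rng.normal(0, 0.2, (8, 12, 12))

merged = merge_burst(burst)  # cleaner than any single frame
result = portrait_blur(merged, fake_depth_map(12, 12))
print("noise before:", np.abs(burst[0] - scene).mean().round(3),
      "after merge:", np.abs(merged - scene).mean().round(3))
```

Even this toy shows the core payoff: averaging an aligned burst slashes noise, which is exactly how a small smartphone sensor punches above its weight.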

How long would it take you to manually edit an image like that? The Pixel Visual Core does it in four seconds. Also, processing is deferred until you have finished taking photos. How considerate.

Who Cares?

Does the consumer need to know any of this? Nope, and they don’t have to, because it all happens automatically. They can always Google it if they’re curious. The HDR+ system in the Pixel 2 produces images that exceed what the hardware could normally deliver on its own. Consider this: we capture, edit, share and view images on smartphones more than on anything else. The average smartphone has a Full HD (roughly 2MP) screen; a 4K UHD TV has a roughly 8MP screen. The Pixel 2 produces 12MP images.
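The arithmetic behind those numbers, assuming the standard resolutions (the Pixel 2’s 4032 × 3024 is its usual 12MP output size):

```python
# Pixel counts behind the screen/photo comparison above.
resolutions = {
    "Full HD phone screen": (1920, 1080),
    "4K UHD TV":            (3840, 2160),
    "Pixel 2 photo":        (4032, 3024),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
# Full HD phone screen: 2.1 MP
# 4K UHD TV:            8.3 MP
# Pixel 2 photo:        12.2 MP
```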

Most people don’t pixel-peep. Phone screens are a few inches across, and viewers spend only a few seconds looking at any given image. Whether they like a photo comes down to it being well exposed and of a subject that stirs emotion. These things don’t need to be technically ‘correct’; they just need to be interesting to look at. The person viewing the photo rarely cares about the technical side either. That is exactly what the Pixel 2 delivers: minimal usage resistance! Maybe the only thing left to add would be artificial intelligence that can also choose when to take the picture.

Google’s AI Is Just Waking Up

Artificial Intelligence applications are growing exponentially. They leverage hardware in ways that augment its value beyond what the hardware can do on its own. If you were into buzzwords, you could say it is synergistic. Incidentally, the assault smartphones are mounting on digital camera sales will hopefully pressure the digital camera market to innovate. Furthermore, camera manufacturers cannot efficiently expand their market share without reducing usage resistance, and Artificial Intelligence integration is one way to do that.

In my next article I will present a list of ways AI / machine learning can continue to add value to imaging hardware, improving the consumer experience and, I hope, sparking a renaissance in the digital camera market as manufacturers integrate AI into their imaging pipelines. Finally, I would have provided more examples of Pixel 2 photography, but I was time-constrained by other projects I was working on concurrently. Google are welcome to send me a Pixel 2 to continue using, however 🙂