Our Attention Isn’t What It Used to Be

Brian Resnick, writing for Vox.com, explored the concept of society’s collective attention span—and why it’s continuing to fall. Off the back of new research published last month in Nature Communications, Resnick noted how the researchers used Twitter as a measure of our collective attention:

They measured collective attention on Twitter by looking at how long individual hashtags stayed in the list of 50 most popular hashtags. In 2013, they remained, on average, for 17.5 hours. In 2016, that was reduced to 11.9 hours.
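That dwell-time metric is easy to picture in code. Here’s a toy sketch (my own illustration, not the researchers’ actual method) that computes the average number of consecutive hours hashtags survive in a top-50 list, using invented data:

```python
from collections import defaultdict

def average_dwell_hours(snapshots):
    """Given hourly snapshots of the top-50 hashtag list, return the
    mean number of consecutive hours each hashtag stays in the list."""
    runs = []                      # completed runs, in hours
    active = defaultdict(int)      # hashtag -> length of its current run
    for top50 in snapshots:
        current = set(top50)
        # close out runs for hashtags that dropped off the list
        for tag in list(active):
            if tag not in current:
                runs.append(active.pop(tag))
        for tag in current:
            active[tag] += 1
    runs.extend(active.values())   # runs still open at the end
    return sum(runs) / len(runs)

# Invented data: #a trends for 3 hours, #b for 1 hour.
snapshots = [{"#a"}, {"#a", "#b"}, {"#a"}]
print(average_dwell_hours(snapshots))  # 2.0
```

Run over real hourly snapshots, a falling average is exactly the shrinking collective attention the paper describes: 17.5 hours in 2013, 11.9 in 2016.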

This fall in our collective attention is mirrored in Reddit threads, the coming and going of blockbuster movies and even Google Trends (where interest rapidly spikes before fading out after a few days). What’s causing this drop? Technology, media consumption and the rise of the attention economy.

With digital tools, companies keep getting better at knowing how to capture attention, targeting ads and content to particular users.

News sites that need high pageview numbers to turn a profit cannot afford to miss out on super-popular trends like Game of Thrones, or Avengers, as fleeting as the attention for those topics may be.

Because every media outlet is jumping on the same bandwagon, the cycle is “self-inhibitory,” Lorenz-Spreen says. “Because the more you produce, the earlier people become bored about it.”

The mechanisms driving the attention economy have weighed heavily on my mind in recent weeks. Seemingly infinite A/B testing, thanks to big data and real-time feedback, has enabled organisations to optimise content for maximum engagement (i.e. maximum ad revenue). Add to that the psychological hacks and dark patterns designed to hold our attention, and is it any wonder we move on to the next shiny thing when it’s dangled in front of us? We’re only human.

The iPad Pro Won’t Replace Your Laptop—That’s a Good Thing

Reviews of the new iPad Pro have covered the display (it’s great), the performance (it’s great) and the sleeker design (it’s great). Then they all ask the same question—the same question asked of every tablet. Can this replace your laptop? The same question asked last year, and the year before that.

While most reviews include caveats for different user types (e.g. creative professionals), they settle on the same answer. No, not yet.

Mark Gurman, on Twitter:

2015: “iPad Pro review: Big and powerful, but it won’t replace your laptop”
2016: “iPad Pro 9.7 review: Apple’s best tablet, but it won’t replace a laptop”
2017: “iOS 11 on an iPad Pro still won’t replace your laptop”
2018: “Nope, Apple’s new iPad Pro still isn’t a laptop”

Given that these iPad Pros now support USB-C and offer (or rather, flaunt) ridiculous processing power, the comparison seems a natural one to make. Even in its keynote last week, Apple made performance comparisons to PCs and the latest generation of gaming consoles.

And while this question should be asked—I myself am considering a move to an iPad Pro from my 2016 13” MacBook Pro—it clouds our judgement of what the iPad can be. It limits what the future of computing can become.

The more interesting answers arise when we consider what iPads can do.

Here’s Rene Ritchie, on Twitter:

I’d argue it’s not power users but *empowering* users that was and is the whole point of iPad (including pro).
We often mistake complexity for capability, but that’s often an easy way out.
How do you make a computer that makes people into pros?

We want (and expect) our desktops to rise to any task we might throw at them. To run multiple displays, to integrate every peripheral and to effortlessly power pro software. Those expectations then trickled down to laptops (replicating the desktop experience), which in turn trickled down to tablets.

However, the dream of the perfect all-in-one portable device is a noble yet unreachable one. Video editors need the fastest processors to handle 4K productions with ease. Freelancers need portability to travel across the city on a whim. Users in developing countries need a low-cost solution that can connect them across the world.

The iPad Pro isn’t and won’t (for the foreseeable future) be a perfect device for all users. It is, however, a great device for most users.

It’s a device that’s reduced all barriers to interaction. Shrunken bezels. No home button. Sleeker profile. It’s become a slab of glass that begs to be picked up and discovered—one that can adapt to its user’s needs at will.

Need to make last-minute edits to a client presentation in the back of a taxi? Need to make fine tweaks to a multi-layered PSD file? Need to read an interactive bedtime story to your kids, upside down in their bunk bed? The iPad Pro has you covered, without breaking a sweat.

In 2018, the iPad Pro didn’t become a laptop replacement. It became a blank slate.

Facebook Moves Into Your Home

Since the launch of Portal—Facebook’s video-calling device for the home—tech pundits have been less concerned about its features and more about privacy issues and data collection. Not without cause, either, given the company’s recent track record.

Here’s the opening to Joanna Stern’s review for the WSJ:

I’ve had one of Facebook’s new video-calling gadgets, the Portal+, in my home for the last week. And by “in my home,” I mean, in the basement, in a closet, in a box, in a bag that’s in another bag that’s covered with old coats.

That opening paragraph contains everything you need to know about the Portal.

You could stop reading there, but don’t. Stay for Stern’s critical assessment of the PR spin about the device from Andrew Bosworth, Facebook’s VP of Consumer Hardware.

Underlying almost every review of the Portal is the concept of trust. Or the lack of it.

When a misuse of Facebook’s platform occurs, the company apologizes and assures us it’ll fix the problem. Then another controversy arises, and the cycle repeats.

Bosworth understands that “[Facebook has] to earn that trust back”. But given there have already been misunderstandings over the Portal’s data collection policies (covered by Recode), the waters around Facebook’s kingdom might not be muddy, but they remain as opaque as ever.

Samsung Unveils One UI

At its developer conference earlier today, Samsung unveiled One UI—a refreshed design language for its mobile devices. The Verge’s Tom Warren was in attendance:

One UI contains some new visual flair including more rounded corners, splashes of color across apps, and redesigned icons. But its main purpose is to relieve the repetitive stress our hands endure in today’s world of giant phones.

As phones grew larger and bezels shrank, the trend has been to expand displays up rather than out. Any wider and we’d struggle to hold them one-handed.

Almost all flagship devices now sport display aspect ratios taller than the previous standard of 16:9. As screens have stretched, interacting with content and controls in the upper third requires a grip adjustment—even in two-handed use.

What’s strikingly simple—and at the same time profound—is Samsung’s treatment of content and control with One UI.

One UI places key interaction areas near the bottom of the screen where they’re within reach of your thumb. No more making that dreadful reach for the top left of your Galaxy Note.

Several of Samsung’s bundled apps will now have a “viewing area” — big, easy-to-read header text — and an interaction area covering the lower half of the screen.

As form factors evolve, it’s worth revisiting top navigation/control bars—a legacy approach to UI design that’s trickled down from desktops.

While real-world tests will determine the practicality of these UI shifts, it’s refreshing to see Samsung take the initiative here. To not only acknowledge the issue but to present a plausible solution too.

(Side note: iOS’s Reachability feature—where the screen temporarily jumps down halfway, allowing easier access to higher controls—has been around since 2014. However, Apple’s solution has always seemed inelegant. Merely a temporary hack for pro users. One UI appears to be a genuine rethink of how we interact with our phones.)

Bit by Bit: The Future of Photography Is Code

Writing for TechCrunch, Devin Coldewey takes a deep dive into the rise—and advantages—of Computational Photography.

All modern photography relies on some computational processing to take incoming light and transform it into a digital representation. Yet Devin explores the current flurry of advances taking place in that stage of transformation. Progress fueled not by established camera companies, but by smartphone manufacturers.

While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The pace at which these companies are innovating—in areas of algorithms and machine learning—far outstrips the gradual (and slowing) advances in lens and sensor design.

Devin provides this metaphor to illuminate how camera sensor design is reaching its physical limits:

Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.
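The metaphor can be made literal with a few lines of arithmetic. In this sketch (my own, with invented numbers), a sensor’s fixed area is split into different pixel counts, and the total light caught never changes:

```python
def total_caught(sensor_area_mm2, photons_per_mm2, n_pixels):
    """Split a fixed sensor area into n 'buckets' (pixels) and sum
    what each one catches. The per-pixel haul changes; the total doesn't."""
    pixel_area = sensor_area_mm2 / n_pixels
    per_pixel = pixel_area * photons_per_mm2
    return per_pixel * n_pixels

# Same sensor, same light, different pixel counts:
print(total_caught(100, 1000, 4))   # 100000.0 (4 big buckets)
print(total_caught(100, 1000, 25))  # 100000.0 (25 small buckets)
```

Bigger pixels each catch more and smaller pixels each catch less, but the sensor-wide haul is capped by area. Hence the turn to software.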

Enter smartphones.

Historically, smartphones have been at a disadvantage in terms of photography potential. Generally, the bigger the sensor and the bigger the lens, the better the image quality. The inherent thinness of smartphones limits this.

What to do?

When it becomes a struggle to innovate under the old rules of play, invent new ones.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

Why not just always be recording?

Recent power efficiencies have enabled smartphone manufacturers to do just that.

With the stream of photons reaching a smartphone’s sensor, algorithms take all the incoming data and emit results previously unattainable outside of professional cameras. Algorithms that can be tested and refined. Algorithms that display their results in real-time.

Shallow depths of field (bokeh), sharp low-light shots and HDR imagery are all now found (and expected) in smartphones a fraction of the size of bulky camera rigs.
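The low-light trick, generically, is burst stacking: average many noisy frames of the same scene, and the random sensor noise cancels while the signal stays. This is a toy sketch of the general idea (not any vendor’s actual pipeline), with simulated frames:

```python
import random
import statistics

def capture_frame(scene, noise=20.0, rng=random):
    """One noisy exposure: the true brightness plus random sensor noise."""
    return [p + rng.gauss(0, noise) for p in scene]

def stack(frames):
    """Average a burst of frames pixel-by-pixel; noise shrinks ~1/sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

rng = random.Random(0)                    # seeded for repeatability
scene = [100.0] * 10_000                  # a flat grey test card
single = capture_frame(scene, rng=rng)    # one frame: noisy
burst = stack([capture_frame(scene, rng=rng) for _ in range(16)])

print(statistics.stdev(single))           # roughly 20: the raw sensor noise
print(statistics.stdev(burst))            # roughly 5: 20 / sqrt(16)
```

Averaging 16 frames cuts the noise roughly fourfold, which is one reason a phone that is “always recording” can out-shoot its own sensor.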

Yet not all code is created equal. And unlike sensor and lens design, this is where further advances remain to be made.

In the highly profitable smartphone market, each manufacturer is striving to gain a computational edge over its rivals. Edges that drive photography forward, bit by bit.

Made for Humans—Apple’s Approach to Design

Writing for The Independent, David Phelan sat down with Jony Ive, Apple’s Chief Design Officer, following the launch of the new iPad Pro and MacBook Air.

The piece is worth reading in its entirety, but Ive’s insights into Apple’s philosophy when designing product iterations caught my attention (emphasis mine):

It starts with the determination not to fall into the trap of just making things different. Because when a product has been highly regarded there is often a desire from people to see it redesigned. I think one of the most important things is that you change something not to make it different but to make it better.

If you are making changes that are in the service of making something better, then you don’t need to convince people to fall in love with it again.

These new iPad Pros mark the biggest shift in the product’s design since the iPad’s inception in 2010. With significantly slimmer profiles, the models forego the home button and sport rounded display corners.

Physically, these changes allow for a larger display in a smaller shell. Practically, they combine to minimise barriers to interaction. The device itself fades away, enabling users to feel as if they were directly manipulating the content on screen.

In line with the sleeker profile, Ive and his team squared off the tablet’s edges (not unlike the first-generation iPad). This allows the new Apple Pencil—also now sporting a flat edge—to sit flush against the iPad’s side, connecting and charging via magnets.

I think the way it just snaps onto the side, well, that’s a nice example of a sort of that magical feeling. It’s unexpected, we don’t quite understand how it’s working and even more incomprehensible is the fact that it’s also charging. You can see how that’s aligned with this idea that you can just pick the product up and use it without thought.

Actually, you’re using it with tremendous thought, but it’s based on what you want to be doing rather than wondering if you’re holding the tablet the right way up.

That final insight is key to how Apple approaches design. Consumer products—both hardware and software—should be designed to be as intuitive as possible. It’s one of the reasons Apple products have for years foregone instruction manuals. The products simply didn’t need them.

From the way they look, to the way they function, each device is designed to be picked up, touched, clicked and experienced. Sure, there are learning curves, but the product is there with you every step of the way.

Hello, World

Technology isn’t created in isolation.

It’s designed, refined and used. By people. People who influence technology every bit as much as they are influenced by it.

TechType considers and curates the trends driving—and being driven by—the latest in personal technology. From the devices we covet to the software we use to the services we consume.

This is a chronicle of technology and our interactions with it. Welcome.