Says the futurist: “The future is just over the horizon!” To which the social scientist replies: “Isn’t ‘the horizon’ an imaginary line that moves farther away as you approach it?”

A Useful Shorthand

A while back I spoke at the Centre for Workplace Leadership’s 2018 conference on the “Future of Work.” For me, it was an opportunity to stop and think about a phrase that I’ve heard a lot these past couple of years (and used a lot these past couple of years!) but haven’t properly unpacked before.

It’s become a ubiquitous phrase on the lips of executives everywhere, in both the public and private sectors. I’m not sure I would call it a “buzzword,” exactly (“buzz-phrase”?). To me a buzz-phrase—like, say, “systems thinking”—is “a concept that everyone agrees with but nobody can quite explain.”

The phrase “Future of Work” certainly attracts a lot of buzz. However, it refers not to a concept but to a list. A long, thorny list of work-related issues like:

  • Technological changes (especially AI and robotics), which are: eliminating some jobs entirely (e.g. truck driver); eliminating certain tasks within jobs (e.g. transcription); and creating new jobs with very different skill requirements (e.g. machine learning architect).
  • The emergence of “platforms” for matching people with jobs, from LinkedIn, to Uber, to Freelancer and Shiftgig and Upwork, which: change how and where training & recruitment happen; make it easier for freelancers and “digital nomads” to earn a living without having “a job”; and throw into doubt the whole notion of formal full-time contracts (after all: why hire a full-time employee when you can scale-up your staffing on-demand, on a project-by-project basis?).
  • The changing age structure of the workplace (at the entrance, the arrival of millennials and post-millennials into the workplace; at the exits, the elongation of people’s working lives into their 60s and 70s) — with resulting changes in workplace values and expectations.
  • Changing gender dynamics in workplace hierarchies (ranging from the #MeToo movement to the mainstreaming of transgender identities).
  • And a host of smaller, but equally thorny, changes underway—many of them technology-driven. (For example, have you taken a look at the introduction of biometric monitoring devices into the workplace? Huge ethical questions here, but—so far—little discussion.)

So “Future of Work” has become a shorthand for saying: Look—here’s this list of work-related issues. It’s long and thorny, and we as individuals, organizations and societies need to think our way through it. And we need to do so because the “present of work” is still heavily influenced by our industrial roots—by factory culture, by command-and-control management styles, by an over-emphasis on measurable efficiency and an under-appreciation of important intangibles (like creativity, health & wellbeing, inclusion or a sense of purpose).

It’s a useful shorthand. Simply by invoking the phrase “future of work” in an executive setting today, you can get everyone around the table nodding soberly and agreeing that these issues matter, that we need to respond to them somehow, and that a very different relationship between organizations and their employees is just over the horizon. So, no, it’s not a buzzword. It’s a rich and meaning-full phrase.

Shortcomings

But like all useful shorthands, this one, too, has its shortcomings.

Language is like a map we use to navigate the world. And geographers will tell you: no map is value-free. No map is a 100% objective description of the territory. What do we choose to put at the center of our map? At what scale do we draw the map? Which features do we include, and which do we omit?

It’s an inescapable conundrum at the heart of human social sense-making: in order to communicate something complex, we need to eliminate a lot of the complexity we want to communicate. And doing so involves choices—often private choices that we didn’t talk much about in public before they were made. Some of those choices, we weren’t even aware of making.

So, from time to time, we need to return to the raw complexity and the choices we made when we distilled that complexity into new language. We need to return to the territory we’re trying to talk about, and refresh our awareness of what we’ve simplified away from the conversation.

How do we elicit the shortcomings of our shorthand? A good place to start is to trace the language we’re using back to its origins.

The history of the Future of Work

A quick Google Trends analysis tells the life story of this term. It popped briefly into common parlance (‘common search-lance’?) in October 2004. (I haven’t yet figured out what event might have caused that spike; if you have a theory, please share it.) But its recent climb into popular lingo began only in late 2013.
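
For the curious, here’s a minimal sketch of how you might reproduce this kind of analysis programmatically, using pytrends—an unofficial, third-party Python client for Google Trends. (The library is my suggestion; the original analysis was done on the Trends website itself.)

```python
# A minimal sketch: chart interest in "future of work" since 2004,
# using pytrends, an unofficial Google Trends client (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["future of work"], timeframe="all")  # 2004 to present

interest = pytrends.interest_over_time()  # pandas DataFrame, monthly index
print(interest["future of work"].idxmax())             # month of peak interest
print(interest.loc["2013":, "future of work"].head())  # the late-2013 climb
```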

Why then? I have a hunch. In September 2013, Carl Benedikt Frey and Michael Osborne at the Oxford Martin School published a paper called The Future of Employment: How Susceptible Are Jobs to Computerisation? In it, they analyzed the entire U.S. labour market, job-code by job-code, and concluded that 47% of all present-day jobs in the U.S. were at high risk of being automated away over the next decade or two.

47%. It was the sort of number that made people sit up and take notice.

That single paper has now been cited by academic researchers 2,817 times (or about 2,800 more times than my doctoral thesis). But it’s also been cited tens of thousands of times (with widely varying accuracy) by media, pundits and the “commentariat.” (Subsequent papers by other researchers have tweaked the methodology but basically all arrived at the same conclusion: robots are coming to steal a lot of people’s jobs.)

In 2013, the idea that “machines will steal our jobs” was hardly new. Nearly twenty years earlier, in 1994, Stanley Aronowitz wrote The Jobless Future. He was one of many thinkers at the time who looked at the emergence of networked computers (i.e., the Internet) and thought: If computers start “talking” to one another directly, that’ll have big implications for the people whose job it is to pass data around society.

And the wider question—of technology’s detrimental impact on work, jobs, and human behavior—is aeons old. In Ancient Greece, Socrates bemoaned the spreading technology of writing. (It would lead, he predicted, to a loss of memory, to more passive forms of learning, and “to endless disputation, since what one writer has written, another can challenge, without either of them meeting and arguing the issue to a conclusion.”)

What was new, from about 2012/2013 onwards, was the apparent rate of progress made by computer scientists in building systems that could do “pattern-recognition”: image recognition, facial recognition, natural-language processing and so on. “Some people would argue we’ve made more progress in those systems in the last 5 or 6 or 8 years than we’ve seen in the last 50 years.”

The recent, sudden acceleration in our computers’ pattern-recognition powers is due to three big factors:

  • The amount of computing power that we now have available to throw at these problems, thanks to the latest CPUs (central processing units) and GPUs (graphics processing units), and to on-demand processing in the cloud
  • The amount of data (and cheap data storage) that we now have available to train computer algorithms, thanks to the billions of pictures and voice streams and digital transactions that we all generate each day, every day, of our lives
  • The development of new pattern-recognition algorithms and techniques that take fuller advantage of all this computing power and data: supervised and unsupervised machine learning, deep learning, convolutional neural networks, recurrent neural networks. These phrases mean very little to people outside the AI research space, but within this space they represent a global flurry of research, experimentation, progress and big money. (A toy example of the first of these follows this list.)
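
To make “supervised machine learning” a little less abstract, here’s a toy sketch of the basic recipe: show an algorithm many labeled examples, and let it find the patterns itself. (The dataset and model choices here are mine, purely for illustration—not the industrial-scale systems the text describes.)

```python
# A toy supervised-learning example: image recognition on handwritten digits.
# The recipe behind much of modern pattern recognition, in miniature:
# labeled examples in, a learned pattern-matcher out. (pip install scikit-learn)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 8x8 images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network "watches" thousands of labeled examples...
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# ...and then recognizes the same patterns in images it has never seen.
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.0%}")
```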

A computer that can’t do anything until you explicitly tell it how to do something feels like a tool. A computer that can look over your shoulder, watch the patterns (i.e., tasks) you perform, and then perform the same patterns—more reliably, more precisely, without food or rest—feels like a replacement. Especially when it proves able to identify patterns in your own behavior that you yourself didn’t know existed.

Stitch enough pattern-recognition systems together, and you start to get driverless cars and autonomous financial traders—systems that can actually do something in the real world without our (human) involvement.

And so, people started to worry about what’ll be left for human beings to do, once this technology spreads.

The history of the term “future of work” suggests that the “center of the map” has, from the beginning, been automation: this accelerating trend of software and machines taking over many of the jobs and tasks that are currently being performed by people.

Who drew the map?

Automation is the mountain at the center of “the future of work.” In the shadow of this mountain, several other challenges to how organizations presently organize the workplace have been identified and drawn in—like the new platform-marketplaces that force organizations to rethink how they hire, train and retain employees, and collaborate with outside talent; like the widening range of ages being brought together to work on the same project; like the social-media spotlighting of pay and gender inequities in the workplace; like the growing tension between the organization’s power and incentive to find patterns in every aspect of its employees’ behavior versus each employee’s right to privacy.

When you step back and look at it, what’s interesting is that so much of the map is being drawn—so much of our thinking about the future of work is being done—from the organization’s perspective.

This makes total sense, for two reasons. First, inside organizations is where most work was done during the Industrial Age, and where most work is still being done now. And second, managers are the people in society who have the most time to think about these things. In fact, they’re the ones being paid to do so.

But this same reasoning also suggests that mapping the future of work from the organization’s perspective makes no sense at all. Or at least, such a map is unlikely to prepare us for some of the biggest features of that future landscape. Because one of the biggest differences between the present world of work and the future world of work may just be how much work won’t happen inside formal organizations at all.

Management, not Markets

For most of history, humans haven’t worked inside organizations. Even today, it’s a bit strange that we do. After all, we live in market societies. We’ve built our whole economy on the idea that an open market of buyers and sellers, haggling with each other to agree upon a price, is the best way for society to allocate resources and to organize production of the stuff we all want and need. “Why, then, do we gather inside organizations, suspend the market and replace it with something called ‘management’?” as my friend David Storey at the consulting firm EY so elegantly put it to me.

In 1937, the Nobel Prize-winning economist Ronald Coase explained this strange behavior by introducing the now-familiar idea of “transaction costs.” Figuring out mutually agreeable contract terms with each other every time we needed to cooperate to get something done would cost a lot of time and money. In theory it might work; in practice it’d be impossible. Plus it would create a lot of uncertainty on both sides of every transaction. (Do I trust a freelancer to do a mission-critical piece of work—knowing that they can blackmail me just when I need them most? Does the freelancer buy a house near me, her employer, knowing that I might decide at any time to work with someone else?) Putting work inside organizations makes economic sense.
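
Coase’s logic reduces to a back-of-envelope comparison. Here’s a toy sketch of the shape of the argument—the numbers are invented by me, not drawn from Coase’s paper:

```python
# A toy illustration of Coase's transaction-cost logic (numbers are invented).
def cheaper_to_hire(tasks_per_year: int,
                    negotiation_cost_per_task: float,
                    annual_employment_overhead: float) -> bool:
    """Internalize work when market haggling costs exceed employment overhead."""
    market_cost = tasks_per_year * negotiation_cost_per_task
    return annual_employment_overhead < market_cost

# Ten one-off tasks a year: the open market wins.
print(cheaper_to_hire(10, negotiation_cost_per_task=200,
                      annual_employment_overhead=10_000))    # False

# A thousand small cooperative tasks a year: the firm wins.
print(cheaper_to_hire(1_000, negotiation_cost_per_task=200,
                      annual_employment_overhead=10_000))    # True
```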

By now, we’ve come to appreciate that it makes social sense, too. We are social animals. Organizations offer a shared, cooperative structure that outlasts the specific participants who come and go. And they offer a ‘campfire’ for collective storytelling and learning.

Markets, not Management

But today, these rationales are less winning than they used to be. External online platforms are proving that efficient, thriving markets can now be created for once-unimaginably small, rare or vital exchanges—from a single hour of zen garden design work to troubleshooting a software company’s core product. External platforms for learning (Coursera, edX, Udacity, Degreed, etc.) boast millions more users than any in-house training department ever could, and they can therefore mine better insights (via pattern-recognition) to create better learning pathways for learners.

Whether the incoming generation of adults values organizations for their social benefits is also in doubt. In some developed-country surveys (and I’m sorry; I’m still trying to find the link for you!), up to a third of today’s high school students say they’d rather be full-time freelancers than full-time employees. (In the same breath, it’s worth noting that loneliness, isolation and depression are also on the rise among young people. How will youth negotiate seemingly competing needs for freedom and belonging in the “future of work”? Big, open question.)

And yes, organizations remain excellent at retaining and transmitting learning and shared stories. But for the same reason, they’re poor at adapting. And during times of rapid environmental change, adaptability is a must-have survival skill. (Fun stat: In 1935, the average age of companies listed on the S&P 500 was 90 years; today, it’s just 11.)

The future we can see, and the future we can’t

The “future of work,” which as a useful shorthand has helped organizations to accomplish 5 years of intensive, important reflection, rethinking and redesign, now needs to come to terms with its own shortcomings. Namely, that it is an automation-centric picture of how the workplace is changing, drawn from the organization’s perspective.

It is, in other words, a conversation about the future we can see—the future that, from where we’re standing now, we know is coming.

In many ways, I think this is the more important, more urgent future for us to explore. It wasn’t so long ago that most of us looked to the future and thought (or were told) that the European Union was inseparable, that Trump was unelectable, that globalization was irreversible, that China’s democratization was inevitable and that facts were incontrovertible. We failed to see a lot. As the British philosopher John Gray put it,

It wasn’t just that people failed to predict the global financial crisis. It wasn’t just that people failed to predict that Trump would become the U.S. president. What’s really sobering is that, for most of us, these things weren’t even conceivable. So we need to ask ourselves: What are we doing wrong, that we are unable even to conceive of the big changes that will transform the world, just 10 years down the road?

Part of the answer, I think, is that whenever we explore “the future,” yes, we might shift our time horizon, but we don’t often shift our point of view.

A people-centered perspective

When Copernicus proposed his sun-centric theory of the solar system, he was describing something (a) that he couldn’t possibly see and (b) for which he had no data. (Sort of like trying to describe the future.) Nonetheless, he was convinced that his sun-centric perspective was the right one, because his new map of the heavens was more intuitive than the old one that people had been using for the past 1,500 years. That old map had grown head-scratchingly complex over the centuries. As astronomers’ measurements of planetary movements became more accurate, the geometry of their orbits had to become more complicated to fit within an earth-centric model of the universe. But once you flipped perspectives and looked at the heavens the way Copernicus did, a lot of that complicatedness just fell away.

David Nordfors makes a similar argument for shifting from an organization-centric to a people-centric perspective on the future of work. (David co-founded the Center for Innovation and Communication at Stanford University, and now heads up the i4j Leadership Forum. We met up in mid-November at a private gathering of 100 “thoughtful doers” that I convened in Toronto.)

For the long version of David’s argument, I commend to you his recent book, The People Centered Economy. Here’s a brief flavor of why such a shift in perspective makes sense intuitively:

In an organization-centered economy, people are sought to perform valuable tasks for the organization, but the people who perform the tasks are seen as a cost.

In a people-centered economy, tasks are sought that make people’s labor valuable.

In an organization-centered economy, innovation (especially automation) presents a social problem. Automation makes it possible to do valuable tasks without costly people. Some people may lose their ability to earn a living entirely.

In a people-centered economy, innovation and automation present a social opportunity. Automation frees people up to do other tasks. AI helps people to find those other tasks more easily—other tasks that better fit their abilities and feel more meaningful to them. Organizations seize the opportunity to invent new human tasks and tools and match people with them, which can help people earn more, and feel happier, than was possible with the old tasks and tools.

In an organization-centered economy, corporations face a paradox. Each corporation is incentivized to reduce its wage expenses so that it can increase profits. But if enough corporations successfully do so, their consumers earn less money, spend less on their products, and corporate profits fall. (In the macro economy, a dollar of labor costs saved is also a dollar of consumer spending lost.)

In a people-centered economy, this paradox falls away. Corporations are in the business of creating opportunities for people to spend money and to earn money. Some people spend money to consume the corporation’s goods and services. Other people earn money by performing the corporation’s job-services.
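
That parenthetical about labor costs and spending is worth making concrete. Here’s a deliberately crude toy model—my own invented numbers, not David’s—of why one firm’s wage saving is, in aggregate, another firm’s lost sales:

```python
# A deliberately crude toy: in aggregate, wages paid out return as spending.
# Assume consumers spend a fixed share of their wage income on firms' products.
SPEND_SHARE = 0.9  # invented propensity to consume

def aggregate_revenue(total_wages: float) -> float:
    return SPEND_SHARE * total_wages

before = aggregate_revenue(total_wages=100e9)  # a $100B economy-wide wage bill
after  = aggregate_revenue(total_wages=90e9)   # every firm "saves" 10% on wages

# Each firm's saving is another firm's lost sales.
print(f"Revenue lost across the economy: ${before - after:,.0f}")  # $9,000,000,000
```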

If all this sounds a bit far-out, that’s a good indicator that—maybe—we’re starting to glimpse that elusive “future we can’t see.” But it’s also a rough description of eBay, Etsy, Uber, Airbnb and many other smaller two-sided platforms today, whose business model is already about serving buyers with ways to spend money and serving sellers with ways to earn money.

So it may sound far-out, but it may not actually be that far away. In his book, David offers an example of how organizations in the near future might reframe a job posting as an “earning service” instead:

Dear Customer,

We offer to help you earn a better living in more meaningful ways. We will use AI to tailor a job to your unique skills, talents and passions. We will match you in teams with people you like working with. You can choose between different kinds of meaningful work. You will earn more than you do today. We will charge a commission. Do you want our service?

As David summarizes, “This is a service that everybody wants but almost nobody has.”

But they will, and soon. I’m personally familiar with several efforts already underway to build businesses that make precisely that offer to people. One of the best I’ve seen so far is FutureFit.ai, which gets people to declare where they want to go professionally and then uses AI to plot them a personalized journey (via study, learning and work opportunities) to get them there. “Google Maps for the future of work and learning,” is how their founder, Hamoon Ekhtiari, sums up his vision.
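
To make the “Google Maps” analogy concrete: model learning and work opportunities as a weighted graph, and a personalized career journey becomes a shortest-path search. Here’s a hypothetical sketch—the graph, the role names and the effort estimates are all invented by me, and this is emphatically not FutureFit.ai’s actual system:

```python
# A hypothetical "Google Maps for careers": opportunities as a weighted graph,
# where edges are courses, certifications or jobs, weighted by months of effort.
# (Everything here is invented for illustration; not FutureFit.ai's real model.)
import networkx as nx

G = nx.DiGraph()
G.add_edge("retail clerk", "data-literacy course", weight=3)
G.add_edge("data-literacy course", "junior analyst", weight=6)
G.add_edge("retail clerk", "coding bootcamp", weight=9)
G.add_edge("coding bootcamp", "junior analyst", weight=2)
G.add_edge("junior analyst", "machine learning architect", weight=24)

route = nx.shortest_path(G, "retail clerk", "machine learning architect",
                         weight="weight")
print(" -> ".join(route))
# retail clerk -> data-literacy course -> junior analyst -> machine learning architect
```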

Here and now, just a (beautiful, lucrative) possibility

As with Copernicus in his day, it’s impossible to prove that this alternative perspective on the future of work is “right.” (Copernicus first circulated his sun-centric theory in the early 1510s, and it wasn’t until Galileo pointed a telescope heavenward a century later that anyone had hard evidence to support his paradigm shift.)

But, like Copernicus’ new model of the heavens, a people-centered model of the economy is more intuitive. It resolves the head-scratching paradox that today’s businesses are being incentivized to automate away the consumer spending power upon which their profits depend.

And it’s more beautiful. David quotes Gallup’s chairman Jim Clifton, who estimates that in the present world of work: 5 billion people are of working age; most of those people want a job that earns them a living, but only 1.3 billion people actually have one; and of those 1.3 billion people, only about 200 million people actually enjoy their work and look forward to doing it each day.

Jim’s numbers suggest that humanity’s $100 trillion global economy is running at only a fraction of its capacity. How much more economic value could we collectively generate if we used AI and automation to connect more of the world’s 5 billion workers with learning and work that matched their talents, passions and sense of purpose? How much happier would we collectively be?
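
The back-of-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-envelope arithmetic on Jim Clifton's figures quoted above.
working_age    = 5_000_000_000
have_good_job  = 1_300_000_000
love_their_job =   200_000_000

print(f"Share of working-age people with the job they want: "
      f"{have_good_job / working_age:.0%}")    # 26%
print(f"Share who actually enjoy their work:  "
      f"{love_their_job / working_age:.0%}")   # 4%
```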

Keeping an eye on the future we can’t see

Fundamental shifts in how society looks at things—like work, or health, or wealth, or education…—don’t happen overnight, or all at once. And they’re rarely total. Paradigm shifts are a messy, social process. Multiple paradigms coexist for a long time, until the new paradigm reaches an invisible tipping point and simply is the way that most people think.

In an organization-centered economy, innovation is about coming up with new tasks that machines can do, and new products and services for people to consume. But in a people-centered economy, a lot of innovation will also focus on coming up with new tasks that people can do to earn a better living.

Innovation in this people-centered vein is already beginning. Expeditions aimed at reaching a people-centered future of work have already set forth—in a few markets, with a few startups, in nascent ecosystems. These efforts are not purely altruistic; there are vast profits to be made. That’s why we can be reasonably sure that these efforts will continue, and expand.

There’s gold to be found. Someone’s going to strike it. And then there’ll be a rush.

Bearing in mind (with all the humility gained from the last decade of political, economic and technological shocks) that preparing for the future we can’t see may be even more important than preparing for the future we can see, here are three questions that I think can help us keep an eye on these people-centric possibilities:

  1. For us as Individuals: How can we create more alignment between what we ourselves deem valuable or important and what we do to earn a living?
  2. For us as Organizations: How can we support individuals in making those changes?
  3. For us as a Society: How can we invite excluded populations into that personal search for alignment between work and value—the unemployed, people with “disabilities,” people doing unpaid work like child and elder care, children in school, the elderly?

Because that, I think, is what we all really want the Future of Work to look like.