AI and the Future of Work Is About Lifelong Learning

I often get asked what the most important skills are for a student to learn going into the coming decade of new AI technology. I have some ideas about why I’m asked this, but it still surprises me how desperate some people are to know the “secret” winning skills of the future.

The World Economic Forum’s own list for 2020 is basically a shuffle of their list from 2015, with complex problem-solving at the top of both. And while I don’t know exactly how long it’s been around, I don’t think it’s a particularly new idea that college education is about developing critical thinking skills, learning how to learn, and being able to determine cause and effect in a complex system.

What technology changes is the availability of tools to foster these skills throughout our adult careers in order to make a well-rewarded contribution to the economy. A growing number of adults today are finding time for continuous learning and mindfulness, seeing them as essential practices for being an effective modern worker. In other words, the future of work is what our society’s intellectuals have been saying should be happening in organizations for decades now; only, the traditional organization still forces us to do it outside of work.

Today’s dominant business models have been built around achieving economies of scale that seek to maximize the efficiency of narrowly defined tasks (which makes them easy targets for automation). We then allow a select few in the C-Suite to consider strategy, but even executives struggle with fitting in learning and personal development. The moments of study and reflection that lead to clear thinking about the systems the organization is subject to become rare, isolated incidents within their day jobs of putting out fires and maintaining legacies laid out long before them.

For most established organizations, nearly everyone’s focus has become how they can do the same job more efficiently, rather than asking why they are doing what they are doing. If anyone does start questioning the function they are performing, it often means they are leaving that company rather than changing the job itself.

The startup world has done well in part because it’s super clear that everyone working in a new company is in a position to design their job and the rest of the organization. At those startups, some of which are the biggest companies in the world now, employees have been empowered with a sense of the strategic direction, their connection with the rest of the system, and the autonomy to redesign their work as they adapt to new information.

At my own startup, Element AI, everyone is creating new intellectual property. At barely a year old, each person is shaping their role and their department and making a direct impact on the company’s overall form. It can be a scary thing for a lot of people, but it’s an ideal setup when the underlying science of AI technology is moving at such a high speed. We’d be dead in the water if our people weren’t all in a position to bring each day’s new developments into their work.

Optimize, simplify, command, and control

The shelf life of a typical organization is 20-30 years. To start one, you would figure out a service that you could get paid as much as possible for, build a market for it, then scale by driving optimization. But as you do that, change becomes more difficult. And expectations are changing. Customers expect you to customize and personalize your product or service, and to be flexible to whatever circumstances come your way. Amazon is a great example of this: While they’ve become the fourth largest company in the world by market capitalization, they’ve remained flexible by design and are continuing to gobble up new markets almost as soon as they emerge. However, Amazon is the exception.

Large organizations optimize for the things they know and can sometimes deal with the things they know they don’t know. But they are extremely ill-equipped to deal with what they don’t know they don’t know—the unknown unknowns. The reason companies like Amazon do well in this world is that they are good at sensing, searching, and finding new dynamics they didn’t even know they needed to know. In other words, the challenge is how we deal with success, and what it is we are scaling.

The focus is too often on the fruits of success and trying to optimize, simplify, command, and control a solution for a problem that is, in fact, a moving target. We essentially cut ourselves off from the real cause of success: learning the unknown unknowns and being able to quickly adapt to them.

“Learning & Adapting”; “Flat & Fast”

The cyber-industrial revolution will help, if not force, organizations to shift from being ‘Commanded & Controlled’ to ‘Learning & Adapting’, and AI technologies are a big enabler of this organizational shift in thinking. Industry 4.0 and technologies like IoT and AI are unlocking new perceptive senses for organizations. It’s like suddenly experiencing the sense of sight when you had been blind before. The possibility to discover and integrate the unknown unknowns is just beginning to spread through industries; and, while reshaping the organizational structure around learning from these new senses is not easy, it is possible.

Ask any digital evangelist: the transformation is a tough sell. Then there’s the question of actually doing it, and I think those same evangelists will have more bad stories than good about companies still struggling to get it, even as AI rounds the corner to raise the hurdle higher. Though, I did learn about a convincing example at an Aspen Institute roundtable this summer. It’s striking because it happened within one of the most rigid and hierarchical organizations you can find: the US Military.

In the mid-’00s, General Stanley McChrystal led the Joint Special Operations Command, which oversaw all the elite units from each of the divisions of the US Military. When McChrystal took over, JSOC was focused on carrying out a limited number of carefully planned operations. Critically, information gleaned from these operations had to be sent back to intelligence analysts in the U.S. for interpretation, which diminished its operational value.

McChrystal decided that JSOC needed to change to become “flat and fast.” He embedded Washington D.C. analysts into the JSOC teams on the front lines and essentially eliminated the information bottleneck between operations and Washington D.C. He also implemented several other policies and practices that expanded the flow of information and increased the autonomy of people to act on that information, replacing the old long, narrow path of decision making. Long story short, JSOC went from carrying out 10 operations a month to 300. (I strongly recommend reading the booklet that is free online here.)

Another reading I recommend is “The Fifth Discipline” by Peter Senge. In it, he lucidly describes the 5 disciplines of the Learning Organization:

5 Disciplines of the Learning Organization

  1. Systems Thinking – Thinking of the organization as a single system that is itself made up of, and embedded within, many other systems. This way of seeing the world is necessary for understanding cause and effect.

  2. Personal Mastery – Learning happens at the individual level, and so each person must have their own intent and motivation to learn.

  3. Mental Models – We all operate based on fundamental assumptions about our environment. Not only are unconscious assumptions a missed opportunity for learning, but certain unchallenged assumptions can also directly block learning.

  4. Shared Vision – Having a shared vision that is rooted in the visions of the ground-level teams empowers individuals to align themselves (and their learning) with that vision and to act quickly on new information for the benefit of the company.

  5. Team Learning – Effectively bringing together individual learning can yield more than the simple sum when individuals and teams cross boundaries to engage in dialogue.

Peter Senge’s “fifth discipline” is systems thinking: being able to see the interconnectedness of things and their effects on each other. The organization is complicated because it is a system of systems, and one of the things AI is good at is connecting disparate systems. This AI thing is moving fast. Just to be able to navigate the new and rising complexity of AI, we need to be completely open-minded about how we do business.

AI is a double-edged sword

On the one hand, AI is making an already tumultuous world an order of magnitude more complicated by condensing the lead time for many technologies we didn’t expect to be viable for years or even decades. On the other hand, AI is a promising new tool for navigating all that complexity.

We are always operating on assumptions based on our past observations of the world. When a change in the environment occurs, it is extremely costly and time-consuming to capture the new rule and scale it across the organization. First there is coordinating a response, and then there is the matter of replicating it across the organization—new training, informing people of the changes—and it’s super cumbersome to work against rigidly established processes.

With AI, the cost of applying new rules drops dramatically because we can simulate multiple scenarios that take into consideration the vast complexity of the business itself, saving the arduous information-gathering process required for justifying a change in direction. Simple adjustments can be made automatically, leaving more time to consider larger strategic decisions with information not yet captured by the machine (say information from the last board meeting). In either case, the new rules are quickly implemented and also skip human communication distortion, delays, and latency.*
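
To make this concrete, here is a minimal, hypothetical sketch of what “simulating multiple scenarios” can look like in practice. It is my own illustration, not Element AI code: the demand model, the prices, and the two candidate stocking rules are invented purely for the example.

```python
import random

# Illustrative only: compare two candidate operating rules by simulating
# many demand scenarios instead of piloting each change in the live business.

def weekly_profit(order_quantity, demand):
    """Profit for one simulated week; prices are made-up constants."""
    sold = min(order_quantity, demand)          # can't sell more than we stocked
    return sold * 5.0 - order_quantity * 2.0    # revenue minus purchasing cost

def current_rule(forecast):
    return forecast                             # order exactly the forecast

def candidate_rule(forecast):
    return int(forecast * 1.15)                 # order a 15% safety buffer

random.seed(7)
forecast = 100
scenarios = [max(0, int(random.gauss(forecast, 25))) for _ in range(10_000)]

for name, rule in [("current", current_rule), ("candidate", candidate_rule)]:
    avg = sum(weekly_profit(rule(forecast), d) for d in scenarios) / len(scenarios)
    print(f"{name} rule: average simulated weekly profit = {avg:.1f}")
```

The numbers are toys, but the workflow is the point: a proposed rule change is evaluated against thousands of scenarios in seconds, so the simple adjustments can be made automatically and the harder calls reach people with the evidence already attached.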

For the organizations that failed to heed the digital evangelists, what I am talking about will require a serious overhaul of their IT systems just to get up to speed. But unless it is a brand-new startup or Amazon, every company will also need to reshape its organizational structure and incentives for learning and augment its people into knowledge workers** as part of its transformation around AI.

Some difficult changes that come with becoming a learning organization

Companies can no longer reliably depend on a workforce that is only concerned with its immediate environment/department/cubicle. But it’s a two-way street; a workforce also can’t expect a company to keep it gainfully employed if that company doesn’t empower people to help shape the strategic direction on a daily basis. Given how we’ve seen individuals adapt to the information age, I think the bigger challenge is for the companies, because of three implicit changes that come with becoming an organization designed for the future of work:

  1. Bringing information processing to the front lines

  2. Increasing autonomy in decision making

  3. Changing the incentives to allow risk-taking and failure

Bringing information processing to the front lines – End the cult of hierarchy and allow people on the front lines to see the rest of the system and what’s happening in it. Investing in tools that will augment your workforce into knowledge workers will also mean breaking silos and decentralizing the source of information.

We need knowledge workers who can creatively bring together the valuable information that AI can deliver and adjust it as it relates to the big-picture narrative. We need to equip the people who are executing decisions to understand the dynamics of the company as a whole. “What are our parameters?” “What are we trying to do?” “What service are we providing to our customers?” Help them to see the big picture through AI-empowered tools that deliver the relevant, actionable info as well as a common narrative. The people who are only concerned with the relatively small world of their own tasks, regardless of what level of the company they are at, are the ones most at risk of being replaced.

Increasing autonomy in decision making – The point is to move on the information when it is still valuable. Question your intuition when it comes to authority, access to information, and decision making. Build trust to flatten the hierarchy.

The job of a leader is to ask better questions. Stop asking about marginal increases in efficiency, which AI will soon be taking care of anyway, and start asking how to guide the systems, how to construct the right feedback loops that reinforce the results we want to see, and how to eliminate what isn’t useful. Your customers will actually help you with some of this, but a lot of it is going to be shaped by the organization. The skill set of the future is going to be about asking the right questions. The best people an organization can hire in the future are going to be the people who write down the best hypotheses and figure out very quickly a way to test and resolve them. This requires autonomy to experiment and leads to the next point…

Changing the incentives to allow risk-taking and failure – Experiment and fail fast. Empower your people to make mistakes, then ramp up what works and kill the projects that are not delivering results. As we work out the answers for respective industries and companies, we will need to accept failure and being outright wrong. It’s just as important to know what doesn’t work as it is to know what does.

Incentives can help us fail better. We can fail smaller, and we can learn more from our mistakes. Creating incentive structures around failing better will prevent counterproductive tendencies like avoiding risk-taking, sweeping the failures that do occur under the rug, and offloading blame when trying to understand what really happened.
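
As a small illustration of the “write down a hypothesis, test it quickly, then ramp up or kill” loop described above, here is a hedged sketch using a basic one-sided two-proportion z-test. The experiment, the variant names, the conversion counts, and the decision threshold are all invented for the example.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical experiment: the hypothesis is that variant B of an onboarding
# flow converts better than the current variant A.

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """One-sided z-test for p_b > p_a; returns (z, p_value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 1 - NormalDist().cdf(z)

# Invented counts from one week of traffic.
z, p = two_proportion_z(successes_a=120, n_a=1000, successes_b=150, n_b=1000)
print(f"z = {z:.2f}, one-sided p-value = {p:.3f}")

# Decision rule agreed on before the experiment started (threshold arbitrary):
# ramp the change up if the evidence clears the bar, otherwise kill it,
# write down what we learned, and move on to the next hypothesis.
print("ramp up" if p < 0.05 else "kill and document")
```

The statistics are deliberately basic; the cultural point is that the ramp/kill decision and its threshold are set before the result comes in, which is exactly the kind of incentive structure that makes failing smaller and learning faster possible.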

Society’s transformation around AI

I think crises are good at pushing us temporarily into open-mindedness. For a lot of the world, AI will be a crisis. But with this crisis, it’s important to realize this is a permanent change, and our new open-mindedness should be perpetual. There will be some basic changes to make across the board like Peter Senge’s disciplines of the learning organization. Though, these will manifest in very different ways depending on the organization over the years to come.

We can’t rewrite the law or reinvent how we do business from scratch. In order to adapt, we need to pick up principles for how we reconsider design thinking, translate our current law and policy, and even change the way we see the world and each other ahead of this fast-moving, world-changing technology.

This kind of principles-first thinking is what I see as needed across our society as it deals with the impact of AI. Technology does not appear in a vacuum; it emerges from an environment of particular laws, ethics, social equality, externalities, and all the other ways we conduct our business. While technology does impact these conditions, it cannot alone fix their inherent problems and is far more likely to only amplify them unless we also use technology as an opportunity to change our thinking.

*This will destroy a lot of the jobs (a.k.a. the links in the communication chain) and likely lead to middle management becoming the front-line workers. This is a big problem that all sectors of society need to think hard about how to handle, and I plan on coming back to it in more detail in the near future.

**A knowledge worker is not just someone who knows a lot of facts; it is someone who can think creatively about the facts and connect the dots to create new ideas and new IP.

You can also find these posts on Medium and LinkedIn.


