“Will AI take our jobs?”
People tend to talk about AI as an autonomous agent. We anthropomorphize AI with human verbs such as write, see, draw, listen, converse. I don’t think these verbs are incorrect, but they leave another verb to the imagination: intend. If we believe AI has its own intent, separate from our own, we’re misstepping.
AI is narrow and fragile. It doesn’t function well outside the scope it was set up for, and it can only manage simple objective functions. So it really is us, the humans, using our own intelligence to apply it effectively to the point where a job may be automated.
And even then, a job must consist of highly repeatable tasks for us to automate it away completely. (If you want to think about the jobs of the future, think about the non-repeatable, nuanced parts of a job and see how you can scale those up.)
We are the source
AI is not something alien. We are doing this through our collective action, which is fully capable of producing something we don’t want. There is a false sense that once a system is set up, you can leave it to run on its own and it will take care of everything by itself. The reality is anything but.
Until two years ago, the entire field was just trying to exist. It has shifted quickly from barely working, to working well, to being genuinely effective on very important tasks that everyone now wants to roll out. The potential impact is so great that it is certain to affect the entire economy and even our social fabric.
The main pitfall we face is completely within our control: it’s that we think it’s not in our control. I think assigning AI its own intent is rooted in this erroneous thinking. Spreading a clear understanding of what this technology is, and what it isn’t, will be critical to its healthy development.
More importantly, we will have to recognize the immense power AI represents for implementing human intent, and the side effects it can have once deployed at scale. This is the biggest threat, not the possibility of us losing control.
Look to climate change activists
I was just at the Aspen Institute for a roundtable to discuss this topic of healthy development for AI and the future of personal autonomy. I was amazed to hear the crazy stories of how overwhelmed many agencies and institutions have become in the last year trying to cope with the speed and impact of change. Just getting everyone on the same page about what the real problems are is a huge challenge. In considering a way forward, we looked to how climate change activists have communicated their cause.
Climate change is a tricky problem to fight. It affects everyone, but by the time most people notice the impacts on their own lives, it will be too late. The challenge is getting people to see the effects now, which are only really noticeable through scientific observation. Activists have therefore needed to explain several fundamental concepts (emissions, greenhouse gases, weather vs. climate, etc.) to bring the population up to speed and get them to sign on to certain solutions.
For AI, we have a similar challenge: we too come from a highly technical field. Some of the fundamental concepts people need to understand are data governance, bias, privacy, machine learning, information vs. data vs. intelligence, and intellectual property. We need populations to understand these concepts, or symbolic versions of them, to help reshape our social contract and demand effective regulation of the technology.
Regulating a powerful, yet simplistic AI
When we discuss regulation, the focus should be on keeping organizations from pushing simplistic automation so far that it becomes unsafe. However, rewriting regulation to cover every affected domain is simply too big a task for the time governments have to catch up with the technology.
In the U.S., the Federal Trade Commission is talking about design principles for a new high-level framework against which to judge current law. Our own approach to AI-First Design is similar: we are engaging with other leaders in the field of experience design to determine a guiding philosophy and principles, so that practitioners can then work out their own domain-specific rules.
This is actually a lot better than just rewriting the regulation, because it flattens society greatly. It allows us to have common philosophies that last because rigid rules aren’t being poorly applied where they don’t fit. This approach empowers individuals, and as practitioners ourselves we can be more engaged in those high-level discussions of philosophy.
But back to what the philosophy itself should be. What characteristics should a narrow AI exhibit for us to trust it in production? We are having these conversations now, and they start with understanding the fundamental concepts of AI technology and its related impacts.
I’ve started using a name for seeing the world with a clear understanding of these concepts: the AI-First Mindset. It means seeing the world with AI underlying everything, much as we see electricity or the internet. This mindset is taking shape and helping us form new principles for designing in domains like organizations, policy, products, and humanitarian programs.
I think these principles will themselves become a part of the mindset, and make it accessible to a broader and broader group as it develops. The first concept to remember is that we, humans, are the source of all of this and have the option of controlling it. To control it we all need to understand it.
A group of people who understand it are already taking collective action to demand new rules and set clear expectations. My co-founder, Yoshua Bengio, just signed an open letter with 115 other experts calling for the UN to ban lethal autonomous weapons.
Understanding our intentions
Tristan Harris, formerly a Design Ethicist at Google, talks about Facebook’s algorithm for grabbing and holding attention. What that algorithm “discovered” is that outrage is a powerful tool for winning in the attention economy. Now we have an outrage machine that over 2 billion people are using (to be fair, the other discovery was that really, really cute kittens are also a powerful pull on attention; what we’re seeing are the extremes). Is that the outcome we as a society want? We can’t just tell Facebook to stop optimizing for attention (ads) if that’s the game they are playing to win. The incentives, and thus the intentions, have to change.
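To make the mechanism concrete, here is a minimal, hypothetical sketch of a feed ranked purely by a predicted-engagement score. This is not Facebook’s actual system; the post contents, scores, and names are invented for illustration. Nothing in the objective mentions outrage, yet whatever content maximizes the metric is what the feed amplifies.

```python
# Hypothetical sketch: a feed ranked purely by predicted engagement.
# The ranking code never mentions "outrage"; it simply surfaces whatever
# the engagement model scores highest. If outrage (or kittens) reliably
# holds attention, that is what dominates the feed.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # output of some trained engagement model

def rank_feed(posts: list[Post], k: int = 10) -> list[Post]:
    """Return the k posts the model expects to hold attention the longest."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:k]

candidates = [
    Post("Calm, nuanced policy analysis", predicted_engagement=0.12),
    Post("Outrageous take designed to provoke", predicted_engagement=0.91),
    Post("Really, really cute kitten video", predicted_engagement=0.88),
]

for post in rank_feed(candidates, k=2):
    print(post.text)  # the extremes win; the nuanced post never surfaces
```

The point of the sketch is that the ranking function itself is indifferent to content; the incentive (maximize engagement) is what selects for outrage and kittens. That is why the lever is changing the incentive, not tweaking the algorithm.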
Right now, the primary measure of well-being for a country is GDP. If the incentive is simply to drive GDP, then yes, we will automate away jobs and concentrate even more of the world’s wealth in the hands of the few. I don’t think this is where we want to go. GDP does not capture everything; what should we be optimizing for? We need to rethink our own intent.
—
You can also find these posts on Medium and LinkedIn.
Photo by Sven Brandsma.
August 29, 2017, 7:58 pm
"This is actually a lot better than just rewriting the regulation, because it flattens society greatly. It allows us to have common philosophies that last because rigid rules aren’t being poorly applied where they don’t fit."Although I agree that as the technology evolves faster than the law, so it can never be applied adequately, there are is a spectrum of issues arising with current AI systems that needs to be regulated. When it comes to liability, for example, most of the legal standards that are applied by judges to determine whether or not someone is liable for the damages they caused to others are not applicable to AI. As an example, in Quebec, we will be held liable if the plaintiff is able to prove that a reasonable person (prudent and diligent) placed in the same circumstances would not have committed the defendant’s action or omission. However, even though AI systems tend to have lower error rates than human, they are deemed to cause damages to others (even if it’s 0.01% of the time). Taking into consideration that part of the results obtained by the AI system is modulated by post-development experience, how will judges determine whether the AI / company developing the program / developer committed a fault if none of these parties are able to predict a system’s output when it leaves the developer’s care?
At present, and with our current laws, in the event that an AI system causes damages to others, I believe the companies that developed it would not be held liable, because there are too many loopholes. We need to adapt our liability law to the uniqueness of AI systems so that victims can be compensated for their losses.
I’m sure the industry will reflect on these issues and come up with common philosophies, as it has already demonstrated with the open letter on autonomous weapons systems, but the legislator will also have to be part of this conversation for the philosophies or laws to be as effective as possible, that is, to strike the right balance between stimulating innovation and protecting members of society.
September 2, 2017, 2:28 am
Thanks for your comment, Gabrielle. I absolutely agree that we should be writing new legislation, and it’s worth emphasizing. The main thing is that writing new regulation alone will be too slow, and a way we can help close the gap is to also consider these principles for interpreting current regulation where possible.
Common law is reactive and needs precedents. But government needs to evolve too; otherwise we’re waiting for judges to take a stand on these things, and with voids in regulation like the ones you point out, that will be hard for them.
Another difficulty is that judicial systems vary greatly between countries. In Canada we have the constitution, which we can use as a reference for defending basic human rights, but that’s not the case in every country. The right point of reference depends on the country, and some common principles can also help close that gap.