4 Steps to Good Narrow AI

For people to pay attention to a problem, much less start acting on it, something has to really go wrong. As noise is turned into information, I think we’ve had some fundamental shifts in our social contract that modern society is only beginning to notice. In 2010, society was hardly aware of the kind of information organizations held on people. Now, at the advent of AI’s widespread integration into our lives, a growing number of events in the digital world (Equifax, Snowden) are forcing us to think hard about the implications of the technology we use every day.

Many loud, influential voices (perhaps most notably Elon Musk) are wary of the next 40 years of technology, framing it figuratively and literally as its own autonomous being. AI is at the core of these discussions, and some related applications do deserve concern. AI is already showing how easily pseudo-science can sneak back into our institutions, and how much power just a few companies have over what we think.

What can we do?

To start, practitioners could use a brush-up on Data Science 101, especially concepts like “Correlation does not imply causation” and “informed consent”.

‘Correlation does not imply causation’ is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other. ‘Informed consent’ denotes the agreement by a person to a proposed course of conduct after the data scientist has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct. — From the Data Science Association’s Code of Conduct

Fudging these standards of science has directed questioning eyes at the tech industry. Blood is in the water when it comes to antitrust regulation. Leaders inside and outside the tech industry are also calling for laws and principles of safe technology, again with a central focus on AI. These range from Oren Etzioni’s update on Isaac Asimov’s famous Three Laws of Robotics, to the research community’s Asilomar AI Principles, to other executives’ own rules. [We have a full literature survey of principles, ethics, and other relevant rules for AI in society here.]

Most of these rules are about far-fetched AGI or high-level moral imperatives. No one really disagrees on the need for virtues, but what are they really changing? The reality of the industry is that AI is still very narrow and splintered. Not only are we far from tying those capabilities together into AGI, we are also without many common standards, and the ideas put forth are not very actionable. We do have a few players coming together on missions like the Partnership on AI, but we need to do more to set high standards of quality and security and lay the foundations for even being capable of meeting the moral imperatives set by these philosophers and futurists.

These rules are not just about how we translate our human values into machine outcomes, but also how machine outcomes impact our values. In developing our methodology for AI-First Design at Element AI, we saw that as designers we can’t ignore that feedback loop and need to include it in our overall design process. It is time to stop treating AI like a black box and be willing to shine a light on what the technology is really doing in order to renovate our social contract consciously rather than automatically.

[Figure: From our AI-First Design (AI1D) Methodology]

Self-regulation will be as important as government regulation. For one, legislation will take some time to get up to speed, but tougher rules are coming thanks to growing recognition of the great power big tech holds with its data.

There are those, too, who are calling for regulation to give themselves a chance to catch up. They base those calls on reasonable claims: consumers need assurance about how their data is used, and people need clarity and confidence in digital technologies and services.

When the government acts, hopefully it will turn into something positive, but the industry should also show some leadership and help frame this debate. We should fight for an industry that is transparent, accountable, and good for humanity, so that people don’t turn on this technology in a backlash.

4 steps to good narrow AI

Transparency is the hard part. The enforceability of the regulation and accountability of the practitioners hinge on transparency. This is a real can of worms for our industry because at first glance it goes directly against many business models.

But we have a three-legged stool problem. For us to maximize the benefit of AI, we need to balance the benefits to the user, society, and the industry. If one leg is too long, or if one leg is broken or damaged (say due to unsafe AI), the whole thing threatens to topple over. That is why having clear, well-planned rules is important: to keep AI fair and working for good.

As the creators of AI systems, we are closest to ensuring the proper setup for keeping the stool balanced and have a vested interest in leading the healthy development of an industry that can be regulated from without and from within.

In order for our industry to start being accountable, I think we should follow four steps with the systems we are building:

  1. Make it Predictable – What is the purpose? Have you stated how you intend to make use of that purpose?

  2. Make it Explainable – Is it clear that you are achieving that intent? Can the user ascertain why a result happened?

  3. Make it Secure – Is the stated purpose stable? Have you tested it with some shock tests for corruptibility?

  4. Make it Transparent – Have you hit publish or made this information auditable?
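
These four steps lend themselves to a concrete artifact that can be published alongside a system. As a minimal sketch, here is what such a disclosure record could look like in Python; the class name, field names, and example values are my own illustration, not an existing standard.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ModelDisclosure:
    """Illustrative disclosure record covering the four steps above."""
    # 1. Predictable: what the system is for and how that purpose will be used
    purpose: str
    intended_use: str
    # 2. Explainable: what the system looks at and what feedback it gathers
    inputs_considered: List[str] = field(default_factory=list)
    feedback_collected: List[str] = field(default_factory=list)
    # 3. Secure: which shock/corruption tests the stated purpose has survived
    shock_tests_passed: List[str] = field(default_factory=list)
    # 4. Transparent: where this record is published or made auditable
    published_at: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

disclosure = ModelDisclosure(
    purpose="Verify that a submitted photo matches the account holder",
    intended_use="Account recovery only; the output is never sold or reused",
    inputs_considered=["permanent facial features"],  # not clothing, pose or background
    feedback_collected=["match/no-match outcome"],
    shock_tests_passed=["label-noise robustness", "duplicate-account probing"],
    published_at="https://example.com/models/face-match/disclosure",  # hypothetical URL
)
print(disclosure.to_json())
```

Publishing a record like this, even informally, is what turns the four questions above from talking points into something a consumer group or regulator can actually audit.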

Predictable

In laying out their ethics for narrow AI, Nick Bostrom and Eliezer Yudkowsky said, “[These are] all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace the human judgment of social functions.” When you meet someone for an exchange, you want to understand their intent. The digital world has tricked us into ignoring that, and I think we are getting to a point where we can no longer make a strong claim of “informed consent.”

We need to be clear that machines do not have their own intent. Right now we have many algorithms that seem to do the same thing, like image recognition, but their purposes are different. One may look at the clothes, pose, and background, while another may look solely at the permanent features of someone’s face.

Then there is what we do with these tools, our intent, which is just as important to be clear about. Why are you identifying faces? What are you doing with that output (or who are you selling it to)?
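
To make the face-recognition example concrete, below is a minimal sketch of two models that look interchangeable from the outside but carry different declared purposes, plus a guard that refuses to run a model outside the purpose its users were told about. The registry, names, and check are illustrative assumptions, not an existing API.

```python
# Illustrative registry of declared purposes; the names and wording are hypothetical.
DECLARED_PURPOSES = {
    "outfit_tagger": "describe clothing, pose and background to suggest styling",
    "face_matcher": "match permanent facial features against a consented gallery",
}

def run_model(model_name: str, stated_use: str, image: bytes) -> str:
    """Run a model only when the caller's stated use matches its declared purpose."""
    declared = DECLARED_PURPOSES.get(model_name)
    if declared is None:
        raise ValueError(f"{model_name} has no declared purpose on record")
    if stated_use != declared:
        raise PermissionError(
            f"{model_name} was declared for '{declared}', not '{stated_use}'"
        )
    return f"{model_name} ran for its declared purpose: {declared}"

# The same photo, two very different intents; only the matching one is allowed.
print(run_model("outfit_tagger",
                "describe clothing, pose and background to suggest styling",
                image=b""))
```

The point is not the string comparison; it is that purpose and intent are written down before the system runs, so drift away from them becomes detectable.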

Explainable

Until recently, the UI of software exposed everything that was in the software. You could query it and get access to the database. Now software runs in the cloud and on various devices, running all sorts of services in the back end the user would never know about. Sometimes it’s optimized for the user, but it doesn’t necessarily have their best interests in mind. That’s OK if they know what those motives are, but I think most people are unknowingly being served experiences designed purely to grab their attention and serve them ads. That relationship is opaque, and in my opinion unethical.

AI is making the software even more of a black box. For it to be explainable, it should provide the inputs it takes into account, the purpose of the software, what feedback it is gathering, and where that feedback is being used. This is where we can get back to achieving “informed consent”, and contrary to popular opinion, this is quite doable if it is done from the beginning of a project.
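
As a minimal sketch of what “explainable from the beginning” could mean in practice, the toy function below bundles every prediction with the inputs it actually used, the declared purpose, and the feedback the call generates. The weights, field names, and purpose string are all invented for illustration.

```python
from typing import Any, Dict

def predict_with_explanation(features: Dict[str, float]) -> Dict[str, Any]:
    """Return a prediction together with the disclosure that makes it explainable."""
    # Toy scoring rule standing in for a real model; the weights are made up.
    weights = {"recency_days": -0.02, "purchase_count": 0.5}
    used = {k: v for k, v in features.items() if k in weights}
    score = sum(weights[k] * v for k, v in used.items())
    return {
        "prediction": score,
        "inputs_considered": sorted(used),                    # what the model looked at
        "inputs_ignored": sorted(set(features) - set(used)),  # what it deliberately did not
        "purpose": "rank items the user asked to see, not ads",
        "feedback_gathered": "click/no-click on the ranked list",
        "feedback_used_for": "monthly retraining of the ranking weights",
    }

print(predict_with_explanation({"recency_days": 3, "purchase_count": 2, "age": 41}))
```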

Secure

Just as we test banks to check their resilience against financial shocks, so should we test our algorithms against corruptive agents or data anomalies. Is the system robust against false signals or the introduction of bias? Is it incorruptible against bots, trolls, and other manipulations?

After all of this work clarifying the purpose of the machine and how it achieves it, it is critical to show that the purpose won’t change; otherwise, the other principles are undermined. In fact, the algorithms can become our canaries in the coal mine, alerting us when it is time to take back control of the wheel.
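
As a minimal sketch of such a shock test, assuming a scoring function that maps feature vectors to a score in [0, 1]: perturb the inputs, measure how far the outputs drift, and raise the canary when stability breaks down. The noise model and threshold are placeholders; a real suite would use domain-specific attacks (bots, poisoned labels, coordinated trolling).

```python
import random
from typing import Callable, List

def shock_test(score: Callable[[List[float]], float],
               samples: List[List[float]],
               noise: float = 0.1,
               max_drift: float = 0.15,
               trials: int = 100) -> bool:
    """Return True if the model's outputs stay stable under random input corruption."""
    rng = random.Random(0)
    worst = 0.0
    for x in samples:
        baseline = score(x)
        for _ in range(trials):
            corrupted = [v + rng.gauss(0, noise) for v in x]
            worst = max(worst, abs(score(corrupted) - baseline))
    if worst > max_drift:
        # The canary in the coal mine: flag that it is time to take back the wheel.
        print(f"ALERT: drift {worst:.3f} exceeds {max_drift}")
        return False
    return True

# Toy scorer standing in for a real model.
toy_score = lambda x: min(1.0, max(0.0, 0.3 * x[0] + 0.1 * x[1]))
print(shock_test(toy_score, samples=[[0.5, 0.2], [0.9, 0.1]]))
```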

Transparent

If we do this as an industry, we have an opportunity to be accountable. The principles others have put forth are highly subjective, so these things need to be transparent for everyone in order for our society’s collective values to be applied, not a single company’s (or board members’) interpretations.

Every stakeholder that wants AI to be for good should get moving. The users will have their consumer groups, society its policymakers, and industry its ethics boards. The key will be having regulation and consumer groups strong enough that they paralyze those who are not acting transparently.

We need to enforce transparency of what’s in software because it impacts society. If you’re a food company that believes in healthy eating, not just in offering healthy options, you’re going to ask for better regulation of the industry as a whole, and at the same time invest in preparing yourself not only to meet the standards of healthy nutrition, but also to be transparent about meeting those standards.

Just by beginning action (beyond talk), we can create a powerful economic incentive for companies to enforce their own standards of transparency so that they can immediately jump the transparency hurdle and not disrupt their businesses.

I realize this proposal sounds like it’s blowing up business models as we know them. I think it is to an extent, but right now we face a few realities that I believe necessitate this.

  1. We need the trust of society to carry forward and innovate

  2. That trust is beginning to wane as externalities become apparent (to all of us)

  3. A lot of regulation is a blank slate and can change quickly, for better or for worse

  4. It will be for the best if we participate as an industry to enforce transparent standards

I am not proposing companies lay bare everything, but with the many splintered, narrow applications of AI, we all need to participate as we create the foundations for this fledgling industry. If you can’t prove you’re playing by the rules, should you be allowed to play at all? In order for AI to be for good, those building it have to be accountable to it, and in order for them to be accountable they have to be transparent.

I look forward to feedback and discussion on these steps. I also post these blogs on Medium and LinkedIn, and usually send it out first via the subscriber list for this site.

You can also see me speaking in more detail on this with Q&A at the AI Summit in San Francisco this week and the Web Summit in Lisbon November 6-9.

See the Literature Summary of Principles for Safe AI

    2 Comments
    • Junghyun Chae

      November 21, 2017, 7:45 pm

      In my opinion:
      There are several implications for transparency. In terms of corporate culture, it implies the lack of hidden agendas and conditions, accompanied by the availability of full information required for collaboration, cooperation, and collective decision making. In terms of regulation, it implies a minimum degree of disclosure to which agreements, dealings, practices, and transactions are open to all for verification. [www.businessdictionary.com] I assume we’re referring to the latter.
      The level and aspect of transparency vary greatly depending on which industry we’re playing in, mainly due to the nature, role and function of each. Some will require us to demonstrate that the system is secure. Some will require us to demonstrate that the system is fail-safe, or meets a minimum performance requirement. Artificial intelligence, as an emerging technology, will undoubtedly require these respective standards to be revised, updated or conceived. This is not the first time regulations have been reworked in the wake of new technologies; in fact, they are quite adept at doing so.
      One good example: the ISO/IEC 27001 security standard for healthcare information security management systems, which regulates the protection of PII and PHI data among others, underwent a major revision in 2013 to accommodate technological changes such as cloud computing. In the pharmaceutical industry, the code of federal regulations 21 CFR Part 11 dictates the requirements of computerized systems for data integrity, and will largely dictate how artificial intelligence must comply within the industry. While it may be tempting to promote transparency of artificial intelligence on its own technological merits, it is my understanding that such efforts should be limited to voluntary and free engagement, with our major focus set on the industries’ standards and regulatory bodies. A tool is valuable only within a given purpose and context, and I personally tend to view artificial intelligence as an added layer of technology rather than an industry by itself.
      Another example, still in the pharmaceutical industry: how do we prove that the drugs we produce are effective, and will remain effective? Through countless clinical trials, well-established manufacturing processes, and thoroughly documented operating procedures. Through compliance with cGMP.

    • Francois Labrie

      September 26, 2017, 8:41 pm

      Jean-François,
      Thank you for sharing your perspective on AI transparency.
      I have the chance to teach executives about the link between AI and their business models, and without finding the right level of transparency and auditability for AI, we are opening the door to future backlashes from consumers or governments.
      You are right when you write that AI can transform the business model. I am of the opinion that understanding how AI will change the governance and structure of an enterprise has more value for the enterprise than the AI by itself. If it is not integrated into the business processes and culture, it could end up being an isolated solution looking for adoption.
      On the other hand, if executives integrate the ethical aspects of AI into their new business models, that is where long-term value is created. Without the right level of transparency and open, ethical ways to use AI, there will be mounting suspicion in our society.
      Regards
      François Labrie
