Building trust in AI
AI is no longer the future; it is here, in our pockets, our houses, and our cars. The technology has quickly become ubiquitous as it takes on a larger role in our lives. With this growth set to continue and deliver ever more ground-breaking achievements, there is an important question we need to answer: What level of trust should we place in AI, and how can we build this trust among users?
This might sound like a straightforward question, but we have all seen the fearmongering headlines claiming that advanced AI will take over, steal our jobs, or worse, enslave us. While those of us in the know can easily laugh off this dystopian scenario, it is a serious concern for regular, non-technical people, who are understandably hesitant.
So, just as trust needs to be established in our personal and professional relationships, it also needs to be established between AI systems and their users if the technology is to keep growing and benefiting our world. Autonomous vehicles, for example, will only become a reality if there are clear benchmarks for establishing trust in AI; nobody is going to put their life in the hands of a self-driving car if they don’t trust the technology that controls it.
So, how do we do it?
According to IBM, building trust in AI will require a significant effort to instil a sense of morality in it, operate in full transparency, and provide education about the opportunities it will create for businesses and consumers. This effort, IBM says, must be collaborative across all scientific disciplines, industries, and government.
The most obvious way to achieve this would be to instil human values in AI. Indeed, as AI has grown, the concern over how we can trust that it reflects human values has also grown.
One scenario has arguably been cited more than any other: the moral decision that an autonomous car might have to make to avoid a collision. A bus is heading directly towards the vehicle, which must swerve to avoid being hit. If the car swerves left, it will hit a mother and baby; if it swerves right, it will hit an elderly person. What should the car do: swerve left, swerve right, or continue straight ahead? The question is, of course, impossible to answer. All three options lead to a terrible outcome, and arguments can be made for and against each course of action.
It is also important to consider the problem of bias affecting the machine’s decisions. As Arvind Krishna, Senior VP of Hybrid Cloud and Director of IBM Research, puts it, without proper care in programming, the biases of a machine’s programmers can play a part in determining its decisions and outcomes. There are already several high-profile examples of machines demonstrating bias, and this makes it harder to build trust in AI systems.
The three dimensions of trust in AI
Software company DataRobot has organized the concept of trust in AI into three dimensions—performance, operations, and ethics. Each of these categories contains a series of areas that ML teams can look to optimize to start building trust in their own ML models and AI systems.
1. Performance
When evaluating the trustworthiness of AI systems, performance matters. If a model isn’t performing at its optimum, it isn’t making accurate predictions from the data it analyzes, which naturally makes it less trustworthy. Key metrics for performance include the following (a short code sketch after this list shows how several of them can be checked in practice):
- Data Quality—The performance of any ML model links directly back to the data that it was trained with and validated against. ML teams should therefore be prepared to verify the origin and quality of the data that they’re using so that they can be certain that they are building a more trustworthy model from the outset.
- Accuracy—Accuracy covers the performance indicators that measure a model’s aggregated errors. No single number tells the whole story, so accuracy needs to be evaluated with multiple metrics and methods to be fully understood.
- Speed—For model performance, speed is the time it takes for a model to make a prediction. Predictive speed directly affects how a model can be used, and it depends on factors such as dataset size and how quickly a prediction is required.
- Stability—How can ML teams ensure that their models will behave in a consistent and predictable way when confronted with changes or inconsistencies in data? Testing models to assess stability is an essential part of improving model performance.
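To make these performance checks concrete, here is a minimal sketch of what they might look like in code. It uses scikit-learn with a synthetic dataset; the random forest model, the noise level, and the particular metrics are illustrative assumptions rather than a prescribed workflow.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real, quality-checked training set.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# Accuracy is multidimensional: report several complementary metrics.
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("ROC AUC  :", roc_auc_score(y_test, proba))

# Speed: wall-clock latency of a batch prediction.
start = time.perf_counter()
model.predict(X_test)
print(f"latency  : {time.perf_counter() - start:.4f}s for {len(X_test)} rows")

# Stability: how much do predictions change under small Gaussian input noise (std 0.01)?
noisy = X_test + np.random.normal(0, 0.01, X_test.shape)
flip_rate = np.mean(model.predict(noisy) != pred)
print("prediction flip rate under small noise:", flip_rate)
```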
2. Operations
Ensuring that best practices are followed in ML operations is just as important for building trust as the performance of the model itself.
- Compliance—Risk management and regulatory compliance must be met in many areas, including development, implementation, and deployment. Robust documentation of your end-to-end workflow and your compliance efforts is therefore crucial for establishing trust.
- Security—AI systems analyze and generate large amounts of sensitive data, so it is important that security is taken seriously. Independent standards such as ISO 27001, for example, can be used to verify that your system meets information security requirements.
- Monitoring—Governance is becoming more important in AI. To build trust, ML teams need to devise and implement a clear system for monitoring models in production (for example, watching for data drift and performance degradation) and for redundancy; a minimal drift check is sketched below.
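As one hedged illustration of what such monitoring can look like, the sketch below computes the Population Stability Index (PSI) between a reference (training) feature sample and live data. The feature values are synthetic, and the 0.2 threshold is a common rule of thumb rather than an agreed standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough drift score between a reference (training) sample and live data."""
    # Bin edges are taken from the reference distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log of zero.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example: a feature whose live distribution has shifted slightly.
training_feature = np.random.normal(0.0, 1.0, 10_000)
live_feature = np.random.normal(0.3, 1.0, 10_000)

psi = population_stability_index(training_feature, live_feature)
print(f"PSI = {psi:.3f}")  # Rule of thumb: PSI > 0.2 suggests significant drift.
```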
3. Ethics
Ethics is relatively new in the context of AI. However, AI systems and the data that they use can have a huge impact, so it is important that they reflect the values of users and stakeholders.
- Privacy—Privacy is now seen as a fundamental right, and recent legislative action (e.g., the EU GDPR) has enshrined this position in law. However, the use and exchange of data in the field of AI complicates this somewhat. ML teams need to understand the data they are collecting and using, whether it is classed as personal data, and take steps to ensure compliance with key information security and privacy requirements.
- Bias—We have already touched on this. ML teams must understand what it means for a system to be biased, where bias can come from, and how to measure it (a simple measurement sketch follows this list). Only then can they work to mitigate bias and unfairness and, ultimately, improve the trustworthiness of their systems.
- Transparency—Familiarity and transparency are two of the most powerful ways to build trust between AI systems and users. If a user is familiar with how a system works and can interpret how it makes its decisions, it creates a shared understanding between machine and human.
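As a hedged example of measuring bias, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between two groups. The random predictions and the binary sensitive attribute are placeholders for illustration; a real fairness audit would use the model’s actual outputs and consider several complementary fairness metrics.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Difference in positive-prediction rates between two groups (0 means parity)."""
    rate_group_0 = y_pred[sensitive == 0].mean()
    rate_group_1 = y_pred[sensitive == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Hypothetical binary model decisions and a binary sensitive attribute (e.g. group A/B).
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)
sensitive = rng.integers(0, 2, size=1000)

print("demographic parity difference:", demographic_parity_difference(y_pred, sensitive))
```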
Transparency is key to trust in AI
While there is no globally agreed process or standard for building trust in AI, enough forethought and understanding of what trust means can make all the difference in developing a robust—and of course, trustworthy—system that reflects the values of its users.
One factor that AI thought leaders note is more important than any other in building trust between AI systems and users is transparency. To trust the decisions of a machine, ethical or otherwise, the user needs to know how it arrives at its conclusions.
Right now, it can be argued that deep learning performs poorly in this regard, but there are AI systems that can point to the documents in their knowledge bases from which they draw their conclusions. Transparency is improving, then, albeit very slowly.
Rachel Bellamy, IBM Research Manager for human-agent collaboration, reckons that we will get to a point within the next few years where an AI system can better explain why it has made its recommendation. This is something that is needed in all areas where AI is used. Once transparency in AI has been achieved, users will naturally have a much higher level of trust in the technology.
Our commitment to trust and ethics
Not everyone in the AI space sees trust, ethics, and transparency as a priority. At Qwak, however, we are committed to helping our clients build, develop, and deploy ML models that meet the three dimensions of trust that we discussed above.
Qwak achieves this by making it possible for our clients to record, document, reproduce, and manage everything on our end-to-end cloud platform, helping them take their first steps towards more transparent and ethical ML model development without having to think about it.
Want to find out more about how Qwak could help you deploy your ML models effectively and efficiently? Get in touch for your free demo!