Trustable AI: Building Confidence – Part 1

We distrust what we do not understand

As the hype around AI adoption grows, and as more and more companies across industries hire to leverage machine learning and more advanced AI algorithms, including a variety of deep neural networks, the ability to provide clearly justifiable explanations to different stakeholders becomes increasingly critical from a business, legal, economic and ethical perspective.

As enterprises struggle to find the right skills and the right design, implementation, data-pipeline and deployment architectural patterns for artificial-intelligence-based systems, it’s important to factor in the societal, ethical and adoption repercussions of this newly re-discovered technology.

Much of AI seems to be driven by black-box prediction. However, we humans find it difficult, if not impossible, to accept what we cannot see clearly and therefore understand. We need to see, with at least some degree of precision, the step-by-step process by which an algorithm has classified, clustered or regressed before a human decides what action to take. And since distrust goes hand in hand with lack of acceptance, it becomes imperative for companies to open the black box.

Bias Minimization

Bias Minimization, Ethical Factorization and Fairness Implications are integral parts of building trust in AI-based systems. Machine-learning-generated recommendations, classifications and predictions will wither on the vine if there is no trust.

Despite what technical executives may believe, these aspects are part and parcel of the business objectives, marketing repercussions and customer adoption scenarios that are critical to business continuity, reduction of customer churn and increase in customer loyalty.

These considerations will have a material impact on business performance and optics and, in many cases, as we start to rely on ML and AI systems for decisions, on the trajectory of human lives: the approval of loans, the diagnosis of an ailment, suggestions for care plans, educational roadmaps, etc.

Only by fundamentally embedding fairness, bias minimization and ethical principles and governance into the ML life-cycle can we build the trust we need in systems that people will begin to rely on more and more.

Human Factor Embedding

Paradoxically, Human Factor Embedding is critical to AI success. Foundational to this is the issue and research topic of explainability of AI systems.

“How did we arrive at this conclusion?” “Well, the ML algorithm classified it as such…” is not a good enough answer. Explaining why, or being able to perform due diligence to ascertain whether the data was adequate, unbiased, uncorrupted and not shaped by politically or economically motivated influence during curation, is key to the trust we must ultimately place in the outcomes of ML algorithms. Do we trust the algorithms? Maybe. Do we trust the humans who trained them with possibly incomplete, imbalanced and biased curated data? No. We cannot.

AI Governance and Transparency

Thus AI Governance calls for transparency at each stage: identifying and selecting candidate data elements and features; cleaning, balancing, regularizing and normalizing (where appropriate); de-biasing and augmenting data (removing or imputing nulls, NaNs, etc.); training, testing, validating, refining and re-training; and keeping the incoming pipeline fresh, current and relevant. These are all phases of the AI Governance cycle that lead to increased trust in AI.
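To make this concrete, here is a minimal sketch of a few of those governance-cycle stages (imputing nulls, balancing classes, normalizing, training and validating) expressed as one auditable scikit-learn pipeline. The dataset name, column names and the upsampling strategy are illustrative assumptions, not prescriptions.

```python
# A minimal, hedged sketch of a governance-friendly training pipeline:
# every preprocessing step is explicit, repeatable and inspectable.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

df = pd.read_csv("loan_applications.csv")   # hypothetical dataset

# Balancing: upsample the minority outcome so the model does not simply
# learn the majority class (one of several possible strategies).
majority = df[df["approved"] == 1]
minority = df[df["approved"] == 0]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_up])

X = balanced.drop(columns=["approved"])
y = balanced["approved"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # remove/impute nulls, NaNs
    ("scale", StandardScaler()),                   # normalize where appropriate
    ("clf", LogisticRegression(max_iter=1000)),    # train
])
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))  # validate
```

Because every stage lives in one object, an auditor can see exactly what was done to the data before a single prediction is made.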

Interpretability, Explainability and Justifiability

As part of a governance and monitoring phase, machine-learning systems should be able to explain their rationale, giving human observers a window to assess and characterize their strengths and weaknesses, and providing data that a researcher or auditor can examine to understand how they arrived at a categorization, clustering or regression and how they will behave in the future.

Interpretability requires transparency, explainability, justifiability and falsifiability.
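One concrete way to open such a window is post-hoc feature attribution. The sketch below uses scikit-learn's permutation importance, assuming the fitted `model` and held-out `X_test`, `y_test` from the earlier pipeline sketch; it is one illustrative technique, not the only route to explainability.

```python
# A minimal sketch of one explainability technique: permutation feature
# importance measures how much held-out performance drops when a single
# feature's values are shuffled, i.e. how much the model depends on it.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)

# Report features in descending order of influence so an auditor can see
# which inputs the model's decisions actually rest on.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X_test.columns[idx]:<25} "
          f"{result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```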


Training Sets

A trained deep neural net model is only as good as the curated dataset that it was trained on. It will inevitably be biased by who gathered the data, how it was curated, how it was validated and how it was balanced.

It is likely that, following human example (that of politicians, for instance), models are trained to treat heavily biased statements as neutral, and vice versa.

In a real-world situation, governance, transparency, accountability and traceability of the training data are key. This would include crowd-sourced options, so that everybody can contribute to and maintain it. We cannot necessarily trust the intentions or affiliations of the people who trained (read: lobbied) the models.
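A simple, traceable first step is to audit how the curated data is distributed before any training happens. The sketch below checks label balance overall and per group; the dataset and the "region"/"approved" column names are hypothetical placeholders.

```python
# A minimal sketch of a training-data audit: inspect how outcomes are
# distributed across a (hypothetical) sensitive attribute before training.
import pandas as pd

df = pd.read_csv("loan_applications.csv")   # hypothetical dataset

# Overall label balance.
print(df["approved"].value_counts(normalize=True))

# Approval rate broken out by group: large gaps here are a signal that
# the curated data, and any model trained on it, may carry that bias.
print(df.groupby("region")["approved"].mean().sort_values())
```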


Minimize Bias

Without doubt, we live in a world of viewpoints and therefore biases, so we cannot absolutely eliminate bias. Bias generally denotes views and opinions skewed by prior experience or by adherence to unreasonable perspectives that are often not fact-based.

Our aim is not to eliminate bias, which is futile, but to gradually minimize it.

We do so from the perspective of machine learning and artificial intelligence. In AI and ML, we have a function called the “loss function” that algorithms seek to minimize.

Using a similar metaphor and heuristic, we seek to identify features of bias and incrementally minimize that bias in machine-learning training data and predictive modeling.
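The sketch below makes the metaphor literal: an ordinary training loss is augmented with a penalty for a measurable bias, here the gap in average predictions between two groups, so that minimizing the total also pushes the bias toward zero. The group split, the penalty choice and the weight `lam` are illustrative assumptions, not a prescribed fairness definition.

```python
# A minimal sketch of the loss-function metaphor for bias minimization:
# total objective = accuracy-oriented loss + lam * measurable-bias penalty.
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def demographic_parity_gap(y_pred, group):
    # Difference in average predicted approval rate between two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def bias_aware_loss(y_true, y_pred, group, lam=1.0):
    return binary_cross_entropy(y_true, y_pred) + lam * demographic_parity_gap(y_pred, group)

# Toy example: predictions skewed across groups pay an extra penalty on
# top of the ordinary accuracy loss.
y_true = np.array([1, 0, 1, 0])
group  = np.array([0, 0, 1, 1])
fair   = np.array([0.8, 0.2, 0.8, 0.2])
skewed = np.array([0.8, 0.2, 0.6, 0.1])
print(bias_aware_loss(y_true, fair, group))
print(bias_aware_loss(y_true, skewed, group))
```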