We distrust what we do not understand
As the hype around AI adoption grows, and more and more companies across industries hire to leverage machine learning and deploy more advanced AI algorithms, including a variety of deep neural networks, the ability to provide clearly justifiable explanations to different stakeholders becomes increasingly critical from a business, legal, economic and ethical perspective.
As enterprises struggle to find the right skills and the right design, implementation, data-pipeline and deployment architectural patterns for AI-based systems, it is important to factor in the societal, ethical and adoption repercussions of this newly rediscovered technology.
Much of AI seems to be driven by black-box prediction. However, we humans find it difficult, if not impossible, to accept what we cannot see clearly and therefore understand. We need to see, with at least some degree of precision, the step-by-step process by which an algorithm has classified, clustered or regressed and produced an output on which a human must decide what action to take. And since distrust goes hand in hand with lack of acceptance, it becomes imperative for companies to open the black box.
Bias Minimization
Bias minimization, ethical factorization and fairness implications are integral parts of building trust in AI-based systems. Machine-learning-generated recommendations, classifications and predictions will die on the vine if there is no trust.
Despite what technical executives may believe, these aspects are part and parcel of the business objectives, marketing repercussions and customer adoption scenarios that are critical to business continuity, reducing customer churn and increasing customer loyalty.
These considerations will have a material impact on business performance and optics and, in many cases, as we come to rely on ML and AI systems for decisions, on the trajectory of human lives: the approval of a loan, the diagnosis of an ailment, suggestions for care plans, educational roadmaps and so on.
Only by fundamentally embedding fairness, bias minimization and ethical principles and governance into the ML life-cycle can we build the trust we need in systems that people will begin to rely on more and more.
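One concrete way to embed bias minimization into the life-cycle is to track a simple fairness metric alongside accuracy during validation. The sketch below assumes a hypothetical loan-approval scenario with an `approved` prediction column and a sensitive `group` attribute, and computes a demographic parity difference with pandas; it is illustrative, not a complete fairness audit.

```python
import pandas as pd

# Toy loan-approval outputs: model predictions plus a sensitive attribute.
# The data and column names here are purely illustrative.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Approval (selection) rate per group.
rates = df.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the highest and lowest
# group approval rates. Zero means equal selection rates across groups.
dp_diff = rates.max() - rates.min()

print(rates)
print(f"Demographic parity difference: {dp_diff:.2f}")
```

A metric like this does not by itself make a system fair, but monitoring it across retraining cycles gives governance reviews something measurable to hold the model to.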
Human Factor Embedding
Paradoxically, Human Factor Embedding is critical to AI success. Foundational to this is explainability of AI systems, both a practical issue and an active research topic.
“How did we arrive at this conclusion?” “Well, the ML algorithm classified it as such…” is not a good enough answer. Explaining why, and being able to do due diligence to ascertain whether the data was adequate, unbiased, uncorrupted and not shaped by a politically or economically motivated influence during curation, is key to the trust we must ultimately place in the outcomes of ML algorithms. Do we trust the algorithms? Maybe. Do we trust the humans who trained them with possibly incomplete, imbalanced and biased curated data? No. We cannot.
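That due diligence can begin with a few mechanical checks on the curated data before any training run. The sketch below assumes a hypothetical `training_data.csv` with `label` and `gender` columns and uses pandas to surface missingness, label imbalance and group representation; it does not prove the data is unbiased, but it makes the obvious problems visible.

```python
import pandas as pd

# Illustrative training set; the file name and columns (label, gender)
# are assumptions for the sake of the example.
df = pd.read_csv("training_data.csv")

# 1. Missingness: how much of each column is null or NaN?
missing = df.isna().mean().sort_values(ascending=False)
print("Fraction missing per column:\n", missing)

# 2. Label balance: a heavily skewed label distribution hints at an
#    imbalanced, and potentially biased, training set.
print("Label distribution:\n", df["label"].value_counts(normalize=True))

# 3. Group representation: are sensitive groups present in proportions
#    the curation process can justify?
print("Group representation:\n", df["gender"].value_counts(normalize=True))
```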
AI Governance and Transparency
Thus AI Governance calls for transparency at each stage: identifying candidate data elements and features; cleaning, balancing, regularizing and normalizing (where appropriate); de-biasing; augmenting the data (removing or imputing nulls, NaNs, etc.); training, testing and validating; refining and re-training; and keeping the incoming pipeline fresh, current and relevant. These are all phases of the AI Governance cycle that lead to increased trust in AI.
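Each of those phases can be made inspectable in code. The sketch below is a minimal scikit-learn pipeline with assumed column names (`income`, `age`, `region`) and a commented-out train/test split; it is one way to make the cleaning, normalizing and training stages explicit and therefore auditable, standing in for whatever tooling a real governance process would mandate.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical feature groups; in a real system these would come from the
# data-candidacy stage of the governance cycle.
numeric_features = ["income", "age"]
categorical_features = ["region"]

preprocess = ColumnTransformer([
    # Cleaning/augmenting: impute missing numeric values, then normalize.
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric_features),
    # Encode categoricals; unseen categories are ignored rather than failing.
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_features),
])

# Training with explicit regularization (the C parameter below).
model = Pipeline([
    ("preprocess", preprocess),
    ("classify", LogisticRegression(C=1.0, max_iter=1000)),
])

# Train / held-out split for testing and validation (X, y assumed loaded).
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# model.fit(X_train, y_train)
# print("Held-out accuracy:", model.score(X_test, y_test))
```

Because every transformation lives in one declared pipeline object, an auditor can read off exactly what was done to the data before the model ever saw it.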
Interpretability, Explainability and Justifiability
As part of their governance and monitoring phase, machine-learning systems will need the ability to explain their rationale: a window through which human observers can assess and characterize their strengths and weaknesses, and data a researcher or auditor can examine to understand how the system arrived at a categorization, clustering or regression and how it will behave in the future.
Interpretability requires transparency, explainability, justifiability and falsifiability.
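One widely used building block for that window is model-agnostic feature attribution. The sketch below uses scikit-learn's permutation importance on a public dataset as a stand-in for a production model; the model and dataset choices are assumptions, and this is one illustrative technique rather than the whole of explainability.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A stand-in "black box": a random forest on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data and
# measure how much the score drops. Large drops flag the features the model
# actually relies on, giving an auditor something concrete to examine.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking[:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

An attribution report like this does not justify a decision on its own, but it gives reviewers a falsifiable claim about what the model depends on, which is exactly the kind of evidence transparency and justifiability demand.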