The past decade has witnessed a wide adoption of artificial intelligence and machine learning (AI/ML) technologies.
However, these technologies have often been deployed without adequate oversight, resulting in harmful outcomes that could have been avoided.
Before we can realize AI/ML’s true benefit, practitioners must understand how to mitigate its risks. This book describes responsible AI, a holistic approach for improving AI/ML technology, business processes, and cultural competencies that builds on best practices in risk management, cybersecurity, data privacy, and applied social science.
Today, machine learning (ML) is the most commercially viable
subdiscipline of artificial intelligence (AI). ML systems are used to make
high-stakes decisions in employment, bail, parole, lending, and many
other applications throughout the world’s economies. In a corporate setting,
ML systems are used in all parts of an organization – from consumer-facing
products to employee assessments, back-office automation, and more.
Indeed, the past decade has brought with it wider adoption of ML
technologies. But it has also proven that ML presents risks to its operators
and consumers. Unfortunately, like nearly all other technologies, ML
can fail – whether by unintentional misuse or intentional abuse. As of today,
the Partnership on AI Incident Database holds over 1,000 public reports of
algorithmic discrimination, data privacy violations, training data security
breaches, and other harmful failures. Such risks must be mitigated before
organizations, and the general public, can realize the true benefits of this
exciting technology. For now, this still requires action from people –
and not just technicians. Addressing the full range of risks posed by
complex ML technologies requires a diverse set of talents, experiences, and
perspectives. This holistic risk mitigation approach, incorporating technical
practices, business processes, and cultural capabilities, is becoming known
as responsible AI.
Who Should Read This Book
Non-technical oversight personnel – along with activists, journalists, and
conscientious folks – need to feel empowered to audit, assess, and evaluate
high-impact AI systems. Data scientists often need more exposure to
cutting-edge technical approaches for responsible AI. Both of these groups
need the appropriate critical literacy to appreciate the expertise the other has
to offer, and to incorporate shared learnings into their respective work.