Toby Ord, Senior Research Fellow in Philosophy at Oxford University, estimates a one-in-ten chance that unaligned AI will cause human extinction or a permanent and drastic curtailment of humanity's potential within the next hundred years. One fundamental problem with both current and future AI systems is the alignment problem. The idea is that an artificial intelligence designed with the proper moral system would not act in a way detrimental to human beings in the first place.
Outlined in detail in Stuart Russell’s recent book ‘Human Compatible’, the alignment problem is simply the issue of ensuring that the goals of an AI system are aligned with those of humanity. While artificial intelligence powerful enough to operate outside of human control is still a long way off, the problem of how to keep such systems in line when they do arrive is already an important one. Aligning these machines with human values and interests through ethics is one possible way of doing so, but the questions of what those values should be, how to teach them to a machine, and who gets to decide the answers remain open.
The past few years have seen growing recognition that artificial intelligence raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of AI, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions.
Enabling the responsible development of artificial intelligence technologies is one of the major challenges we face as the field moves from research to practice. Biases can lead to systematic disadvantages for marginalized individuals and groups, and they can arise at any point in the AI development lifecycle. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of AI in many current and future real-world applications. There are now calls from across the field (academia, government, and industry leaders) for technology creators to ensure that AI is used only in ways that benefit people, and to engineer responsibility into the very fabric of the technology by building fairness and accountability into these systems.
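To make the notion of a fairness check concrete, one common audit compares positive-outcome rates across demographic groups. A minimal sketch follows; the metric choice (demographic parity difference), the function name, and the toy data are all illustrative assumptions, not something prescribed by the text:

```python
# Illustrative fairness audit: "demographic parity difference" is the
# gap in positive-outcome rates between two groups. A gap near zero
# means both groups receive positive outcomes at similar rates.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Return rate(group_a) - rate(group_b) for binary outcomes (0/1)."""
    def positive_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return positive_rate(group_a) - positive_rate(group_b)

# Toy data: 1 = loan approved, 0 = denied (entirely invented).
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(outcomes, groups, "a", "b")
print(gap)  # 0.75 - 0.25 = 0.5
```

A large gap like this would flag the model for further review; in practice such checks are run at multiple points in the development lifecycle, since bias can enter through the data, the labels, or the model itself.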