The AI Forum NZ recently kicked off six working groups to investigate a range of emerging issues around artificial intelligence and society.
Working group #5 focuses on Growing the AI Talent Pool.
New Zealand is facing a shortage of technical workers capable of developing AI applications. In what follows I argue that ‘growing’ is the right metaphor for solving this problem responsibly over the long term.
We will clearly need to increase the size of the available talent pool. That is a multifactorial task: increasing the number of people choosing AI, data science and machine learning as a career; increasing the throughput of formal learning institutions; increasing the availability and uptake of on-the-job and mid-career training; and increasing the supply of talent from outside New Zealand.
However, an ideal talent pool is not merely a matter of numbers; it is also about ensuring that the talent we grow is the right kind of talent, with the traits and characteristics that best enable a prosperous, inclusive and thriving future New Zealand. This means developing skills that go beyond technical capability. It also means ensuring that non-technical specialists understand machine learning and the capabilities of AI so that they can make optimal and ethical use of it.
Impacting society at scale
With any technology that affects society at scale (as AI clearly can) we have an obligation to develop it responsibly. The industrial revolution was poorly managed, resulting in the exploitation of factory labour. Technological innovation in the twentieth century began the catastrophe of atmospheric pollution. More recently, we can note that:
“In the past, our society has allowed new technologies to diffuse widely without adequate ethical guidance or public regulation. As revelations about the misuse of social media proliferate, it has become apparent that the consequences of this neglect are anything but benign. If the private and public sectors can work together, each making its own contribution to an ethically aware system of regulation for AI, we have an opportunity to avoid past mistakes and build a better future.” – William A. Galston (Ezra K. Zilkha Chair in Governance Studies)
When the things we do and create affect society at scale, there is a responsibility to get them right. That’s why we have professional certifications and structured programmes of non-technical knowledge and skills embedded in disciplines such as engineering, medicine, and law. Take, for example, the University of Auckland’s Medical Humanities programme and compare it with the course list for the University of Otago’s computer science department, where only one paper mentions ethics as a designated component.
AI talent origin stories
Furthermore, machine learning practitioners and AI developers do not come from any one particular development pipeline. You do not need a PhD in AI to fill these roles; AI practitioners can come from any mathematically rigorous training programme. Graduates in computer science, mathematics, physics, finance, logic, engineering, and so on often transition into AI and machine learning.
One glaring issue is that some of these generalist disciplines have no programme of social responsibility and professional ethics embedded in them (engineering may be an exception). Nor are there professional certification requirements for many of these skilled workers. This is in stark contrast to other professional disciplines such as accounting, law, nursing, teaching, medicine, and many others.
Social responsibility and professional ethics
To ensure the responsible development of the developers, we need to embed ethics and social responsibility in all the programmes that can lead to AI practice, or take the task a step further back to high school, or, moving up a level, ensure institutional codes of conduct and professional regulation. Probably all three are required.
Society expects the developers of intelligence to respect public institutions, privacy, and people as autonomous agents, among many other things. We do not want to be phished for phools for profit or to further an agenda. Just because something that affects society is possible does not mean it is automatically acceptable.
Just as medical writers sign up to a code of ethics that lets them push back against and rein in the whims of Big Pharma (who employ most of them), we need to be able to trust the talent pool that will be developing AI that affects us all.
The problem may not be so great when workers are employed by businesses that are ethical, socially responsible, and whose aims are aligned with societal flourishing. It can be argued that several of the big tech firms are moving in this direction: IBM, Google and Microsoft, for example, published ethical and/or social codes for the development of AI in 2018. But not all developers will migrate from their technical training into socially responsible firms.
IBM’s Everyday Ethics for AI report cites the following: “Nearly 50% of the surveyed developers believe that the humans creating AI should be responsible for considering the ramifications of the technology. Not the bosses. Not the middle managers. The coders.” – Mark Wilson, Fast Company on Stack Overflow’s Developer Survey Results 2018
Growing true AI talent through deep learning
Growing the talent pool is an apt metaphor. We do not just want a wider harvest of inadequate talent, nor merely the planting of many more average seeds. We also need to choose the right educational soil and add the right fertilizer of ideas, concepts and socially responsible skills.
Intervention is needed at three levels and across three time horizons. We need broad education in social issues, ethics, and civics before the choice of a career specialization. We need to cross-fertilize tertiary training in all disciplines that lead into AI practice with courses and dialogue on social responsibility, human flourishing, ethics, law, democracy and rights. And we need to ensure that professional engineering, AI and machine learning institutions mandate adherence to appropriate codes of conduct.
We need deep learning around all these issues from early on.
We need to begin now with current practitioners; we need to foster these ideals in those who have already chosen AI as a career; and we need to prepare the future generation of AI talent.
If the tech specialists don’t see the force and necessity of these points, then that in itself illustrates the problem.
Who is responsible?
Here I am, with no background in AI or machine learning, telling those who would make a career in these fields that they must study soft skills too. Why should all our voices count in this space?
We are talking about the applications of intelligence, and as intelligent beings we are all qualified to talk about how intelligence is distributed in society, how it is deployed and what functions it has.
When you go to a conference on nuclear physics, everyone there may be a nuclear physicist. But those who develop a technology are not automatically those who get to decide how we use it.
We see this when policy makers, ethicists and the public determine whether those nuclear physicists are permitted to detonate atomic weapons. We see this when the committees at the FDA determine whether medical technologies and pharmaceuticals will be licensed for use in society. AI and machine learning applications bear much in common with these other domains.
With great intelligence (and the development of great intelligence) comes great responsibility.
Importance and urgency
All this is important because digital technology now infuses every domain of society, and AI is rapidly becoming an integral part of law, medicine, engineering, and every other professional discipline. We are going to need professionals who understand AI, but we are also going to need technical developers who understand the professional aspects.
There are tasks in society that are urgent and tasks that are important. There are interventions that will have a narrow impact and those that will have a wide-ranging impact. In addressing the issues that are urgent and narrow (and therefore manageable) we cannot forget the issues that are ongoing and less well defined, but highly impactful.
The most important task moving forward is to ensure a just and cohesive society that supports democratic institutions and upholds social norms and rights; a society that does not use exploitation or manipulation as key processes for generating profit; a society in which technological innovation respects the evolution of institutions.
We must ensure that as a society we develop a pool of talented, socially aware, and responsible AI practitioners.