The other day, while working with some awesome and inspiring people, we started thinking about the kinds of processes that might one day be handled by artificial intelligence or a robot.
Thoughts are wonderful things that can take you in all kinds of directions, and my mind started racing to the question of wages: the who, what, where, when and how of paying a robot or AI to do a job.
My mind jumped to the differences in pay around the world, across countries, religions, races, genders and skillsets. We decry pay inequality among humans, so is it fair for a robot or AI to earn less than a human? What about the person who developed the AI or robot? Should she be paid for that work, as well as for the extra work completed by the AI? And when a robot can learn on its own and write new code to make things better, who should be compensated for that work?
In a recent CSC Tech Town Hall event, CTO Dan Hushon discussed the rise of artificial intelligence ethics and the idea that lawyers are now joining DevOps teams. I am not at all surprised by this. As the world changes, we will need to consider the boundaries and the grey areas, and we may have to reconsider what is right and what is wrong.
Taking this idea a step further, I wondered what one would pay an AI or robot with. Data? Surely they love data, or is data just an input they find to be a chore? Would you pay them in Bitcoin? Or would they develop their own currency to keep us humans out?
Will they want to take a break or a holiday? Alas, what would they do in their spare time, and what would they spend their wages on?
If we were to pay in royalties, similar to those for software or music, how would we ensure that the money was transferred correctly, with no code stolen from other sources? Would this mean that open-source AI and robots would not be valuable? Or perhaps the value lies in the number of users of the AI or robot, much as it does for an application.
To think about what AI and robots would value, we need to step into their shoes and look at what they do not have: creativity, impulsiveness, spontaneity, wisdom, emotional awareness and mortality.
Or perhaps we can think of it another way. Nick Malik, now CEO of Vanguard, draws on Bloom’s Taxonomy to talk about knowledge and learning, positing that AI and robots will seek help from humans to understand, to apply knowledge in new situations, to break things down with critical thinking, to put things together and join unknown dots, and finally to judge those ideas.
How many humans do we know who can do all of these tasks at the same time? Some days I even struggle to recall my own name, and we all have those moments. Alas, AI and robots lack consciousness, an awareness of right and wrong, the ability to judge based on the information in front of them. (If you have ever played the Moral Machine game, http://moralmachine.mit.edu/, you have some perspective on how we make difficult judgments. It certainly makes you think.)
At the moment, there is little guidance on these questions, and what exists is developing country by country. The field of AI ethics currently seems to be led by some of the wealthiest people and companies in the world, which could be dangerous. We as developers, consultants, DevOps teams, technicians and IT professionals need to be aware of what is happening; otherwise AI ethics will pass us by and be developed without us being included in the discussion.
The more we are aware of the ideas or legislation proposed in this matter, the more we can affect what we develop, code, invent and innovate.
This thinking about AI and robots needs to happen sooner rather than later. If you want to know more, have a read of the IEEE’s proposed standards at http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf, or look at some of the origins of AI ethics in the work of Nick Bostrom and Eliezer Yudkowsky: http://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
This certainly is an area to watch or to get involved in, because what may seem a silly question now — how much should a robot make? — will be very pertinent in the near future.