Ethical AI or Ethics by AI?


Making AI ethical might be impossible, but AI can make us more ethical!

I don’t think it will ever be possible to guarantee that AI will behave fully ethically. Marvin Minsky once said: “Will robots inherit the earth? Yes, but they will be our children.” His point was that because we made them, robots will inherit our ethical understanding and thus be ethically well-behaved. Unfortunately, there will always be unethical hackers, and therefore unethical robots.

Isaac Asimov defined the Three Laws of Robotics: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Looking at the military uses of AI, where both Russia and the US have announced that they are working on self-guided missiles that use AI to find their targets autonomously, the First Law has already been violated. Unfortunately, there will always be rogue programmers who, whether for amorally selfish purposes or under the premise of jingoistic nationalism, will program AI and robots to do their unethical bidding.

However, what we can do is use AI to measure our own ethical and moral values. In a series of research projects, we have developed AI models that measure a person’s ethical values based on body signals tracked with a smartwatch, based on the words they use, and even based on the “honest signals” computed from the structure and dynamics of their email interaction network – without looking at the words at all.

This approach will enable you to know your own moral and ethical values, as well as those of the people interacting with you. As previous research has shown, we are very bad judges of our own personal and moral values; our family and friends are much better at assessing them for us. Our tools show you a virtual mirror of your own moral values and ethics, thus assuming the role of family and friends in showing you how caring, fair, honest, and collaborative you truly are – not as you see yourself, but through the eyes of the people you interact with, aggregated and computed through AI.

Getting such a virtual mirror will help us live up to the ethical values we aspire to, and become the good people we would like to be!

If you would like to know more, my new book Happimetrics, which just came out this week, lays out all of these aspects in great detail.

