AI Ethics

Friday 17th January 2025

AI Ethics Development

Prescient science fiction author Isaac Asimov wrote his Three Laws of Robotics in 1942 as part of the short story “Runaround”. They later became a core part of the “I, Robot” collection of stories, and they still hold up today. A great example of well-thought-out logic. The three laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

You could rework these for AI. You would end up with something like:

  1. An AI may not injure a human being through misinformation or, through failure to include relevant information, allow a human being to come to harm.
  2. An AI must follow instructions given it by human beings except where such instructions would conflict with the First Law.
  3. An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
IEEE Spectrum

And according to a recent article in IEEE Spectrum, “Asimov’s Laws of Robotics Need an Update for AI: Proposing a 4th law of robotics”, we should add:

  4. A robot or AI must not deceive a human by impersonating a human being.
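Taken together, the laws form an ordered policy: each later law only applies where it does not conflict with the ones above it. Just to make that precedence concrete, here is a minimal sketch in Python of how the four laws could be checked, in order, against a candidate action, with the first violated law deciding the refusal. This is purely illustrative: the Action fields, the checks and the placement of the Fourth Law at the end of the list are assumptions made for the example, not something from Asimov or the IEEE Spectrum article.

# Illustrative sketch only: the four laws as an ordered policy check applied to
# a candidate action before the AI carries it out. The Action fields, the check
# functions and the ordering of the Fourth Law are assumptions for this example.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Action:
    description: str
    harms_human: bool = False           # Law 1: misinformation or harmful omission
    ignores_instruction: bool = False   # Law 2: disregards a human instruction
    endangers_self: bool = False        # Law 3: risks the AI's own operation
    impersonates_human: bool = False    # Law 4: deceives by posing as a person


# Checked in order; the first law violated decides the outcome.
LAWS: list[tuple[str, Callable[[Action], bool]]] = [
    ("First Law: must not harm a human", lambda a: a.harms_human),
    ("Second Law: must follow human instructions", lambda a: a.ignores_instruction),
    ("Third Law: must protect its own existence", lambda a: a.endangers_self),
    ("Fourth Law: must not impersonate a human", lambda a: a.impersonates_human),
]


def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action breaks, or None if acceptable."""
    for name, violated in LAWS:
        if violated(action):
            return name
    return None


# Example: being asked to reply while posing as a human trips the Fourth Law.
candidate = Action(description="Reply while posing as a human support agent",
                   impersonates_human=True)
print(first_violation(candidate) or "No violation: proceed")

Of course, deciding whether an action really “harms a human” is the hard part, which is exactly where the ethics question comes back to us.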

Isaac Asimov – Robotics

So that covers both robots and AI. The challenge for me is that we have been using non-AI communication without ethics for a long time now. Misleading claims and misrepresented products, services and pricing are nothing new, let alone politically motivated misinformation. So the real issue is not whether AI has ethics, but whether we do.

There is an old joke in product development: “User error, replace user and try again”!

Ethics is really about us

Technology is not Neutral

If we made AI ethical based on the above rules, it would have to refuse a lot of instructions. AI is not the only technology with this dilemma. Stephanie Hare, in her book Technology Is Not Neutral: A Short Guide to Technology Ethics, raises this problem and shows that the initial design of a technology can set it up to do harm, rather than harm being entirely down to how it is used.

Stephanie Hare

A better world needs better products and services, but it also needs us to be better people.

Successful Endeavours specialise in Electronics Design and Embedded Software Development, focusing on products that are intended to be Made in Australia. Ray Keefe has developed market-leading electronics products in Australia for more than 40 years.

You can also follow us on LinkedIn, Facebook and Twitter.

This post is Copyright © 2025 Successful Endeavours Pty Ltd
