Ethics, not morals, but yes, this is a core part of "alignment," i.e., making sure the machine wants the same things we do. It turns out alignment is really hard because ethics is really hard. The classic AI doomsday story is based on an AI that takes utilitarianism as its highest end goal and concludes that the best way to save humanity is to destroy it; utilitarianism is an ethical framework that has been used to justify genocide.
Thank you for your reply; I think you are in the same headspace as I am on this topic. I will check out the links you posted. A side note: what is the difference between ethics and morals?
They talked about a survey asking what people would want AI and/or robots to do for them. At the top of the list were doing the dishes and cleaning the house. They referenced Rosie, the cleaning robot from The Jetsons, in their discussion, which was funny.
It’s an interesting question: what do we want to use AI for?
All of the info on AI development seems to be hush-hush, so I am very curious what is going on with it.
I remember a couple of years ago there were calls to halt AI development over concerns about world safety, and then the commercial media outlets sort of stopped talking about it.