If we build decision-making into an AI, how do we build one that will not decide to either ignore an outcome or develop its own set of morals as it matures? A true AI is supposed to learn from the decisions it makes. Like a child, it is supposed to make mistakes, learn from them, and then not repeat them.