What I think will be critical, as we integrate these models into real-world machines and as their capabilities become more generalized and layered, is building in a sort of moral constitution that behaves like a concrete engine (a calculator): the model recognizes when an action might be questionable or could cause an undesirable outcome, then calls on the "constitution" to make the decision to act or not.
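A minimal sketch of what that separation might look like, purely as an illustration: the model attaches risk flags when it senses something questionable, and a separate, deterministic rule engine (the "constitution") makes the final go/no-go call. Every name here (`ProposedAction`, `CONSTITUTION`, `act`, the flag strings) is hypothetical, not any real system's API.

```python
# Hypothetical sketch: model flags risky actions, a fixed rule engine decides.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    risk_flags: set = field(default_factory=set)  # tags the model attaches

# The "constitution": a fixed, auditable rule table, evaluated like a
# calculator rather than left to the model's own in-context judgment.
CONSTITUTION = {
    "irreversible_physical_effect": False,  # never allowed
    "affects_third_party": False,           # blocked pending review
    "low_stakes_digital": True,             # allowed
}

def constitution_permits(action: ProposedAction) -> bool:
    # Deny if any flag maps to a forbidden rule; unknown flags default to deny.
    return all(CONSTITUTION.get(flag, False) for flag in action.risk_flags)

def act(action: ProposedAction) -> str:
    if action.risk_flags and not constitution_permits(action):
        return f"refused: {action.description}"
    return f"executed: {action.description}"
```

The design point is that the permit/deny step is a lookup, not a judgment call: the same flags always yield the same verdict, so the behavior can be audited and tested independently of the model that raised the flags.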