No, self-driving cars don’t require we solve “trolley problem” moral dilemmas

Stop me if you've heard this: Now with self-driving cars, engineers will be faced with dilemmas. They will have to decide the answers to certain contentious questions in moral philosophy. For example, should a car go straight and hit the child, or divert and hit the man? How should the software be programmed to behave? … Continue reading No, self-driving cars don’t require we solve “trolley problem” moral dilemmas

Malevolent Artificial Intelligence (Sam Harris is Wrong, Part 4)

The second-ever post on this blog is about the question of malevolent AI. I will now revisit the question in more detail, having had time to refine my thoughts. I don't think the question of malevolent AI, or, more broadly, the "AI alignment problem," can be simply dismissed. If my view has changed at all since the last … Continue reading Malevolent Artificial Intelligence (Sam Harris is Wrong, Part 4)

If we design strong AI, it might not take over the world. It might just sit around masturbating.

We know what we as humans are designed for. Ostensibly, the goal of all life is to survive and reproduce (or otherwise propagate its genetic information). Our intelligence did not evolve as an end in itself. It evolved because it happened to be useful for that purpose in our environment. You … Continue reading If we design strong AI, it might not take over the world. It might just sit around masturbating.