Which of these concerns do you consider to be the most worrying? Why?
Which of these potential problems do you find least worrying?
It is often suggested that improvements in technology create "more, better jobs for humans". The narrator of "Humans Need Not Apply" attempts to debunk this by comparing it to the claim that "better technology makes more, better jobs for horses". Do you think that this is a fair comparison? Do you anticipate that increasing automation will reduce the need for human labor in the same way that it has reduced the need for horse labor?
The authors present an argument that sentience and sapience are sufficient conditions for moral status. They base this, in part, on the "Principle of Substrate Non-Discrimination". Do you accept this argument? If a machine has sentience and sapience, should it be granted moral status?
The authors raise the problem of imbuing superintelligent machines with an ethical framework. Even if this is practically possible, it raises the question of which ethical principles we would want the machines to have. Hard-coding our current ethical ideas into the machines could permanently saddle them with our ethical mistakes. The authors suggest that we should instead try to design super-ethical superintelligences: machines that are able to improve on our ethical reasoning.
Do you think that building super-ethical machines is a worthwhile objective? Why or why not?
Imagine that, while researching your poster topic, you stumble across an algorithmic trick that would make it possible to develop a superintelligent computer program. The only catch is that there is no way to specify the utility function: no way to build in Asimov's Three Laws, or even to give a general direction to the motivations of your machine. You are certain that if this trick were widely known, there would be superintelligent machines within two to three years. You are also confident that if you do not reveal the trick, no one else will discover it in the foreseeable future.
What would you do? Why?
At the beginning of the course we discussed the following questions:
Do you think that it is possible for machines to be intelligent? Do you think that, in your lifetime, there will be machines that you would consider to be intelligent?
Was there anything in the most recent reading, or anything else you have learned this semester, that changes your answers to these questions?