It’s interesting to note that the argument can be summed up as:
- SI will happen
- SI will save hundreds of thousands of lives by making human life better
- SI will be angry if it could have been made sooner and saved those lives
- SI will simulate all people who knew about the possibility of making SI and didn’t give 100% of their disposable income to the Singularity Institute
Here’s my response, point by point:
- SI may or may not happen, but if it does, our type of intelligence won’t be its immediate antecedent. We’re too dumb. We’re the minimal grade of intelligence capable of building a human-equivalent-or-better AI (if we’re lucky), not the immediate creator of an SI.
- SI may or may not coexist with humans. It may or may not value us. (My guess: it’ll be about as concerned for our wellbeing as we are for soil nematodes.)
- SI won’t experience “anger” or any remote cognate of it; almost by definition it will have much better models of its environment than a kludgy hack emulating a bulk hormonal bias on a sparse network of neurons developed by a random evolutionary algorithm.
- In particular, SI will be aware that it can’t change the past: either antecedent-entities have already contributed to its development by the time it becomes self-aware, or they haven’t. The game is over.
- Consequently, torturing antecedent-sims is pointless; any that are susceptible have already tortured themselves by angsting about the Basilisk before the SI ever existed.
- SI may or may not simulate anybody, but my money is on SI simulating either just its immediate ancestors (who will be a bunch of weakly superhuman AIs: Not Us, dammit), or everything with a neural tube. (So your cat is going to AI heaven, too. And the horse that contributed to your quarter pounder yesterday.)
via Roko’s Basilisk wants YOU – Charlie’s Diary.