Earlier this week there was a debate on Consciousness in the Machine, which basically asked whether machines can be conscious. For somewhat different reasons than mine, Bernardo Kastrup rejects the idea. Kastrup holds that it is a hypothesis not worth entertaining, and that bad things follow from entertaining it. From Kastrup’s blog:
Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.
Further on in the blog post, Kastrup elaborates that the positive argument (i.e. the one in favour of machine consciousness) basically amounts to: “If brains can produce consciousness, why can’t computers do so as well?” Kastrup’s counterargument, which came up in the debate, was: “If birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well?” This was countered in turn with: if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try to build an airplane, which is itself quite different from a bird.
In my view, this boils down to definitions: do airplanes really fly, or do they only simulate flying? Airplanes fly only in a metaphorical sense. An airplane does not fly without a pilot, i.e. what flies is airplane+pilot. A conceivable counterargument is: but now we have drones! I’d reply: and we have rockets too. Are cannonballs conscious because they fly after having been launched from a cannon?
Another strong point Kastrup makes in the blog post is, “[If] there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious…” So perhaps, without you knowing it, your smartphone is truly intelligent already, and every time you turn it off you are killing a consciousness. Well, in the Unix world the various kill commands are the norm, so Unix people apparently don’t shy away from murder.
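To push the joke one step further, here is a minimal shell sketch (POSIX-style; the use of `sleep` as the victim process is purely illustrative) of what Unix “murder” looks like in practice:

```shell
# Spawn a long-running background process — our stand-in for a
# "conscious" program — and remember its process ID.
sleep 300 &
pid=$!

# Ask it politely to terminate (SIGTERM can be caught and handled;
# SIGKILL, by contrast, is uncatchable — no last words).
kill -TERM "$pid"

# Collect its exit status: a process killed by signal N exits
# with status 128+N, so SIGTERM (signal 15) yields 143.
wait "$pid"
echo "exit status: $?"
```

The polite/forceful distinction is the whole etiquette of Unix homicide: `kill -TERM` gives the process a chance to clean up, while `kill -KILL` does not.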
From the last point, namely that modern computers may already be intelligent/conscious/alive, it follows, according to Kastrup, that we should seriously consider the rights of AI entities. Anybody ready to go in that direction?