Can a machine have a mind, consciousness, and mental states? According to philosophers, neuroscientists and cognitive scientists, consciousness refers to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something. Some arguments that a computer cannot have a mind and mental states include Searle's Chinese room, Leibniz' mill, and Davis's telephone exchange.
A computational approach abstracts away from the specific implementation details of a cognitive system, such as whether it is implemented in a carbon or silicon substrate. Instead, it focuses on a higher level of analysis: the computations, algorithms, or programs that a cognitive system runs to generate its behavior. Another way of putting this is that it focuses on the software a system is running, rather than on the system's hardware. The computational approach is standard in the field of cognitive science (e.g., Cain, 2015) and suggests that if artificial entities implement certain computations, they will be conscious. Overall, there is broad agreement across both approaches that artificial consciousness is possible. According to the computational approach, which is the mainstream view in cognitive science, artificial consciousness is not only possible but likely to come about in the future, potentially in very large numbers. If we create artificial sentience, such beings will be capable of suffering, and we will therefore have moral responsibilities towards them.

The physical approach instead focuses on the physical details of how a cognitive system is implemented; that is, it focuses on a system's hardware rather than its software. On one prominent version of this view, integrated information theory, the hardware of current digital computers has very little integrated information, so such computers could not be conscious no matter what cognitive system they implement at the software level. Thus, although artificial consciousness is possible on the physical approach, it typically predicts fewer conscious artificial entities than the computational approach does.
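The notion of integrated information admits a toy illustration. The sketch below is a drastically simplified proxy, not the actual Φ of integrated information theory: for a two-node system, it measures how much information the present state carries about the past state as a whole, beyond what each node carries about its own past. The systems, update rules, and measure are all illustrative assumptions.

```python
# Toy proxy for "integrated information": information a system generates as a
# whole, minus what its parts generate when each part only sees its own past.
# This is an illustrative simplification, not IIT's actual Phi computation.
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(X; Y) in bits from a list of equally likely (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def integration(step):
    """Whole-system past/present information minus the sum over the two
    parts when each part is cut off from the other node's past state."""
    states = list(product([0, 1], repeat=2))   # uniform prior over (a, b)
    whole = mutual_information([(s, step(*s)) for s in states])
    part_a = mutual_information([(a, step(a, b)[0]) for a, b in states])
    part_b = mutual_information([(b, step(a, b)[1]) for a, b in states])
    return whole - (part_a + part_b)

def swap(a, b):
    return (b, a)   # each node's next state depends on the other node

def copy(a, b):
    return (a, b)   # each node depends only on itself: fully decomposable

print(integration(swap))   # 2.0 bits: the whole carries info the parts lack
print(integration(copy))   # 0.0 bits: no integration across the partition
```

In this toy setting the coupled "swap" system generates two bits as a whole that its isolated parts cannot account for, while the decomposable "copy" system generates none; the physical approach's claim is that conventional digital hardware looks much more like the latter.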
Consciousness in humans is often characterized as the capacity to experience sensation, emotion, and thought, and to produce willful behavior in response to stimulation from the environment. A conscious person is also aware of their own general state of awareness: they can recall entering and leaving particular trains of thought. Several well-known arguments hold that this kind of consciousness is impossible for computers.

One such argument appeals to computability. More than 80 years ago, the British mathematician Alan Turing proved that there is no general procedure for deciding whether an arbitrary computer program will ever stop running; yet, the argument goes, just this sort of self-monitoring ability is central to consciousness. A short sketch of the proof is given at the end of this section.

The most well-known objection, however, is John Searle's Chinese Room argument. The argument asks us to imagine a non-Chinese speaker locked in a room with a large batch of (unbeknownst to them) Chinese writing and a set of instructions written in a language they understand. The instructions tell the person how to match up and respond to inputs arriving through a slot in the door, which are questions in Chinese. As the person produces the appropriate outputs based on the instructions, and becomes increasingly good at this, it appears from the outside as if they understand Chinese. However, Searle claims that the person clearly does not truly understand Chinese; from their perspective, they are merely manipulating meaningless symbols according to syntactic rules. Since computer programs work in essentially the same way, they cannot have true understanding either.
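Searle's room can be caricatured in a few lines of code: a program that maps input symbols to output symbols by pure pattern matching, with no access to what any symbol means. The rulebook below contains a couple of invented placeholder exchanges, standing in for the vastly larger instruction set the thought experiment imagines.

```python
# A caricature of the Chinese Room: map input symbols to output symbols by
# rule, with no access to what any symbol means. The rulebook entries are
# invented placeholders for the thought experiment's vast instruction set.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    # Purely syntactic: match the incoming pattern, emit the paired response.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(room("你好吗？"))   # fluent-looking output, with no understanding inside
```

However convincing the outputs might become with a richer rulebook, nothing in the lookup comes any closer to understanding; that gap between syntax and semantics is exactly what Searle's argument trades on.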
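Turing's undecidability result, mentioned above, can be made concrete with a short diagonal argument. The sketch below assumes, purely for contradiction, a hypothetical oracle halts(f, x); both function names are illustrative, and no such oracle can actually be implemented.

```python
# A sketch of Turing's diagonal argument. Suppose, for contradiction, that a
# total function halts(f, x) could always decide whether f(x) terminates.

def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) eventually terminates."""
    raise NotImplementedError("provably impossible to implement in general")

def contrarian(f):
    """Do the opposite of whatever the oracle predicts about f run on itself."""
    if halts(f, f):
        while True:   # predicted to halt, so loop forever
            pass
    return None       # predicted to loop, so halt immediately

# Now ask: does contrarian(contrarian) halt?
#   If halts(contrarian, contrarian) returns True, contrarian loops forever.
#   If it returns False, contrarian halts at once.
# Either answer makes the oracle wrong, so no such halts() can exist.
```

Whether this mathematical limit on self-prediction really bears on consciousness is, of course, exactly what the argument reported above asserts and what its critics dispute.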