Do you believe a perfect simulation of a human mind would result in something with an identical behavior to that of the person whose mind we're simulating? yes or no.
If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption in the question (that identical behavior implies identical properties). If no, then, again, you're contradicting the assumption in the question. You think consciousness is not determined by behavior. Ok, then what determines whether or not something is conscious?
> Do you believe a perfect simulation of a human mind would be able to create something with an identical behavior to that of the person whose mind we're simulating? yes or no.
Sure, I'll admit the possibility exists.
> If yes, do you disagree that both are conscious? If you do, then you're contradicting the assumption given in the question (that identical behavior implies identical properties).
I must've missed this... who was saying that identical behavior implies identical properties? Sure, I'd probably disagree that the A.I. is conscious in a subjective sense, and I'd also probably disagree that identical behavior implies identical properties. People can imitate each other without taking on the properties of the imitated.
I don't really have an empirical "test" for subjective consciousness beyond my own immediate, first-person experience of it. This may sound like a concession or even a defeat, but I think I'm allowed to posit that phenomena exist which we currently lack the empirical tools to investigate. "Currently" is the key word; as I said before, it is arrogant to assume consciousness will forever remain a mystery to scientific inquiry, just as it is arrogant to assume it must be a simple extension of existing theories.
I admit I have nothing beyond my own experience to validate the idea of subjective perception, and I have no evidence beyond intuition as to whether or not a machine can "experience" input the same way a brain can. However, I think I'm still entitled to believe that subjective experience is a real phenomenon whose nature can and should be explained, and that our scientific understanding is presently inadequate for this task.
EDIT: I can understand the fear of relying on intuition. After all, it's the same thing that led us to believe that lightning came from the gods. But that doesn't mean we should throw out the entire experience of perceiving lightning. Clearly lightning is a phenomenon we experience, yet we still don't understand how photons entering our eyes produce the subjective experience of blinding whiteness, or how thunder's vibrations, translated into electrical signals by the ear, result in the subjective experience of the sound itself. The information is in the brain, but we still don't know how information becomes experience. This doesn't mean we have to explain it via gods, but it does mean we still have something left to explain.
Humans exhibit "intelligence". Humans exist in the physical world. There is some physical process which produces "intelligence". A priori, there is no reason to believe this process cannot be understood, engineered, and re-implemented (whether it be in silicon or in a biological way).
Your argument could have been made for every piece of technology before it existed. Here's what scientists, engineers, and mathematicians do: they either keep trying, or they find a reason why it's impossible.
1. Do we have to engineer how much it's being rewarded in each game?
2. What happens in the infinite case? With pacman, there are always a finite number of choices at each state.
3. If it doesn't have to be told the rules, then how does it make decisions? In theory, it may be able to learn how to play Jeopardy, but it may be way too inefficient in practice. Humans don't even start with a blank slate.
4. I don't think I ever consciously apply philosophical principles like Ockham's razor when I problem solve or learn. It makes me a little uncomfortable that we're starting with a philosophy, rather than having the system discover things itself. I would be ok with it if there was some parallel between Ockham's razor and physics (not the methodology of science).
> I don't think I ever consciously apply philosophical principles like Ockham's razor when I problem solve or learn. It makes me a little uncomfortable that we're starting with a philosophy, rather than having the system discover things itself. I would be ok with it if there was some parallel between Ockham's razor and physics.
Why do you think it is relevant what you do consciously? The only things that you do consciously are those things which your brain is ill-equipped to do. The vast majority of your thinking processes are subconscious, as are the principles that drive your conscious thinking. And I guarantee you, Ockham's razor is in there whether you realize it or not. When things get complicated, do you purposefully look for a simpler solution? When trying to understand an unknown situation, do you start with something simple and add complexity as needed? Ockham's razor.
> I would be ok with it if there was some parallel between ockham's razor and physics.
... there's not?
EDIT: As an AI researcher, I'd be more interested in creating an artificial scientist than "artificial science." So what makes the scientist work? Ockham's razor is at the foundation of that.
About the countably infinite state space: with roughly 10^56 states, Pacman still remains a very challenging domain! Just to put things into perspective, my colleague recently ran an experiment where he would exhaust 8 GB of RAM, but could reduce this down to 4 GB using random projections for dimensionality reduction.
Also, don't forget that this is a partially observable domain (with only local sensory information) and no a priori knowledge of the overall goal to achieve. It learns completely from scratch! If we find this task easy as human beings, it's because we have a lot of prior knowledge that we can transfer into this task.
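The colleague's exact setup isn't described, but the idea behind random projections is simple enough to sketch: multiply a high-dimensional state-feature vector by a fixed random sign matrix to get a much shorter vector that approximately preserves distances (Johnson-Lindenstrauss style). The dimensions below are invented for illustration:

```python
import random

def random_projection_matrix(d_high, d_low, seed=0):
    """A dense random sign matrix with entries +/- 1/sqrt(d_low).
    Multiplying by it approximately preserves pairwise distances
    (a simple Johnson-Lindenstrauss-style projection)."""
    rng = random.Random(seed)
    scale = 1.0 / (d_low ** 0.5)
    return [[rng.choice((-scale, scale)) for _ in range(d_high)]
            for _ in range(d_low)]

def project(matrix, features):
    """Map a high-dimensional feature vector to a low-dimensional one."""
    return [sum(m * f for m, f in zip(row, features)) for row in matrix]

# Hypothetical numbers: compress 10,000 state features down to 100,
# cutting the memory needed per stored state by a factor of 100.
P = random_projection_matrix(10_000, 100)
rng = random.Random(1)
state = [rng.random() for _ in range(10_000)]
compressed = project(P, state)
```

The memory saving comes from storing (and doing value-function arithmetic on) the 100-dimensional vectors instead of the originals; only the projection matrix itself needs to be kept at full width.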
Ockham's razor here isn't a "philosophical principle", it's math.
AIXI is doing a shortest-first search for predictors that match the observed environment, except that it's searching them all simultaneously in infinite parallel (that's why it's not computable) and what gets moved around is probability weighting. Ockham's razor describes the starting state of the weightings when there hasn't yet been any evidence: shorter predictors are given more weight.
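Real AIXI mixes over all programs on a universal Turing machine and reweights them softly, which is exactly what makes it incomputable. But the Ockham prior itself can be shown in a toy form: give each candidate predictor weight 2^(-length), then discard (a hard version of down-weighting) whichever predictors are contradicted by each observation. The predictors and their "lengths" below are invented for illustration:

```python
from fractions import Fraction

# Toy stand-ins for programs: (length, prediction function over the
# observation history so far). Shorter predictors get more prior mass.
predictors = {
    "always_0":  (1, lambda h: 0),
    "always_1":  (1, lambda h: 1),
    "alternate": (3, lambda h: len(h) % 2),
    "copy_last": (4, lambda h: h[-1] if h else 0),
}

# Ockham's razor as the starting state: weight proportional to 2**(-length).
weights = {name: Fraction(1, 2 ** length)
           for name, (length, _) in predictors.items()}

def update(weights, history, observation):
    """Zero out predictors contradicted by the observation
    (a hard-elimination version of AIXI's soft Bayesian reweighting)."""
    return {name: (w if predictors[name][1](history) == observation
                   else Fraction(0))
            for name, w in weights.items()}

history = []
for obs in [0, 1, 0, 1]:
    weights = update(weights, history, obs)
    history.append(obs)

survivors = [name for name, w in weights.items() if w > 0]
# After seeing 0, 1, 0, 1 only the "alternate" predictor is consistent.
```

Before any evidence arrives, `always_0` and `always_1` hold most of the mass purely because they are short; the data then moves the mass onto whatever short predictors remain consistent.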