If we assume that humans are conscious, then yes, it is possible for a machine to be conscious. The only arguments categorically differentiating humans from electronic machines are religious in nature.
I define consciousness as having an internal model of the world that includes yourself, as well as your own thought processes (at a lower degree of fidelity). This says how you compute things, not what you are computing, so it is orthogonal to Church-Turing.
Whether it is achieved will depend on economic forces. I don't see much economic value in making conscious computers, or things which seem to be down that line. So I expect consciousness will come out of pure research (perhaps within a corporation, like IBM), well after computers have exceeded the raw processing power needed.
Because a machine consciousness will be so different from a human's, it will have to demonstrate a significantly higher degree of consciousness than a human needs to in order for most people to be comfortable with the term.
I know virtually nothing about CS, which may be an advantage in seeing the sense in your thinking in the abstract.
If we assume that humans are conscious & material, without bothering to define either, we at least know it is possible to have consciousness embodied in something material. At least in theory we could figure out how human consciousness works and replicate it. There may be easier ways, but this is at least one theoretical possibility.
You could in the same way have inferred that humans could figure out a way of creating flight from the fact that it exists in nature.
There is one nonreligious factor that significantly differentiates humans from machines: that humans evolved, while machines were built by humans. There's an argument to be made that humans cannot understand consciousness well enough to build it, because consciousness is our only tool for doing that.
Of course, this only applies to intentional creation of consciousness. It says nothing about the distinct possibility that we could create a consciousness by accident.
But we know how to get the effects of evolution... on steroids: unsupervised, feedback/effect based learning (genetic algorithms, NNs, etc.)
We can already build NNs big enough that we only understand them in general terms (some areas get "specialised" in some way and become more active under some conditions), and we know that the result is what we expect - but there's no way someone will take a look at the weights / resulting model and tell you what it does. I think we can create consciousness intentionally, but without understanding the process completely.
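The "evolution on steroids" idea above can be sketched in a few lines. This is a toy genetic algorithm, purely illustrative: the fitness goal, population size, and mutation rate are all made up for the example; the point is that the "breeder" specifies only a goal, never a mechanism.

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=100):
    """Toy genetic algorithm: selection plus mutation, no design insight needed."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # selection pressure
        survivors = pop[:pop_size // 2]              # the fittest half survive
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]  # mutated copies
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical breeding goal: genomes whose values sum to 6.0 "survive".
# The goal says nothing about which genes to change, or how.
best = evolve(lambda g: -abs(sum(g) - 6.0))
print(round(sum(best), 1))
```

Because the fittest half carries over unchanged (elitism), the best fitness never regresses; the population drifts toward the goal without anyone ever inspecting a genome.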
That doesn't mean it can't be used to achieve a goal.
Selectively breeding cows for increased milk production allows evolution to find a way or ways of making that happen. The breeder doesn't need to worry about hormones or glands. Evolution doesn't need to worry about the breeder's motives, i.e. why only the high milk producers are producing offspring.
No, but it is driven by external pressures, which we could apply to artificial consciousness by hacking it :) Ok, artificial selection - but no reason we couldn't game it to be genetic.
A computer-based consciousness would also be a result of evolution (albeit arrived at differently from the cases we usually know), being the byproduct of something that an evolved species created to help itself adapt.
Sometimes, when I'm getting overlogical, I see machine intelligence as being pretty much our destiny (i.e. that singularity thing), and that organic life may be very much obsolete once machine intelligence comes into its own.
As organic life was to the universe before it, machine life will be to us.
Evolution is not directed by a consciousness, so you cannot loosely use the term "evolution" to describe the DEVELOPMENT of computer-based consciousness.
ADDED: but yes, extended phenotype and all that jazz.
Just because biological evolution was not directed by intelligence does NOT mean you can't use evolution to describe a process of gradual improvement based on intelligent improvements rather than random changes and natural selection. Both are evolution, just in different domains, using different methods. (Your complaint sounds sort of like saying reading on a computer shouldn't be called reading because there is no actual printing involved in producing it.)
Bah....I call bullshit. This is pure anthropomorphism. Humans think they are the shit, but in fact they are only story-telling animals (which does give us an evolutionary advantage, incidentally: we are not limited in our inter-generational information transfer by genes alone). We are limited by the same physics as the chips we make. This innate quality you speak of is pure vapor. Even if humans were somehow able to become mentats, we'd still be limited by the tenets of information theory and what is computable. The fact that human intelligence is emergent leads me to believe that machine intelligence will be the same, albeit very different from a simian mammal's intelligence. Fish are smarter than we are at swimming. Think ants.
> Bah....I call bullshit. This is pure anthropomorphism. Humans think they are the shit, but in fact they are only story-telling animals
The poster you are responding to did not claim that humans are the shit. They merely pointed out the fact that we have so far proven incapable of proving that we aren't said shit. Now, you could argue that such a hypothesis could only be falsified by the construction of a machine consciousness. That, however, is orthogonal to the fact that it remains possible that there is in fact some bizarre quality of the universe or the human race that makes machine consciousness impossible. Not a terribly scientific position to take, perhaps, but a perfectly sound philosophical one.
yeah perhaps I drew him into this one, but "innate quality of human organic compounds" seemed like a bit of human elitism to me. To which I say: hey buddy, just because we are (apparently) the most dominant species on the planet (which I am also not too certain I agree with), doesn't mean we are the end-all be-all, or worse, somehow different from all the animalia we happen to place ourselves "above". Trust me, when humans wear out their welcome here (which seems to be coming with great alacrity on cosmic timescales), the insects will be more than happy to eat our corpses and carry on happily without us. We are adaptable, but not THE MOST adaptable organism on the planet. And regarding the philosophical-position business, I also call bollocks, as not one rational argument was presented to back his premise regarding the "innate quality of organic human compounds". Is the human neuron somehow magically different from a chimp's neuron? Methinks not; it is particularly unsound to declare our molecules different from any other organism's molecules simply because we are human.
No, human intelligence is an emergent quality, and I fathom that even our massive representations of humanity's information (akin to what Google is compiling) will begin to exhibit interesting, unanticipated emergent qualities of their own once they become complex enough (in fact, if they didn't I would be absolutely shocked). Many strange and unpredictable (or at the very least unanticipated) things arise from even the simplest of "complex systems" (Conway's Game of Life and some of Wolfram's automata), much less the wonderful systems detailed by our individual neural mappings and our individual genomes. (q.v. http://en.wikipedia.org/wiki/Emergent_behavior)
People always think that some human will "write" an AI like HAL or the like, but it is much more likely that nature will roll its own AI once we have made a comfy enough nest for it to germinate. After all, isn't that how we got here? (from an evolutionary biologist's standpoint, anyhow...)
And again, sorry for being contentious. I just despise the "religious" argument (even if there is no "named religion" being expounded... Let's keep Ockham's Razor at the ready here....)
There is a difference between being the most dominant species on the planet and being composed of living cells, resulting in breathing, aging, dying, and reproducing. An organic thing should not be looked at in the same light as something that is not.
@nwatson - That is a religious belief. It may not be a conventional religious belief, but it is a religious belief. Not to dismiss all religions (although, as an atheist, I tend to), but more to note that the GGP had pointed out that he assumed no religious belief.
humans are unique in the universe among all life forms and inanimate objects, are more than the sum of their physical parts, have a connection with something larger than the universe, and though in an insignificant corner of an insignificant galaxy have an eternal significance.
There's no way to prove we're unique in the universe. At most you could say the known universe, but even then you're going to have a hard time convincing people of that.
Known to whom? I don't like this very self-referencing way of thinking. Very similar to the discovery of America... people were already in America long before Columbus came and "discovered" it (although apparently he didn't even realize he hadn't arrived in India).
It's akin to having a group of people sketching on a large piece of paper, trying to build on each other's marks to create an accurate representation of a scene, and someone comes in and scribbles all over it, saying "but I see scribbles! All pencil marks are valid! Don't be so limited!".
He/She's allowed to have such an opinion, but this discussion is trying for a particular feel and that isn't contributing helpfully to it.
How are we unique? Do you know there is no other life in the universe similar to us? Can you prove there is no other life in the universe similar (or identical) to us? What do we have a connection to? God? Am I less significant because I don't feel this connection, because I don't feel humans have eternal significance?
Then we'll make machines out of "human organic compounds" to exploit those processes.
We're machines. That we may not have made a machine of the same class ourselves is an implementation detail. Maybe a large one, but still not really an argument against making conscious intelligent machines. Biology can do it, sooner or later so can we.
We are biological. So if we do it, it is really just a phenotypical expression of our genes.
Perhaps the disagreement is about when do machines (keeping in mind they're mechanical by definition) cross over to becoming more and more like biological entities (replication). And once they do, is it still coherent to think of them as machines as we use the term today. Or would we then think machines have been elevated to the biological level.
This is certainly a good point. At some point I crossed the line myself, when I started to see things as input-output devices with more or less complicated algorithms in the middle.
The human has a hideously complicated algorithm in there, involving a life-long history, internal feedback and reflection, and arbitrary side constraints.
A spider for example is much simpler. 'If the net shakes, walk where it shakes and eat.' (Certainly, + a batch of regulations to be able to walk and sense, but the point stands).
However, imo at a certain point a computer program passes the complexity of, say, a spider. Just consider modern compilers. These things are of baffling complexity and do things inside that no human can imagine in detail :) Or imagine data-mining software, or even just very complicated, security-aware network guards. All of these programs are very, very complicated in their input-output behaviour, and even though they are not as complicated as a human, they can certainly compete with a spider, at least for me.
And exactly this view is what caused some pretty nasty discussions for me, since some people just won't cross the line to 'everything is an input-output device of different forms'; they stand hard on the ground that machines and animals are different because they are machines and animals (and some go down ridiculous paths, 'god made animals, humans made machines', and whatsoever; not even a 'but the spider might be more complicated than FOO, because...', which would be a nice discussion :) ).
So, overall, I, as a person who is working hard on being a tolerant, non-racist person (which is really hard), don't see a reason to exclude the possibility of machines and robots being conscious, just because 'they are electrical and not organic' (which has a ring of "he is black, he CANNOT do science" to me. Sorry if I just offended a lot of people).
Er.. no, I'm saying if there's something magically special about our organic composition required to make a conscious being, then we can build machines out of the same organic compounds. If you want to get those compounds by disassembling people, well.. I guess waste not want not, but it isn't really what I had in mind ;)
"Consciousness is not computable" just means we need to build a new class of machine. Whether it's a blob of organic jelly in a box, or an IC with currently unknown structures on it to exploit certain handwavy quantum processes, or whatever; it just makes the task more complex, it doesn't make it impossible.
You are assuming that there is no innate quality of human organic compounds and processes that differentiates us from electrical components.
It may very well be the case that this is either true or false. We simply don't have enough evidence.
It's true, we can't prove it one way or the other at the moment, but that usually just means that Occam's razor should guide our speculations and assumptions. Since the brain seems like all it's doing is performing some computations, why would we ever assume that its true function is to do something more than that?
And given the fact that we are discovering new properties of matter and organic reactions all the time, there is a bias towards this being false.
There's a good chance that the way the brain achieves its computations is indeed a bit more complex than, for instance, an artificial neural network, absolutely. But that's an implementation detail, and there's certainly no reason to assume that there's no computational model that can account for it - biologists are very close to having full working models of individual neurons already. Further, it's highly likely that the brain does its job in a biologically convenient way, not a logically convenient one, and I'd give good odds that there are a lot of logical simplifications that could be made to end up with a cleaner architecture that performs the exact same tasks.
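For flavor, here is the crudest standard computational neuron, a leaky integrate-and-fire model. The "full working models" mentioned above are far more detailed (Hodgkin-Huxley and beyond); this sketch, with parameter values chosen purely for illustration, just shows what treating a neuron as a computation looks like.

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Leaky integrate-and-fire: membrane voltage leaks toward rest,
    integrates input current, and emits a spike on threshold crossing."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v + r * i_in)  # leak + drive
        if v >= v_thresh:                        # spike, then reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant drive strong enough to cross threshold -> a regular spike train;
# no drive -> the voltage sits at rest and the neuron stays silent.
print(len(lif_neuron([2.0] * 200)))
print(lif_neuron([0.0] * 200))  # []
```

The whole "implementation detail" argument above is essentially that models like this (and their refinements) capture the input-output behaviour, regardless of how the wet chemistry realizes it.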
Considering we've already simulated the basic building blocks of our brain and understand all the basic molecules involved, I'd say that we already have a very strong indication that we can eventually have a machine consciousness similar to our own.
The only two physical processes that we know of that are fundamentally impossible to compute with our computing model are quantum computations and chaotic systems. And for chaotic systems we most certainly can simulate them in a way that the output has all the correct properties as far as we know. It's more that we can't reliably do prediction in such systems due to finite precision.
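The finite-precision point can be demonstrated with the logistic map, a textbook chaotic system (not something from this thread): simulating it is trivial and the orbit's statistical properties are reproducible, but a perturbation in the twelfth decimal place ruins point-wise prediction within a few dozen iterations.

```python
def logistic_orbit(x, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)   # same system, imperceptibly different start
print(abs(a[5] - b[5]))    # still tiny: short-term prediction works fine
print(abs(a[59] - b[59]))  # errors roughly double each step, so this is large
```

Both runs are perfectly valid simulations of the system; what's lost is only the ability to say which trajectory the real system is on.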
The simple fact is that computers are already our superiors on many tasks. And those tasks are not simple either.
a) The phenomenon of qualia remains unexplained. This has no bearing on making something that can mimic consciousness, but does become significant if one wants the real thing.
The last argument on this: http://news.ycombinator.com/item?id=766462
b) QM may be intrinsically linked to consciousness. (see Roger Penrose, _The Emperor's New Mind_) Of course, this is a moot point given that QM computing seems to be around the corner.
The fact that qualia are an inherently internal experience means that there can be no way to communicate them to anyone else. We are willing to accept that the other minds problem isn't a problem because we believe they are similar to us. This is why I say that machine intelligence would have to display a very high level of consciousness for us to accept it as conscious. Humans get a pass.
I discount Penrose's argument because he offers no phenomena that actually require his QM effects to explain and offers no plausible evolutionary path by which human minds could have evolved the mechanisms relying on QM. Do other primates have QM effects in their brains? What do these explain, given that the current models of the brain that we have are capable of explaining everything that we see? (Though we don't yet have the practical capability to simulate an entire complex brain and suitable io.)
I guess the question we're really asking is this: is the human brain a Universal Turing Machine? Everything we've learned so far points to yes (our study of neural networks has revealed a possible mathematical structure which is Turing Complete), therefore my vote is with the 'yay's.
P.S. I am an atheist, therefore I do not believe there is anything metaphysical about the human brain. Maybe there is something special about our mushy carbon structure -- doesn't matter. In that case, we'll just build our artificial brain with wetware. Hardware is hardware is hardware (and physics is physics). Eventually we'll get an artificial neural net as complex as our biological one. The real question is software...
Wouldn't you agree that it feels awfully "weird" to be a living, breathing human? Doesn't it seem to be beyond what can be expressed by mere computer algorithms?
I realize it is not a defensible argument to say that the human experience is just too "weird" to be computable - I wish I had a better argument for defending my position...
Nonetheless, I find it hard to understand people, such as yourself, that are probably exposed to this same "weirdness" in their heads as I am and yet are so confident it is merely an illusion created by a sufficiently complicated computation.
Weird compared to what? I have no experience being anything other than a living, breathing human. Thomas Nagel wrote up the classic "qualia" argument you're making as "What Is It Like to Be a Bat?" (here: http://www.clarku.edu/students/philosophyclub/docs/nagel.pdf...). It seems persuasive: there is a certain subjective quality to consciousness and conscious experience; it must feel like something to be a bat, or a person, or whatever. Hofstadter's and Dennett's book, The Mind's I (http://www.amazon.com/Minds-Fantasies-Reflections-Self-Soul/...), is an interesting introduction to this kind of stuff; the authors have a lot of fun at qualia's expense.
If you're digging into Hofstadter (and if you aren't, you should be, whether or not you agree with him, almost every word he's ever written is worth reading, including the seemingly irrelevant stuff about translations), I Am A Strange Loop (http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/...) is a good read, as well.
The "weirdness" (or whatever you want to call the enormous explanatory gap between our mental lives and inanimate matter) deserves an answer. It's not something to sweep under the rug as "subjective" or "unscientific" or "poorly defined".
I agree, though, Hofstadter and Dennett have done an extraordinary job of devising such an answer. There is lots more work to be done, of course.
> The "weirdness" (or whatever you want to call the enormous explanatory gap between our mental lives and inanimate matter) deserves an answer. It's not something to sweep under the rug as "subjective" or "unscientific" or "poorly defined".
But what if the answer to the "weirdness" question is, in fact, that it's just an ill-formed question? What if it's just an illusion, and your brain is tricked into seeing something magical about itself where there is nothing there?
Apart from a mystical explanation, I cannot imagine any satisfying answer to the question (i.e. one that doesn't leave you feeling uneasy like Hofstadter's answer does to most people), which usually means that there's something wrong with what we're asking, not with how we're trying to answer it.
I would be satisfied with a good explanation of why the question is ill-formed, how the illusion comes about, etc.
I really don't think the question is ill-formed, though. There is a big explanatory gap. Denying that is simply dishonest. It would be like, if you don't know why a bicycle is easier to balance when it's moving, pretending that there is nothing to explain. "Oh, it's just bicycle parts."
> Wouldn't you agree that it feels awfully "weird" to be a living, breathing human?
No - it's just normal. I think it would be weirder if it wasn't some kind of computation, if I was a unique spirit out of several tens of billions that appeared from some unknowable place and origin, or whatever else 'i' might be.
> Doesn't it seem to be beyond what can be expressed by mere computer algorithms?
I have no reason to think it is - I haven't felt what it's like to be a quicksort or a face recognition algorithm for comparison, but I certainly feel like a heap of evolved feedback loops at times - when I fear the dark in a room I know, when I catch sight of living shapes where there are none, when I feel fight-or-flight at certain caller ID numbers, when I desire things that I also don't want, when I feel judgements of people based on some trivial detail.
> Nonetheless, I find it hard to understand people, such as yourself, that are probably exposed to this same "weirdness" in their heads as I am and yet are so confident it is merely an illusion created by a sufficiently complicated computation.
What else could it be? There isn't anywhere else it could reasonably be that we know of. You're either suggesting something unknowable (e.g. magic) or some kind of cruel joke (i.e. just as a quine is a program that prints its own source code, we could be a consciousness that sees the world except for a blind spot around the part that would let us see how we work - with a cover over where that would be).
I'm not confident that we are 'just' a computation, but I'm fairly confident that I am in accordance with the laws of physics (including any we don't know yet) and that places limits on what's possible with the amount of matter in my head, the energy input and output, the sensory input and output bandwidths, the timescales involved, the known behavioural results of varying localities of brain damage, etc.
Besides, what do you mean illusion? Consciousness is not 'fake'.
"Wouldn't you agree that it feels awfully "weird" to be a living, breathing human?"
From my perspective, being a computer would feel a lot weirder.
Put drastically, I think feelings are just firing neurons. I don't think there is really anything special about them (apart from the degree of complexity of the human body). If a computer has an algorithm that says
if(user hasn't typed anything for 14 days) lonely = true
then it has feelings, too. It might sound absurd, but only because it looks so simple. But imagine an enormously complex program, and the information "lonely" trickles through it. There might be a routine somewhere
if (lonely == true) connect_irc_channel(#depression)
and so on and so on. From a certain level of complexity onwards, it won't be so obvious anymore, and we won't be able to prevent feeling that the computer really feels.
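The two toy rules above could be fleshed out as a tiny stateful sketch. Everything here is invented for illustration, the 14-day threshold and the IRC channel included; the only point is that "lonely" is ordinary program state.

```python
from datetime import datetime, timedelta

class Companion:
    """Toy 'emotional' agent from the comment above: loneliness as plain state."""
    def __init__(self):
        self.last_input = datetime.now()
        self.lonely = False

    def tick(self, now):
        # First rule: no user input for 14 days -> lonely = True
        self.lonely = (now - self.last_input) > timedelta(days=14)
        if self.lonely:
            return self.seek_company()
        return None

    def seek_company(self):
        # Second rule, stubbed out rather than actually joining IRC
        return "connect_irc_channel('#depression')"

bot = Companion()
print(bot.tick(datetime.now()))                       # recent input: prints None
print(bot.tick(datetime.now() + timedelta(days=15)))  # neglected: seeks company
```

Scaled up by many orders of magnitude of such state and feedback, the comment's claim is that the distinction between "has a lonely flag" and "is lonely" stops being obvious.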
About those economic forces - I'm betting on games to get there first. With Creatures and The Sims there have already been two examples showing that the public is absolutely keen on software that features lifelike behaviour. And the wide distribution of games also means that games can get access to a lot of distributed processing power, beating even supercomputers. Also, game AI programmers are probably the ones with the most practical experience, because they actually have to produce stuff that works.
I play multiplayer FPS games even though I know my opponents are conscious. That doesn't prevent me from shooting them, because I know I only kill their avatar and not them. Same with AI - if I shoot an avatar, I won't destroy its code.
The ethical issues will rather start the moment AIs are clever enough that they can be taught. What will humans teach them? Personally I believe the best protection from abuse will be to keep that learning process in public (as compared to AIs learning from companies or the military for specific purposes). So my own long-term target is distributed virtual worlds in which AIs improve by getting passed around. A single computer user might still teach them bad stuff, but I hope some sort of selection, based on many people watching and exchanging bots, will get the best possible results, even though completely preventing abuse won't be possible.
Code is not the avatar - it is the knowledge and memories (probably stored in the form of a big blob of neural net coefficients) that make a conscious being. Each time the game is restarted there would be new beings in it (unless their data is saved between games somewhere).
Erasing the knowledge and memories of such an AI would be analogous to killing, and it will become a problem, because throwing out data is easier, and writing your enemies to DVD after each game of Quake 10 because of moral issues will get problematic after a while.
>I define consciousness as having an internal model of the world that includes yourself, as well as your own thought processes (at a lower degree of fidelity). This says how you compute things, not what you are computing, so it is orthogonal to Church-Turing.
Do you have any proof or any testing that can be done to validate that?
But we can't even successfully predict how proteins will fold (only about 70% are predicted correctly) and we presume we know all about chemistry and physics. Yet what we model, and what we observe, are quite different.
If true machine consciousness is possible, it's a lot further off than we would like to think.
What if as Julian Jaynes says consciousness is a social creation rather than a biological one? Machine social interaction would be tremendously different and could be incapable of sustaining consciousness.
Is that actually a quantifiable distinction? Does how something is constructed have any effect on its properties? (assuming equivalent precision of the tools involved)
Building a car by hand versus building one with a modern robotic assembly line both yield the same output.
Ok, but how is a constructed object and a ... (randomly? naturally? whatever word you want to use) evolved object quantitatively different?
One could follow all the steps involved in getting a human from his evolutionary ancestor to his current state by using a pair of magic tweezers to make each mutation in the genome happen at the right time and in the right way. Could you tell the difference based on the result?
Yes, you can easily tell the difference between entities that have evolved by natural selection, and constructed entities.
(BTW, mutation is random, but evolution is not a random process).
Two example quantitative differences (although, I'm not sure why the specific exclusion of qualitative differences):
1) The human eye, for example, has flaws (that are solved in other similarly evolved eyes) that an engineer laughs at - if you were building a human eye, you wouldn't make the same "mistakes". The deviation from the "better" version is measurable.
2) Evolved entities exist because of procreation. And their only reason for existence is to assist genetic material to replicate. Constructed objects reproduce exactly 0 times and have 0 genetic (or other replicator material).
I think we are arguing at cross purposes here. I'm not arguing that what we call evolution doesn't lead to machines very different from the ones that we might choose to engineer ourselves. Rather, I am arguing that, as far as I can tell, I could, given enough time, money and energy, construct a living being using methods very different from natural selection and get the same result. In other words, I don't see why there is anything special about the path taken (some paths may require less energy though. ;-) )
Therefore, I don't consider designing vs evolving as a good way to separate machines capable of consciousness from machines not capable of consciousness.
There is something very special about the path taken (although this phrase is a little misleading) in evolution - not from a design perspective, but from a result perspective - evolution by natural selection is neither a random process, nor a goal-directed design.
Organisms (and by extension, or by reason) are alive because their ancestors have been lucky enough to survive long enough to have offspring.
Note, I'm not arguing that machines cannot be capable of consciousness.
But I am saying that constructed machines are necessarily distinguishable from organisms primarily because they're designed with a goal in mind (unlike organisms - these aren't designed but are the result of the co-operation of genes into higher levels of complexity, team work that has happened to be useful to the survival of the replicating matter - DNA, RNA, or possibly other such material elsewhere in the universe).
I would posit that because of the evolved nature of organisms, there may be flaws in their consciousness (e.g. through chemical imbalance, irrational deduction etc.). It is more likely that machines that are designed for intelligence & consciousness would be gifted (or cursed, depending on your convictions) with perfection, rationality, normalcy, as attributes, or at least their designers would attempt that.
So the nature of their consciousness would be qualitatively different from that of organisms.
In his 1986 book, The Blind Watchmaker, Dawkins comments:
Any engineer would naturally assume that the photocells would point towards the light, with their wires leading backwards towards the brain. He would laugh at any suggestion that the photocells might point away from the light, with their wires departing on the side nearest the light.
Yet this is exactly what happens in all vertebrate retinas. Each photocell is, in effect, wired in backwards, with its wire sticking out on the side nearest the light. The wire has to travel over the surface of the retina to a point where it dives through a hole in the retina (the so-called "blind spot") to join the optic nerve.
This means that the light, instead of being granted an unrestricted passage to the photocells, has to pass through a forest of connecting wires, presumably suffering at least some attenuation and distortion (actually, probably not much but, still, it is the principle of the thing that would offend any tidy-minded engineer). I don't know the exact explanation for this strange state of affairs. The relevant period of evolution is so long ago.
You often come across odd stuff like this in machinery. The usual reason is that it made fabrication (or access for repair) easier. Engineering is all about tradeoffs, the "right" design is just one of them.
That's really fascinating--thanks for elaborating.
That said, this isn't, for me, a strong refutation of design. Perhaps the designer's purpose was to create creatures with imperfections such as these, with the higher purpose of communicating something deeper.
Or, is there any way of knowing that we won't some day discover there is a very good reason for this?
An example of something that joubert is referring to is the fact that photoreceptors (rods and cones) in the human eye are actually located behind the retinal ganglial cells, nerve fibers, and capillaries. So light has to pass through a layer of tissue before being detected. One consequence of this is the blind spot.
An engineer would probably try a different ordering.
The more I think and learn about evolution as something abstract (i.e. something that can exist in principle only, happens to be embodied in biology, and could be embodied elsewhere), the more I think that evolution is creation. At the very least it is so similar to "creativity" that any discussion on the topic would probably quickly degrade into a boring semantic argument.
From this I think there are two potentially interesting products:
- Evolution by natural selection may be usable as an engine of machine consciousness. Some variant of evolution may be at the core of our own consciousness.
- Evolution by natural selection is a mechanical process that we can "take apart" and understand relatively easily. It also happens to be the engine of the process that created species. Happens to be. Even if the evolution of species had never happened, evolution would still exist in the abstract. This puts it in a very good position to be discovered. Perhaps there are equally powerful concepts waiting to be discovered. Perhaps one of them is at the core of our own creativity. Perhaps one of them could be at the core of machine consciousness.
Eliezer made what I think was a similar point in posts at Overcoming Bias and Less Wrong; he also pointed out that evolution is really stupid. If we can understand the result of an evolved process, we should, using intelligence, be able to do much better. The key is understanding.