The idea here is using "Stochastic Functional Programming" (see some of the work by Goodman, Mansinghka, Roy, and others at http://web.mit.edu/vkm/www/ ). Basically, you write down your AI problem in a "forward" direction, suggesting how the data came to be. You then "fix" the outputs. The engine generates a distribution on "possible program histories" that preserves the statistical properties you care about.
For people familiar with inverse methods, what they basically have here is a generalized inverse solving engine that obeys the laws of probability.
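To make the "write it forward, then fix the outputs" idea concrete, here is a minimal sketch in plain Python (not Church, and not our engine; the model and numbers are made up for illustration) using naive rejection sampling:

    import random

    def forward_model():
        # Forward direction: a guess at how the data came to be.
        rain = random.random() < 0.2
        sprinkler = random.random() < 0.1
        grass_wet = rain or sprinkler or (random.random() < 0.05)
        return rain, grass_wet

    def p_rain_given(observed_wet, n=100000):
        # "Fix" the output and keep only the program histories
        # consistent with it; what's left is the posterior.
        kept = [rain for rain, wet in (forward_model() for _ in range(n))
                if wet == observed_wet]
        return sum(kept) / len(kept)

    print(p_rain_given(True))   # roughly P(rain | grass is wet)

Real engines use far smarter inference than naive rejection sampling, but the conditioning idea is the same.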
Of course, right now, this approach ("solving AI by running programs backwards") is a bit slow, but some startups are rethinking the entire computing stack ( http://www.naviasystems.com ) in an attempt to rectify that. [I'm one of the people at said company]
I suspect that, like any other simplistic and too-obvious-to-be-interesting theory, this one too will never go beyond the question of whether birds fly. Just a slightly improved expert system. And good luck entering all the rules of the world and then watching your "AI" fail at the simplest questions, such as "what time is it?" or "what was my previous question?" - questions that even kids can answer. Shame it's published under such a monumental title at mit.edu.
This project doesn't make any progress on that, but nor was that its goal. The whole "grand unified" business seems to just be editorializing by the author.
I think he meant fuzzy logic, derived from fuzzy set theory. It was pioneered by Lotfi Zadeh. It has gained more ground in Japan than in the rest of the world. Here is the wiki on it: http://en.wikipedia.org/wiki/Fuzzy_logic
Right -- I'm just saying that fuzzy logic is crap (at least compared to proper statistical inference; maybe there are other, better, uses for fuzzy sets).
Why use ad hoc schemes when you can just maintain a probability distribution?
Fuzzy logic is not crap, and the fact that you are comparing it to statistics in this manner shows that you have little understanding of either.
Fuzzy logic is just like binary logic, only it allows for partial truth.
Probability relates to how likely something is to happen.
To take an example (I didn't make this up, but I don't remember the source):
If you take a series of data points to determine whether or not I am in my living room at 7:00 on any given evening, and you determine that the probability is 50%, that means I am in my living room on 50% of all nights.
However, if you give me a 50% fuzzy logical value of being in my living room, this means that I am lying in the doorway between my living room and my bathroom, such that exactly half of my body is in one place and half of my body is in another.
These are two different things and the mechanisms do not apply at all to the same problem sets.
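If it helps, the contrast can be written down in a couple of lines (toy Python, numbers made up):

    # Probability: a crisp event that either happens or doesn't;
    # 0.5 means "in the living room on half of all evenings".
    p_in_living_room_at_7pm = 0.5

    # Fuzzy membership: one fully observed situation that satisfies the
    # predicate only partially; 0.5 means "half of me is in the room now".
    def in_living_room(fraction_of_body_inside):
        return max(0.0, min(1.0, fraction_of_body_inside))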
"However, if you give me a 50% fuzzy logical value of being in my living room, this means that I am lying in the doorway between my living room and my bathroom, such that exactly half of my body is in one place and half of my body is in another." Or it could mean any number of other things depending on what the "Fuzzy logician" finds convenient.
In other words, "Fuzzy logic" can mean anything vague related to numbers. In other words, it's just a buzz word that was trendy in the eighties for quantifying something without any particular logic behind it. In other words, it is crap.
I mean, seriously, the "discovery" of Fuzzy Logic involved no original or interesting mathematical machinery whatsoever, it just involved y Lotfi Zadeh coining a word to cover ad-hoc quantifying processes. It's the flimsiest of "pop" mathematics and it hasn't had much following for a while now. Sure you can "use" it in the sense that still engage ad-hoc quantification but you could do that before Zadeh came around.
Fuzzy logic is useful for appliances. Let's say your dryer knows the humidity and temperature of incoming and outgoing air, approximately how dry you want your clothing, and how long it's been running. At what point should it turn off? Now, let's add that it has various sensors with some level of accuracy, which you can extrapolate from how effective the device is at drying clothing at a given temperature and humidity.
Now you could set up a wide range of test cases with various loads, temperatures, faulty sensors, etc. Or you can figure out a reasonable approximation by hand based on fuzzy logic and ship it.
Note: your solution must run on a 4-bit, 32 kHz CPU with 400 bytes of RAM.
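A Python sketch obviously won't run on that CPU, but the kind of hand-tuned rule you'd ship looks roughly like this (thresholds and rules are made up):

    def tri(x, lo, peak, hi):
        # Triangular membership function.
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

    def keep_running(exhaust_humidity, minutes_elapsed):
        still_damp = tri(exhaust_humidity, 20, 60, 100)  # degree "clothes still damp"
        ran_long = tri(minutes_elapsed, 30, 60, 90)      # degree "cycle has run long"
        # Rule: keep running IF damp AND NOT ran_long (min for AND, 1 - x for NOT).
        return min(still_damp, 1.0 - ran_long) > 0.3

    print(keep_running(70, 25))   # True: still damp, hasn't run long

On hardware that small this would presumably collapse to a few table lookups and comparisons.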
Fuzzy logic can be useful when combined with a frequentist distribution for doing natural language processing (e.g. how many people would refer to someone at height X as "tall"?).
It's really just a special case of Bayesian inference: p(A calls B "tall" | B is a 6'1 man) is a combination of what you know about who is called tall in general and what you know specifically about who A thinks is tall. Unfortunately, for some reason many linguists don't like thinking in these terms, so it is easier to communicate with them using fuzzy logic vocabulary than Bayesian inference.
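Roughly what I mean, as a toy sketch (all counts invented; the Laplace-smoothed pooling is just one way to combine the two sources):

    def p_calls_tall(pop_yes, pop_n, speaker_yes, speaker_n):
        # Counts are for people of B's height (6'1").  Population counts act
        # as a prior; A's own past judgments update it (Laplace-smoothed).
        return (pop_yes + speaker_yes + 1) / (pop_n + speaker_n + 2)

    # 70 of 100 surveyed speakers call a 6'1" man "tall"; A has judged
    # 4 such men and called 1 of them tall.
    print(p_calls_tall(70, 100, 1, 4))   # ~0.68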
It's really just a special case of Bayesian inference: p(A calls B "tall" | B is a 6'1 man) is a combination of what you know about who is called tall in general and what you know specifically about who A thinks is tall.
As far as I know, that's not true. Fuzzy logic is meant to encapsulate the idea that someone is "sort of" tall.
I believe fuzzy logic, or something similar, is used in some handwriting recognition software. E.g., as you are looking at a letter, you start with "This letter is an A" having value 1/26, etc., and adjust those values as you examine it. In this case it's very similar to probability. I'm not sure of any other uses.
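That updating scheme is easy to write down as plain Bayesian updating; something like this (the feature likelihoods are invented):

    letters = [chr(c) for c in range(ord('a'), ord('z') + 1)]
    belief = {ch: 1 / 26 for ch in letters}   # "this letter is an A" starts at 1/26

    def observe(belief, likelihood):
        # likelihood: P(observed stroke feature | letter), per letter.
        posterior = {ch: belief[ch] * likelihood.get(ch, 0.01) for ch in belief}
        z = sum(posterior.values())
        return {ch: p / z for ch, p in posterior.items()}

    # Feature "has a closed loop" favours a, b, d, g, o, ...
    belief = observe(belief, {'a': 0.9, 'b': 0.9, 'd': 0.9, 'g': 0.8, 'o': 0.95})
    print(max(belief, key=belief.get))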
However, I remember reading studies that seemed to indicate that apes/chimps use fuzzy logic. I don't remember who wrote it or how they tested it, but it seemed fairly convincing at the time.
So, I guess I'd say it's not so useful now (at least not as an independent concept), but if it's true that humans use it, it might become useful in the future.
In the end though, fuzzy logic isn't going to solve your problems for you, at least not alone. The way you use the fuzzy logic is going to be much more important.
Or, as jey said, you can actually use probabilities to control your appliance, and then have a lot of theory behind your inference process; i.e., not something ad hoc, like fuzzy logic, which can ultimately be transformed into probabilities regardless.
Fuzzy logic is not about creating actual intelligence; it's just a quick and dirty approach that happens to be useful. When selling bread makers you are very limited in your development budget and the hardware you ship to people. So yeah, it's an overly simple, ad hoc solution, but it's also cheap.
There is a lot of theory behind fuzzy logic as well, and you get the bonus of it being very simple to implement. Thus its use in appliances, where cheap/tiny processors are the norm.
I agree that fuzzy logic is crap, in the sense that it just involves adding a once-trendy buzzword to ad-hoc approaches.
However, it should be noted that statistical inference is not necessarily an effective learning approach, given that it was created to deal with random variables, and the world we are trying to understand has many non-random, orderly aspects.
Humans aren't good at doing the things that statistics is good at, but statistics isn't good at doing the things humans are good at. Just as an example, a person can indeed act effectively in an uncertain but somewhat ordered environment, but virtually no human being can tell you anything like the probability distribution of the events they deal with in daily life.
So basically, we do indeed need a new approach, different from both the probabilistic and the purely logical approaches. But the problem is that melding these various existing approaches into something coherent and usable is far more easily said than done. One clear problem with any such system is that the complexity explodes for a formal specification that involves both probability and logical structure.
I suggest people call their approach "a general theory" after they do something impressive with it. We're waiting.
Perhaps the intended title of article was "There Ought to Be A General Theory Of AI". That I'd agree with...
I attended a tutorial by Noah Goodman last year. It's an interesting idea, but there's not much there in the way of implementation, at this stage, and it's not clear to me how that implementation is going to work. For instance, he was talking about doing MCMC to recover semantics from natural language using a kind of "blurring" of the observed text. That is, he didn't have a way to derive a reasonable initial parse in which the observed text would have a reasonable probability, so he intended to look for parses in which "nearby" texts had high probability. Then presumably he intended to do some kind of simulated annealing which would progressively tighten up the neighborhood of acceptable text. Of course, the devil is in the details of "nearby..."
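My rough reading of what he described, as an annealed Metropolis sketch; the parse representation, the proposal move, and the "blurred" scoring function are all placeholders the caller would have to supply, so this is a guess at the scheme, not Goodman's actual implementation:

    import math, random

    def anneal(initial_parse, propose, blurred_score, steps=10000):
        # blurred_score(parse, temperature): higher when the text the parse
        # would generate is "near" the observed text (e.g. small edit
        # distance); a high temperature means a wide neighborhood counts.
        parse, temperature = initial_parse, 10.0
        for _ in range(steps):
            candidate = propose(parse)           # small local change to the parse
            accept = math.exp(blurred_score(candidate, temperature)
                              - blurred_score(parse, temperature))
            if random.random() < min(1.0, accept):
                parse = candidate
            temperature = max(1.0, temperature * 0.999)   # progressively tighten
        return parse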
Based upon this article, I am also not sure what is really new here. Probabilistic AI has been around since the 1980s (see Judea Pearl: http://en.wikipedia.org/wiki/Judea_Pearl).
There's a lot of new interest in probabilistic programming languages, which is what Noah et al. have developed. See http://probabilistic-programming.org/wiki/NIPS*2008_Workshop for a workshop from about a year ago that explored the space. Probabilistic programming languages enable you to express concepts such as recursion and universal quantification in a way that Pearl's graphs don't easily allow.
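A toy illustration of the recursion point (plain Python, not an actual probabilistic programming language): a recursive generative process has no fixed-size graphical model, because the number of random choices is itself random.

    import random

    def num_children():
        # Geometric-style process: keep "having children" until a coin says stop.
        return 0 if random.random() < 0.5 else 1 + num_children()

    def family_tree(depth=0):
        # Sample a whole tree whose size is unknown in advance.
        if depth >= 5:
            return []
        return [family_tree(depth + 1) for _ in range(num_children())]

    print(family_tree())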
It makes for a handy mental filter, though: claims to have found the grand unified theory of AI can be treated with the same piles of salt as claims to have found a grand unified theory of physics, or to have solved the P/NP question (not impossible, but unlikely in any given instance).
It's also somewhat comforting that this is coming from MIT. This guy doesn't seem anywhere near full-blown general intelligence (whose risks he probably hasn't even considered) if he's only barely discovered Bayesian Inference...
Interesting only insofar as this article had nothing new, yet MIT found it timely to publish. Hasn't Bayesian inference been around for a while?
I haven't followed Cyc in quite a while, but I think they tried incorporating some probabilistic reasoning. I wonder if they ever took a shot at incorporating exception assertions? Using the analogy of birds, penguins are birds, but the bird assertion of flight does not apply.
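The kind of exception assertion I mean could be encoded in a few lines (my own toy encoding, not Cyc's):

    defaults   = {"bird": {"can_fly": True}}
    exceptions = {"penguin": {"can_fly": False}}
    is_a       = {"penguin": "bird", "sparrow": "bird"}

    def can_fly(kind):
        # An exception on the specific kind overrides the default
        # inherited from its parent category.
        if "can_fly" in exceptions.get(kind, {}):
            return exceptions[kind]["can_fly"]
        return defaults.get(is_a.get(kind), {}).get("can_fly", False)

    print(can_fly("sparrow"))   # True
    print(can_fly("penguin"))   # False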
Hmmm... this article is too breezy to figure out if there is any substance behind this claim. Unfortunately, "church programming" is a dead end on Google, for obvious reasons.
Anyone find any more substantive info relating to this article?
The Church probabilistic extensions to Scheme (http://projects.csail.mit.edu/church/wiki/Church) look interesting enough. I am not sure how practical things like the fuzzy list equality (using Levenshtein distance) would be, but they're still really interesting ideas.
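A rough Python analogue of the fuzzy-equality idea (not Church's actual API): treat two lists as "equal" with a score that decays with their edit distance.

    import math

    def levenshtein(a, b):
        # Standard dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
            prev = cur
        return prev[-1]

    def noisy_equals(a, b, noise=0.5):
        return math.exp(-levenshtein(a, b) / noise)

    print(noisy_equals([1, 2, 3], [1, 2, 3]))   # 1.0
    print(noisy_equals([1, 2, 3], [1, 2, 4]))   # ~0.14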
A lot of what we learn and know as human beings is not the result of deliberate heuristic strategy, but simply what we call "intuition" ... which I usually interpret as unconscious pattern-seeking based in experience -- spread out over a considerable amount of time so that the result "cooks out" like a slow stew.
If I understand this journalist's description of what the AI folk are talking about, 'Church' is the old weighting strategy again. Since we don't really understand how our 'tacit knowledge' develops (or -doesn't-), this model may result in something similar.
It's ground-breaking IFF the computer can really resolve 'reality' without continual hand-holding. To my knowledge this hasn't been achieved yet (how many years are we into the CYC project now?), but certainly it makes sense to use some rules that 'seed' growth. It may require our best intuition to create that seed.