I attended a tutorial by Noah Goodman last year. It's an interesting idea, but there's not much in the way of implementation at this stage, and it's not clear to me how that implementation is going to work. For instance, he was talking about doing MCMC to recover semantics from natural language using a kind of "blurring" of the observed text. That is, he didn't have a way to derive a reasonable initial parse under which the observed text would have reasonable probability, so he intended to look for parses under which "nearby" texts had high probability. Then, presumably, he intended to do some kind of simulated annealing that would progressively tighten the neighborhood of acceptable texts. Of course, the devil is in the details of "nearby"...
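To make the idea concrete, here is a toy sketch of what I understood him to be describing: Metropolis-Hastings over a latent state, where the likelihood rewards states whose rendered text is merely close to the observed text, and a cooling schedule shrinks that "nearby" neighborhood over time. Everything here (the edit-distance kernel, the schedule, the character-mutation proposal) is my own stand-in, not his actual method.

    # Hypothetical illustration only -- not Goodman's implementation.
    # MCMC with a "blurred" likelihood: latent states are scored by how close
    # their rendered text is to the observation (edit distance), and a
    # temperature schedule progressively tightens the acceptable neighborhood.
    import math
    import random
    import string

    OBSERVED = "the cat sat"
    ALPHABET = string.ascii_lowercase + " "

    def edit_distance(a, b):
        """Plain Levenshtein distance between two strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def blurred_log_likelihood(latent, temperature):
        """Higher when the latent's text is near OBSERVED; the temperature
        controls how wide the neighborhood of acceptable texts is."""
        return -edit_distance(latent, OBSERVED) / temperature

    def propose(latent):
        """Mutate one character (a stand-in for proposing a new parse)."""
        i = random.randrange(len(latent))
        return latent[:i] + random.choice(ALPHABET) + latent[i + 1:]

    def anneal(n_steps=20000, t_start=5.0, t_end=0.1):
        latent = "".join(random.choice(ALPHABET) for _ in OBSERVED)
        for step in range(n_steps):
            # Geometric cooling: the "nearby" neighborhood shrinks over time.
            t = t_start * (t_end / t_start) ** (step / n_steps)
            candidate = propose(latent)
            log_accept = (blurred_log_likelihood(candidate, t)
                          - blurred_log_likelihood(latent, t))
            if math.log(random.random()) < log_accept:
                latent = candidate
        return latent

    if __name__ == "__main__":
        print(anneal())  # usually ends up at or near OBSERVED

Early on, almost any latent state is acceptable because the neighborhood is wide; by the end, only states whose text essentially matches the observation survive. The hard part he was glossing over is exactly the choice of distance and schedule.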
Based upon this article, I am also not sure what is really new here. Probabilistic AI has been around since the 1980s (see Judea Pearl: http://en.wikipedia.org/wiki/Judea_Pearl).
There's a lot of new interest in probabilistic programming languages, which is what Noah et al. have developed. See http://probabilistic-programming.org/wiki/NIPS*2008_Workshop for a workshop from about a year ago that explored the space. Probabilistic programming languages let you express concepts such as recursion and universal quantification in a way that Pearl's graphical models don't easily allow.
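A quick sketch of what I mean, written in plain Python rather than any particular PPL: a recursive generative model has no fixed, finite graph structure, because the number of random choices is itself random. That's awkward to encode as a Bayesian network with a fixed set of nodes, but trivial to write as a program.

    # Illustrative sketch only, in plain Python rather than Church or any real PPL.
    import random

    def flip(p):
        """A single Bernoulli random choice."""
        return random.random() < p

    def geometric(p):
        """Recursively defined geometric distribution: flip until the first
        success. The recursion depth, and hence the set of random variables,
        is unbounded -- there is no fixed graph to draw."""
        return 0 if flip(p) else 1 + geometric(p)

    def sample_tree(branch_prob=0.4):
        """A random binary tree: every node, independently, branches with
        probability branch_prob -- a property stated over all nodes at once
        rather than over an enumerated list of them."""
        if flip(branch_prob):
            return (sample_tree(branch_prob), sample_tree(branch_prob))
        return "leaf"

    if __name__ == "__main__":
        print([geometric(0.5) for _ in range(10)])
        print(sample_tree())

The point isn't that graphs can't represent these distributions in principle, but that the program is the natural representation, and inference engines for PPLs work directly on it.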