SLAC results disagree with the Standard Model (arxiv.org)
44 points by Marge on May 27, 2012 | 21 comments


To whoever changed the title to eliminate the phrase "new physics": that wasn't in any way sensational or editorial. This is a common phrase within the physics community. From the conclusions section of the paper:

>The results presented here disagree with the SM at the 3.4σ level. Together with the measurements by the Belle Collaboration, and the sizable difference between the measured and predicted branching fraction of B− → τ−ντ [35–39], this could be an indication of new physics processes affecting decays with a τ lepton in the final state.



>3 sigmas is called evidence, >5 sigmas would be called a discovery. This is the standard particle physics convention. 3 sigmas also seems to be the unofficial limit after which you are allowed to get a little excited and start speculating about explanations. :) As the paper says, the 3.4 sigmas correspond to a p-value of 6.9 x 10^-4, which means that, supposing only the stuff included in the Standard Model exists, getting these results (or something even more extreme) just by chance has a probability of 6.9 x 10^-4. That doesn't pop the champagne yet, but it's definitely something to talk about.
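As a quick sanity check of that conversion, here is a minimal Python sketch, assuming a two-sided Gaussian tail (one common convention; the paper's exact procedure may differ slightly, which would account for the small mismatch):

    # Convert a significance in sigmas to a two-sided Gaussian p-value.
    from scipy.stats import norm

    sigma = 3.4
    p = 2 * norm.sf(sigma)  # sf(x) = 1 - cdf(x), the upper Gaussian tail
    print(p)                # ~6.7e-4, close to the quoted 6.9 x 10^-4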


3.4 sigma is the within-model uncertainty - that is, given the assumptions about their instruments and methodology, the result stands at 3.4 sigma, or a chance probability of roughly 1/1000.

I think the uncertainty about the model and assumptions is already larger than 1/1000, especially given the recent failed "faster-than-light neutrinos" OPERA experiment. So while 5 sigmas would be nice, I would assume the physicists over there are busy double-checking everything and trying to reproduce the result, rather than improving their "sigma-score".

Can any physicist confirm? I'm not all that familiar with how these things work. Furthermore, if they performed the experiment once more and got the same 3-sigma result, wouldn't that compound into a >3-sigma total?


The faster-than-light neutrinos probably got explained by a loose cable, sorry. :) http://arstechnica.com/science/2012/02/faster-than-light-neu...

As to the double-checking, the analysis behind this result used the whole data set collected by the BaBar experiment at the Stanford Linear Accelerator Center (SLAC). So that's years of data, and the experiment hasn't been running since 2008. The question, of course, is whether some other experiment can produce the same results. As this is B-physics, I would guess LHCb, one of the four main experiments at CERN's LHC, might be able to, but I really don't know for sure.

If you can repeat the same experiment and get another 3 sigma deviation from the Standard Model predictions then yes, you can combine the results into a >3 sigma total (though two independent 3 sigma results don't simply add up to 6 sigma).
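To make that concrete, here is a small sketch (my own illustration, not from the paper) using Stouffer's method, under which independent z-scores combine as (z1 + z2) / sqrt(2):

    # Combine two independent 3 sigma results into one significance.
    import math

    z1, z2 = 3.0, 3.0
    z_combined = (z1 + z2) / math.sqrt(2)
    print(z_combined)  # ~4.24 sigma: better than 3, but well short of 6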


>The faster-than-light neutrinos probably got explained by a loose cable, sorry. :)

Right. :) That's exactly what I was referring to: the uncertainty about the model/experimental setup is larger than the N-sigma uncertainty reported by the model and instruments.


Except the scientists behind the loose cable said up front, "we don't understand these results, we think they are false, please help us figure out why". At no point were the results put forward as a serious discovery claim.


Right, sorry, I misunderstood. You are right and this was explained well by yk a couple of posts below.


There are different sources of error, usually categorized as statistical and systematic. The statistical error will decrease with higher statistics, but the systematic error will not. An example of a systematic error would be the error of the energy calibration. In the paper they quote, for one of the measured ratios, R(D) = 0.440 ± 0.058 (statistical) ± 0.042 (systematic), so the total error would at best improve by a factor of roughly sqrt(2) if you got rid of the statistical error entirely. (Assuming the errors are uncorrelated, they combine as the square root of the sum of squares.)
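To illustrate the arithmetic (my own sketch, using the numbers quoted above; with these particular values the improvement factor actually comes out closer to 1.7 than sqrt(2)):

    # Combine the statistical and systematic errors on R(D) in
    # quadrature, assuming they are uncorrelated.
    import math

    stat, syst = 0.058, 0.042
    total = math.sqrt(stat**2 + syst**2)
    print(total)         # ~0.072, the combined uncertainty
    print(total / syst)  # ~1.7: the best possible improvement factor
                         # if the statistical error were eliminated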


I wouldn't put much stock in the faster-than-light neutrino result. It was never peer-reviewed, it contradicts the results of other experiments on the same neutrino beam, and it led to two of the leaders of the OPERA project being ousted:

http://latimesblogs.latimes.com/world_now/2012/04/faster-tha...

It was a lot of media hype over a preprint of preliminary results that have never been confirmed.


It was a cable. Just that. They published the data knowing it was wrong - they asked fellow scientists to explain why.


The idea of "New Physics" is sensational. There are only improvements to current models, and maybe new models for new situations. The Standard Model has worked out for too many situations for it to be just "wrong".


The phrase "new physics" comes from the jargon of calling various subfields something-physics (like "Higgs physics" or "B-physics"). New physics is correspondingly any field added on top of the Standard Model (and therefore just a tweak).


I recently took a university course on elementary particles. The expression "new physics" means they are measuring something unexpected, and that they think explaining it will require some correction to the Standard Model, for example adding another particle, field, or interaction.


Yes.

I lack the background to judge whether this will probably be explained by something relatively boring, such as a new resonance particle composed of already known fundamental particles (e.g. a new baryon) or maybe an excited state of an already known particle, or by something fundamentally new, like supersymmetry, new Higgs physics, or something else. This is why I posed the title as a question.

I find it very interesting, though, that the paper speaks about excluding the "type II two-Higgs-doublet (2HD) model charged Higgs", but I don't know what "type II" means there. The 2HDM is the simplest way to extend the Standard Model Higgs field (which is what all the Higgs search buzz is about), and it appears in many beyond-the-Standard-Model theories, like supersymmetry. At least the apparently quite popular minimal supersymmetric model has a 2HD. But the key question is what type II means, and where's my type I?


Known resonances and composite particles are already included in the calculations, so if this result is true, it would involve new things that are not in the Standard Model.

We studied only the simple version that has a single Higgs field :( so I don't know what type I/II means, but using Google I found this (see slide 6), which starts with a not-too-technical introduction: http://www.umich.edu/~mctp/SciPrgPgs/events/2007/kanefest/ha...


Yeah, resonances and such would be new, but nothing as exciting as, say, a charged Higgs. I think they found some previously undiscovered resonance particle at the LHC a few months ago, but it didn't even really make the news outside physics.

Thanks for the link, that was actually really helpful. I wonder why the paper doesn't say anything about whether their data would fit a type I or type III charged Higgs. It probably has something to do with whether leptons are considered up- or down-type (?) and thus how the charged Higgses would couple in the given process.


I would say that Relativity and Quantum Mechanics were 'new physics' over the old Newtonian model, although for many, many problems Newtonian mechanics is still the best model. If this has finally found a major flaw in the Standard Model, I consider that extremely exciting news.


Relativity and Quantum Mechanics were massive paradigm shifts. In contrast, any flaw in the Standard Model, while exciting, will likely have an orders-of-magnitude smaller impact on how we view the world.

Also, we already know the Standard Model is not the complete picture (it cannot be reconciled with General Relativity).


That's a REALLY BIG "if" there. Like the recent "neutrinos observed moving faster than light" non-story, these individual experimental anomalies should be regarded with a high degree of suspicion.

We have a long way to go in accumulating evidence against the Standard Model before we reach a situation analogous to where Einstein was when he postulated Relativity.


Newtonian physics has worked out for too many situations for it to be just "wrong".

That doesn't sound quite right, does it?

I'm all for keeping sensationalism out of science, but I don't think "it's been working great so far" is a good reason to discount improvements to a model that we know is incomplete. Of course, a lot depends on what you mean by "wrong".



