Hacker News | Valectar's comments

It's not exactly whizzing around at light speed in the way you imagine if it's beneath an event horizon. The entire point of an event horizon is that everything beyond it physically cannot move outward. The horizon isn't some kind of physical barrier; it is just the farthest distance at which the black hole's gravity becomes inescapable. That doesn't stop applying just because you've moved past the horizon; in fact it only becomes more true.

If you imagine any sphere beneath the event horizon which is centered on the singularity, that sphere is just as inescapable as the farthest such sphere which we call the event horizon.

Additionally, the event horizon is inescapable for everything. Even gravitational influence propagates at the speed of light, and cannot outrun the cascade of inwardly falling spacetime beneath the event horizon. So no possible change in the distribution of mass within a black hole could have any effect on the configuration of spacetime at the event horizon, or even on any spacetime farther from the singularity than itself.


The point of the event horizon is that everything at it experiences infinite gravitational time dilation relative to everything even barely outside it.

So talking about things inside the event horizon is about as physical as talking about things traveling at superluminal speeds. You can consider such an object; it just takes more than an infinite acceleration time to get there.
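
That dilation claim can be made concrete: for a static observer outside a non-rotating (Schwarzschild) black hole, the standard factor relative to a distant clock is sqrt(1 - rs/r), which goes to zero as r approaches the horizon radius rs. A minimal sketch, with radii in arbitrary units:

```python
import math

def dilation_factor(r, rs):
    """Clock rate of a static observer at radius r relative to a distant
    observer, outside a Schwarzschild horizon of radius rs."""
    if r <= rs:
        raise ValueError("no static observers at or inside the horizon")
    return math.sqrt(1.0 - rs / r)

# Approaching the horizon (rs = 1), the factor tends to zero:
# to a distant observer, clocks there appear to freeze.
for r in (10.0, 2.0, 1.1, 1.001):
    print(r, dilation_factor(r, 1.0))
```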

I don't really understand why people think they can get rid of the discontinuity at the event horizon by changing the coordinate system. It's as if we were considering the function 1/x in a new coordinate system ξ, where ξ is defined as 1/x, and suddenly we had a nice continuous function without anything special happening around 0. It's mathematical sleight of hand.
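
The analogy can be sketched literally; this only illustrates the substitution being described, nothing more:

```python
def f(x):
    # the original description: 1/x, singular at x = 0
    return 1 / x

def g(xi):
    # the same function rewritten in the substituted coordinate xi = 1/x
    return xi

# Away from the bad point the two descriptions agree exactly...
for x in (2.0, 0.5, -4.0):
    assert f(x) == g(1 / x)

# ...but x = 0 has no finite image in the new coordinate: the substitution
# relocates the troublesome point (to xi -> infinity) rather than making
# the function well-behaved there.
```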

Sure, from the point of view of infalling matter nothing special happens at the event horizon, but why should we, the people outside, care about this perspective, since the universe will evaporate before this infalling matter crosses the event horizon?

So when a black hole becomes mostly photons, they are all still outside the event horizon, because infinite time has not yet passed outside.


This betrays a very naive concept of "knowledge" and "understanding". It presupposes that there's some kind of platonic realm of logic and reason that an AGI just needs to tap into. But ultimately, there can be no meaning, or reasoning, or logic, without context. Matching a pattern of shapes presupposes the concept of a shape, which presupposes a concept of spatial relationships, which presupposes a concept of three- or even two-dimensional space. These things only seem obvious and implicit to you because they permeate the environment that your mind spent hundreds of millions of years evolving to interpret, and then tens of years consuming and processing to understand.

The true test of an AGI is its ability to assimilate disparate information into a coherent world-view, which is effectively what pretraining is doing. And even then, it is likely that any intelligence capable of doing that will need to be structurally "preloaded" with assumptions about the world it will occupy, similar to the regions of the brain which are adept at understanding spatial relationships, or language, or interpreting our senses, etc.


Yes, AGI was here at AlphaGo. People don't like that because they think it should have generalized outside of Go, but when you say AGI was here at AlphaZero, which can play other games, they again say not general enough. At this point it seems unlikely that AI will ever be general enough to satisfy the sceptics, for the reason you said. There will always be some domain that requires training on new data.


You're calling an apple an orange and complaining that everyone else won't refer to it as such. AGI is a computer program that can understand or learn any task a human can, mimicking the cognitive ability of a human.

It doesn't have to actually "think" as long as it can present an indistinguishable facsimile, but if you have to rebuild its training set for each task, that does not qualify. We don't reset human brains from scratch to pick up new skills.


I'm calling a very small orange an orange and people are saying it isn't a real orange because it should be bigger so I show them a bigger orange and they say not big enough. And that continues forever.



Maybe not yet, but what prevents games from getting more complicated and matching rich human environments, requiring rich, human-like adaptability? Nothing at all!


But AlphaZero can't play those richer games so it doesn't really matter in this context.


Famous last words!


"AI will ever be general enough to satisfy the sceptics for the reason you said"

Also

People keep thinking "General" means one AI can "do everything that any human can do everywhere all at once".

When really, humans are also pretty specialized. Humans have Years of 'training' to do a 'single job'. And they do not easily switch tasks.


>When really, humans are also pretty specialized. Humans have Years of 'training' to do a 'single job'. And they do not easily switch tasks.

What? Humans switch tasks constantly and incredibly easily. Most "jobs" involve doing so rapidly many times over the course of a few minutes. Our ability to accumulate knowledge of countless tasks and execute them while improving on them is a large part of our fitness as a species.

You probably did so 100+ times before you got to work. Are you misunderstanding the context of what a task is in ML/AI? An AI does not get the default set of skills humans take for granted; it's starting as a blank slate.


You're looking at small tasks.

You don't have a human spend years getting an MBA, then drop them in a Physics Lab and expect them to perform.

But that is what we want from AI: to do 'all' jobs as well as any individual human does that one job.


That is a result we want from AI, it is not the exhaustive definition of AGI.

There are steps of automation that could fulfill that requirement without ever being AGI - it’s theoretically possible (and far more likely) that we achieve that result without making a machine or program that emulates human cognition.

It just so happens that our most recent attempts are very good at mimicking human communication, and thus are anthropomorphized as being near human cognition.


I agree.

I'm just making a point about the "General" in Artificial General Intelligence:

Humans are also not as "general" as we assume in these discussions. Humans are also limited in a lot of ways, and narrowly trained, make stuff up, etc...

So even a human isn't necessarily a good example of what AGI would mean. A human is not a good target either.


Humans are our only model of the type of intelligence we are trying to develop; any other target would be a fantasy with no control to measure against.

Humans are extremely general. Every single type of thing we want an AGI to do is a type of thing that a human is good at doing, and none of those humans were designed specifically to do that thing. It is difficult for humans to move from specialization to specialization, but we do learn them, with only the structure to "learn, generally" as our scaffolding.

What I mean by this is that we do want AGI to be general in the way a human is. We just want it to be more scalable. Its capacity for learning does not need to be limited by material issues (i.e. physical brain matter constraints), time, or time scale.

So where a human might take 16 years to learn how to perform surgery well, and then need another 12 years to switch to electrical engineering, an AGI should be able to do it the same way, but with the timescale only limited by the amount of hardware we can throw at it.

If it has to be structured from the ground up for each task, it is not a general intelligence, it's not even comparable to humans, let alone scalable beyond us.


So find a single architecture that can be taught to be an electrical engineer or a doctor.

Where today those are being done by specialized architectures, models, and combinations of methods.

Then that would be a 'general' intelligence: the one type of model that can do either, trained to be an engineer or a doctor. And like a human, once trained, it might not do the other job well. But both did start with the same 'tech', just as humans all have the same architecture in the 'brain'.

I don't think it will be an LLM, it will be some combo of methods in use today.

Ok. I'll buy that. I'm not sure everyone is using 'general' in that way. I think more often people mean a single AI instance that can do everything/everywhere/all at once: be an engineer and a doctor at the same time. Since it can do all the tasks at the same time, it is 'general'. Since we are making AIs that can do everything, we could have a case statement inside to switch models (half joking). At some point all the different AI methods will be incorporated together and will appear even more human/general.


Right, but even at that point the sceptics will still say that it isn't "truly general" or is unable to do X in the same way a human does. Intelligence, like beauty, is in the eye of the beholder.


But if humans are so bad, what does that say about a model that can't even do what humans can?

Humans are a good target since we know human intelligence is possible; it's much easier to target something that is possible than some imaginary intelligence.


No human ever got good at tennis without learning the rules. Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?


> Why would we not allow an AI to also learn the rules before expecting it to get good at tennis?

The model should learn the rules; don't make a model based on the rules. When you make a model based on the rules, it isn't a general model.

Human DNA isn't made to play tennis, but a human can still learn to play it. The same should go for a model: it should learn the game; the model shouldn't be designed by humans to play tennis.


So you're saying AI can be incompetent at a grander scale. Got it.


Yes. It can be as good and as bad as a human. Humans also make up BS answers.


I'm not sure if I missed something in the article, but it appears to me that the article does not take into account the change in supply due to the change in effective wage caused by decreased utilization. The supply of drivers available would be based on what money drivers actually take home, not just the nominal price, so any decrease in the money taken home would result in a decrease in supply, meaning the positive impact on drivers' wages would be larger than that computed in the article.
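
The supply effect being described can be sketched with a constant-elasticity labor supply curve; the function, base wage, and elasticity below are all made-up illustrative values, not figures from the article:

```python
def driver_supply(take_home_wage, base_wage=25.0, base_supply=1000.0,
                  elasticity=1.5):
    """Drivers willing to work at a given effective (take-home) wage,
    modeled as a constant-elasticity curve anchored at a hypothetical
    baseline of 1000 drivers at $25/hour."""
    return base_supply * (take_home_wage / base_wage) ** elasticity

# A policy that raises the nominal fare but lowers utilization can cut
# the effective hourly take-home, and supply responds to the latter:
print(driver_supply(27.0))  # nominal/effective wage up -> more drivers
print(driver_supply(22.0))  # effective wage down -> fewer drivers
```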


At high enough energies the laws of physics are actually different. Two of the four fundamental forces in physics, the electromagnetic force and the weak interaction, are actually a single force which only appears to be two separate forces at "low" energies/temperatures, with "low" covering pretty much all temperatures in the universe after the Big Bang.

It is completely reasonable to test whether phenomena that hold at low energies still hold at high energies, and that may be the only way you're going to find more fundamental physical laws. Especially when we know quantum theory is incomplete, since it is currently incompatible with general relativity.


There's a lot of discussion here in the comments on whether this can meaningfully be called a vulnerability if you can only "see the temperature of your server".

Setting aside that the vulnerability doesn't actually allow that, isn't this potentially a Spectre / Meltdown vulnerability? This is an unprotected endpoint that conditionally executes code taken from user input. If the branch predictor can be trained to speculatively execute arbitrary code from the input, information could be extracted via endpoint timing using a similar methodology to Spectre or Meltdown, right?


This is what annoys me about internal security teams. No, this doesn't make us vulnerable because it's in the DMZ, we have ACLs, it's behind a firewall, we have traffic monitoring, process monitoring, MFA, geofences, etc etc. Just because there's a possibility this could be exploited in some convoluted way in a targeted attack doesn't mean all the other walls we have stood up around this are suddenly useless. I'm constantly pestered and forced to waste my time explaining that your little CVE scanner tool is not the end all for our security posture.

Not to snap at you, but I'm forced to deal with these "what if" scenarios weekly and it drives me nuts. I know the security guys have a job to do, but I feel like half of their job is just trying to drum up scary looking things to justify their employment.


We don't have the rules that we want the neural network to learn, if we did we could just directly use those rules, and there would be no need for ML to solve the problem. In this case we want the ML to learn how to infer spatial information from two dimensional images, and the process that generates the data it trains on cannot do this at all. It can create two dimensional images from spatial information, which is a much simpler and effectively solved process.

There are cases when we want a machine learning model to do the same thing as the process which generates its data, as in the case of models learning to replicate physics simulations, but even then the entire point is for the machine learning model to accomplish the same or a similar result in a more computationally efficient way.
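
A toy example of the asymmetry above: forward projection from 3D to 2D is trivial, while the inverse is ill-posed. The `project` function here is a hypothetical pinhole-style toy, not any particular renderer:

```python
def project(point3d, focal=1.0):
    """Toy pinhole-style forward projection (3D -> 2D): simple and
    well-defined for any point in front of the camera."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# The forward direction is easy; the inverse is ill-posed: every point
# along a ray through the camera lands on the same 2D location, so depth
# cannot be recovered from a single image without learned priors.
near = project((1.0, 2.0, 4.0))
far = project((2.0, 4.0, 8.0))   # same ray, twice as distant
assert near == far
```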


You act like this is the first time in the history of the planet a country has enacted measures to stop the spread of disease. It's not, we've done it before, and the world didn't collapse because people were inconvenienced temporarily.

https://healthblog.uofmhealth.org/wellness-prevention/mask-r...


Where did your conclusion of "masks definitely don't work for SARS-2" come from? Your own quote states that the control group that cloth mask use was compared against was a population with a high proportion of mask wearing. The study does not compare cloth mask usage to no mask usage, and only says that it is possible that cloth masks are harmful.

Also, if you would like some more up-to-date information, as well as a larger number of studies, which are specific to COVID, the CDC website has a lot of information here:

https://www.cdc.gov/coronavirus/2019-ncov/science/science-br...


> Your own quote states that the control group that cloth mask use was compared against was a population with a high proportion of mask wearing. The study does not compare cloth mask usage to no mask usage, and only says that it is possible that cloth masks are harmful.

That was me debunking the “before COVID, masks were known to work” claim of the GP. That was not a study of SARS-2 but rather of influenza.

The “masks don’t work for SARS-2” was in reference to SARS-2 aerosol transmission, which masks mechanistically don’t protect against. There is only one RCT of SARS-2 in a community setting, and it failed to demonstrate an improvement in the primary endpoint of self-infection. There is no study showing that masks slow the spread of SARS-2 in a community. Yet despite the lack of any studies, various medical authorities like the CDC are issuing statements that they do exactly that, which is a classic case of an institution using its credibility to advance baseless claims.


> There is no study showing that masks slow the spread of sars-2 in a community

You say that, and yet "At least ten studies have confirmed the benefit of universal masking in community level analyses"

https://www.cdc.gov/coronavirus/2019-ncov/science/science-br...

And the website goes on to list each of them, as well as a number of other studies relating to the effectiveness of masking.


Those studies all have fundamental flaws. The basic problem is those are associative studies, which can’t separate the effects of masking from the normal curve of a viral epidemic.

You are right, though; I should have been much more specific than just “study”.

A question though: if masking is so great, why wouldn’t the health authorities have performed an actual RCT to conclusively prove they work? (We have one RCT for SARS-2 which showed no statistically significant effect on the primary endpoint)


So, you linked a number of articles from early 2020, when there was not significant community spread of COVID and thus the primary strategy for dealing with it was isolating and quarantining individual cases, and most people in the country would probably not come into contact with someone carrying COVID.

If you want to know what the WHO recommends, you need only look at their website:

https://www.who.int/emergencies/diseases/novel-coronavirus-2...

"Make wearing a mask a normal part of being around other people. The appropriate use, storage and cleaning or disposal of masks are essential to make them as effective as possible."


Perhaps you could point us to which CDC website you are referring to? Because on cdc.gov wearing a multilayer cloth mask is recommended, and they cite a number of studies on the effectiveness of masks, which all show significant reductions in transmission in people and populations who wear masks vs. those who don't. See "Human Studies of Masking and SARS-CoV-2 Transmission" on this page:

https://www.cdc.gov/coronavirus/2019-ncov/science/science-br...


It's notable that the only randomized controlled trial ever conducted for masks and Covid [1] was not listed under the personal protection section of this page, and was instead inaccurately summarized:

> A community-based randomized control trial in Denmark during 2020 assessed whether the use of surgical masks reduced the SARS-CoV-2 infection rate among wearers (personal protection) by more than 50%. Findings were inconclusive,54 most likely because the actual reduction in infections was lower.

This is inaccurate, and should make anyone doubt the judgment of whoever wrote this summary. The paper in question was quite clear that there was no statistical significance between groups:

> The between-group difference was −0.3 percentage point (95% CI, −1.2 to 0.4 percentage point; P = 0.38) (odds ratio, 0.82 [CI, 0.54 to 1.23]; P = 0.33). Multiple imputation accounting for loss to follow-up yielded similar results. Although the difference observed was not statistically significant, the 95% CIs are compatible with a 46% reduction to a 23% increase in infection.

To characterize this as "inconclusive" is misleading at best, dishonest at worst. In general, I find this page to be a highly selective reading of the available literature, which was (and remains) predominantly inconclusive regarding the effectiveness of masks. Even the WHO meta-analysis of mask literature found only a weak positive effect after mixing together the results of studies ranging from cloth masks to respirators, in medical (most of the data) and non-medical settings [2]. This CDC page has instead chosen to lean on a few poorly controlled observational and/or correlative studies, and ignore the higher quality evidence that came before.

[1] https://www.acpjournals.org/doi/10.7326/M20-6817

[2] https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


Ok, so you say that characterizing the results as inconclusive is dishonest, but in the article the limitations section literally lists "Inconclusive results" as its first item.

On top of that they state: "Although the difference observed was not statistically significant, the 95% CIs are compatible with a 46% reduction to a 23% increase in infection."

Which sounds like they're saying they can say with 95% confidence that there is between a 46% reduction and a 23% increase, which sounds pretty damn inconclusive to me.


The between-group results are not "inconclusive" -- they found no statistically significant difference between the masked and unmasked cohorts. There's no reason to reject the null hypothesis ("masks are not protective") based on this data.

The authors say this:

> The most important limitation is that the findings are inconclusive, with CIs compatible with a 46% decrease to a 23% increase in infection.

This is not saying that that study is inconclusive. It's saying that the protective effect is inconclusive -- masks could be anything from slightly protective, to slightly harmful. The fact that the writer characterizes the entire paper as "inconclusive" is incriminating: it's an editorial bias, a complete misunderstanding of statistics, or a combination of both.

> Which sounds like they're saying they can say with 95% confidence that there is between a 46% reduction and a 23% increase, which sounds pretty damn inconclusive to me.

No. They're stating the confidence interval on the point estimate. All point estimates have confidence intervals, and the existence of a confidence interval does not mean that the result is uncertain. To restate basic statistics: the top-line conclusion of the study is that the 95% confidence intervals of the two groups overlap to such a degree that you can't reject the conclusion that they're the same.
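
To make the statistics concrete, here is a normal-approximation sketch of a two-proportion comparison. The counts are made up, chosen only so that the interval straddles zero; they are not the Danish study's numbers:

```python
import math

def diff_of_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Point estimate and ~95% CI for p1 - p2 via the normal
    approximation (Wald interval)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Made-up counts with a small observed difference between groups:
diff, (lo, hi) = diff_of_proportions_ci(42, 2400, 53, 2500)
# The interval straddles zero, so the null ("no difference") is not
# rejected at the 5% level -- a definite finding about this data, not
# a statement that the analysis itself is "inconclusive".
print(f"diff = {diff:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```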


I didn't mean to say that the fact that the interval exists makes it uncertain; I meant that its 69-point spread makes it uncertain. They are saying that they have narrowed the effect down to within that spread with 95% confidence. That, combined with sources of uncertainty not covered by that interval (listed in their limitations: missing data, variable adherence, patient-reported findings on home tests, no blinding, and of course inconclusive results), definitely makes it sound pretty inconclusive.

There is also the limitation that there was "no assessment of whether masks could decrease disease transmission from mask wearers to others."

The article explicitly states "The findings, however, should not be used to conclude that a recommendation for everyone to wear masks in the community would not be effective in reducing SARS-CoV-2 infections, because the trial did not test the role of masks in source control of SARS-CoV-2 infection."


> That, combined with sources of uncertainty not covered by that interval (listed in their limitations: missing data, variable adherence, patient-reported findings on home tests, no blinding, and of course inconclusive results), definitely makes it sound pretty inconclusive.

The study is not inconclusive. The study failed to reject the null hypothesis. That much is definitive.

Whether or not there might be some smaller difference that the study wasn't powered to detect...we don't know. It's still a definitive rebuttal of any claim that "masks reduce personal risk of infection by 50%", and the fact that it's not in the "personal protection" section of the CDC webpage is simply editorial bias. At the very least, this paper is the best study ever performed on masks and SARS-CoV2, and it severely limits any real-world claim of protectiveness.

> There is also the limitation that there was "no assessment of whether masks could decrease disease transmission from mask wearers to others."

The study wasn't designed to do that. It should still be in the "personal protection" section of the CDC webpage, and is not.

