Hacker News | saltcured's comments

In theory, a computer should be able to do the same. It could do sensor fusion with even more sense modalities than we have. It could have an array of cameras and potentially out-do our stereo vision, or perhaps even use some lightfield magic to (virtually) analyze the same scene with multiple optical paths.

However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.

The brute-force approach is something like training vision-language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.

But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.


The superstitious bits are more like people thinking that code goes faster if they use different variable names while programming in the same language.

And the horror is, once in a long while it is true. E.g. where perverse incentives cause an optimizing compiler vendor to inject special cases.
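As a sanity check on the superstition, in CPython at least, local variable names compile down to slot indices, so two functions that differ only in naming emit byte-for-byte identical instruction streams. A minimal sketch (the function and variable names here are made up for illustration):

```python
# Sketch: local names become LOAD_FAST/STORE_FAST slot indices in CPython,
# so renaming a local cannot change the emitted bytecode.
def f():
    total = 1
    return total + 1

def g():
    cromulent = 1
    return cromulent + 1

# The raw instruction bytes are identical; only co_varnames differs.
print(f.__code__.co_code == g.__code__.co_code)  # True
print(f.__code__.co_varnames, g.__code__.co_varnames)
```

The "horror" cases would have to work at a different level, e.g. a compiler pattern-matching on source text, which this comparison would not detect.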


I would prefer to see a version that was skillfully translated to modern orthography so that we could appreciate shifts in vocabulary and grammar.

To me, it is nearly like trying to look at a picture book of fashion but the imagery is degraded as you go back. I'd like to see the time-traveler's version with clean digital pictures of every era...


so replace the long s with just s and the thorn (þ) with th? others?
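A mechanical first pass at that normalization is easy to sketch. The glyph mapping below is illustrative, not a complete inventory, and real modernization would also need context-dependent u/v and i/j swaps:

```python
# Sketch: map a few archaic glyphs onto modern spellings.
# Illustrative subset only; str.maketrans accepts multi-character
# replacement strings, which thorn (th) needs.
ARCHAIC = str.maketrans({
    "\u017F": "s",   # long s (ſ)
    "\u00FE": "th",  # thorn (þ)
    "\u00DE": "Th",  # capital thorn (Þ)
    "\u00F0": "th",  # eth (ð)
})

def modernize(text: str) -> str:
    return text.translate(ARCHAIC)

print(modernize("\u017Fome \u00FEing"))  # some thing
```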

In a study of voluntary use like this, it seems like the null hypothesis is that people who were going to be diagnosed with mental illness chose to self-medicate ahead of time?

I don't understand how you can counter this potential selection bias, given that it would not be ethical to do a randomized trial instead of voluntary usage.


The finding is:

> past-year cannabis use was associated with a significantly increased risk of incident psychotic, bipolar, depressive, and anxiety disorders by age 26 years

They claim correlation, not causation.

They also address this subject in the discussion:

>Existing research suggests that the relationship between adolescent cannabis use and psychiatric disorders is complex and bidirectional. Adolescents with mental health symptoms or diagnoses may use cannabis as a way to mitigate distress, cannabis may contribute to mental health symptoms and diagnoses through neurobiological changes, and there may be shared social and biological risk factors that contribute to both mental health symptoms and cannabis use.

>Despite potential use of cannabis to self-medicate mental health symptoms, ongoing use of cannabis is associated with worsening mood symptoms29 and poorer adherence to medication treatment and psychotherapy.

>While it is not possible to definitively determine causality, this study had a strong retrospective cohort design. The temporal order of cannabis use preceded incident psychiatric disorder diagnoses by a mean of 1.7 to 2.3 years, supporting the possibility of a contributory role.

>The findings remained significant even after adjusting for a history of psychiatric disorders and other time-varying substance use and excluding adolescents with any history of a psychiatric disorder in sensitivity analyses, indicating unique associations between adolescent cannabis use and psychiatric disorders that go beyond broader adolescent psychopathology or substance use.

>This was a conservative approach, as other psychiatric disorders might be mediators or confounders depending on the timing and the underlying pathways. Furthermore, E-values indicated that only a strong unmeasured confounder with HRs of 3.79 to 1.79 could explain the associations, suggesting that adolescent cannabis use may be an independent risk factor.

>However, reverse causation cannot be ruled out, as some individuals may begin to use cannabis to self-medicate prodromal symptoms of psychiatric disorders even before a diagnosis is made. Future research with more nuanced measurement of cannabis use, including frequency, mode of use, and product strength, alongside regular screening and assessment for psychiatric disorder symptoms and diagnosis would help to further elucidate the timing and mechanisms underlying these associations.
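For context on the E-values quoted above: assuming the authors use the standard VanderWeele-Ding definition, the E-value for an observed risk ratio RR > 1 has the closed form RR + sqrt(RR * (RR - 1)). A minimal sketch, treating the reported hazard ratios as approximate risk ratios:

```python
import math

def e_value(rr: float) -> float:
    """Minimum strength of association (on the risk-ratio scale) that an
    unmeasured confounder would need with both exposure and outcome to
    fully explain away an observed risk ratio rr."""
    if rr < 1:
        rr = 1 / rr  # flip protective estimates onto the rr > 1 scale
    return rr + math.sqrt(rr * (rr - 1))

# e.g. explaining away an observed HR of 2.0 requires a confounder
# associated with both exposure and outcome at RR >= ~3.41
print(round(e_value(2.0), 2))  # 3.41
```

So the paper's statement reads as: for each disorder, the confounder-strength threshold (3.79 down to 1.79) was computed from the corresponding observed HR this way.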


At UC Berkeley in the early-mid 90s, I think I had two digital design courses. The first was low level basics like understanding logic gates, flip flops, gray coding, PROM, ALUs, multiplexers, etc., with a physical project using 7400-series chips on a breadboard. The second was the whole 32 bit MIPS/SPIM pipelined CPU design and simulation project based on the Patterson and Hennessy textbook.

But, I seem to recall there were ways to bypass most hardware background knowledge for a CS degree. You had to do intro math and physics that covered classical mechanics, but you could stop short of most of the electromagnetic stuff or multivariate calculus. You could get your breadth credits in other areas like statistics, philosophy, and biology. I think you could also bypass digital design with a mix of other CS intro courses like algorithms, operating systems, compilers, graphics, database systems, and maybe AI?


With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?

Sometimes I think they are a mash-up of three things from older media:

1. A laugh track

2. An inset with a sign language interpreter

3. An imaginary friendship with a low budget meta commentator, like Mystery Science Theater 3000


As someone with no inner monologue, I think I could just as easily "flow" about a non-verbal task like spatial reasoning or a verbal task like reading, writing, or even engaging in a particularly technical or abstract conversation. Unlike you, my resting state is non-verbal and I would not be able to correlate verbal content with flow like that.

To me, flow is a mental analogue to the physical experience of peak athletic output. E.g. when you are at or near your maximum cardiovascular throughput and everything is going according to training and plan. It's not a perfect dichotomy. After all, athletics also involve a lot of mental effort, and they have more metabolic side-effects. I've never heard of anybody hitting their lactate threshold from intense thinking...

My point is that the peak mental output could be applied to many different modes of thought, just as your cardiovascular capacity can be applied to many different sports activities. A lot of analogies I hear seem too narrow, like they only accept one thinking task as flow state.

I also don't think it is easy to describe flow in terms of attention or focus. I think one can be in a flow state with a task that involves breadth or depth of attention. But, I do suspect there is some kind of fixed sum aspect to it. Being at peak flow is a kind of prioritization and tradeoff, where irrelevant cognitive tasks get excluded to devote more resources to the main task.

A person flowing on a deep task may seem to have a blindness to things outside their narrow focus. But I think others can flow in a way that lets them juggle many things, with a blindness instead to the depth of some issues. Sometimes, I think many contemporary tech debates, including experiences of AI tech, are due to different dispositions on this spectrum...


I would read it as there being a different threshold for what is citation-worthy versus presumed background knowledge.

Imagine if every graphics paper had to cite every concept they use from arithmetic, trigonometry, and linear algebra textbooks...


This was citation-worthy because it's new knowledge to the field. Even in a graphics paper, you can cite whatever basic techniques you're using if it's not clear that everyone will be familiar with them.


I agree in broad strokes. If I am incapacitated, that is when things like durable power-of-attorney, medical advance directives, and living trusts come into play.

The important thing is to ensure your computer is not a single point of failure. Beyond losing a password, you could face theft, flood, fire, etc. Or for online accounts, you are one vendor move away from losing things. None of these should be precious and impossible to replace. I've been on the other side of this, and I think the better flow is to terminate or transfer accounts, and wipe and recycle personal devices.

A better use of your time is to set up a disaster-recovery plan you can write down and share with people you trust. Distribute copies of important data to make a resilient archive. This could include confidential records, but shouldn't really need to include authentication "secrets".

Don't expect others to "impersonate" you. Delegate them proper access via technical and/or legal methods, as appropriate. Get some basic legal advice and put your affairs in order. Write down instructions for your wishes and the "treasure map" to help your survivors or caregivers figure out how to use the properly delegated authority.

