Hacker News: evolvingstuff's comments

Excellent point. One simple possibility I could imagine to mitigate this would be having an additional device/button one had to physically press in order for the thoughts to be vocalized. I think that would be an extremely simple but clear indication of intentionality.


From the study:

"It should be remarked that, in the majority of the cases, people adopting plant-based diets are more prone to engage in healthy lifestyles that include regular physical activity, reduction/avoidance of sugar-sweetened beverages, alcohol and tobacco, that, in association with previously mentioned modification of diet [62], lead to the reduction of the risk of ischemic heart disease and related mortality, and, to a lesser extent, of other CVDs."

"It has also been described that vegetarians, in addition to reduced meat intake, ate less refined grains, added fats, sweets, snacks foods, and caloric beverages than did nonvegetarians and had increased consumption of a wide variety of plant foods [65]. "


I assume that this is in response to:

> Do you have any evidence that plant-based diets are more "health conscious", and that that by itself explains why they are healthier?

Good to know, I didn't know this was the case. I wonder if going plant-based makes you more prone to engage in healthy lifestyles, or if having a healthy lifestyle makes you more prone to go plant-based.


You are correct, that is an error in an otherwise great video. The (k+1)th token is not merely a function of the kth vector, but rather of all prior vectors (combined using attention). There is nothing "special" about the kth vector.
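A minimal numpy sketch of that point (shapes and inputs are invented for illustration, not from the video): with a causal mask, the output at every position mixes all vectors up to and including that position, so perturbing the very first vector changes the last output.

```python
import numpy as np

def causal_attention(x):
    """Single-head self-attention with a causal mask.

    For brevity, x serves as queries, keys, and values at once;
    output k depends on ALL positions <= k, not just position k.
    """
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    mask = np.tril(np.ones((n, n), dtype=bool))
    scores = np.where(mask, scores, -np.inf)   # block attention to future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                          # each row is a mix of all prior vectors

x = np.random.default_rng(0).normal(size=(5, 8))
out = causal_attention(x)

# Perturbing vector 0 changes the output at the final position too:
x2 = x.copy()
x2[0] += 1.0
assert not np.allclose(causal_attention(x2)[-1], out[-1])
```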


I suspect in the near future there will be a number of cases where individuals will intentionally release a bunch of AI "chaff", in the sense that having a very large number of bad videos/texts about them, of which many are clearly false, will disguise the actual bad behavior. I'm not sure what term/phrase will be used for this particular tactic, but I am absolutely certain one will arise.


This almost certainly already happens, just without AI. For one example, look at the surfeit of UFO stories, many of which can be plausibly attributed to state efforts to cloud intelligence about actual classified air and space technology.


This was a plot point in the 2019 Neal Stephenson novel Fall; or, Dodge In Hell. In an effort to prove information on the internet was unreliable, a massive spam/defamation campaign was launched against a person -- to the point that the information was unbelievable and obviously fake.

In the novel, despite several attempts from different parties to encourage people to think more critically about what they read on the internet, highly sensational AI generated content radicalizes and stupefies the population.


I think you'll also see this as a defensive measure from some forward-leaning targets of deepfakes.

First, it's awful that this could even be considered necessary in the future, and developers behind the open source projects should consider the future they are enabling. It's not all just harmless tech.

So with that in mind, I've spoken to people who have theorized about releasing preemptive deepfake porn of themselves. This is due to the currently awful but very much existing trend of revenge porn, and possible expansions of that theme with deepfakes. Can't blackmail someone via embarrassment if it is all plausibly deniable.

For example, I was at a con with a presenter who was a leading advocate against revenge porn, and the call got zoom-bombed. Awful stuff. I could see someone on the receiving end of that treatment going full nuclear in the manner I described, on the off chance that sort of measure finally ended the threat.


This (minus the AI part, until now) is pretty much the strategy of political operator Steve Bannon, who pithily summarized it as 'flood the zone with shit'. Think of all those junk 'news' sites that are just barely curated content farms using automated 'spinners' to pump out content, rage-farming pundits, etc.


This kind of happened in The Office, where Michael Scott spread a rumor about everyone to cover up the fact that he was about to get caught gossiping inappropriately about a co-worker.


This is already a thing but it's more for people with money right now.

There are a lot of 'reputation management services' whose job is to flood out bad press and replace it with anything else.


I love Nomnoml. I have lately been using it to visualize hierarchical tag structures in a browser-based PKM project I am working on. I find it creates fairly clean layouts.

Example:

https://imgbox.com/9A1mDyNv
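For anyone curious, a hierarchy like that renders from just a few lines of nomnoml's association syntax (the tag names here are invented for illustration):

```nomnoml
#direction: down
[tags] -> [projects]
[tags] -> [reading]
[projects] -> [pkm]
[projects] -> [visualization]
[reading] -> [papers]
```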


Why would RWKV have a particular advantage in this context? (I may be missing some key intuitions)


RNN inference on a smaller edge controller (all history is cached in a single state vector per layer, so far lower memory and compute requirements, IIRC). :')

Very friendly to mobile devices and battery-powered systems. :')


I haven't yet fully grokked RWKV...

Just how much compute/memory are we saving here?

My understanding is that a 1B-parameter transformer takes about 2B FLOPs per token of inference, so about 1 TFLOP for a 500-token sequence (and also several GB of memory).

What would the equivalent be for RWKV (let's ignore the inevitable loss penalty, which could be significant)?
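A rough sanity check of those transformer numbers, using the common ~2 FLOPs per parameter per token rule of thumb (fp16 weights assumed for the memory figure):

```python
params = 1e9                  # 1B-parameter model
flops_per_token = 2 * params  # ~2 FLOPs per parameter per generated token
tokens = 500

total_flops = flops_per_token * tokens
assert total_flops == 1e12    # ~1 TFLOP for a 500-token generation

weight_bytes = params * 2     # fp16 weights -> ~2 GB, matching "several GB"
assert weight_bytes == 2e9
```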


It's an RNN; there is no N^2 component over time.

It only requires the previous state.

(There's a Discord; you should join it with further questions! Unfortunately I'm not as informed as I should be on this one, other than that it is _very_ mobile friendly.) The performance difference is slight but not too bad, all things considered. And I think it comes out on top for raw efficiency per parameter/FLOP, IIRC.
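A toy sketch of that constant-state claim (this is a plain tanh RNN, not RWKV's actual update rule): each layer carries one fixed-size state vector forward, so per-token memory never grows with sequence length the way a transformer's KV cache does.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden size (arbitrary for illustration)
W_x = rng.normal(size=(d, d))
W_h = rng.normal(size=(d, d))

def step(state, x):
    """One recurrent step: the new state depends only on the
    previous state and the current input, nothing earlier."""
    return np.tanh(x @ W_x + state @ W_h)

state = np.zeros(d)
for t in range(500):                 # 500 tokens, but memory stays one d-vector
    state = step(state, rng.normal(size=d))

assert state.shape == (d,)           # constant-size state regardless of length
```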

An interesting concept, for sure! :')


Sigh. Do discussions about RWKV always end with suggestions that I join the Discord? If I do join the Discord, will I soon begin suggesting that others join the Discord as well? What I mean is, I've seen this come up a few times on HN and discussions usually end prematurely with suggestions to join the Discord. [0]

If this technique is good, I'll wait until I can learn about it without joining the Discord.

[0]: https://news.ycombinator.com/item?id=35508692


You have been shamelessly self-promoting your Hopf algebra/deep learning research on a very large percentage of posts I have seen on HN lately, to the degree that I actually felt the need to log in so as to be able to comment on it. Please. Stop.


People need to know. Also, I'm not promoting my research in this post, I'm promoting Hopf algebra.


Easy. Have a setting where someone can consent (or not) to receiving images that may potentially contain nudity.


What about nudists who want unsolicited nude pictures of their friends enjoying dinner together, but not unsolicited dick pics?


I have found the books System Design Interview (volumes 1 and 2) to be really nice and informative reading material on this topic.


Maybe I'm missing something obvious, but this seems like a way to calculate a mode, not a median.


I think the idea in the parent post is: fill the dictionary (I would use a 200-entry array, though), iterate over the keys in sorted order summing the values, and stop when you reach N/2. The 'iterate over keys in order' part depends heavily on the choice of data structure for the dictionary (good for treemaps, bad for hashmaps, best for a preallocated array).

Incidentally, this falls under 'sort and then pick the middle', because the original text nowhere says it must be an n*log(n) sort, and what I described here is essentially the same but with counting sort (and a couple of redundant steps removed).
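A sketch of that counting-sort approach for values known to lie in [0, 199] (the 200-entry array mentioned above): accumulate counts, then walk the array until the running total crosses the halfway point.

```python
def median_bounded(values, max_value=199):
    """Median via counting sort: O(n + k) time, O(k) memory,
    for integer values in [0, max_value].

    For even-length input where the two middle elements differ,
    this returns the lower middle element rather than their average.
    """
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    half = (len(values) - 1) / 2
    seen = 0
    for value, c in enumerate(counts):
        seen += c
        if seen > half:   # crossed the midpoint of the sorted order
            return value
    raise ValueError("empty input")

assert median_bounded([5, 1, 3]) == 3
assert median_bounded([7, 7, 2, 9]) == 7
```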

