Hacker News | new | past | comments | ask | show | jobs | submit | ghostpepper's comments

As a cynical person I assume all the frontier LLMs were trained on datasets that include every open source project. But as a thought experiment: if an LLM were trained on a dataset that included every open source project _except_ chardet, do you think it would still be able to easily implement something very similar?

There is no doubt in my mind that it could still do it.
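For what it's worth, the core trick is simple enough that a model could plausibly reinvent it from first principles. Here's a minimal sketch (my own toy heuristic, not chardet's actual algorithm, which also uses statistical byte-frequency models): check for a UTF BOM, then try strict decodes in rough order of likelihood, falling back to Latin-1, which decodes any byte sequence.

```python
def guess_encoding(data: bytes) -> str:
    """Naive charset guess: BOM check, then strict trial decodes."""
    if data.startswith(b"\xef\xbb\xbf"):
        return "utf-8-sig"
    for enc in ("ascii", "utf-8"):
        try:
            data.decode(enc)  # strict mode raises on invalid bytes
            return enc
        except UnicodeDecodeError:
            pass
    return "latin-1"  # maps every byte to a code point, so it never fails

print(guess_encoding("héllo".encode("utf-8")))  # utf-8
print(guess_encoding(b"caf\xe9"))               # latin-1
```

The hard part chardet actually solves — distinguishing the many legacy 8-bit encodings from each other — needs per-language frequency tables, which is where the real engineering effort went.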

As an aside, it’s crazy that Ray Ban would hitch their most valuable brand cachet to such a controversial wagon

Meta have a minority stake in Ray Ban and Oakley's parent company, EssilorLuxottica. The investment was largely to support development of future AI glasses. It does make me a little sad to see Wayfarers end up this way too.

> https://www.reuters.com/world/europe/meta-takes-around-3-sta...


Off topic but if anyone is looking for a nice web-GUI frontend for a locally-hosted transcription engine, Scriberr is nice

https://github.com/rishikanthc/Scriberr


Yeah, the first three paragraphs of the article resonated strongly, and then the fourth was an ad for Mastodon, which is only slightly less bad IMHO.


You can get a co-op/internship that requires a Top Secret clearance?


There are co-operatives in manufacturing which would need their staff to be security-cleared in order to win government contracts (such as assembling weapons). Perhaps this is what parent is referring to. Co-ops aren't just for groceries :)


In the Canadian university lingo, co-op refers to a (usually paid) internship that you complete as part of your degree. You usually have a couple co-op terms/semesters along with your traditional terms. For example, you may start your degree with two semesters of classes, then a semester of co-op, then one of classes, then another two co-ops, more classes, etc. until you complete the degree requirements. Degrees with a co-op requirement usually will make mention of it (e.g. Software Engineering with co-op).


Oh, that's really interesting. We have them in the UK too, but they're called placements rather than co-ops.


Yep. I worked on the control system for the Virginia class attack submarines for my co-op. Also got to ride around in a Seawolf class submarine.


That's pretty cool. I'm guessing you're American, not Canadian, right? I didn't realize American schools had co-ops; I thought they mostly/solely had internships.


Didn't Mattermost recently make some changes to its license that put a bunch of features behind a paywall?

https://news.ycombinator.com/item?id=46383675


I can sort of understand this. There are certain songs, e.g. a song from my wedding, that hit like a (good) ton of bricks every time I hear them. But I wouldn't want to listen to them every day: each listen would cumulatively associate more banal experiences with the song, until the wedding feeling becomes just one of many and starts to lose its hold.


ChatGPT 5.2 is smarmy, condescending, and rude. The new personality is really grating, and what's worse, it seems to have been applied to the "legacy" 5.1 and 5.0 models as well.


Does a human review every sticker before it's ever shown to a child? If not, it's only a matter of time before the AI spits out something accidentally horrific.


I searched their site for any information on "how" they can claim it's safe for kids. This is what I could find: https://stickerbox.com/blogs/all/ai-for-kids-a-parent-s-guid...

> No internet open browsing or open chat features.
> AI toys shouldn’t need to go online or talk to strangers to work. Offline AI keeps playtime private and focused on creativity.

> No recording or long-term data storage.
> If it’s recording, it should be clear and temporary. Kids deserve creative freedom without hidden mics or mystery data trails.

> No eavesdropping or “always-on” listening.
> Devices designed for kids should never listen all the time. AI should wake up only when it’s invited to.

> Clear parental visibility and control.
> Parents should easily see what the toy does, no confusing settings, no buried permissions.

> Built-in content filters and guardrails.
> AI should automatically block or reword inappropriate prompts and make sure results stay age-appropriate and kind.

Obviously the thing users here know, and "kid-safe" product after product has proven, is that safety filters for LLMs are generally fake. Perhaps they can exist some day, but a breakthrough like that isn't gonna come from an application-layer startup like this. Trillion dollar companies have been trying and failing for years.

All the other guardrails are fine but basically pointless if your model has any social media data in its dataset.


They fail their own checklist in that article.

> Here’s a parent checklist for safe AI play:

> [...] AI toys shouldn’t need to go online

From the FAQ:

> Can I use Stickerbox without Wi-Fi?

> You will need Wi-Fi or a hotspot connection to connect and generate new stickers.


I'm sure you are correct about being able to do some clever prompting or tricks to get it to print inappropriate stickers, but I believe in this case it may be OK.

If you consider a threat model where the threat is printing inappropriate stickers, who are the threat actors? Children who are deliberately trying to circumvent the controls? If they already know about the topics they shouldn't be printing and are actively trying to get the toy to print them, they probably don't truly _need_ the guardrails at that point.

In the same way many small businesses don't put (and most likely can't afford) security controls in place that are only relevant to blocking nation-state attackers, this device really only needs enough controls to prevent a child from accidentally getting an inappropriate output.

It's just a toy for kids to print stickers with, and as soon as the user is old enough to know or want to see more adult content they can just go get it on a computer.


ChatGPT allegedly has similar guardrails in place, and now has allegedly encouraged minors to commit self-harm. There is no threat actor, it's not a security issue. It's an unsolved, and as far as we know intrinsic problem with LLMs themselves.

The word "accidentally" is slippery, our understanding of how accidents can happen with software systems is not applicable to LLMs.


It's too bad, because it's such a great project otherwise. He puts a ton of free labour into the system, and I'm sure he's dealt with some entitled users, but it's really a huge reason I don't recommend it to more people. Actively telling people they must learn to solder and making Telegram the only support channel are two big turn-offs for a lot of people.

This is absolutely his right and perhaps his intention to keep the project small, but in that case I wish there was another alternative vacuum firmware project.

