Hacker News | kenjackson's comments

Depends on the industry and work. They are still paying top dollar for AI talent.

Bubble money is paying that out, much like in the past.

I have to admit, I have almost no problems with Teams. The one big issue I had was performance when screen sharing. But I got a new laptop and this problem went away. Seems so odd that so many people have major problems with it, while I feel like within my workgroup there are almost no problems to speak of.

This was discussed before: if your Windows computer doesn't have a valid HEVC license installed, Teams falls back to software encoding and performs horribly. Most manufacturers include the license, but not all. It's also only 99 cents on the Microsoft Store (which might be unavailable on enterprise-managed devices).

That, and Microsoft routes all calls through their servers.

Fine if you live near a datacenter.

In Sweden though, you go through France.

Not ideal.


How extensively do you use it? When my team was just using it for meetings and the attached chats, it did actually work completely fine. When broader orgs started pushing more communications through it (the "teams" in teams, and all the weird chat room/forums that entails) all of the rough edges became very apparent. All of that is just a shockingly disorganized mess.

One day they will discover threaded conversations.

And then we will get rid of them again, because some suits will tell us that we don't actually want them, that they are "complicated", that we must trust them, and that recursive data types are too hard to get right. Let's all write SMS again. Or better yet, send faxes.

Some engineers will facepalm super hard but won't be listened to, as usual, and we will enter the next cosmic age of self-inflicted suffering.


Thankfully I only have to use Teams in very specific projects, thus I still have them. :)

Big difference is being out of office. I expect Trump to get a ton of money after leaving office, because people like proximity to fame, but I don't like the stench when he's in office and has direct political influence.

That said, Trump also investigated Obama for the Netflix deal. Will he investigate Melania now?


Being out of office is irrelevant. "Do this for me now, I'll make sure you're taken care of when you retire." This is so common that the revolving door in government is a well-worn trope.

As far as I can tell no executive branch agency investigated the Netflix deal.


> This is so common the revolving door in government is a well worn trope

On TV and Reddit. In the real world you're not getting policy outcomes today in exchange for a handshake promise of a payout tomorrow, without someone in office to guarantee your end.


Exactly. Anyone willing to bribe you is more than willing to rescind when you have no real power.

Except if they back out of the deal after getting what they want, they'll never be able to make this kind of deal ever again.

Regardless, the revolving door is well known. It's been talked about since the 1800s. There's a Wikipedia page for it. Pretending it doesn't happen doesn't change the fact that it happens and is quite common.


> if they back out of the deal after getting what they want, they'll never be able to make this kind of deal ever again

If someone is stupid enough to go running their mouth about a bribe gone bad, or willing to give on policy in exchange for mere promises, you either didn't need to bribe them or are wasting your time and money.

These deals don't happen that way because they can't. It's why e.g. Bob Menendez winds up with gold bars, Melania is being paid now and Trump's crypto is being purchased and sold.

> the revolving door is well known. It's been talked about since the 1800s

Sure. But not in the way you describe. You hire the ex politician not to pay them back for a favour earlier but to curry favour with the folks still in power.

> Pretending it doesn't happen doesn't change the fact that it happens and is quite common

Straw man. Nobody said it doesn't happen. Just that the way you're describing it is wrong.


I've heard that one of the advantages of this administration is that you don't need data or convincing arguments -- just bribery and flattery. If you're OK with bribery and flattery then you'll find this administration much easier to work with. Getting your way is a simpler path.

It’s crazy to think that Instagram Reels, owned by Meta, is preferable to TikTok now. Reels is now at least competitive in terms of content, unlike two years ago when people were worried about TikTok being banned and Reels was not a good alternative.

Reels is just AI and engagement slop

Isn't Reels content more right-wing, while TikTok has lots of both left-leaning and right-leaning content?

TikTok historically has, but if this is truly the new owners trying to block content then that can change rapidly.

Reels skews older in the user-base, which skews the average to the right.

Sample of one, but I scroll Reels at least 30 mins a day and I've never seen any right-wing content on my feed.

This is what I read as a middle schooler learning 6502 on a C64. It does a good job covering the basics in a very conversational manner.

Has anyone tried creating a language that would be good for LLMs? I feel like what would be good for LLMs might not be the same thing that is good for humans (but I have no evidence or data to support this, just a hunch).

The problem with this is that the reason LLMs are so good at writing Python/Java/JavaScript is that they've been trained on a metric ton of code in those languages, have seen the good, the bad, and the ugly, and been tuned toward the good. A new language would mean training from scratch, and introducing new paradigms that are 'good for LLMs but bad for humans' means humans will struggle to write good code in it, making the training process harder. Even worse, say you get a year and 500 features into that repo and the LLM starts going rogue: who's gonna debug that?

But coding is largely trained on synthetic data.

For example, Claude can fluently generate Bevy code as of its training cutoff date, and there's no way there's enough training data on the web to explain this. There's an agent somewhere in a compile-test loop generating Bevy examples.

A custom LLM language could have fine grained fuzzing, mocking, concurrent calling, memoization and other features that allow LLMs to generate and debug synthetic code more effectively.

If that works, there's a pathway to a novel language having higher quality training data than even Python.
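A minimal sketch of that kind of synthetic-data loop in Python, with the LLM call stubbed out (`generate_candidate` is a hypothetical stand-in for a model call, and the "verifier" here is just a syntax check):

```python
import random

def generate_candidate(prompt: str) -> str:
    """Stand-in for an LLM call (hypothetical); returns a code snippet."""
    return random.choice([
        "def add(a, b):\n    return a + b",    # valid snippet
        "def broken(a, b)\n    return a + b",  # syntax error: missing colon
    ])

def compiles(src: str) -> bool:
    """Cheap verifier: does the snippet parse as Python?"""
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def synthesize(prompt: str, n: int = 50) -> list[str]:
    """Keep only candidates that pass the verifier; the survivors
    become synthetic training examples."""
    return [c for c in (generate_candidate(prompt) for _ in range(n))
            if compiles(c)]

dataset = synthesize("write a small arithmetic helper")
```

In a real pipeline the verifier would invoke the new language's actual compiler and test suite rather than a parse check, which is where the fine-grained fuzzing and mocking hooks mentioned above would pay off.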


I recently had Codex convert a script of mine from bash to a custom, Make-inspired language for HPC work (think Nextflow, but an actual language). The bash script submitted a bunch of jobs based on some inputs. I wanted this converted to use my pipeline language instead.

I wrote this custom language. It's on Github, but the example code that would have been available would be very limited.

I gave it two inputs -- the original bash script and an example of my pipeline language (unrelated jobs).

The code it gave me was syntactically correct, and was really close to the final version. I didn't have to edit very much to get the code exactly where I wanted it.

This is to say -- if a novel language is somewhat similar to an existing syntax, the LLM will be surprisingly good at writing it.


>Has anyone tried creating a language that would be good for LLMs?

I’ve thought about this and arrived at a rough sketch.

The first principle is that models like ChatGPT do not execute programs; they transform context. Because of that, a language designed specifically for LLMs would likely not be imperative (do X, then Y), state-mutating, or instruction-step driven. Instead, it would be declarative and context-transforming, with its primary operation being the propagation of semantic constraints. The core abstraction in such a language would be the context, not the variable. In conventional programming languages, variables hold values and functions map inputs to outputs. In a ChatGPT-native language, the context itself would be the primary object, continuously reshaped by constraints. The atomic unit would therefore be a semantic constraint, not a value or instruction.

An important consequence of this is that types would be semantic rather than numeric or structural. Instead of types like number, string, bool, you might have types such as explanation, argument, analogy, counterexample, formal_definition.

These types would constrain what kind of text may follow, rather than how data is stored or laid out in memory. In other words, the language would shape meaning and allowable continuations, not execution paths. An example:

@iterate: refine explanation until clarity ≥ expert_threshold
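For illustration only, here's a toy Python model of that idea (all the names here are made up): a context is a bag of semantic constraints, and "running" a program means accumulating constraints rather than mutating state:

```python
from dataclasses import dataclass, field

# Hypothetical semantic types: they constrain what kind of text may
# follow, not how data is laid out in memory.
SEMANTIC_TYPES = {"explanation", "argument", "analogy",
                  "counterexample", "formal_definition"}

@dataclass(frozen=True)
class Context:
    constraints: tuple = ()

    def require(self, semantic_type: str, detail: str) -> "Context":
        # Constraints accumulate; nothing is mutated in place.
        if semantic_type not in SEMANTIC_TYPES:
            raise ValueError(f"unknown semantic type: {semantic_type}")
        return Context(self.constraints + (f"{semantic_type}: {detail}",))

# A "program" is just a chain of context transformations.
ctx = (Context()
       .require("explanation", "what a closure is")
       .require("analogy", "compare to a backpack")
       .require("counterexample", "a function that is not a closure"))
```

The accumulated constraints would then be handed to the model as the allowable shape of the continuation; the `@iterate` directive above would correspond to re-applying a transformation until some judged threshold is met.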


There are two separate needs here. One is a language that can be used for computation where the code will be discarded. Only the output of the program matters. And the other is a language that will be eventually read or validated by humans.

Most programming languages are great for LLMs. The problem is with the natural language specification for architectures and tasks. https://brannn.github.io/simplex/

There was an interesting effort in that direction the other day: https://simonwillison.net/2026/Jan/19/nanolang/

I don’t know Rust, but I use it with LLMs a lot: unlike Python, it has fewer ways to do things, along with all the built-in checks at build time.

I want to create a language that allows an LLM to dynamically decide what to do.

A non-deterministic programming language, with options to drop down into JavaScript or even C if you need to specify certain behaviors.

I'd need to be much better at this though.


You're describing a multi-agent long horizon workflow that can be accomplished with any programming language we have today.

I'm always open to learning, are there any example projects doing this?

The most accessible way to start experimenting would be the Ralph loop: https://github.com/anthropics/claude-code/tree/main/plugins/...

You could also work backwards from this paper: https://arxiv.org/abs/2512.18470
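As a rough sketch of the loop's shape (not the actual plugin; the agent call is stubbed with a hypothetical function), the pattern is just: re-feed the same prompt to an agent until a completion check passes:

```python
# Minimal Ralph-style loop: keep handing the agent the same prompt plus
# its accumulated state until it reports the work is done.
def stub_agent(prompt: str, state: dict) -> dict:
    """Stand-in for a real LLM/agent call; each pass 'improves' the work.
    Here we pretend it converges on the third pass."""
    state = dict(state)
    state["passes"] = state.get("passes", 0) + 1
    state["done"] = state["passes"] >= 3
    return state

def ralph_loop(prompt: str, agent=stub_agent, max_iters: int = 10) -> dict:
    state: dict = {}
    for _ in range(max_iters):
        state = agent(prompt, state)
        if state.get("done"):
            break
    return state

result = ralph_loop("build a UI around GetWeather and keep improving it")
```

In the real pattern the "state" lives in the repo itself (files, tests, a progress note), and the completion check is something verifiable like the test suite passing.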


Ok.

I'm imagining something like:

"Hi Ralph, I've already coded a function called GetWeather in JS; it returns weather data in JSON. Can you build a UI around it? Adjust the UI over time."

At runtime, modify the application with improvements: say all of a sudden we're getting air quality data in the JSON; the Ralph loop will notice and update the application.

The Arxiv paper is cool, but I don't think I can realistically build this solo. It's more of a project for a full team.


yes "now what?" | llm-of-choice

What does that even mean?

In my 30 years in industry -- "we need to do this for the good of the business" has come up maybe a dozen times, tops. Things are generally much more open to debate with different perspectives, including things like feasibility. Once in a blue moon you'll get "GDPR is here... this MUST be done". But for 99% of the work there's a reasonable argument for a range of work to get prioritized.

When working as a senior engineer, I've never been given enough business context to confidently say, for example, "this stakeholder isn't important enough to justify such a tight deadline". Doesn't that leave the business side of things as a mysterious black box? You can't do much more than report "meeting that deadline would create ruinous amounts of technical debt", and then pray that your leader has kept some alternatives open.

It’s really just A. Point B is pretty much just derived from there.

Where did you get that they are stored in plaintext?

It doesn't matter how it's stored. So long as it isn't E2EE, they (and anyone who can ask for it) will be able to access the drives

The title of the article: "Microsoft gave FBI set of BitLocker encryption keys to unlock suspects' laptops"

Doesn’t say they were stored in plaintext.
