
Good grief, that video suffers the YouTube-ism of "ramble on about how you don't understand X" for way too long.

Video alleges people think ISO makes the sensor "more sensitive or less sensitive". (I … don't think this is common? But IDK, maybe this is my feldspar.)

(The video also quibbles that it is "ISO setting" not "ISO" … while showing shots of cameras which call it "ISO", seemingly believing that some of us believe ISO is film speed, in a digital camera?)

Anyways, the video wants you to know that it is sensor gain. And, importantly, according to the video, analog gain, not digital gain.¹ I don't know that the video does a great job of saying it, but basically, I think their argument is that you want to maximize usage of the bits once the signal is digitized. Simplistically, if the image is dark & all values are [0, 127], you're just wasting a bit.

You would want to avoid clipping the signal, so not too bright, either. Turn your zebras on. (I don't think the video ever mentions zebras, and clipping only indirectly.)
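To make the "wasting bits" point concrete, here's a toy sketch (my numbers, not the video's): a dim signal quantized by an idealized 12-bit ADC, with and without a 16x analog gain applied first.

```python
import random

random.seed(0)
# Dim analog scene: every value well under 3% of full scale.
scene = [random.uniform(0, 0.03) for _ in range(100_000)]

def adc(x, bits=12):
    """Idealized ADC: quantize a [0, 1] analog level to an integer code."""
    full_scale = 2**bits - 1
    return min(full_scale, max(0, round(x * full_scale)))

no_gain = {adc(v) for v in scene}         # codes crowd near 0
with_gain = {adc(v * 16) for v in scene}  # 16x analog gain before the ADC

# The amplified signal lands on many more distinct output codes.
print(len(no_gain), len(with_gain))
```

The dim signal only ever reaches a hundred-odd distinct codes; the amplified one spreads across over a thousand, i.e., more of the ADC's bits actually carry image information.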

The video does say "do ISO last" which I think is a good guideline. Easier said than done while shooting, though.

… also while fact checking this comment, I stumbled across Canon's KB stating to use as low an ISO as possible, which the video rails against. They should talk to Canon, I guess?

¹with the caveat that sometimes there is digital gain too; the video notes this a bit towards the end.


ISO changes the analog gain and in a way yes, it does make it more sensitive to a certain range of brightness.

This is because the ADC (analog-to-digital converter) right after can only handle so many bits of data (12–16ish in consumer cameras). You want to “center” the data “spread” so that when the “ends” get cut off, it’s not so bad. Adjusting the ISO moves this spread around. In addition, even if you had an infinite-bit-depth ADC, noise gets added between the gain circuit and the ADC, so you want to raise the base signal above the “noise floor” before it gets to the ADC.
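A toy model of that last point (assumed numbers, not a real sensor): if fixed read noise is injected between the gain stage and the ADC, amplifying the signal first improves the signal-to-noise ratio at the ADC's input.

```python
import random

random.seed(1)
SIGNAL = 0.002  # faint analog level, full scale = 1.0
# Fixed noise injected between the gain stage and the ADC:
NOISE = [random.gauss(0, 0.001) for _ in range(10_000)]

def snr_at_adc(gain):
    """Mean/stddev of what the ADC sees: gain is applied before noise."""
    samples = [SIGNAL * gain + n for n in NOISE]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean / var ** 0.5

print(round(snr_at_adc(1), 1), round(snr_at_adc(16), 1))
```

Because the noise's magnitude doesn't change with the gain setting, 16x gain buys roughly 16x the SNR in this toy model; real sensors are messier, but the direction holds.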

Gain is not great: it amplifies noise too. You want as low an ISO as possible (lowest gain), but the goal is not actually to lower gain; your goal is to change the environment so you can use a lower gain. If you have the choice between keeping the lights off and using a higher ISO versus turning on the lights and using a lower ISO, the latter will always have less noise.

Most photo cameras have one gain circuit that has to cover both dark and light scenes. Some cameras, like Sony's FX line, actually have two gain circuits connected to each photosite and let you choose: one optimized for darker scenes, the other for brighter scenes. ARRI's digital cinema cameras have both, and both actually run at the same time (!).


> your goal is to change the environment

...or integration time.


> The video does say "do ISO last" which I think is a good guideline. Easier said than done while shooting, though.

> … also while fact checking this comment, I stumbled across Canon's KB stating to use as low an ISO as possible, which the video rails against. They should talk to Canon, I guess?

Isn't ISO last the same as setting it as low as possible? Obviously it's always set to something, so I thought 'doing it last' means start with it low, set exposure & shutter, increase as necessary?

(Shutter speed being dictated by subject and availability of tripod, essentially it's just exposure & ISO which becomes about how much light there is and how it's distributed, I suppose.)

I'm not really into photography though, so perhaps that's all nonsense/misunderstanding.


Sounds like you'll be spending your day making a better video! :-)

I think we need to differentiate between raw and derivative formats. Canon's KB might cater to a wider audience, thus the latter.

… on my MBP, if we discount the icons that ship with macOS, the limit is 4 items. Past that, they're hidden by the notch.

I don't get why an overflow arrow once the limit is reached is so hard here.

Or letting users decide what the order of items in the bar should be.


command-click-and-drag them to where you want 'em. don't need bartender for this

> Why would an Azure customer need to query this service at all? I was not aware this service even exists- because I never needed anything like it.

The "metadata service" is hardly unique to Azure (both GCP & AWS have an equivalent), and it is what you would query to get API credentials to Azure (/GCP/AWS) service APIs. You can assign a service account² to the VM¹, and the code running there can just auto-obtain short-lived credentials, without you ever having to manage any sort of key material (i.e., there is no bearer token / secret access key / RSA key / etc. that you manage).

I.e., easy, automatic access to whatever other Azure services the workload running on that VM requires.

¹and in the case of GCP, even to a Pod in GKE, and the metadata service is aware of that; for all I know AKS/EKS support this too

²I am using this term generically; each cloud provider calls service accounts something different.
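For the curious, querying it looks roughly like this on Azure (a sketch only: the 169.254.169.254 endpoint is reachable solely from inside a VM, and the resource value here is illustrative):

```python
import json
import urllib.parse
import urllib.request

IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource="https://management.azure.com/"):
    """Build the IMDS request; the 'Metadata: true' header is mandatory."""
    query = urllib.parse.urlencode(
        {"api-version": "2018-02-01", "resource": resource}
    )
    return urllib.request.Request(
        f"{IMDS_TOKEN_URL}?{query}", headers={"Metadata": "true"}
    )

def get_token():
    # Only works on an Azure VM with a managed identity assigned.
    with urllib.request.urlopen(build_token_request(), timeout=2) as resp:
        return json.load(resp)["access_token"]
```

Note there's no credential anywhere in that code: the platform knows which VM is asking and hands back a short-lived token for the identity assigned to it.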


& I have thus far made a large portion of my living off of fixing bad code "later".

… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".

So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.


> I understand "free services" eventually come to the conclusion of either charging or using ads to finance and even make money out of them.

The endgame is not "or", it's "and": eventually come to the conclusion that, why choose between revenue streams when we could just have both?


I think we'll see companies increasingly adopting the X approach: charged tiers for 'fewer' ads. With no actual guarantee as to the absolute quantity of ads, just 'fewer, relative to the people who aren't paying as much'. We're basically on a downward slope where not seeing ads is going to get steadily more and more expensive over time.

> The Telnyx platform, APIs, and infrastructure were not compromised. This incident was limited to the PyPI distribution channel for the Python SDK.

Am I being too nitpicky to say that that is part of your infrastructure?

Doesn't 2FA stop this attack in its tracks? PyPI supports 2FA, no?


PyPI only supports 2FA for sign-in; 2FA is not a factor at all in publishing. To top it off, the PyPA's recommended solution, the half-assed trusted publishing, does nothing to prevent publishing from compromised repos either.

No. I was one of the "lucky" ones forced to use 2FA from the beginning.

I also wrote the twine manpage (in debian) because at the time there was even no way of knowing how to publish at all.

Basically you enable 2FA on your account, go on the website, generate a token, store it in a .txt file and use that for the rest of your life without having to use 2FA ever again.

I had originally thought you'd need your 2FA every upload but that's not how it works.
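For anyone unfamiliar, the resulting setup looks something like this `~/.pypirc` (token value elided, naturally; twine reads it on every upload, no 2FA prompt involved):

```
[pypi]
username = __token__
password = pypi-…
```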

Then they have the trusted publisher thing (which doesn't and won't work with codeberg) where they just upload whatever comes from github's runners. Of course, if the developer's token.txt got compromised, there's a chance their private SSH key for pushing to github got compromised too, and the attackers can push something that will end up on pypi anyway.

Remember that trusted publishing replaces GPG signatures, so the one thing that required unlocking the private key with a passphrase is no longer used.

python.org has also stopped signing their releases with GPG in favour of sigstore, which is another third-party signing scheme somewhat similar to trusted publishing.

edit: They deny this but my suspicion is that eventually tokens won't be supported and trusted publishing will be the only way to publish on pypi, locking projects out of using codeberg and whatever other non-major forge they might wish to use.


A stolen PyPI token was used for the compromised litellm package. I wouldn't be surprised if tokens are decommissioned in the aftermath of these recent hijackings. That wouldn't prevent these attacks, since, as you mentioned, SSH keys were stolen (and a GitHub token in the case of litellm). It would be a way for the PyPA to brush off liability without securing anything.

I’ll bypass the technical inaccuracies in this comment to focus on the main important thing.

> Then they have the trusted publisher thing (which doesn't and won't work with codeberg) where they just upload whatever comes from github's runners.

There’s no particular reason it wouldn’t work; it’s just OIDC and Codeberg could easily stand up an IdP. If you’re willing to put the effort into making this happen, I’d be happy (as I’ve said before) to review any contributions towards this end.

(The only thing that won’t work here is imputing malicious intent; that makes it seem like you have a score to settle rather than a genuine technical interest in the community.)


Did I misunderstand this conversation? https://discuss.python.org/t/new-oidc-providers-for-trusted-...

It didn't look to me like codeberg was being seriously considered for inclusion.


I wrote in that thread that I think Forgejo (or more precisely Codeberg) probably clears the bar for inclusion[1].

[1]: https://discuss.python.org/t/new-oidc-providers-for-trusted-...


Yeah at this point I’d really like for pypi to insist on 2FA and email workflows for approving a release.

Yeah it means you don’t get zero click releases. Maybe boto gets special treatment


Betteridge's Law of Headlines hits hard here.

> The era of purposefully frustrating humans is over. The Chinese open source model running on the box under my desk can pass the Turing Test. When you call, e-mail, text, or show me an ad, you’ll never know if it’s me or my model seeing it.

But at some point, you're going to want to do something, like, e.g., buy something. Then you're right back to the problem in the opening quote:

> things take time, patience runs out, brand familiarity substitutes for diligence, and most people are willing to accept a bad price to avoid more clicks.

& we're already seeing AI used to do this. E.g., Amazon listings where product photos are AI generated. (… not that many product photos weren't "bad photoshop of product onto hot sexy model who is obviously not using our product" before … but now it's AI!) Whereas before someone would have had to spend a modicum of time badly using Photoshop, now AI can just churn out the same fraudulent result in a fraction of the time.

Now, if I have a problem with a product, instead of just calling a number, browsing a phone tree, getting put on hold, and finally having to struggle to get some human to understand the basic logistics of "I paid for X, I did not get X, I demand X or refund", I get to do all that but with the extra step of "forced engagement with an AI that is incapable of actually solving my problem". (This somehow still manages to apply even when the problem is seemingly trivial enough that I find myself thinking "… this actually should be something an AI can do" but inevitably, no, the AI is "sorry", it cannot do that.)

And besides, calls, emails, etc. are already handled without AI: I (and everyone I really care about) have either allowlisted all inbound comms, or abandoned the medium altogether. Moreover, any communications medium is useful because it is not infested with spam, and will eventually be destroyed by spam. At least until we grow laws for mediums like phone/email, maybe named things like "Do Not Call" or "CAN-SPAM" and those laws are enforced. But the GOP has no interest in enforcing any level of consumer protection, so here we are.


Yes, I think it was, but that was also b/c, IIRC, the pregnancy tester had a CPU, too. A CPU can actually run things.

DNS … cannot, and that's why the person upthread is criticizing the use of the word "run" here. DNS ran nothing.


No it wasn't; it was just the display. My comment elsewhere in this thread states that every device on which you are running Zork I–III, or any z-machine v3 compatible game, is actually hosting the interpreter and the game itself: from the Game Boy to a smartphone, a PC, an old PDA...

Data can be stored in A records, too, just less efficiently.

(Or AAAA, or CNAME, or…)
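As a hypothetical sketch of what "less efficiently" means: an A record carries exactly 4 bytes of IPv4 address, so arbitrary data packs in like this (versus 16 bytes per AAAA record):

```python
def bytes_to_a_records(data: bytes) -> list[str]:
    """Pack arbitrary bytes into fake IPv4 addresses, 4 bytes per A record."""
    padded = data + b"\x00" * (-len(data) % 4)  # zero-pad to a multiple of 4
    return [
        ".".join(str(b) for b in padded[i:i + 4])
        for i in range(0, len(padded), 4)
    ]

print(bytes_to_a_records(b"Zork"))  # → ['90.111.114.107']
```

A resolver (or attacker) on the other end just reverses the mapping, which is why A-record lookups show up in DNS-tunneling write-ups too.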


I am confused; did you ever actually email anyone about the vuln? The AI suggests emailing security emails multiple times, but as I'm reading the timeline, none of the points seem to suggest this was ever done, only that a blog post was made, shared on Reddit, and then indirectly, the relevant parties took action.

I'm hoping this just isn't on the timeline.


The first line of the post is:

> I'm the engineer who got PyPI to quarantine litellm.

I'm guessing they used a tool other than Claude Code to send the email.


"got" can be read as "indirectly, via a blog post, which I think they reacted to"

I've updated the timeline to clarify I did in fact email them. I’m not yet at the point of having Claude write my emails for me, in fact it was my first one sent since joining the company 10 months ago!

Wait, what? You sent a single email being in a company for ten months?? Or was it the first external email?
