
No, no and no. Safety critical systems must be validated to be safe in the operational domain and for that they must be deterministic. AI is anything but that.


An ATC controller doing a bad job and almost killing a bunch of people is anything but great

Skip the buzzword: why couldn't a computer program of any kind have helped avoid this?


I was thinking A.I. could be useful as an assistant for ATC. But yeah, it would inevitably be over-relied-upon and over-trusted.


AI is entirely deterministic. ChatGPT and StableDiffusion and friends are fed endless amounts of random seeds along with every input to keep them from always saying the same thing.
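To illustrate the point, here is a minimal sketch (not any real model's API; `toy_generate` and its word list are made up): the "model" is a fixed function, and all the apparent randomness comes from the seed it is fed alongside the input.

```python
import random

def toy_generate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for an LLM's sampling loop: the function
    # itself is fixed; all variation comes from the injected seed.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "hold", "climb", "descend"]
    return " ".join(rng.choice(vocab) for _ in range(5))

# Same prompt + same seed -> identical output on every run (deterministic).
assert toy_generate("traffic alert", seed=42) == toy_generate("traffic alert", seed=42)

# Different seeds -> different outputs; that's injected randomness,
# not nondeterminism in the model itself.
print(toy_generate("traffic alert", seed=1))
print(toy_generate("traffic alert", seed=2))
```

Pin the seed and the whole pipeline is repeatable, which is the sense of "deterministic" used here.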


"Deterministic" only insofar as it's repeatable, but their behaviour is not predictable. If the behaviour is not predictable, how can it be validated as being correct?


If for a given set of inputs there is a deterministic output, then the overall behaviour in response to a series of inputs is just as deterministic and predictable.

I'm not sure what you mean by "is not predictable" when you also admit that it's repeatable.


Predictable as in, can say in advance what it will do.

Repeatable means you get the same output a second time, given the same input. Predictable means you can say in advance what it will do, and can then check the output against your prediction.

If you can't predict the outcome, you can't validate the process, and can't guarantee its performance.


That much is clear but any remotely continuous AI model is very predictable.


I think the point is that reality is essentially a chaotic system. That is, yes, you can repeat a failure from an input once you've seen it, but the search space is too big to enumerate beforehand.


That entirely depends on what exactly you implement here. It's entirely possible to implement an AI with continuous & linear properties, meaning that you can extrapolate its behaviour between a set of inputs with decent accuracy, and it won't suddenly change its behaviour between those inputs.

But AI isn't different than existing software systems either. Both will take an input from reality and take actions upon it.
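A minimal sketch of the claim above, assuming the simplest possible case (a hypothetical 2-input linear model; the weights are made up): a linear model's output change is bounded by its Lipschitz constant times the input change, so you can bound its behaviour between tested inputs without running every one.

```python
def linear_model(x, w=(0.5, -0.25), b=0.1):
    # Hypothetical 2-input linear "AI": output varies smoothly with input.
    return w[0] * x[0] + w[1] * x[1] + b

# For a linear map, |f(x) - f(y)| <= L * ||x - y||_1 with L = max|w_i|,
# so a small input perturbation can only move the output a small amount.
L = 0.5
x, y = (1.0, 2.0), (1.1, 2.0)
diff = abs(linear_model(x) - linear_model(y))
bound = L * (abs(x[0] - y[0]) + abs(x[1] - y[1]))
assert diff <= bound + 1e-12
```

That bound is exactly what lets you "extrapolate between a set of inputs": test a grid of points, and the model provably cannot misbehave arbitrarily in between. Deep non-linear networks generally have no usefully small Lipschitz constant, which is where the disagreement in this thread lives.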


That's useless in this case. You need to be able to prove that it will work with all inputs, and there are too many combinations of inputs to exhaustively enumerate.


That would be true of any software system we put into an airplane, yet we have deployed software into airplanes.

If your AI has linear / continuous output, testing it should be no different than any other software.


There is no way to determine that a non-trivial neural network won't drastically diverge in output due to small changes in input (e.g. one-pixel attacks on image classifiers). This is true for all current models I know of.

Almost all neural network implementations have continuous outputs (i.e. the nodes in the output layer produce a value between 0 and 1). That doesn't change the above issue at all.

This is much less of an issue with traditional methods.
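A toy sketch of why continuous outputs don't help (entirely made-up two-class "network" with hand-picked steep weights, standing in for what a trained net can learn): the scores vary continuously between 0 and 1, yet a tiny nudge to one input swings the decision from a coin flip to near certainty.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tiny_classifier(x):
    # Hypothetical two-class net: outputs are continuous (0..1),
    # but large weights make the decision hypersensitive to one input.
    w = (400.0, -400.0)
    z = w[0] * x[0] + w[1] * x[1]
    return sigmoid(z), sigmoid(-z)   # ("class A" score, "class B" score)

a = tiny_classifier((0.50, 0.50))   # balanced input -> scores (0.5, 0.5)
b = tiny_classifier((0.52, 0.50))   # one "pixel" nudged by 0.02

print(a)
print(b)                            # class A score now above 0.99
```

The output layer is perfectly continuous and deterministic; the problem is the steep gradient inside, which is what one-pixel attacks exploit.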


The output for a given input (absent random seeds) may be constant, but is the output deterministically correct?


What does that question even mean?


A generative AI may well be deterministic and generate repeatable output for a given input, but that doesn't mean the output is correct for every input. It may merely generate the wrong answer consistently.


Roughly, they are asking whether you can take an AI system and effectively reduce it to an analytic function and tell, without actually running the AI, what the output is going to be with a particular input.



