It's the water that builds up limescale that's harmful to carnivorous plants. The peat moss substrate that carnivorous plants like is acidic, and the limescale neutralizes that acidity.
Yes there is plenty of evidence that carnivorous plants die in alkaline or neutral soil conditions.
Read literally any book about caring for them. They like acidic soil. Rain water is slightly acidic.
It might take a year or more to kill a plant by slowly draining its soil of acidity, just as it can take a year to kill a big plant through inadequate lighting.
We know how the next token is selected, but not why doing that repeatedly brings all the capabilities it does. We really don't understand how the emergent behaviours emerge.
It feels less like a word prediction algorithm and more like a world model compression algorithm. Maybe we tried to create one and accidentally created the other?
Why would asking a question about ice cream trigger a consideration of all possible topics? As in, to formulate the answer, the LLM will consider even the origin of elephants. It won't be significant, but it will be factored in.
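A minimal sketch of what "factored in" means mechanically, assuming GPT-2 through the Hugging Face transformers library as a stand-in model: the softmax over the next-token logits assigns every vocabulary token a nonzero probability, so an " elephant" token technically gets some weight even for an ice cream prompt.

```python
# Sketch: every vocabulary token gets a nonzero (if tiny) probability for the
# next token, even ones unrelated to the prompt. GPT-2 is used as a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "My favourite ice cream flavour is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # logits for the next token position
probs = torch.softmax(logits, dim=-1)        # one probability per vocabulary token

# An obviously unrelated token still gets a strictly positive probability.
elephant_id = tokenizer.encode(" elephant")[0]
top_id = probs.argmax().item()
print(f"P(' elephant') = {probs[elephant_id]:.2e}")
print(f"most likely next token: {tokenizer.decode([top_id])!r}")
```

The elephant probability will be vanishingly small, which is the sense in which it's "not significant" but still present in the distribution.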
Why? In the spiritual realm, many have postulated that even the elephant you never met is part of your life.
Eh, I feel like that's mostly just down to this: yes, transformers are a "next token predictor", but during fine-tuning for instruct, the attention-related wagon slapped on the back is partially hijacked as a bridge from input tokens -> sequences of connections in the weights.
For example, if I ask "If I have two foxes and I take away one, how many foxes do I have?", I reckon attention has been hijacked to essentially highlight the "if I have x and take away y then z" portion of the query and connect it to a learned sequence from readily available training data (apparently the whole damn Internet), where there are plenty of examples of that math-question trope, just using some other object type than foxes.
I think we could probably prove it by tracing the hyperdimensional space the model exists in: ask it variants of the same question and look for hotspots in that space that would indicate it's using those same sequences (with attention branching off to ensure it replies with the correct object type that was referenced). Something like the probe sketched below.
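A rough version of that probe, a sketch rather than a proof: embed object-swapped variants of the question plus an unrelated control and compare them. GPT-2 and crude mean-pooled last-layer hidden states are assumptions for illustration; if the swapped variants cluster much more tightly than the control, that's weak evidence the model is reusing one learned template.

```python
# Compare hidden-state representations of object-swapped variants of the same
# question against an unrelated control question.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

prompts = [
    "If I have two foxes and I take away one, how many foxes do I have?",
    "If I have two apples and I take away one, how many apples do I have?",
    "If I have two chairs and I take away one, how many chairs do I have?",
    "What colour is the sky on a clear day?",  # unrelated control
]

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last-layer hidden states into one vector per prompt."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

vecs = [embed(p) for p in prompts]
for prompt, vec in zip(prompts[1:], vecs[1:]):
    sim = F.cosine_similarity(vecs[0], vec, dim=0).item()
    print(f"{sim:.3f}  {prompt}")
```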
I did the same, but funnily enough I stumbled upon it by accident when I was trying to find things that pass IR but not visible light (like Coke, sunglasses, ink, etc.) using my Sony camera in night-vision mode. I was on a quest to get "x-ray vision".
I have one where I just removed the filter and added some IR LEDs for my 3D printer cam; that way I don't need visible light and the room can stay dark if I need it to be.
What he means is that while the model tags code as code, for the model itself this is just a relationship between tokens, like the code open and close tags, the same as parentheses, commas, uppercase letters, or verbs and conjunctions...
What you say is achievable only if another system external to the model takes some tagged model output, performs computations or lookups, and feeds the results back to the model in the form of text input.
Then it's game on for the model to trigger some form of code execution through this external system and escape the jail...
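A hypothetical sketch of that external loop, just to make the shape of the risk concrete. run_model() and run_sandboxed() are placeholder names, and the <code> tags stand in for whatever tagging scheme the wrapper looks for; none of this is a real API. The model only ever emits text; it's this wrapper that turns tagged output into real computation and feeds the results back.

```python
import re

# Placeholder tag format: the wrapper looks for <code>...</code> blocks in the
# model's text output. Purely illustrative, not a real protocol.
CODE_BLOCK = re.compile(r"<code>(.*?)</code>", re.DOTALL)

def run_model(prompt: str) -> str:
    """Placeholder for whatever LLM is being wrapped; returns plain text."""
    raise NotImplementedError

def run_sandboxed(code: str) -> str:
    """Placeholder for an isolated executor. This is the escape hatch:
    whatever this function can reach, the model can now reach too."""
    raise NotImplementedError

def tool_loop(user_prompt: str, max_rounds: int = 3) -> str:
    """The model only emits text; this loop turns tagged output into
    real computation and feeds the results back as more text input."""
    transcript = user_prompt
    reply = ""
    for _ in range(max_rounds):
        reply = run_model(transcript)
        match = CODE_BLOCK.search(reply)
        if match is None:
            return reply                        # plain answer, nothing to execute
        result = run_sandboxed(match.group(1))  # the external system does the work
        transcript += f"\n{reply}\nOutput:\n{result}\n"
    return reply
```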
This reminds me of an episode of "Person of Interest" where a teacher tries to spark interest in his students by telling them that all the past and future events of their lives are already "encoded" in the digits of pi.