Pinning dependencies is trading one problem for another.
Yes, your builds will work as expected for a stretch of time, but that period will eventually come to an end.
Then one day you will be forced to update those pinned dependencies, and you might find yourself upgrading through several major versions, with breaking changes and knock-on effects to the rest of your pipelines.
Allowing rolling updates to dependencies helps keep these maintenance tasks small and manageable across the lifetime of the software.
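For context, "pinning" a GitHub Actions dependency means replacing a floating tag with an immutable commit SHA. A minimal sketch of the difference (the SHA below is illustrative, not a real release of `actions/checkout`):

```yaml
# Floating tag: resolves to whatever the tag points at when the job runs.
- uses: actions/checkout@v4

# Pinned: an immutable commit SHA, with the tag kept as a trailing comment.
# (Illustrative SHA — look up the real one for the release you want.)
- uses: actions/checkout@0000000000000000000000000000000000000000 # v4.2.2
```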
You don’t have to update them manually. Renovate supports pinned GitHub Actions dependencies [1]. Unfortunately, I don’t use Dependabot so can’t say whether it does the same.
Just make sure you don’t leak secrets to your PRs. I also usually review the changes in updated actions before merging them. It doesn’t take that much time; so far I’ve been perfectly fine doing that.
Dependabot does support pinned hashes, and even adds a comment after them with the tag. Dependabot fatigue is a thing though, and blindly mashing "merge" doesn't do much for your security, but at least there's some delay between a compromise and your workflow being updated to include it.
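For reference, turning this on is a small config file. A minimal sketch of `.github/dependabot.yml` for keeping workflow actions updated (the weekly cadence is just an example):

```yaml
# .github/dependabot.yml — minimal sketch
version: 2
updates:
  - package-ecosystem: "github-actions"   # watch action versions in workflows
    directory: "/"                        # workflows under .github/workflows
    schedule:
      interval: "weekly"                  # example cadence; pick your own
```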
Not pinning dependencies is an existential risk to the business. Yes, it’s a tradeoff; you have to estimate the probability of any dependency being hijacked in your timeframe yourself, but it is not zero.
That isn't even the biggest problem. That breaks, and breakage gets fixed. Other than some slight internal delays, there is little harm done. (You have a backup emergency deploy process that doesn't depend on GitHub anyway, right?)
The real problem is security vulnerabilities in these pinned dependencies. You end up making a choice between:
1. Pin and risk a malicious update.
2. Don't pin and have your dependencies get out of date and grow known security vulnerabilities.
Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:
Hey Chat,
Write me some basic Rust code to download a URL. I'd like to pass the URL as a string argument to the program.
Then test it and expand:
Hey Chat,
I'd like to pass a list of urls to this script and fetch them one by one. Can you update the code to accept a list of urls from a file?
Test and expand, and offer some words of encouragement:
Great work chat, you're really in the zone today!
The downloads are taking a bit too long; can you change the code so the downloads are asynchronous? Use the native/library/some-other-pattern for the async parts.
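For the curious, the end state of that prompt chain might look something like this minimal Rust sketch — kept synchronous for brevity, with the URL-file parsing from the second prompt and a stubbed-out fetch. A real version would pull in an HTTP crate such as `reqwest` (an assumption on my part; the thread doesn't specify one):

```rust
use std::fs;

/// Read a list of URLs from a file, one per line, skipping blank lines —
/// the scaffolding the second prompt asks for.
fn read_urls(path: &str) -> std::io::Result<Vec<String>> {
    let contents = fs::read_to_string(path)?;
    Ok(contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty())
        .map(String::from)
        .collect())
}

fn main() -> std::io::Result<()> {
    // Path to the URL file comes in as the first command-line argument.
    let path = std::env::args().nth(1).expect("usage: fetch <url-file>");
    for url in read_urls(&path)? {
        // Placeholder for the actual download; swap in an HTTP client
        // (e.g. the hypothetical `reqwest` dependency mentioned above).
        println!("would fetch {url}");
    }
    Ok(())
}
```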
Whew, that's a lot to type out, and you have to provide words of encouragement? Wouldn't it make more sense to do a simple search engine query for an HTTP library, then write some code yourself and provide that for context when doing more complicated things like async?
I really fail to see the usefulness of typing out long-winded prompts and then waiting for information to stream in. And repeat...
I'm going the exact opposite way. I provide all the important details in the prompt, and when I see that the LLM understood something wrong, I start over and add the needed information to the prompt. So the LLM either gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "now I see..." crap, I throw everything away, because I know that the LLM will only find shit "solutions".
This is actually a great approach. Essentially you're using time travel to prevent misunderstandings, which prevents the context from getting clogged up with garbage.
I have heard a few times that "being nice" to LLMs sometimes improves their output quality. I find this hard to believe, but I'm happy to hear about your experience.
Examples include things like referring to the LLM nicely ("my dear"), saying "please" and asking nicely, or thanking it.
Well, consider its training data. I could easily see questions on sites like Stack Overflow getting better-quality answers when the original question is asked nicely. I'm not sure if it's a real effect or not, but I could see how it could be. A rudely asked question will attract a lot of flame-war responses.
I'm not sure the encouragement itself is the performance enhancer; it's more that you're communicating that the model has the right "vibe" for your end goal.
I used to do the "hey chat" thing all the time, out of habit and from when I thought the language model was something more like AI in a movie than what it is. I am sure it makes no difference, beyond the user acting differently and possibly asking better questions if they think they are talking to a person. Now, to me, it looks completely ridiculous.
Non-native speakers find it useful since it doesn't just fix spelling but also fixes correctness, directness, tone and tense. It gives you an indication of how your writing comes across, e.g. friendly, aggressive, assertive, polite.
English can be a very nuanced language - easy to learn, difficult to master. Grammarly helps with that.