Thanks a lot for this. Also one question in case anyone could shed a bit of light: my understanding is that setting temperature=0, top_p=1 would cause deterministic output (identical output given identical input). Of course it won't prevent factually wrong replies/hallucination; it only maintains generation consistency (e.g. for classification tasks). Is this universally correct, or is it dependent on the model used? (Or a downright wrong understanding, of course?)
> my understanding is that setting temperature=0, top_p=1 would cause deterministic output (identical output given identical input).
That's typically correct. Many models are implemented this way deliberately. I believe it's true of most or all of the major models.
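To make the intuition concrete: temperature=0 is usually implemented as greedy decoding, i.e. the sampler just takes the argmax over the logits, which is deterministic as long as the logits themselves are identical. A minimal sketch (the function and logits here are illustrative, not any particular model's API):

```python
import math
import random

def sample(logits, temperature):
    if temperature == 0:
        # Greedy decoding: argmax is deterministic for fixed logits.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise, softmax with temperature, then random sampling.
    probs = [math.exp(l / temperature) for l in logits]
    total = sum(probs)
    return random.choices(range(len(logits)),
                          weights=[p / total for p in probs])[0]

logits = [1.0, 3.0, 2.0]
print(sample(logits, 0))  # 1, always
```

The rest of the thread is about the other half of the question: why the logits themselves may not be bit-identical between runs.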
> Is this universally correct or is it dependent on model used?
There are low-level implementation details that lead to uncontrollable non-determinism unless they're explicitly prevented within the model implementation.
See e.g. the PyTorch docs for CUDA convolution determinism: https://docs.pytorch.org/docs/stable/notes/randomness.html#c...
That documents settings like this:
torch.backends.cudnn.deterministic = True
Parallelism can also be a source of non-determinism if it's not controlled for, whether implicitly (e.g. via dependencies between operations) or explicitly.
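A concrete way to see why reduction order matters: floating-point addition isn't associative, so if parallel workers combine their partial sums in a different order between runs, the bits of the result can change. A tiny illustration:

```python
# Floating-point addition is not associative: the grouping (i.e. the
# order in which parallel partial sums get combined) changes the bits.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
print((a + b) + c == a + (b + c))  # False
```

Different logits in, different argmax out, potentially, even at temperature=0.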
- a collection of effects (various distortions, color glitches, demoscene-style) applied to the incoming video stream, with a maximum of 4 effects stackable on top of one another
- MIDI controller support (controlling effect actions/params via CC)
- modulation of effect params via LFOs and audio events (BPM, kick, tonal detection)
- loading of .glsl shaders (e.g. from shadertoy.com)
- dynamic input resolution; output either 360p (RPi 4/5) or upscaled to 720p / 1080p (networks like SRGAN over Hailo / RPi, or an RK3588 with a Radxa 5B SBC)
Given a 2nd screen (timeline editor), I'd love to evolve this into something like a hardware editor, somewhat along the lines of DAWs in the audio world. Most things are working; the biggest challenge now is building a control surface (buttons, rotaries + associated OLEDs, etc.) and laying it all out on a PCB, a process I don't know much about. If there's interest, comments are welcome and I could elaborate more.
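For the LFO modulation piece, a minimal sketch of the idea (names and ranges are made up here, not the project's actual API): a sine LFO sampled per frame and mapped into a parameter's range.

```python
import math

# Hypothetical sketch: a sine LFO at rate_hz, sampled at time t (seconds),
# mapped from its natural -1..1 range into the parameter range [lo, hi].
def lfo(t, rate_hz=0.5, lo=0.0, hi=1.0):
    phase = math.sin(2 * math.pi * rate_hz * t)   # -1..1
    return lo + (hi - lo) * (phase + 1) / 2       # remap into [lo, hi]

print(round(lfo(0.0), 3))  # 0.5 at t=0 (mid-range)
```

Per frame, the effect engine would evaluate this and write the result into the target effect parameter; audio events (kick, BPM) can be treated the same way, as another modulation source.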
FWIW, I care a lot more about InDesign and Premiere Pro (and I'd need the hardware acceleration to work for the latter): I barely ever use Photoshop (though I do use Lightroom Classic and, every now and again, Illustrator). But having to run Windows to do my audio/video editing work is getting unbearable recently, with all the BS Microsoft thinks it's justified in doing :(.
Great solution if you're looking to spend hours installing CS6, built for a 1280x720 display, or CS18 (with high-DPI support) with very buggy operation and constant crashes. I've been there, done that.
First $100: PDF photography courses with weekly chapters.
First $1000+: built a shop selling silk scarves (100-odd) I collected myself from Laos/Cambodia/Thailand to balance the trip budget (25yo back then, mad travelling). Sold all the scarves, closed the shop.
The “billion dollar mistake” was not the null concept itself; it was having all reference types be nullable, i.e. no distinction between nullable and non-nullable in the type system.
This. Same scenario here; I fondly remember the 1080 Ti, which was the workhorse for us alongside cloud training (startup with ~100 NLP prod models: NER, siamese, count models, etc.). ULMFiT and language transfer was the moment when upgrades felt necessary, though the 2080 Ti's VRAM unfortunately stayed the same (never had access to Titans, for instance, so my perspective may be limited).
Signed it too :) Can't remember a particular reason, though in our case (former cofounder) we were discovering the actor model, immutability, streams,.. tbh I remember a great deal of fun programming during those days, insane schedules and whatnot. Though truth be told, our most productive systems are Go these days (company sold, stack still holding).