Two quick errata: in Sound / Biology, there's a control inversion: low pitch (slider and sound) animates the short blue hairs, and high pitch animates the long red ones. In the windtunnel > Wing, lift is reported but the airstream remains undeflected.[1]
In case anyone else was curious, `ai.studio/apps` apps apparently require a Google account login by default. Google's AI Mode says allowing public access is an author option, but then (1) API costs are borne by the author, and (2) there are some (unclear) compliance implications.
A thought-provoking vision. There seem to be many, many underexplored opportunities for catalyzing social connections - matchmaking.
When I'm doing "broad and shallow" at a meetup, there are invariably moments of "oh, you'll want to talk with X over there, they <overlap on some shared intent or interest>". It can feel tragic to look out at a room of nifty people, in largely desultory conversations, knowing that there are highly-valued conversations latent there which won't occur, because our social, cultural, and technical collaborative infrastructure still sucks so badly at all this.
In lectures augmented by peer instruction, addressing the "if you think your lectures are working, your assessment isn't" problem, one version has students clicker-committing to a question answer, then turning to discuss it with a neighbor, then clickering again. One variant (which you now may not be able to use commercially, because of the failed-startup-to-bigco patent pipeline) has the system choose who everyone turns to (your phone tells you "discuss with the person behind you to the left"), attempting to maximize discussion fruitfulness, using its insight into who is confused about what (rough sketch of the idea below). So perhaps imagine Tamagotchis as part social liaison - "hey, did you know the gal at the optometry shop here also enjoys heavy bluewater sailing?"
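For flavor, a minimal sketch of what I imagine that pairing step doing - my guess at a heuristic, not the patented system: greedily pair nearby students who clicked different answers, so each discussion has a disagreement to work through.

    # Hypothetical pairing heuristic (illustrative only, not the real system).
    def pair_for_discussion(answers, adjacent_pairs):
        """answers: {student: clicker_choice}; adjacent_pairs: [(a, b), ...] who sit close enough to talk."""
        pairs, used = [], set()
        for prefer_disagreement in (True, False):   # second pass mops up same-answer neighbors
            for a, b in adjacent_pairs:
                if a in used or b in used:
                    continue
                if prefer_disagreement and answers.get(a) == answers.get(b):
                    continue
                pairs.append((a, b))
                used.update((a, b))
        return pairs

    # pair_for_discussion({"ann": "B", "bo": "C", "cy": "B"},
    #                     [("ann", "bo"), ("bo", "cy"), ("ann", "cy")])
    # -> [("ann", "bo")]; cy is left over, and a real system would fold them into a trio.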
So on the topic question: Want to incentivize greater social contact...? Increase the payoffs.
A cautionary user-experience report. The default hotkey upon download is ctrl+space: press to begin recording, release to transcribe and insert. Key-up on the space key alone constitutes hotkey release, so if the ctrl key is still down when the insertion lands, the transcribed text is treated as ctrl characters. The test app was Emacs (x64 Linux, X11, with and without xdotool).
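Fwiw, a minimal sketch of the obvious workaround on the tool's side - assuming a Python injector for illustration (pynput is real, the surrounding flow is made up): track modifier state and hold off on typing the transcript until ctrl is actually up, so the focused app doesn't see C-<whatever>.

    import time
    from pynput import keyboard

    CTRL_KEYS = {keyboard.Key.ctrl, keyboard.Key.ctrl_l, keyboard.Key.ctrl_r}
    held_ctrl = set()  # ctrl keys currently physically down

    def on_press(key):
        if key in CTRL_KEYS:
            held_ctrl.add(key)

    def on_release(key):
        held_ctrl.discard(key)

    keyboard.Listener(on_press=on_press, on_release=on_release).start()

    def insert_transcript(text, kbd=keyboard.Controller()):
        while held_ctrl:          # wait out the still-held ctrl before injecting
            time.sleep(0.01)
        kbd.type(text)            # now the transcript lands as plain keystrokes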
Since the page didn't load for me several times, and the title is ambiguous, here's the Abstract: Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.
I was struck by explicit orders being used as suggestive of both commonness[1] and rareness[2]. But maybe it resolves as validating an interpretation of commonly reported events, vs amplifying the reported unusualness of one.
[1] "Heck, Tacitus has a Roman general lay out the sequence in an order to his men" (in a context of "examples [...] are not hard to come by").
[2] "‘engage at discretion’ order needed to be given as an order [...] If [...] this was the standard way of fighting, there would be no point in Plutarch having Aemilius order it" (in a context of "Livy noting the unusual nature").
Laptops sometimes have stickers. For a time, I instead used a transparent slip cover, to vary the sticker set, user-test alternatives, and throttle conversations. Mine were science education topics (Boston/Cambridge subway). Anti-patriarchy stickers drew proto-MAGAs. Some backpacks now have low-res screens built into the back, suggesting new possibilities.
One Laptop Per Child, at its peak, generated fun continuous crowd conversations.
> a pair of glasses with a screen inside of them
I've no idea what current tech is like, but I used to proselytize aphysical UIs, where a small head motion results in a larger screen motion, to reduce neck swiveling.[1]
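(The whole trick is just a gain greater than 1 on head pose; a toy version, with a made-up number:)

    HEAD_GAIN = 3.0  # virtual display pans 3x the head's rotation (made-up value)

    def virtual_view_yaw(head_yaw_deg):
        # A few degrees of neck motion sweeps across the whole virtual screen.
        return HEAD_GAIN * head_yaw_deg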
> weirder
Laptop harness walking desks are a thing. And one can do hand and head tracking[2] (I had that setup at a meetup where the swag was little stick-on privacy shutters for laptop webcams :). Boston/Cambridge is perhaps culturally a best case for such games - I've not tried them in NYC... hmm.
> but something very complex, [...] instead sketch out a diagram on a piece of paper [...] keep a small notebook in my bag
Same. I've tried swapping in an iPad, but it hasn't stuck.
Fwiw, I've done a pinch-nail-hand-arms 1-10-100-1000 mm "body as size reference" a couple of times with kids around 5ish. And a 1000x "micro view" ("pinch is zoomed to arm size"; "it's like a scale model or doll playset - everything zoomed together") world of bacteria sprinkles, red blood cell candies (M&M Minis or concave Smarties Minis or Sweetarts - there are lots of cell-candy analogs), hair poles, and salt/sugar boxes. Stories of sitting on a grain of salt and eating... etc; pet eyelash mites. No idea if it actually worked.
I did some user-test videos, now only on archive.org.[1] Hmm... the "Arms, hands" video there now doesn't seem to play inline? - but it does play when wget'ed and opened in a browser. :/
Hmm, perhaps with flying? When stuck on the ground, people's feel for size gets poorer as things get bigger (tall buildings, clouds, map distances). I think of having 4ish orders of magnitude available for visual reference in a classroom (cm to 10 m), plus, less robustly, 100 m and km in AR. At that micrometer-per-meter scale, a grain of salt towers over a city skyline - "nano view" in [1] (eep - a decade ago now - I was about to take another pass at it as covid hit).
Hmm, err, that could be misleading... 4ish is for visible lengths in a large class. But especially in a small group, one can use reference objects of sand (mm) and flour (fine 100 um, ultrafine 10 um), with the difference between 100 um and 10 um being more about behavior and feel (eg mouth feel) than unmagnified visible size. Thus with an outdoor view (for 100 m), one can use less-abstract "it's like that there accessible length" concrete-ish analogues across like 8 orders of magnitude. Or drop to 6, or maybe push for 9, as multiples of 3 nicely detent across SI prefixes.
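(For concreteness, the salt-grain-over-a-skyline bit above is just the x1,000,000 scale factor at work; the grain size is a rough assumption:)

    scale = 1.0 / 1e-6           # 1 m shown per 1 um actual -> x1,000,000
    salt_grain_m = 300e-6        # a table-salt grain is very roughly 0.3 mm across
    print(salt_grain_m * scale)  # ~300 m at the micrometer-per-meter "nano view" - skyscraper territory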
Nice. Two quick UI thoughts. Upon loading, perhaps start with some unit selected, and a default amount of 1, so there's immediate content to be seen? And to extend the experience, maybe add a "dice roll" button, so users can "see more neat things" click-click-click without the cognitive overhead of pathing the option space.
[1] randomly, fwiw, I've used cloud deck slicing to illustrate downdraft, eg https://imgur.com/4hhZ7zq https://www.youtube.com/watch?v=0HIddtgGzDE . Or perhaps moments of "yoink" like... err, https://www.youtube.com/watch?v=dfY5ZQDzC5s&t=154s .