A File Structure for the Complex, the Changing, and the Indeterminate [pdf] (andymatuschak.org)
79 points by Smaug123 on April 15, 2021 | hide | past | favorite | 28 comments


Andy Matuschak pointed out on Twitter that this paper from 1965 has a jaw-droppingly beautiful title, but then I read it and it's actually fascinating. It's an exploration, starting from the idea of the memex (as imagined before personal computers were a thing), of what kind of structure is general enough to capture the vast, sprawling, inconstant domains of thought.


Are you familiar with the author or does the Tweet mention him? He would famously go on to spend a large part of his career attempting to implement something roughly like this paper describes, and inspired by the Memex.

https://en.wikipedia.org/wiki/Ted_Nelson https://en.wikipedia.org/wiki/Project_Xanadu


Oh, amazing - I knew the name in a "that rings a bell" way, and I knew what Project Xanadu was, but I didn't realise they were linked! Thanks :)


> but I didn't realise they were linked!

I see what you did there.


I understand the author must be *the* Ted Nelson (https://www.youtube.com/user/TheTedNelson). He has always been such a visionary, and an immensely articulate person. His YouTube channel is full of gems; his "Computers for Cynics" series is great. His chapter on how Bitcoin actually works (https://www.youtube.com/watch?v=3CMucDjJQ4E) is a must-see for journalists and people who come from non-technological domains: still current, and at the same time visionary, for a video about Bitcoin recorded in 2014.


"A small computer with mass memory and video-type display now costs $37,000"

And since $1 in 1965 is worth $7.27 today, that $37,000 is really $270,000.

Incidentally, I found https://jcmit.net/memoryprice.htm for the cost of memory over time. In 1965, memory cost $2.6m per megabyte. Today it is $0.003!
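The two back-of-the-envelope figures above can be checked in a couple of lines (the 7.27 inflation factor and the memory prices are the commenters' figures, not ones I'm vouching for):

```python
# Checking the thread's arithmetic. Inputs are the figures quoted in the
# comments above, not independently verified numbers.
cost_1965 = 37_000          # "a small computer with mass memory and video-type display"
factor = 7.27               # claimed 1965 -> 2021 dollar multiplier
cost_today = cost_1965 * factor
print(f"${cost_today:,.0f}")            # roughly $270,000

# Memory-price ratio from the jcmit.net table: $2.6M/MB in 1965 vs $0.003/MB now.
ratio = 2.6e6 / 0.003
print(f"memory is ~{ratio:.1e}x cheaper")   # on the order of 10^8 times cheaper
```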


I love these old comp-sci papers.

Makes you wonder what amazing ideas are buried in them that were ahead of their time then but aren't now.

So much of what we use today came out of academia a couple of decades ago. I've made the joke before that if you want to see what the front-end folks will be doing next year, look at what was hot research 25 years ago.


Do you have an example of this?


- D. L. Parnas, "On the Criteria To Be Used in Decomposing Systems into Modules", 1972

- B. Liskov, "Programming With Abstract Data Types", 1974

- B. Liskov, "A Behavioral Notion of Subtyping", 1994

- Waldo et al., "A Note on Distributed Computing", 1997


Thanks.


2003: https://www.microsoft.com/en-us/research/publication/a-user-...

2021: https://www.microsoft.com/en-us/research/blog/lambda-the-ult...

Not quite 25 years, but it was a paper published by Microsoft Research, about a Microsoft project, that took 18 years to be implemented.


Thanks.


http://conal.net/papers/icfp97/

Functional Reactive Animation - 1997

Functional Reactive Programming is (one of) the current hotness.
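The core idea of that paper can be sketched in a few lines: a "behavior" is a time-varying value, modeled as a function of time, and combinators build new behaviors pointwise from old ones. (Toy names of my own choosing; this is not Conal Elliott's actual API.)

```python
# Minimal sketch of the FRP denotational model: Behavior a = Time -> a.
def constant(x):
    """A behavior whose value ignores time."""
    return lambda t: x

def time(t):
    """The identity behavior: its value *is* the current time."""
    return t

def lift2(f, a, b):
    """Combine two behaviors pointwise with f."""
    return lambda t: f(a(t), b(t))

# A position that starts at 10.0 and moves at 1 unit per second:
pos = lift2(lambda x, y: x + y, constant(10.0), time)
print(pos(2.0))  # 12.0
```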


Thanks.


Bret Victor’s classic talk “The Future of Programming” is worth watching on this subject.

http://worrydream.com/dbx/


Thanks.


Bhuyan L. N.'s 1984 paper, to be concrete; but in general, Valerio M.'s papers, Lamport's papers, some old results from Knuth...


Thanks.


I remember my dismay when I realized years ago that successive generations of software developers don't know anything their predecessors did. It takes about a decade or more for them to pick it up by osmosis from their peers (or HN). I don't know what the hell is going on in Computer Science classes, but they sure aren't teaching enough history.


Your comment contradicts itself. You say that developers pick stuff up by osmosis from their peers, but at the same time don't know anything from their predecessors. How does that work?


Peers and predecessors are not the same.


Marianne Bellotti's new book, *Kill It with Fire*, has an entire chapter, titled "Time Is a Flat Circle", devoted to this sort of institutional amnesia.


I see something of this in the plumbing of git. "…copies of entries and lists? By patching techniques, of course. Variant entries and lists can take virtually no space, being modification data plus pointers to the original. When a modified version of a list or entry is created, the machine patches the original with the changes necessary to make the modified version"
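The scheme the paper describes, a variant stored as modification data plus a pointer to the original, can be sketched like this (toy names of mine, not Nelson's, and only loosely analogous to git's delta compression):

```python
# Toy sketch of the paper's "patching" idea: a variant stores only its
# edits plus a pointer to the base version, and is materialized on demand.
class Original:
    def __init__(self, entries):
        self.entries = list(entries)

    def resolve(self):
        return list(self.entries)

class Variant:
    """Modification data plus a pointer to the original."""
    def __init__(self, base, replacements=None, appended=None):
        self.base = base                        # pointer to the base version
        self.replacements = replacements or {}  # index -> replacement entry
        self.appended = appended or []          # entries added at the end

    def resolve(self):
        out = self.base.resolve()
        for i, entry in self.replacements.items():
            out[i] = entry
        return out + self.appended

draft = Original(["intro", "body", "end"])
v2 = Variant(draft, replacements={1: "revised body"}, appended=["appendix"])
print(v2.resolve())  # ['intro', 'revised body', 'end', 'appendix']
```

The variant itself holds only the diff, so "virtually no space", as the paper puts it, while the original stays untouched.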


dang, please add (1965) to the title. Ted really is a visionary.


I'm fascinated by these sorts of ideas about how to structure data for personal archives, but most of the stuff I've read about it is quite old. Are there any recent innovations or ideas in this area, or has everything basically been explored?


No.


Any paper with an Edward Gorey reference gets my vote.


Ok, so that's how an emacs buffer works.



