I have read through your pdf, and I think you make the best case for Lisp I have read in the last few years.
We can only imagine what programming languages would look like if we had abandoned specialized notations like those of Haskell or C++; C++ in particular is starting to show that such notations are probably a dead end. Or if B. Eich had been allowed to use Scheme as the built-in scripting language for Netscape. Perhaps we would not need XML, JSON, or even YAML as machine interchange and description formats?
Anyway, I am not familiar with Futamura or the yaml spec; your texts in the repo and your project are the first time I have seen this. I did a web search and read through the Wikipedia page on partial evaluation, which talks about Futamura projections. But can you please ELI5 your project for me: is this a yaml parser generator, or is it a DSL/PL parser generator? Can I specify a few rules in a language of my choice, say C, and it will generate a yaml parser and test suite in C? Or does it mean I can specify a parser for a programming language, say C or a DSL, as yaml production rules, and it will generate a parser for C or that DSL in a language of my choice, say Common Lisp? Or do I understand this completely wrongly? :)
How does this project compare to something like tree-sitter? Could this AST you build, or use under the hood (I have only glanced over it so far), be exposed to the application somehow, or is it already? For example, could we build a server in Common Lisp that reads these various language specs, builds an AST, and answers questions like: is this position in the code inside a function definition, or inside a comment, and so on? In other words, could we use it to build a tool that serves metadata about source code, so we could build things like LSP servers, indentation servers, syntax highlighters and such? Just a curious question; forgive me if I misunderstand what this does.
> setf is like referring to a mutable reference in C++ or Java.
Setf is rather a computation of a reference than a reference itself.
A reference in Java or C++ is a plain pointer, with some syntactic sugar in Java (you skip */-> when defining and dereferencing it), and a few grains of sugar sprinkled on top in C++ (it can't be null).
I am with you: I think setf makes code repetitive and more verbose in many cases. However, in this particular case, if you see
(var *some-object* value)
somewhere far away from the class definition, say in another file, can you tell immediately what that code does? Can you understand it directly by looking at it? Are you defining something? Are you setting something, or just reading something like a property list (something like rassoc)?
(setf (var *some-object*) value)
With setf it is immediately clear from the code what is going on. I don't know if that is the best example; I'm just thinking through all the arguments for and against setf :). Personally, I find it makes code more verbose in many situations.
Yeah, that is a great idea, and I agree it sounds great on paper. However, some time ago I came to the conclusion that it soon becomes quite verbose and tedious.
I think it worked better in the past, when people used acronyms and very short names for functions and variables. Today, with the self-documenting code style, names are longer for both functions and variables. Constantly typing full paths gets annoying quite quickly. The code is also more verbose to read, and less of it fits into 80 columns.
My personal conclusion is that I actually prefer to abstract those away behind a proper name like aset, put, and so on, just to make my own code less verbose.
If they really wanted to simplify the language, they should perhaps have removed 'setq' and 'setf' and kept just 'set', but with the powers of setf. Not to mention that users have to learn how to write their own setf accessors, unless it is very simple stuff (the extra bit of magic you mention).
As a "simplification" of the language I think they failed, if that was the goal. That does not mean 'setf' is not useful; on the contrary, being able to compute and set a "place" is a very useful tool. It just shouldn't be sold as a simplification.
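For completeness, here is a minimal sketch of what defining your own setf accessor can look like, so that (setf (var *some-object*) value) from the example above actually works; the class and slot names here are made up purely for illustration.

```lisp
;; Hypothetical class just for illustration; the slot's internal
;; accessor is %var, and we expose it through a reader function
;; plus a (setf var) writer function.
(defclass box ()
  ((var :initarg :var :accessor %var)))

(defun var (object)
  "Reader: return the slot value."
  (%var object))

(defun (setf var) (new-value object)
  "Writer: invoked by (setf (var object) new-value)."
  (setf (%var object) new-value))

(defparameter *some-object* (make-instance 'box :var 1))
(setf (var *some-object*) 42)  ; expands into a call to (setf var)
(var *some-object*)            ; => 42
```

Defining a function named (setf var) is the simple path; defsetf and define-setf-expander are the heavier machinery for places that don't map onto a single function call.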
> The author of “Let Over Lambda” dislikes Emacs and does not use it.
I have no idea what Hoyte likes to type in, but why does it matter which text editor he uses? Einstein didn't have a computer, not even a calculator. Do we have to use paper and pencil for all our calculations just because Einstein did? Our physics teacher in gymnasium forced us for four years to do all calculations on tests by hand, to four decimal places, with exactly that excuse: Einstein didn't have a pocket calculator. None of us became Nobel laureates in physics :).
> Also, a lot of the interactivity is required by the standard to be built in to your Lisp’s REPL, so you can do quite a bit if your REPL isn’t primitive.
Mnjah; not so much, really. Using at least SBCL from the plain command line really sucks. If you mistype something you have to retype everything, there is no history, etc.
> SBCL doesn’t even have readline
If you are on some *nix OS, you can get a long way just by using SBCL with the built-in sb-aclrepl plus linedit. Aclrepl gives you "command-like" stuff, similar to ":" in Vi (or M-x in Emacs), and linedit adds cursor motion, history, and some basic completion. I would still not type entire programs into the repl, but for running and testing code it is a basic survival kit. For me personally it is enough.
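For reference, this setup usually amounts to a few lines in ~/.sbclrc. The sketch below follows linedit's documented install-repl entry point and assumes linedit is installable through Quicklisp; adjust to taste.

```lisp
;;; ~/.sbclrc (sketch): sb-aclrepl + linedit for plain terminal sessions.
;;; Assumes Quicklisp is already loaded and provides linedit.
(when (interactive-stream-p *terminal-io*)
  (require :sb-aclrepl)                ; ":"-style top-level commands
  (ql:quickload "linedit" :silent t)   ; line editing, history, completion
  ;; INTERN avoids a read-time error when linedit isn't loaded yet.
  (funcall (intern "INSTALL-REPL" :linedit) :wrap-current t))
```

The interactive-stream-p guard keeps this from interfering when SBCL runs under SLIME/Sly or in a batch script.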
There is also the cl-repl package, which gives you native bindings and some extras if you want to go all-in on readline from within Lisp itself.
Edit: I was looking around a bit and found a nice intro to Futamura projections: https://www.youtube.com/watch?v=RZe8ojn7goo.
This is awesome :). Thanks.