Stages of denial in encountering K (nsl.com)
439 points by kick on March 9, 2020 | 422 comments


From Arthur:

    design goal: speed
    write fast, run fast, modify fast.
    minimize product of vocabulary and program size.
https://a.kx.com/a/k/readme.txt

    comprehensive programming environments 
    (programming, files, comms, procs, ...)
    are depressingly complex.
    commonlisp, c/lib/syscalls, java, winapi
    are all in the 1000's of entry points.
    many with multiple complex parameters.
    with k we try to do more with less
    in order to write correct programs faster.
https://web.archive.org/web/20041024065350/http://www.kx.com...

    > Just curious, but does the state of California charge you a large tax
    > for each character in a variable name?
    
    yes.
    
    we also try to fit our algorithms onto licence plates.
https://web.archive.org/web/20040823040634/http://www.kx.com...

For some 'licence plate' K algorithm dissections, you might like this Twitter account:

https://twitter.com/kcodetweets


The readme examples remind me of Perl code. Example:

    print<>!~($/=$")+2*map 1..$s{$_%$'}++,<>


This one comment managed to insult both K and Perl coders...


I don't think Perl coders need any help with that.


For non-Perl folks: it's a cryptic example because it uses 'system' variables in non-obvious ways, not because of the syntax. I can parse it syntactically just fine, but I have to look up the names to understand what it does. A fully un-golfed variant would require the same effort here.


Golfing is not seen so negatively for Perl. "Embrace the language" they say.



What does this do?


Don't use indentation for blockquotes. HN rendered indented lines as code, which is unreadable on mobile.


I used to program financial applications in K/Q for a stint. Very elegant and orthogonal language, and very well documented. One major issue I had with it and other APL-ish languages: lack of skimmability.

I can easily skim thousands of lines of Perl and Java in seconds and have a rough idea of what the code does. Reading K, however, requires careful and deliberate attention to every single character. I found it useful to maintain documentation on how to /read/ the code (almost a literate programming approach).


That's bad in my book.

I write a line once for every 100 times I read it.


OTOH, there is something to be said for reading fewer lines in some cases.

Conifer forest

Vs

Pine Pine Cedar Spruce Pine Cedar ...


Yeah, I agree. I wrote some mortgage prepayment models in q for a while, but didn't dip into K itself. Q is fantastic as a database query language (assuming you have time-series databases and are mostly doing fairly simple operations.) Implementing a full model in it is fine, but can get hard to maintain.

Also it has (or had, maybe they fixed it) a bunch of weird quirks, like "sum foo" skipping all the NAs in a list while "+/foo" didn't, resulting in NaN. (Or maybe it was +/ over an int array didn't work while a double array did. I forget.) Arrays of dicts were immediately turned into tables, which didn't work the same as an array of dicts, etc.

But, yeah, it's not crazy implementing things in q/K, as long as they're mostly linear things. Trees are hard, iterative methods are hard, and it's really hard to skim code, other than to just rely on names and comments.


I believe they have fixed it. I just did the following:

  q)a: 1 2 3 0N 3 4
  q)sum a
  13
  q)+/[a]
  13


The thing that's odd to me is that you could write this exact same post about other languages - a person who learns Haskell ends up writing that exact same range/reduce/plus statement. Modern lisps like Racket or Clojure take you there. Forth programmers and Joy programmers already noticed that their two languages had converged, and they'd write basically exactly this, too. Really, it's only the "mainstream" languages that are different!


A Common Lisp programmer would write it like that too. He would then disassemble the function, run away in terror and eventually come back to rewrite the whole thing with a more efficient loop expression.


That's an interesting take on things. Another one would be that ~40-50 million programmers do things one way and ~10000 (totally random guess) do it another way :-)


This was written by 'RodgerTheGreat, whose comments and submissions on this site are a treasure trove.


Thanks for the tip. Looks like they also have a webcomic: http://beyondloom.com/


His eerie mind-reading tone is entertaining and almost entirely accurate.


Look, I don't hate K. It's probably a pretty good language for the task it seems to have been designed for, which appears to be numerical computing. Being able to fit an entire program on a screen is an interesting concept; while I'm not completely convinced it should be a goal in and of itself, it's certainly something I can't rule out as a possible productivity booster.

However, I still find it problematic, and it's not really about the language itself: it's more about how it's promoted in the article, by the people in this thread, and in general. Specifically, the claims that it's perfectly readable, and that anyone who says otherwise is either stupid, lying, or too poor to understand it, are not useful. (Yes, you know who you are for that last one. You're not helping your case.) The fact of the matter is that most programmers are used to ALGOL-esque syntax, and there is significant value in being able to lift concepts and interoperate between those languages at a superficial level. Every "inspired" language hits these issues, depending on how "strange" it is: Objective-C is ugly, Lisp is alien, and on the far end APL derivatives are simply labelled as unreadable. That doesn't mean they're bad languages, but to those promoting them: drop the attitude, and understand why we have reservations. Your language isn't the "true" way to program that only the chosen elect can understand and lecture us on. Don't ignore or gloss over languages that have similar features to yours. We want to hear what the interesting things are that you can do in the language, and actual guidance for how we can adapt to it (not: "one-letter identifiers are just what we do; deal with it"). Thanks.


I love small languages (lisps, forths, apls) and think that there is much to learn from them, but you need an open mind. You need to start from zero and totally forget your ALGOL-mindset while you are learning the new paradigm.

It's like learning Chinese. Everyone learns it in China, and everyone there will tell you that it's perfectly readable and you don't need to be extraordinarily intelligent to speak it. But if you expect to figure it out based on your knowledge of English, it will be terribly frustrating. This does not mean that German or Dutch (which share roots with English) are better languages, or even easier to learn. Do not expect your Chinese teacher to give you a different answer from "these weird symbols are just what we do; deal with it".

This article is quite good. Clearly the author understands your reservations, and he is saying: give it a try and it will click. The learning material could be better, but it's there (start with Iverson and work from there). It's much easier than learning Chinese!


> Everyone learns it in China, and everyone there will tell you that it's perfectly readable and you don't need to be extraordinarily intelligent to speak it.

I'm not sure Chinese is necessarily a good example for the point you're trying to make.

Native Chinese speakers _probably_ spend more effort acquiring full competency in Chinese than native English speakers need to learn English. For one thing, reading and writing is harder to learn, even for natives (source: am Chinese and haven't acquired full literacy, despite years of half-hearted effort).

English is generally easier to learn, not only for German and Dutch speakers, but for most people in the world who speak languages that are unrelated to both of them.

One hypothesis[1] is that English has been learned by foreigners multiple times in the past and slowly lost its more difficult native features (noun declension, many forms of inflection). Chinese has tones, which are rare amongst world languages and, as many learners will tell you, difficult to achieve proficiency with as an adult.

On a practical level, the British and Americans have had hegemonic influence over the world for about 2-3 centuries, so many idioms and concepts in the language will be familiar to a learner; less so Chinese, which is far more coupled to the culture of China and its history.

Like, every schoolchild in China learns a bit of Tang poetry as part of their education, and 成語 idioms are part of daily speech across all social classes.

[1] https://en.wikipedia.org/wiki/Middle_English_creole_hypothes...


> One hypothesis[1] is that English has been learned by foreigners multiple times in the past and slowly lost its more difficult native features (noun declension, many forms of inflection).

The claim that noun declension and inflection are harder to learn isn't well founded. If your native language has these, they'll be more natural for you. All languages have semantic declension and inflection, in the sense that context signals the role of each word even if it's not syntactically inflected. For example, in English the position of a word can signal whether it is object or subject: in "John hit Sally" vs "Sally hit John", the order of John and Sally signals who hit whom. In an agglutinative language you could signal this with a suffix, so that "John hit Sallyum" and "Sallyum hit John" would be parsed the same. My native language is Turkish, and I've been learning English since I was 5 years old, but this "order matters" thing eternally confused me. It doesn't make sense to me when people say suffixes are more confusing than this order thing.


> English is generally easier to learn..for most people in the world

Ohoh, this line sparked a thought in my head:

"English is the JavaScript of human languages."

> ..slowly lost its more difficult native features

That may be true of creoles and of English spoken as a second language (which, I believe, is more widespread in the world than English spoken by natives). It may also be true about losing inflections; thank goodness, that surely must have made the language easier.

On the other hand, English as spoken by natives does contain plenty of quirks, exceptions, and phrases that don't make sense explicitly (with implicit meaning you have to have learned/memorized)..


Lisps (the real ones, at least, that rightfully use the name) do not belong in this lunatic-fringe category; they support Algol-style programming. Aside from the prefix syntax, a lot of the concepts are familiar.

Microsoft's sample application "Your First Windows Program", written in C, using Win32 calls, can be translated into TXR Lisp literally, almost expression for expression:

https://docs.microsoft.com/en-us/windows/win32/learnwin32/yo...

The "meat" part, without the FFI declarations that precede it:

  (defun WindowProc (hwnd uMsg wParam lParam)
    (caseql* uMsg
      (WM_DESTROY
        (PostQuitMessage 0)
        0)
      (WM_PAINT
        (let* ((ps (new PAINTSTRUCT))
               (hdc (BeginPaint hwnd ps)))
          (FillRect hdc ps.rcPaint (cptr-int (succ COLOR_WINDOW) 'HBRUSH))
          (EndPaint hwnd ps)
          0))
      (t (DefWindowProc hwnd uMsg wParam lParam))))

  (let* ((hInstance (GetModuleHandle nil))
         (wc (new WNDCLASS
                  lpfnWndProc [wndproc-fn WindowProc]
                  hInstance hInstance
                  lpszClassName "Sample Window Class")))
    (RegisterClass wc)
    (let ((hwnd (CreateWindowEx 0 wc.lpszClassName "Learn to Program Windows"
                                WS_OVERLAPPEDWINDOW
                                CW_USEDEFAULT CW_USEDEFAULT
                                CW_USEDEFAULT CW_USEDEFAULT
                                NULL NULL hInstance NULL)))
      (unless (equal hwnd NULL)
        (ShowWindow hwnd SW_SHOWDEFAULT)

        (let ((msg (new MSG)))
          (while (GetMessage msg NULL 0 0)
            (TranslateMessage msg)
            (DispatchMessage msg))))))
Full code here: https://www.nongnu.org/txr/rosetta-solutions-main.html#Windo...

If you look at this and the C side by side, you can easily see where the parts match up.


That's a very convenient analogy for your argument, but it doesn't make sense for programming languages in general.

It's not really meaningful to compare the readability of sentences in English versus Chinese. On the other hand, it's both meaningful and important to compare readability of program snippets - and readability is more than just the first-time learning burden.

Overly-concise code being difficult to maintain is a well-accepted fact. The response shouldn't be to demand fluency - that's just sticking your head in the sand.


Analogies are just analogies. The point of my comment was that you need to approach this with an open mind, forgetting what you already know from other languages.

> Overly-concise code being difficult to maintain is a well-accepted fact.

Your well-accepted facts do not apply to this totally different paradigm. You seem to think those well-accepted facts are universal. They are not.

I have spent the last two years learning about array languages and think it is a fascinating topic, so I tried to give some advice (based on my own experience) to those who want to make the same journey. My point is not that you need fluency (you need some fluency to understand anything complicated), it is that the fluency you have in other languages may be counterproductive. Your comment is an excellent example of this.

Try to read a tutorial. Rewrite it using "meaningful names". We have all done that at some point, and we all started finding the short names more comfortable to work with after a while. Don't take my word for it, give it a try.


> The point of my comment was that you need to approach this with an open mind, forgetting what you already know from other languages.

Why do I need to do that?

I'm interested in solving problems for people, especially using computers. To do that, I need to build teams and build software. I need to think about costs and benefits, about longevity and sustainability. A great deal of that thinking is driven by the world as it is, by the historical accidents that have gotten us to where we are.

If this is a recreational activity, not meant to be practical, then great. I know people who teach themselves Esperanto and Klingon, and they seem to have a good time. Extreme Ironing looks fun, too. But if it's going to have some bearing on my profession, then I'm unlikely to forget what I already know, because in practice the people I will work with already know quite a bit, and brains being what they are, they won't be forgetting it.


> Why do I need to [approach this with an open mind]?

Because those are the standard terms of intellectual discussion. People going into such a conversation without an open mind are called idiots, and they're not worth taking too seriously.

> I need to think about costs and benefits, about longevity and sustainability

Isn't it weird that people don't value that?

I mean, look at how many Python programmers are out there! I have programs written just five years ago that don't work in a new Python interpreter, and that's terrible for "longevity and sustainability". Good thing there are so many cheap Python programmers so we can keep the costs down: we're going to be rewriting this thing until we're old and gray!

I personally find this line of thinking quite unproductive though.

> I know people who teach themselves Esperanto and Klingon, and they seem to have a good time.

Hey that's great. I know a few Go programmers too. Loads of fun those chaps.

If only Esperanto or Klingon or Go were useful in some way....


You omitted a key piece of what I said. If you'd like to answer the question I actually asked, feel free. Bonus points if you demonstrate the values you claim to champion here, like open-mindedness and eagerness for sincere discussion.


How did he not answer the question?

You asked why you should keep an open mind when encountering and talking about a strange language. He pointed out that it's necessary for a good discussion on it. He's not wrong.

You've dismissed every point he made while not knowing a single thing about the topic at hand. That's not grounds for a proper discussion.


That is not what I asked.

What I'm responding to in the thread is a question of pragmatism. In particular, I'm taking issue with the notion that one can only approach this "forgetting what you already know from other languages".

That's very much distinct from the notion of an open mind. And it's especially distinct when we're talking about tools that must fit into a professional context that involves decades of history, millions of people, and a complicated set of economic factors.

For example, imagine that you were working in a machine shop and I came to you excited about a new screw head. If you dismissed it out of hand, I might ask you to keep an open mind while I explained why I thought it was better. But if my explanation ignored that the Phillips head was the dominant choice, with billions of drivers and hundreds of billions or perhaps trillions of screws in existence, that would be another thing entirely.

The same logic applies to conlangs like Esperanto and Klingon. Is it possible it would be better if we switched the whole world to Klingon? Sure. Might I learn interesting concepts if I studied Klingon? It's possible. Should Klingon be considered for a moment as an important thing for most people to learn? No, not at all. It's utterly impractical.

So yes, keeping an open mind is valuable. But that's not something I've ever heard seriously disputed, so editing me down to saying that is constructing a straw man. Which also isn't grounds for a proper discussion.


I read the sum of wpietri's question as saying "so sell K, then". As a reader, I then looked on for what might be a better selling of K than all the comments I'd read so far.

Instead I found silly prose examination.


If you think Go is less useful than K, you might need to get out more. At least, it is actually used more than K is.


> Overly-concise code being difficult to maintain is a well-accepted fact.

Many seasoned APL/J/K programmers state that conciseness is an asset. I would trust their judgment on that. There are ways in which that and the quoted statement could both be true at the same time.


Aaron Hsu writes about this a lot. In an APL snippet, you often don't need layers of blackbox abstractions. You can see the whole thing at a macro layer. I'm sure it takes time though.


Mathematics is also written tersely, but not this tersely; nor is terse math easy for mathematicians to read.

Terse logic is also very different from terse names; how much of K's brevity is just name compression vs logical succinctness? Haskell people could also use single-character names everywhere. When will they adopt the convenience of fitting everything into as few pages as possible?

And what's the point of fitting as much content as you can into one eyeball when working memory can't consume a whole page even of a non-succinct language like Python?


It's more than that though. APL symbols can operate on multi-dimensional data by default.


Layers of abstraction are useful, though; they allow you to concentrate on what matters for the code in question.


In many cases I'd agree, but the author I'm referring to has a neat example. His APL GPU compiler doesn't have modules or multiple layers of abstraction. He has few named variables: just a few pages of entirely data-flow/parallel APL where you can see everything at the macro level. He did a Show HN and several YouTube lectures on this. The abstractions hurt the ability to understand the code and made changes take longer.


> It's not really meaningful to compare the readability of sentences in English versus Chinese.

It is if one speaks both languages and is qualified to comment. Part of 'overconciseness' is domain/cultural experience: what is overconcise in one context is not in another.


I started with Python and still miss its lazy generators. K's grain favours calculations that drive down long 'straights' of arrays without many conditional turns, but it's eager-first, which can be a drag for some problems (e.g. making a prime number generator that I can lazily take an arbitrary number of elements from). Still, it hasn't been as bad as I feared.

I like the choices of primitives in K and the novel ways they can be combined to do things more efficiently (and with fewer pieces!) than you may expect. Good examples are 'where' (&), the grade twins (< and >), scan (\), and eachprior (':).

I love the adverb syntax: each (') lets you map a function over a data structure with zero syntactic effort, and 'over' (/) and 'scan' (\) make reductions trivial.
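
For a taste, here's roughly how those read in oK-style k (a sketch from memory; dialects differ in small ways, so I show expected results as comments):

  &1 0 1 1       / where: indices of the 1s -> 0 2 3
  <5 1 3         / grade up: the permutation that sorts ascending -> 1 2 0
  +\1 2 3 4      / scan with +: running sums -> 1 3 6 10
  -':1 3 6 10    / eachprior with -: successive differences -> 1 2 3 4
  {x*x}'1 2 3    / each: map a function over a list -> 1 4 9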

With regards to an on-ramp, I started with John Earnest's oK REPL and did a bunch of Project Euler and Advent of Code problems. I then moved to trying to understand the examples on kparc.com. I then converted a spreadsheet application from JS to k7. In all of this, John's documentation was essential.

For me, having single-character variable names isn't paramount, but in some cases - particularly functions - I can see it helps to 'drive' your mind towards finding simpler ways to do things. It lets the characters that do the 'real' work come to the forefront.


If you're going to use a language (call it X; we're not just talking about K), then you really have to know X's syntax and primitives. There's just no other way.

You also would be much better off learning X's idioms. These are standard ways of saying something. They're standard for a reason - because they work, and because everybody understands them. If you can't read them, then a lot of code is going to be difficult to understand.

That's just table stakes for using X. If you want to use X, but don't want to learn the syntax and the primitives, then you don't really want to use X. You may want the benefits that you think X would get you, but you don't actually want X.

Now for the other side: if you're a proponent of X, you need to remember that many languages are magic within certain domains, and anywhere from "meh" to horrible outside them. Take K, for instance. I'm pretty sure it wouldn't be a good fit for most embedded systems. I'm fairly sure it wouldn't be a good fit for web programming. I'm not sure I'd want to do text processing in K. So it's not "the one way" (except possibly within certain domains). It's not "the way of the future" (except possibly within certain domains). It's not "what all the smart and enlightened people are using" (except possibly within certain domains). And, unless we're working in those certain domains, it really isn't all that important whether the rest of us learn it or understand it.


The one specific concern I have is that some languages are readable without your knowing the language's syntax and primitives, because they ape them from somewhere else; K is different, and thus you can't apply your past knowledge to read it the way you usually might.


That's fair. On the other hand, if a language is from a different paradigm than mainstream languages, there's not much "aping from somewhere else" that can be done. I mean, K could ape things from APL, but that doesn't help most of us. (I guess it helps those that know APL...)

I suspect that looking like C or ALGOL would not be all that useful for K (or Lisp, or Haskell, or Forth). The semantics are too different. A surface similarity in syntax isn't enough to bridge the gap in a meaningful way.


> the claims that it's perfectly readable, and that anyone who says otherwise is either stupid, lying, or too poor to understand it, are not useful.

Yes: It is perfectly readable once you know how, and anyone who says anything otherwise is just plain wrong. Including you 58 days ago.

I don't know if you are wrong because you are lying, or you believe this because you are stupid. I wrote it off at the time as well-meaning ignorance, but it doesn't really matter: the key thing, that is, the thing holding you back from reading it, is you.

I believe you can learn to read it, and read it as well as I do, with just a few months of study. If you think anyone else is telling you otherwise, ignore them.

I don't believe you need to not be "poor" to do it - everything you need is available to you.

It's all about whether you want to do this or not.

> Every "inspired" language hits these issues, depending on how "strange" it is: Objective-C is ugly, Lisp is alien, and on the far end APL derivatives are simply labelled as unreadable. That doesn't mean they're bad languages, but to those promoting them: drop the attitude, and understand why we have reservations.

We understand, because we were once you: every X-evangelist you're talking to (where X is Lisp, or Objective-C, or in this case K) was once like you, at least in not knowing X. So we know you are wrong in a way (as we now know X) that you simply cannot comprehend yet. Think about that. Maybe re-read Paul Graham's blub article and really appreciate what it means to be the blub programmer.

Many K programmers have some familiarity with APL. Some (like myself) have used Objective-C and Common Lisp (I've got over a decade of professional history in CL), and loads of other languages. Until you learn K, all you have is our testimony; our gospel that this is worth learning. You can literally do anything in blub; being able to do it more easily is the point.


Take a break from K (or any other shorthand language of your choosing) for a year or five, then come back and try to figure out what the code you wrote when you were using it frequently is actually doing.

Now do the same with well-written, verbose C-style or Python code (no shorthand/abbreviations, no lambdas, no ternary notation, etc.).

Shorthand code in any language is bad in the long run because it's unmaintainable except by people who work with it on a regular basis. You are making the same arguments as Perl fanatics who used every possible abbreviation in their language 25 years ago. How much of that is even comprehensible now without a major reverse-engineering effort?


> You are making the same arguments as Perl fanatics who used every possible abbreviation in their language 25 years ago. How much of that is even comprehensible now without a major reverse-engineering effort?

Perl is inscrutable in a different way: it is impossible to reliably insert an instruction into a statement that lets the student observe what is going on without changing it, because program flow can change mid-sentence. This isn't true of K.

> Take a break from K (or any other shorthand language of your choosing) for a year or five, then come back and try to figure out what the code you wrote when you were using it frequently is actually doing.

I've been looking at K code for more than five years at this point, and I don't have any problems reading code written by K programmers a decade ago or more.

I think you're completely wrong about this.


> I've been looking at K code for more than five years at this point, and I don't have any problems reading code written by K programmers a decade ago or more.

I think you're completely missing the point here. The assumption wasn't that old code is unreadable, but that you have trouble understanding it after NOT looking at ANY K code for five years.


If that's true it's a trade-off which may or may not be worth it. It's not inherently bad or unacceptable.

Many other skills require a period of re-learning after a five year gap. Where do you believe K sits on the scale of re-learning effort involved, riding-a-bike seconds to minutes, or months to years of effort?

I've only learned a little APL so far, and that really didn't take much effort, so I'd be surprised if it took me more than a month of dedicated learning to get fluent in the language. If that's what it takes to learn from scratch (the language, not the concepts common to maths and many programming languages), I would expect re-learning to be possible in less than a week. That doesn't strike me as too big an obstacle in many cases.


> I think you're completely missing the point here. The assumption wasn't that old code is unreadable, but that you have trouble understanding it after NOT looking at ANY K code for five years.

Fair enough.

I don't see any evidence why that would be true, or even more true than other languages.

Do you have any?


I don't have a horse in the race here but guess that a big part would be things like single-character identifiers rather than descriptive names.

A slightly weaker but maybe more realistic example, albeit in mathy terms: pick a language L from the set of all programming languages as your next language, according to a probability distribution of commonality of use in 2020 or so, possibly skewed for your field/interests, and work on it exclusively for 5 years.

If your initial language (before L) was K, you're not terribly likely to have picked a successor language that has any relation to K. If your initial language was C, your successor is likely ALGOL-like, and you probably won't have rewired your brain totally. So coming back to old work after 5 years will likely be less of a shock.

Granted, this is like saying "K is different!", but it ties that to a real-world situation where it could bother even a dedicated K programmer.


I think that's an interesting experiment, but I don't believe it would produce the results you predict: I haven't written Postscript in over ten years, but I just picked up some random Postscript code and have no problem reading it. Why would I expect other programmers to be different in this regard?

However even if it's true for most programmers, why does this definition of "readability" have value?


It's a fair point that you have that ability and hope others will too. They might! I don't know that I would, but I might! I'm pretty stupid sometimes though.

My other comment in the nearby thread likely best addresses why I think coming back "cold" is useful, based on the assumption that being a novice and being cold are similar; maybe they're not.

With regard to picking up Postscript specifically: I'm not intimately familiar with the language, but the bits I found in a quick search look a) less approachable than JavaScript to me, so fair point on that, but b) somewhat ALGOL-like. At a glance I felt like I could pick up the flow of execution without a lot of work, though I could be wrong.

If it is kind of ALGOL-like, I feel that plays in favour of my argument a bit -- ALGOL-like languages are so ubiquitous that it's hard to really avoid their mental model for 5 years, so we're probably not coming in "cold" to one.


Postscript is a concatenative (stack-based) language, not ALGOL-like at all. Like array languages, concatenative ones are terse and regular, but reversed (left-to-right evaluation instead of right-to-left).

In my opinion, it is a fair comparison.


> Do you have any?

Excellent question! Personally, I'd argue that a language cannot be viewed in isolation from the problems it's trying to solve.

5 years ago I maintained decades old HLASM programs for IBM mainframes during a consulting gig and haven't touched anything like it since.

I'd have very little trouble reading and understanding the source code today. I couldn't say the same about the domain logic and business processes behind it, because it was very specific to the company and I've worked in a totally different field since then.

This leads me to think that it's a combination of working with similar languages or in comparable application domains that makes coming back "cold" easier.

IMHO people tend to underestimate the influence of domain knowledge in the programming world and greatly overestimate the value of particular language and framework choices...


> Excellent question! Personally, I'd argue that a language cannot be viewed in isolation from the problems it's trying to solve.

I'm not sure I'd agree with that, at least not entirely.

A language is a tool for thinking. It doesn't solve problems by itself, but it is a good language if it makes thinking easier. k is a great tool for thinking! With only 25 (or so) easily accessible symbols, it's amazing we got about a hundred primitive operations out of them, but what's really incredible is how tastefully Arthur selected those primitive operations such that the symbol could indicate an entire class of operations.

Some work well: - means subtraction and negation. Others are cute: < means grade (ascending) and less-than (and the grade of a grade turns out to be pretty useful!). Some are, however, unfortunate: + means flip and addition (which aren't the same thing at all), and I don't think + would be flip if flipping weren't so useful.
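
To make the grade-of-grade trick concrete, a minimal q sketch (q's iasc and flip keywords standing in for k's < and monadic +):

  q)iasc 30 10 20        / grade: the permutation that sorts ascending
  1 2 0
  q)iasc iasc 30 10 20   / grade of grade: the rank of each item
  2 0 1
  q)flip (1 2;3 4)       / the "flip" sense of +: transpose
  1 3
  2 4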

And it's the tool that matters to me: I don't think I ever used array_flip in PHP until I learned flip in k, and now it's "obvious" (although PHP's array_flip is limited to a single use case and requires other functions for other use cases, oh well). And I'll probably know how flip works until the day I die; if I need it (in another language), I'll probably just make it again.

The tool has survived! And it's the tool that I go rooting in languages to find.

Lisp and Erlang have some really great tools (concepts) like this.

> 5 years ago I maintained decades old HLASM programs for IBM mainframes during a consulting gig and haven't touched anything like it since.

How funny!

I've had almost the same exposure to HLASM (although it was more than a decade ago)- consulting gig, never again. Any language designer who thinks comments can begin only on the first column needs to be taken out and shot.

However I think having to refresh myself on exactly how BALR works, or what types dyadic ! takes isn't going to waste much time. Are these real concerns you have about forgetting the tools you learn?


> I believe you can learn to read it, and read it as well as I do, with just a few months of study.

Then it's not perfectly readable. That, or your definition of readable is useless because then any programming language is readable given enough study time.

> being able to do it easier is the point.

I still haven't seen any example where it is actually easier. Shorter? Yes. Easier? Very debatable.


> Then it's not perfectly readable. That, or your definition of readable is useless because then any programming language is readable given enough study time.

Given a fragment of Chinese, if you do not know Chinese, would you say it is unreadable?

What possible definition of "unreadable" could you have that would make that a useful statement?

No, I don't think describing a language as "unreadable" is particularly helpful. A very careful reader might interpret your claim that "K is unreadable" as "gjulianm can't read K", but I wonder what exactly is the point of even saying it in the first place? Maybe you mean "approachable" -- a fully reasonable statement, but one reflecting your inability to say anything else meaningful about it.

On the other hand, if I know Chinese, it's entirely possible for me to point to a gibberish statement (for example, on a tattoo), and say that it's unreadable. I'm definitely referring to something else here.

Now maybe we just accept having two different definitions of "readable", but I think yours does real harm. I think referring to something that you do not understand with the same word that means nobody can understand it might confuse people, and I just cannot understand, for the life of me, why so many programmers would want to intentionally confuse people like this.

If I pick up some random K code, I can read it. I could make a change to it, and other K programmers will understand what I did and why. That's good enough for me.

> I still haven't seen any example where it is actually easier. Shorter? Yes. Easier? Very debatable.

I think the max of list example is a good one. I've also previously observed some examples that might be less mathematical.

[1]: https://news.ycombinator.com/item?id=22467866 - Here I beat a custom "database" by 50x with one line of q (a very similar language to k)

[2]: https://news.ycombinator.com/item?id=21799376 - A six-character solution to the rock-paper-scissors game, using q to analyse the creation of the solution

[3]: https://news.ycombinator.com/item?id=21680743 - Here's our friend the bin operator again, showing how it appears in production.

[4]: https://news.ycombinator.com/item?id=19659590 - One of my favourite k/q features is views.

[5]: https://news.ycombinator.com/item?id=16851862 - String parsing is "obvious" in k/q where it isn't in other languages

In these cases (and many others) q/k is "easier" because it gets the job done faster. If you can beat my k with your JavaScript then good for you (And I'd like to see it! I can always get better!), but I usually can't. I find the solution faster with k, the program runs faster with k, and those are the things that matter to me when I say "easier".

I absolutely do not mean "more approachable".


Sorry to start popping up on all of your threads!

Your points are really good here, and writing code that runs faster and finding a solution yourself faster are clear wins for a language. I think the value of those wins is subjective, though.

If you need speed of execution, or if you're working in a green-field scenario and need to get your work done super fast, your wins are the most valuable ones.

But, for example, if you work on a CRUD application that handles edge cases for specific customers (one someone wrote 11 years ago when you used a different ORM, but that still matters today), concision and execution speed aren't really the triumphant traits of the code. Clarity to others who may not be as well-versed as I am is the key thing I care about.

Does Javascript, or Python, or whatever else achieve that? Gosh, I don't know. But I'm scared of thinking about k as a total novice, so... I think they win against it up to some point. Maybe the level at which I'd know enough k to not be bothered by it is low enough that this doesn't matter, but ... it's still scary!

If I feel that way, why would I choose to use it knowing that I'd need to find the rare someone who overcame that fear, just to work on my CRUD app? This isn't k's fault, but k is not the right choice here, and so k has lost on this attribute of approachability, which is "readability for novices", and is the most valuable for a huge number of people.


I mean, if you work on a CRUD application, you're kind of stuck where you are. If it's written in Python, I think it's naive to think that your JavaScript knowledge will be very helpful: so many things are just too different. The JavaScript developer is looking for promises and maybe event streams, but 11 years ago Python asyncio was uncommon, so you've got a thread-per-worker model. It's terrible any way you cut it!

What you've got that's similar: block structure, single dispatch, everything's an atom, and a few seemingly familiar keywords like "for" and "if" that, despite working differently, often appear in familiar patterns. I mean, how different is "for k in x:" from "x.forEach(k =>"? It seems like enough of these things dominate those first few steps into the unknown that they provide a comforting handrail for the descent into the coming madness, but it's difficult for me to believe they provide anything more than that.

And in my experience that kind of comfort can be a lie: to read a fragment of well-documented code, to be amazed at the work and thought that has clearly gone into it, only to discover the fragment contained a bug I skipped over (since that area was so well tested and so clearly good) simply because I didn't bother to actually read it. So now I read. And maybe I read JavaScript and Python a little slower than others; what's important to me is that I also read JavaScript and Python slower than I read k.

I think that's a valuable trade.

But I think you're touching on something important: it's very difficult to mix a little bit of k into an existing application to "try it out". I hope there are things we can do to the environment to make k more accessible and approachable that don't involve cutting out this valuable thing, because the thing that makes k unique is just so good I don't want to give it up.


I'm not sure many people use K to write CRUD applications in the way you're thinking.

Kdb+ is K and a K-like query language (q) fully integrated with an on-disk array database built for array languages and stock/banking usage. Can you write a web app in it? Sure, but I doubt very many do, and those that do probably just use K as the numerical engine and JS for everything else.


The comparison with natural languages like Chinese is not very apt, because everybody has a native language. In that sense, programmers are more of a 'tabula rasa' (not completely, of course), which makes for more appropriate comparisons of readability. Usually, when people talk about "readable programming languages", the definition is taken as the amount of effort an average programmer needs to read a certain "standard" piece of code. You can of course categorize (readability of mathematical programs, or of system interactions?) and specify (for general programmers? for data scientists? for functional programmers?) but overall that's it.

Regarding your examples:

1. If you are using a database then it's not an argument for the language, I think.

3. A binary search is hardly a selling point of a language.

4. Views are cool, I reckon. Although K is not unique in that regard.

5. That solution is, from my point of view, suboptimal and non-obvious. It's not only harder to understand than the naive search but it also performs algorithmically worse in all situations: it always performs n*m comparisons (n, m being the string and substring lengths respectively), plus the hidden cost of the rotation (which could be zero or non-zero depending on how it's implemented), plus the AND of every bit, plus the search for the match position.

I also left the second one for last, because that, to me, is an anti-example. Look at the things I need to have "in memory" in order to understand the snippet `Paper`Rock`Scissors@&/"ki"?

"ki"? means search for "k" or for "i"? I have to assume it's what that is, but I can't reconcile that with the documentation of the find function https://code.kx.com/q/ref/find/ because I don't know where comes the function argument.

* / means 'over'. Together with & (which somehow is both 'and' and 'less than'? that's very confusing) it looks like it takes the minimum. I honestly do not know what it's meant to do at this point, because I don't know what kind of object "ki"? is.

* @ means 'apply', and here I don't get how you are applying three actions to what, a list? Because I get that ` means 'print'. Knowing what the code does, I assume that when it finds 'k' (index 0) it does the first action (Paper) and so on. It looks like the 'find' function returns the length of the list when an element is not found (weird, because it is a very good way to facilitate mistakes and induce race conditions), so that's how you get 'Scissors' in the other cases.

To me it's weird that you put this up as an example: a code example that, despite being pretty simple (if the input has k, return Paper; if it has i, return Rock; else return Scissors), I am not able to understand even with the documentation at hand. Given this example, I find it hard to believe that you can pick up any K code and understand it quicker than you would in other programming languages. And I honestly doubt that it's faster than a for-loop in C.

Yeah, you can say you understand it. But you need a bigger "translation table" than in other languages, which adds to the mental concepts you already need to keep in mind while programming. That's not to say it's impossible, but it's more mental work for, as far as I have seen, very little benefit. It can work better for you, yes, but it's very possible that you're just used to it and don't notice the extra burden of dealing with that code.


> The comparison with languages as Chinese is not very appropriate because everybody has a native language. In that sense, programmers have more of a 'tabula rasa' (not completely, of course) that makes for more appropriate comparisons of readability.

How do you figure? Programmers, too, have a "native language" (usually the one they use the most, or the one that has the most powerful features they know how to implement).

> Usually, when people talk about "readable programming languages" the definition is taken as the amount of effort an average programmer needs to read a certain "standard" piece of code.

I think that's a load of shit. What would be the point of communicating such an opinion to others?

> 1. If you are using a database then it's not an argument for the language, I think.

It's built-in to the language. You don't call Python a database because it can unpickle things.

> 4. Views are cool, I reckon that. Although K is not unique in that regard.

Excel has views. I'm not aware of any other language that has them. Do you have some examples?

> I can't reconcile that with the documentation of the find function https://code.kx.com/q/ref/find/ because I don't know where comes the function argument.

See the first example under type-specific uses of find. x (the left argument as given in the example) is a simple list.

> / means 'over'. Together with '&' (which somehow is both 'and' and less than? that's very confusing)

https://code.kx.com/q/ref/accumulators/#unary-application

https://code.kx.com/q/ref/lesser/

x&y is the lesser of x and y. You can see in binary that this harmonises with logical and:

    00001110b&11110111b
    00000110b
> I get that ` means 'print'.

No. `foo is a symbol foo. A symbol by itself is an atom, but `a`list`of`symbols can be written just by writing them together.

https://code.kx.com/q/basics/datatypes/#symbols

> code example that, despite being pretty simple (if the input has k, return Paper, if it has i, return Rock, else return Scissors) I am not able to understand it even with the documentation at hand.

Actually, the point is to look at the analysis to see how I chose k and i. If you already know you can solve the game by searching for "k" and "i", then you've missed the point: array programming languages make this obvious.
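
To put the pieces together, here's a minimal walkthrough in q, wrapping the snippet as a function of the opponent's move (q's min keyword standing in for &/):

  q)f:{`Paper`Rock`Scissors@min"ki"?x}
  q)"ki"?"scissors"                      / find: position of each character in "ki"; 2 = not found
  2 2 1 2 2 2 2 2
  q)f each ("rock";"paper";"scissors")   / the minimum index selects the reply that beats the input
  `Paper`Scissors`Rock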


> How do you figure? Programmers, too, have a "native language"

Unless everybody is born already knowing a language, no, we don't. Even 'the one they use the most' varies over time.

> What would be the point of communicating such an opinion to others?

An easy use case: you're in your company and you need to do a simple script for some computation. The more readable the language you use is, the more coworkers will be able to find a bug/reuse it/modify it without requiring your help.

> It's built-in to the language. You don't call Python a database because it can unpickle things.

I read more of the parent comments on that link. I didn't notice that the OP started with an unsorted dataset while you did a binary search on a sorted one. I thought the "kdb native format" meant more database magic than just serialization.

> Excel has views. I'm not aware of any other language that has them. Do you have some examples?

--C++ has string_view at least. I think C# had also views of some sort but whenever I search 'C# list/string view' I get MVC or WPF examples about unrelated things.--

Misunderstood what a view was, IDK why I thought it was string/list views.

> x&y is the lesser of x and y. You can see in binary that this harmonises with logical

"harmonises" in the sense that it takes one of the most common programming symbols and changes it? I don't know of anyone that sees "2 & 3 = 2" and thinks "yeah this makes sense".

> Array programming languages make this obvious.

How? First, your analysis line is even more cryptic than the resulting code. And I don't think the idea of "find the minimal distinguishing characteristics between the outputs" can only be born out of array programming languages. Don't get me wrong, I think it's smart, but I don't see how the language makes that obvious.

Edit: Sorry, I misunderstood what a view was.


> Unless everybody is born learning a language, no we don't. Even 'The one they use the most' varies over time.

Nobody is born having learned a language. It's something even babies have to learn and they know nothing!

People can even "switch" their native spoken language up to a point although it seems to get much harder as they get older and more experienced with one language, as they're constantly comparing their aptitude to their "native" one -- just as they do with programming.

> An easy use case: you're in your company and you need to do a simple script for some computation. The more readable the language you use is, the more coworkers will be able to find a bug/reuse it/modify it without requiring your help.

The decision to bring another language into a company requires different considerations for every company. The one I work for has a lot of q/k programmers, so this isn't a problem for me. I like programming in k, so it makes sense I might want to be around other k programmers.

I think if you decided to write your "simple script" in even a well-known language like JavaScript, you'd cause some grumbles in a Python+C# shop; k isn't special in this regard.

> I read more of the parent comments on that link. I didn't notice that the OP started with an unsorted dataset, you did binary search on a sorted one.

HIBP offers the data pre-sorted, so I used that. I could have used asc (actually sort) instead of `s# (an assertion that it is already sorted) if I'd used a different file, but I value my time.

> I thought that it the "kdb native format" meant more database magic that just serialization.

No. It's literally just mmap. There's a header on the file describing the array shape, and indicating that it is sorted (so bin will be accelerated), but that's it.

kdb is a database in exactly the same way that Python could be if you just pickle/unpickle everything, except it's actually fast enough that people do this, even for large data sets.
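
A toy version of the pattern, with a made-up file name:

  q)`:pw set asc 10000?1000000    / asc applies the `s# sorted attribute; set writes it to disk
  `:pw
  q)v:get `:pw                    / reading it back is essentially an mmap
  q)v bin 123456                  / bin: binary search over the sorted data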

> "harmonises" in the sense that it takes one of the most common programming symbols and changes it? I don't know of anyone that sees "2 & 3 = 2" and thinks "yeah this makes sense".

If you write something like (f=42)&(g=69) you expect "&" to mean "and" here, right? Well if f is an atom, then f=42 returns either a 0b or a 1b based on whether f is 42. What if f is a vector? Well, let's see:

    f:1 42 2
    g:4 69 4
f=42 returns 010b and g=69 returns 010b -- do you see why? Now 010b & 010b should return 010b for the same reason that 1b & 1b should return 1b (and 1b&0b should return 0b).

"&" is just generalised from here to work the same way non-binary values. Having it convert to binary-and would be very confusing for vectors, and although I can see a consistent way it could work, I don't see the value in that operation.


> Nobody is born having learned a language. It's something even babies have to learn and they know nothing!

I meant a programming language (it gets confusing, sorry). But anyway, our native language is much more embedded in the brain than programming languages are.

> I think if you decided to write your "simple script" in even a well-known language like JavaScript, you'd cause some grumbles in a Python+C# shop; k isn't special in this regard.

Yes. And I'd cause even more grumbles if it was written in Haskell, and even more if it was in K. That's the point.

> HIBP offers the data pre-sorted, so I used that.

Yes, and that changes how you access the data. So the evaluation was not fair, as the OP was not using sorted data.

> kdb is a database in exactly the same way that Python could be if you just pickle/unpickle everything, except it's actually fast enough that people do this, even for large data sets.

That's nice; I actually wish Python had better serialization facilities.

> f=42 returns 010b and g=69 returns 010b -- do you see why

What if "f:42 1 2"? f=42 returns 100b (is that a value or a vector?) so under the common "and" definition "(f=42)&(g=69)" should return 000b, under the "less than" returns 010b?


> our native language is much more embedded in the brain than programming languages.

I think it's possible that might have more to do with the fact we use "our native language" more.

> Yes. And I'd cause even more grumbles if it was written in Haskell, and even more if it was on K. That's the point.

I don't know. I work in a place with a lot of q/k programmers, so this isn't a problem I face.

> Yes, and that changes how you access the data. So the evaluation was not fair, as the OP was not using sorted data.

You're mistaken. They did use sorted data; they just didn't know about the presorted file, so they also wrote code to sort it.

> is 100b a value or a vector?

It's a vector of three elements (bits: zeros or ones).

> What if "f:42 1 2"? f=42 returns 100b … so under the common "and" definition "(f=42)&(g=69)" should return 000b

Yes that's right, and q/k would return 000b in that case.

> under the "less than" returns 010b?

100b<010b is 010b. We need something that gives us 000b and that operation is "and". Say I have a table that looks like this:

    f  g  k 
    --------
    10 9  kf
    20 8  je
    42 69 kf
    42 0  lk
That's written like this:

    t:+`f`g`k!(10 20 42 42;9 8 69 0;`kf`je`kf`lk);
k has tables as a data type. You can think of them as a list of dictionaries if you want:

    q)a:`f`g`k!(10;9;`kf);
    q)b:`f`g`k!(20;8;`je);
    q)c:`f`g`k!(42;69;`kf);
    q)d:`f`g`k!(42;0;`lk);
    q)(a;b;c;d)

    f  g  k 
    --------
    10 9  kf
    20 8  je
    42 69 kf
    42 0  lk
but I only do this to reinforce the idea this is a basic data type in k.

I can write t.f=42 and get 0011b and t.g=69 and get 0010b. I can do queries like this:

    q)select from t where (f=42)&(g=69)
and that's basically the same as this:

      t@&:(t.f=42)&(t.g=69)
    +`f`g`k!(,42;,69;,`kf)

(&: is "where" and not to be confused with &)

And from there it should be clear why we need this "and" behaviour. The question is what a&b should do, when they're not bit vectors, that would be compatible with this behaviour. 42&69 should return 42 because it's the lesser; 0b&1b returns 0b because it's the lesser... Mathematics is already quite comfortable with the relationship between logical-and and min, so this has a precedent.


Just as a datapoint: I have never written any code in APL, J, K, Q, etc., and I think & = min is perfectly defensible. If I had to switch regularly between K and (say) C or Python where & is bitwise (also perfectly defensible, of course) I'd probably find it irksome and make mistakes from time to time, but I don't think it's an unreasonable choice.

(But: I am a pure mathematician by background, and the idea that logical AND is a special case of MIN is pretty familiar to us. The thing whose only values are TRUE and FALSE is a particular Boolean algebra. A Boolean algebra is a particular kind of lattice, and lattices have MIN and MAX operations, though we usually call them MEET and JOIN instead. I would expect & = min to be less comfortable to people with different backgrounds.)

[EDITED to fix an inconsequential typo.]


> "harmonises" in the sense that it takes one of the most common programming symbols and changes it? I don't know of anyone that sees "2 & 3 = 2" and thinks "yeah this makes sense".

There are a few of us in this thread, trying to explain why.

You find it outrageous that 2&3=2; almost everyone (all the non-programmers of the world) will find i=i+1 more outrageous. You get used to it quite fast. It's obvious as soon as you realize that, with booleans, min is and, and max is or.


> will find more outrageous that i=i+1

Yep, it is hard to understand. That's precisely why I don't like having even more unintuitive things.

> It's obvious as soon as you realize that, with booleans, min is and, and max is or.

But it's not min and max with the rest of types.


> But it's not min and max with the rest of types.

Yes it is: & is min for all types including boolean, where it is "and" as we can see above. Similarly | is max for all types including boolean, where it is "or". You can convince yourself in binary, or pick up

> it is hard to understand. That's precisely why I don't like having even more unintuitive things.

If I tell you something is "intuitive", I do not mean you should find it approachable knowing only what you already know. As with statements about "readability", I find this personalised definition doesn't promote a meaningful discussion, as it depends too heavily on social factors.

Now I don't deny those social factors are important, but if you really mean that, you're basically rejecting learning anything about X because (at least as a factor) nobody else knows it.

Are you sure you want to be that person? Does anyone?

No of course not. We want to be scientists and evaluate these tools on their own merits. If they are truly new and novel, then we would not expect them to be popular, and so we cannot (if we are to be good scientists) reject them for that reason.

It is for this reason I recommend that when someone says "it is readable" we always interpret it to mean "readable, if one knows how", and similarly that "it is intuitive" should carry an implicit "when you are thinking a certain way".

If we do not do this, then we cannot get any value whatsoever from what the other person is saying; if I truly imagine that is what they mean -- that they have no interest in being a good scientist and having a discussion about the merits -- then they're definitely not worth talking to.

So let me say something about "intuition" under this definition: To me, it is important to understand the relationship between the intuition I can have (or generate) about something, and the memorisation and taxonomy that something requires. "&" is extremely intuitive in this sense because it always has the same definition for every type (It always means the lesser). Most things in k are intuitive like that, and those things that are not, tend to be useful enough that they're worth memorising.

An example of something like the other is "?" which usually means "draw" (As in, that's the most common meaning). You might say 10?100 to draw ten numbers from 0-99. But what would `x?`y? How many is an `x? So k gives another definition to this form (enumerate). This is not intuitive, but if the definition is useful, we will memorise it. This one turns out to be useful.
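
To illustrate the overloading at a q prompt (the drawn numbers will of course differ from run to run; the second form, with a list on the left, is yet another meaning, "find"):

    q)10?100      / draw: ten random numbers below 100
    46 15 64 5 12 13 10 55 4 88
    q)`a`b`c?`b   / find: index of `b in the list
    1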

For the most part, and in the way that I mean, k is very intuitive, and where it is not, it tends to be useful. These are good things!


1. If you are using a database then it's not an argument for the language, I think.

The database is a piece of the language, though, and the language ships with it. Python has a lot of very complex things in its standard library; C without a standard library is foreign to most programmers. Why do they get a pass while k doesn't?

It can work better for you, yes, but it's very possible that it's just that you're used to it and you don't notice the extra burden of dealing with that code.

It's not like 'geocar is foreign to writing in other languages. His GitHub (github.com/geocar) is full of projects in other languages, some fairly recent. This may make sense as an accusation for someone who doesn't write any other code, but I don't think it's a fair accusation here.

If there was some intense burden from writing k in comparison to ALGOL-like languages, would 'geocar have written an operating system in it rather than C? That doesn't seem apparent to me.


> Why do they get a pass while k doesn't?

When we are discussing languages, they don't get a pass. Python has a great math library with Numpy, but it doesn't mean that the language itself is optimal for math-related problems. And specifically in that example, using an external database when defending the speed of the language... meh. It's as if I said that Python is very fast and showed a Numpy calculation.

> This may make sense as an accusation for someone who doesn't write any other code, but I don't think it's a fair accusation here.

It's not about whether you write other languages or not. It's about being so used to something that you don't notice the burden. I am very familiar with some of the projects and languages I work in and I feel comfortable with the codebase; I get used to the mental burden of working with them and end up not noticing it. That doesn't mean it is not there.

> would 'geocar have written an operating system in it rather than C?

Has he? Couldn't find anything (I'd like to see it, honestly, seems a fun experiment).


> And specifically in that example, using an external database when defending the speed of the language

kdb isn't written in an external language.

> Has he? Couldn't find anything (I'd like to see it, honestly, seems a fun experiment).

Not entirely him, of course. There are a few remaining files on kparc.com:

kparc.com/z/ kparc.com/$/

and so on.

Not free software, and they seem reluctant to show it off. 'geocar has done a live demo of a primitive version, though: https://www.youtube.com/watch?v=kTrOg19gzP4 (2014)

The archive is more interesting:

https://web.archive.org/web/20130830071750/http://kparc.com:...


> kdb isn't written in an external language.

But it's an external database with its code and its optimizations. In this case it seemed that it was just doing a binary search, but it was using an already sorted dataset that the OP of that comment wasn't using.

> kparc.com/$/

Well, I can't get anything out of this. See http://kparc.com/$/file.k. The language is cryptic and the variable names are even more cryptic. It's really hard to see what this code even does. I can't honestly believe anyone that says that's a good way to create code.


> But it's an external database with its code and its optimizations.

You're completely mistaken. It's not an "external database". That's a regular, ~600kb interpreter download from kx.com.

Those lines are all the lines typed into a q) prompt one at a time on my laptop (a 1.6GHz MacBook Air).

> In this case it seemed that it was just doing a binary search,

Yes. That's all okon does as well, they just do it over a B-tree instead of a contiguous list. If you read the writeup you'll see the author actually eschewed a contiguous list for reasons of random insertion, then turned to a B-tree generation algorithm that couldn't handle unsorted input. I suppose they just forgot what they were doing at some point.

> but it was using an already sorted dataset that the OP of that comment wasn't using.

Again, you're mistaken. The author (Stryku) might not have known the sorted file existed, but the first thing they tried to do was sort it. They just had a problem sorting a 22GB file because they were using terrible tools. They had to make tools because they didn't have any good ones. The rest of their algorithm took advantage of the fact the input (to these later stages) was sorted.

> Well, I can't get anything out of this.

> I can't honestly believe anyone that says that's a good way to create code.

I spent 5 minutes to write something (in q) someone else wrote over twenty days (in C++).

I can't honestly believe anyone that thinks spending twenty days on a problem is better than five minutes.

If you can't get there, it's going to be really hard to talk about what's amazing in k!


> I can't honestly believe anyone that thinks spending twenty days on a problem is better than five minutes.

Nobody thinks that and I didn't say that. I was talking about that specific file (although the rest of them are equally unapproachable).

> If you can't get there, it's going to be really hard to talk about what's amazing in k!

Well, the thing is that at this point the only amazing things that have been talked about are that it's fast (in some specific use cases of data queries) and that it's extremely concise. What more is there to K? What more reasons are there to use it, and what downsides does it come with?


> Nobody thinks that and I didn't say that. I was talking about that specific file

I hope nobody thinks that, but unless you can come to terms with the fact that you're wrong, and that this is a completely fair comparison, you're going to be the guy who writes okon2 at some point instead of using a better tool.

> the only amazing things that have been talked about are that it's fast (in some specific use cases of data queries)

Those weasel words are preventing you from seeing what should be obvious:

The k program is faster, shorter, obviously correct, and it took less time to write.

That's the amazing thing. Who doesn't want that?

> What more reasons are there to use it?

This is the wrong way to think about things, because there's an infinity of such reasons. Instead, invert it: You should always use the best tool you can. If you don't know k, it can never be used even if it would otherwise be the best tool.

I don't recommend people use k (except when it works), but I recommend people learn k because it'll make them better programmers.

> what downsides does it come with?

The biggest downside is that you don't know it, and the only person who can fix that is you


> But it's an external database with its code and its optimizations. In this case it seemed that it was just doing a binary search, but it was using an already sorted dataset that the OP of that comment wasn't using.

I again point to the C standard library. Modern standard-compliant C can do almost nothing without it. I don't even think it can handle IO without it.

> The language is cryptic and the variable names are even more cryptic.

You're thinking of it in a light that doesn't help you understand; don't think of it as a variable, think of it as a definition. "jk is defined as..."


> You're thinking of it in a light that doesn't help you understand; don't think of it as a variable, think of it as a definition. "jk is defined as..."

I was just trying to read the code for "file" and see if I understood a little bit of the implementation they did. In this case I don't even know where the function that opens a file or writes to a file is. I don't think that's good code, and I don't believe that thinking of it in terms of a definition is going to help.


Truly grokking Python as a first language also took me a few months of studying it and writing it, so the author is probably trying to say it's the same as any technology. The only caveat is that once you learn something like ALGOL, you can kind of pick up similar ones fairly quickly, while something like APL or K requires you to start over again.


Grokking a language in a few months is different from being able to just read a language in a few months. It's not just the paradigm change: Haskell, Julia, Mathematica or MATLAB for example are far, far more readable than K.


I think Julia, Mathematica, and Matlab aren't too bad, but have never had too much luck with Haskell despite spending time reading many blog posts and an entire book. APL was MUCH easier for me personally to start doing real things with quickly. Maybe because my experience is almost entirely with dynamic languages, I don't know. Point is, readability isn't an absolute. Haskell is easier for you, but APL is easier for others too.


Moreover, if I need to keep in my head the corresponding concepts for all of these symbols, it's like learning a new natural language. We sometimes forget that one of the most important features of a programming language is to unburden the programmer from the language itself and let him focus on the concepts. I also don't see the point of optimizing the number of lines of code if the complexity per character is increased (did I just invent a KPI here?)


> I also don't see the point of optimizing the number of lines of code if the complexity per character is increased (did I just invent a KPI here?)

It doesn't increase linearly though. #:'=: is five characters that mean as much as 68 separate lexemes in Lisp:

    (defun count (list)
      (let ((hash (make-hash-table)))
        (dolist (el list)
          (incf (gethash (cadr el) hash 0) (car el)))
        (let (result)
          (maphash (lambda (key val)
                     (push (list val key) result))
             hash)
          result)))
so you don't need all that. You don't even bother to create that "helper" function because you don't need it.

The choice of operators (and their definitions) was made with great care in an effort to maximise this effect.
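
To get a feel for what #:'=: computes, q spells the same composition with words (count each group), so a toy session should look something like this:

    q)count each group "mississippi"
    m| 1
    i| 4
    s| 4
    p| 2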


That's just using nothing but the features of ANSI Lisp, which was at the time of its standardization already criticized for being large and facing pressure to stay small.

The hashing stuff feels a little "bolted on" in ANSI CL. Add-on libraries can round out the functionality.

In another dialect, TXR Lisp, the whole (let ...) part above reduces to this:

   (hash-pairs hash) ;; get key-value pairs as two-element lists
such a function could be had in ANSI Lisp.

To reduce a sequence into a histogram returned as a hash in TXR Lisp, we would do a group-reduce

   [group-reduce (hash) identity (op succ @1) INPUT-LIST-HERE 0]
The idea here is that the input items are projected through a function (here specified as identity, to take the items themselves) and grouped into buckets by the projected value.

Within each of these buckets, an independent reduce (i.e. left fold) is going on, whereby a hash table holds the reduce accumulators.

For maximum flexibility, group-reduce asks the caller to specify the hash table rather than making one implicitly; hence the (hash) argument.

Likewise, the initial value for the reduce (shared by all the buckets) is specified explicitly. reduce is too general a concept to justify making it a defaulted optional argument that goes to zero.

Here, we somehow abuse reduce. We start the accumulator at 0, and at each reduction step, we ignore the new value coming in, and instead produce the successor of the accumulator as the new accumulator value, so all we do is count the reduction steps in each bucket, thus producing a frequency histogram.

(op succ @1) is needed rather than succ because the function it denotes takes two or more arguments (it gets called with two), whereas succ requires exactly one argument. If succ ignored arguments after the first one, it could be like this:

  [group-reduce (hash) identity succ INPUT-LIST-HERE 0]
But relaxing the checks on the number of arguments in library functions for the sake of code golfing isn't a good idea. Further, if the function constructed the hash implicitly, and geared toward numeric processing, making the default accumulator zero, it could look like:

  [group-reduce identity succ INPUT-LIST-HERE]
and if we cripple group-reduce by taking out the projection we can get it down to:

  [group-reduce succ INPUT-LIST-HERE]
Basically, Lisp can be succinct to the point of diminishing returns, if you have the right set of functions for the task at hand.

The traditional, mainstream Lisps assume that the developers will make these for themselves, especially for tasks outside of list processing.


I think there are two worlds.

The one where shallow, social compatibility is seen as important.

The one where depth and sharp-looking concepts are important.

If you only care about your devs being able to read some business logic, then fine, syntax familiarity is great. If you care about advanced problem solving, it is unimportant. And people prefer to iterate on concise formulas presenting powerful concepts rather than on instance-level readability (as in overlyVerboseNamingSchemes).


It might be simpler to view these languages from the perspective of "general purpose" or "specialized." General purpose languages rely on social compatibility because they're meant to handle a wide range of common development. Specialized languages can ignore familiarity because they're used by specialized teams for specific purposes. That appears to be the case with K, which (AFAIK) gets a lot of use in financial modeling.

Could K become a general purpose language? Maybe, but it always strikes me as odd that its feasibility as a mainstream language comes up so frequently when K is discussed. It's not the high-order bit of the conversation.


No, if you care about advanced problem solving, you still want your solution to be readable and maintainable with as little fuss as possible.

The idea that concise formulas provide this (and that more modular solutions can't) appears to be a conceit.


Putting a space between discrete symbols would have helped the first example considerably

    + / ! 1000


> Specifically, the claims it's perfectly readable, and anyone who says anything otherwise is either stupid, lying, or too poor to understand it are not useful.

+1. All of these "compact" languages (I'd even extend it to Perl to some extent, the way some write it) are seriously lacking in the ergonomics department. Sure, J Leet Hacker might have no problem parsing pages of the stuff. What about the coworker with dyslexia, ADHD, vision problems, etc.?

Even if you aren't, what are the real gains? It seems resistant to tab-completion, git diffs are probably near-inscrutable, grepping for snippets is probably an exercise in judicious application of backslashes... it's cute, but I don't see where this holier-than-thou attitude is coming from (not that ANY language justifies that attitude).

Edit: oops, touched a nerve, I see. I'll add to the above the 7±2 rule [0]. Sure, let's say you even COULD write a program in 1/100th or 1/1000th of the lines in k. I don't WANT source code that looks like that. Way too much information density.

I'll give them this: perhaps the criticism is that ALGOLikes are too sparse, could stand to be denser, and k is the manifestation of the far other end of the spectrum.

[0]: https://en.m.wikipedia.org/wiki/The_Magical_Number_Seven,_Pl...


Dyslexic here! k is significantly easier to read than ALGOL-derivatives and most Lisp dialects.


Not mentioned in the article is that the K programming language is used for kdb+, which is a fast in-memory time-series database that's heavily used in the finance industry for securities trading systems. I've worked with it a tiny bit, but the pros are all Dark Wizards.


I love the brevity of regular expressions and use them on a daily basis. It is the same argument that keeps me returning to K: the syntax is terse and compact, the semantics are simple and composable, and your eyes get used to it.

Beyond a point however, I cannot read my own regexes after a month's absence. Which is why I use perl's /x modifier extensively to split up regex components onto multiple lines and to document them thoroughly, even if they are for throwaway scripts, because I don't always throw them away!

For example:

    $_ =~ m/^                         # anchor at beginning of line
            The\ quick\ (\w+)\ fox    # fox adjective
            \ (\w+)\ over             # fox action verb
            \ the\ (\w+)\ dog         # dog adjective
            (?:                       # whitespace-trimmed  comment:
              \s* \# \s*              #   whitespace and comment token
              (.*?)                   #   captured comment text; non-greedy!
              \s*                     #   any trailing whitespace
            )?                        # this is all optional
            $                         # end of line anchor
           /x;                        # allow whitespace
(source: https://www.perl.com/pub/2004/01/16/regexps.html/)

This is where K fails me. It may not be a fault of the language, but everyone in the community has bought into this strange idiomatic style. I can't imagine debugging it, or checking it for correctness, or foisting it on a less experienced developer. Here's a canonical example, an xml parser, on their website.

https://a.kx.com/a/k/examples/xml.k

Where's the pedagogy? Where are the comments? Why is this line noise considered acceptable?


> https://a.kx.com/a/k/examples/xml.k

> Where's the pedagogy? Where are the comments?

Most of that document _is_ comments. There's a comment on almost every line, very similar to your perl example. Comments begin with a "/" character that doesn't have a function to its left (e.g. it follows whitespace).

First we have some constants (L,W,B,S,R) which refer to the left bracket, whitespace (which includes blank), the blank space, the slash, and the right bracket. We've also got some utility functions (cut;join). These are simple enough that they don't require any special explanation to the K programmer who reads this.

Then we have a function that produces an xml-entity from a character. The author assumes octal is required, so (needlessly) converts to that. The octal string (with a leading zero) is concatenated onto ";&#" then rotated so the ";" appears at the end (1! is cute). I would probably write this differently, because: 1!";&#",$_ic is shorter.
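
(To see the rotate in isolation, here's roughly the same trick at a q prompt, using a decimal codepoint for simplicity:)

    q)1 rotate ";&#",string 105    / "i" has codepoint 105
    "&#105;"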

We then have a function that does the reverse, cutting off the first three characters after rotating (which is the ";&#" string again) and converts the octal digits back into decimal. This is probably wrong because real XML documents will probably prefer decimal entities, but perhaps the author wasn't dealing with these. I would certainly write this differently if I changed oc (as above).
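
(And roughly the reverse direction, again in decimal:)

    q)3_(-1 rotate "&#105;")    / un-rotate, then drop the ";&#"
    "105"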

Now we have the helper functions xc and cx (whose names suggest they convert from character-to-xml and xml-to-character respectively). This is a stylistic observation; we can also see it from the comment, or by reading the code (if we know what XML is). These implementations are pretty basic, just using ssr to do repeated search/replace on the entities (note that ssr knows the ? character means any).
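
(ssr is easy to try on its own; for example, escaping an ampersand:)

    q)ssr["fish & chips";"&";"&#38;"]
    "fish &#38; chips"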

You get used to it.

> Why is this line noise considered acceptable?

One major challenge reading inscrutable perl scripts is knowing where the execution begins. Perl just has so many rules for parsing it you really need either wizardry or patience to know how to pull it apart, but K is extremely regular: there's only one way to parse it, and shortly after learning it you also learn (quickly) you can insert trace statements that don't change the meaning of the rest of the statement to learn a new operator (or a new use of an operator you didn't know). I note this especially as it is extremely hard to do in perl (and even other Iverson languages including APL and J).

Line noise is a subjective quality that goes away (at least in this case) when you become more fluent in K. I don't believe this is necessarily true of all compact languages though.


If the comments were good (more like what you wrote) then the ratio of comments to text would be even higher. And as with writing assembly, the risk of comments getting out of sync with the code is higher, too.


Thank you. The narration is what happens in my brain when I read it. I don't need it in the source files. Keeping the file short is the best way to keep it consistent (what you refer to as "getting out of sync").


This seems similar to how Unix commands have both short and long names for flags. Single-character flags are easier to type, but also easier to mistype or misread since there is less redundancy.

It seems like K would be a particularly suitable language for having more than one syntax. The short syntax, once you get used to it, would be better for keyboard input and expert whiteboard discussions, but it might be nice if there were also a standard syntax that was longer and closer to what most people expect. An editor could automatically translate between short and long syntax, and this would be helpful for making sure you typed what you think you did.


I think that's part of the theory behind q, which trades the monadic (unary) definitions of operators for names: +: becomes flip, =: becomes group, ?: becomes distinct, and so on. I'm not convinced though, because Python+numpy has most of these operations (and those it doesn't have aren't particularly difficult to implement), so it seems reasonable you could implement an environment almost as good as q[1].

But whilst the k/q operators are certainly useful, the Key thing is the notation. The notation is really valuable, and it seems hard to get it until you understand the notation well enough that it starts changing how you think about programming: numpy.matmul(x,y) might do the same thing that x+.×y does, but the latter suggests more. I recommend reading Iverson's paper[2] on the subject, although you might find reading §5.2 before the beginning to be helpful in putting into context what exactly is meant by notation here.

[1]: There's a lot missing still. Good tables, efficient on-disk representation of data, IPC, views, and others-- all of which will be hard to do in Python without limiting yourself to a subset of Python that might not feel like Python anymore anyway.

[2]: http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...


Slight reversal.

I've read a lot of human-oriented, commented code which meant nothing to me, because the overall state space / architecture was fuzzy.

Whether it's a one-liner or a framework, readability is quite secondary IMO.

ps: this also connects to the mathematician's view about naming vs structure... names are mostly arbitrary; it's the structure that drives the logic and the device.


For regex, named capture groups can also be used, so one group per line isn't as necessary:

    $_ =~ m/^                                     # anchor at beginning of line
            The\ quick\ (?<adjective>\w+)\ fox    # fox adjective
            \ (?<verb>\w+)\ over                  # fox action verb
            ...


> Where are the comments?

While I can understand how that code can be intimidating to a programmer with a more “traditional” background… there are 26 (non-empty) lines of comments, and only 15 lines of code.


10 of those 26 are "exercise", not comment. The 6 lines above that are incomprehensible, and I can only find two others, which fail to explain what it's supposed to do.

In this context, "intimidating" and "less traditional" are euphemisms that stem from cognitive dissonance reduction.


lots of comments there!

/ xml from char


I thought that was a game of life


This article should end with something like, "...and then you find out about arcfide's Co-dfns compiler..." https://github.com/Co-dfns/Co-dfns

You don't have to like it, but the existence of K et al. shows that most of us are wasting a huge amount of time and energy using bad tools. These things are so powerful and not that hard to learn.


It shows no such thing. It shows that you have found something you think you can be productive in and that you like, not that the rest of us are on the wrong track.


I can't agree. (BTW, I don't like K, and I don't use it, but if I did I'd like to think I'd be productive in it.)

This little system runs circles around entire sub-industries of other software. The fact that it exists and uses one or two orders of magnitude less time and code than other systems is significant.

It's like axes vs. chainsaws. If the job is to log a forest the latter will be better than the former.


Less code is entirely irrelevant unless for some reason you're concerned about a few k of storage for your source; less time appears to be speculation.

If people can make working, maintainable software other ways, particularly if those other ways have large, established ecosystems of dependencies, then they are likely doing it right regardless of how much you like APL.


Again, I don't like APL.

Less code -> fewer bugs, all other things equal, eh?

> large, established ecosystems of dependencies

...are actually a symptom of failure to refactor. Ideally, over time, your code base approaches the Kolmogorov complexity of the problems it solves.

> If people can make working, maintainable software other ways ...

...but it takes 10x-100x more code, RAM, CPU power, developer time, etc. than if you hadda used K, then you're leaving money on the table.


A million-line program isn't readable by anybody, no matter how readable the language is. If the equivalent program can be written in, say, a thousand lines in some more concise language, that's more than worth the learning curve, even if the language is strange and off-putting.


I doubt that K can reduce line count by a factor 1000 though. The examples in the article are mostly about shorter identifiers like "!" instead of "range" and a compact notation. That is perhaps a factor of 5, not a factor of 1000.

The example with a for-loop for summing a range is a blatant strawman. In which modern language would that be idiomatic?

Furthermore the examples only show a particular use case: processing lists of numbers. How do the benefits measure up for all the other stuff a million-line program does?

That said, if you really have a million-line program doing mostly numerical processing, then I'm sure it would be a massive benefit to switch to a programming language optimized for this task.


> The example with a for-loop for summing a range is a blatant strawman. In which modern language would that be idiomatic?

Golang?

Oh, you said modern. Never mind.


> I doubt that K can reduce line count by a factor 1000 though.

It adds up! Consider something like this:

    (defun count (list)
      (let ((hash (make-hash-table)))
        (dolist (el list)
          (incf (gethash (cadr el) hash 0) (car el)))
        (let (result)
          (maphash (lambda (key val)
                     (push (list val key) result))
             hash)
          result)))
That's just #:'=:


OTOH, a more idiomatic version would be (and I would consider using the loop facility to be idiomatic):

  (defun count (pair-list)
    (let ((hash (make-hash-table)))
      (loop for (val key) in pair-list
        do (incf (gethash key hash 0) val))
      (loop for key being the hash-key of hash
        collect (list (gethash key hash) key))))
This drops us from 10 to 7 lines, which is clearly shorter. But, it's also relatively simple to change it from summing a list of (<count> <key>) to a list of (<key> <count>), something that I genuinely can't comment on for the K version. I suspect it would be as simple as first doing a permutation on the input, and (if needed) un-permute the result.


Python 3.7+: Counter(elems).values()

But we can play this game both ways. What's the K for (Counter(a) & Counter(b)).most_common(3)?


It's a little funny comparing a language with another language-plus-its-entire-ecosystem, but k fares remarkably well I think.

Counter could be #:'=: (count each group)

    c:#:'=:
Values would just be dot.

Counter[x]&Counter[y] is a bit tricky to write, because while the documentation says set intersection, what they really mean is the set intersection of the keys with the lower of the values.

This is entirely clear in k; first, understand I have a common "inter" function that I use frequently (it's actually called "inter" in q):

    n:{x@&x in y}
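
(For reference, q's built-in version should behave like this:)

    q)1 2 3 4 inter 3 4 5
    3 4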
and from there, I can implement this weird function with this:

    i:{(?,/n[!x;!y])#x&y}
I've never needed this particular function, but I can read out the characters: distinct raze intersection of keys-of-x and keys-of-y, taking [from] x and y. It sounds very similar to definition of the function I gave above (if you realise the relationship between "and" and "lower of the values"), and this was painless to write.

most_common could be: {k!x k:y#>x} but if I'm going to want the sort (in q) I might write y#desc x which is shorter. In this I can save a character by reversing the arguments:

    m:{k!y k:x#>y}
so our finished program (once we've defined all our "helper functions" in a module) is:

    m[3] i . c'(x;y)
So we're looking at 13 lexemes, versus 19 - a minor victory unless you count the cost of:

    from collections import Counter
and start counting by bytes! But there's more value here:

All of these functions operate on data, so whether the data lives on the disk or not makes no difference to k.

That's really powerful.

In Python I need more code to get it off the disk if that's where it is.

I also may have to deal with the possibility the data might "live" in another object (like numpy) so either I trust in the (in)efficiency of the numpy-to-list conversion, or I might have to do something else (in which case really understanding how Counter[a]&Counter[b] works will be important!).

I might also struggle to make this work on a big data set in python, but k will do just fine even if you give it millions of values to eat.

These things are valuable. Now if you're interested in playing games, let's try something hard:

    "j"$`:x?`f
It creates (if necessary) a new enumeration in a file called "x" which persists the unique index of f, so:

    q)"j"$`:x?`f
    0
    q)"j"$`:x?`g
    1
    q)"j"$`:x?`h
    2
    q)"j"$`:x?`f
    0
How would you do this in Python?


> It's a little funny comparing a language with another language-plus-its-entire-ecosystem

This is pretty much my point: any ‘reimplement this weird built-in thing in $OTHER_LANG’ is going to involve overheads unless $OTHER_LANG also has that thing. I don't think keeping most types out of prelude should be a downside; you can always ‘from myprelude import * ’ if you disagree.

> How would you do this in Python?

I don't really know what "persists the unique index of f" means, but this seems similar to shelve. I can misuse that to give the same effect as what you showed.

    with shelve.open('x') as db: db.setdefault("f", len(db))
    >>> 0
    with shelve.open('x') as db: db.setdefault("g", len(db))
    >>> 1
    with shelve.open('x') as db: db.setdefault("h", len(db))
    >>> 2
    with shelve.open('x') as db: db.setdefault("f", len(db))
    >>> 0


> I don't think keeping most types out of prelude should be a downside

I don't understand what that means.

> I don't really know what "persists the unique index of f" means, but this seems similar to shelve. I can misuse that to give the same effect as what you showed.

I think you got it, but it seems like a lot of typing!

How many characters do you have to change if you want it purely in memory? In k I just write:

    `x?`f


‘Prelude’ is the set of standard functions and types that you don't have to import to use. Python comes with a rich standard library, but most require imports to use.

> How many characters do you have to change if you want it purely in memory?

When the question is if ‘K can reduce line count by a factor 1000’, shorter keywords hardly cut it. ‘setdefault’ is wordier than ‘?’, but as something you're only going to use on occasion, so what? And d.setdefault(_, len(d)) is something you'll use maybe once a year.

shelve acts like a dict, so you can do the setdefault dance the same way on in-memory dicts.


> ‘Prelude’ is the set of standard functions and types that you don't have to import to use. Python comes with a rich standard library, but most require imports to use.

I might not understand this about python.

    $ python3
    Python 3.7.6 (default, Dec 30 2019, 19:38:26) 
    [Clang 11.0.0 (clang-1100.0.33.16)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> Counter
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'Counter' is not defined
Should that have worked? Is there something wrong with my Python installation?

> but as something you're only going to use on occasion, so what? And d.setdefault(_, len(d)) is something you'll use maybe once a year.

If it takes that many keystrokes, you're definitely only going to use it once per year! This is the normal way you make enumerations in q/k.

The low number of source-code bytes is the major feature in k. I've written some on why this is valuable, but at the end of the day, I don't really have anything more compelling than try it you might like it, and it works for me. This is something I'm working on though.

- https://news.ycombinator.com/item?id=8476113

- https://news.ycombinator.com/item?id=10876975


Counter is not in Python's prelude. Therefore you have to import it in order to use it.

My comment about prelude was in response to you saying I was using Python's “language+plus-its-entire-ecosystem”. This is not true; Counter is part of the standard library that comes with the language. It is not from a third-party package.

> If it takes that many keystrokes, you're definitely only going to use it once per year! This is the normal way you make enumerations in q/k.

You don't use this frequently in other languages because you almost never care about the index of a value in a hash table (because it's ridiculous), and you almost never want to brute-force unique a list (because it's a footgun).


> you saying I was using Python's “language+plus-its-entire-ecosystem”. This is not true; Counter is part of the standard library that comes with the language. It is not from a third-party package.

I'm not sure I agree "third-party package" is a good/useful place to draw the line, but I don't think it's terribly important. Sorry.

> you almost never care about the index of a value in a hash table (because it's ridiculous)

So I want to query all of the referrer urls I've seen recently. I only see a few of them, so I want to have a table of all the URLs, and another table with one row per event (landing on my page). In SQL, you might have a foreign-key operation, but in q, I can also make the enumeration directly. I don't really care about the value of the index, just that I have one.


I wouldn't want to use an index as a long-lived key like that, since it makes ops like deletion a footgun.

For temporary calculations I'd just `{u: Url(u) for u in url_strings}` for the table of URLs, and stick the `Url` objects in the table of events.


I don't understand any of this. I don't know what a footgun is. I have no idea what a "URL" object is or how you stick it in a table of events (on disk? in memory? in a remote process called a "database"?). I don't know what you mean by temporary calculations.

I have fifty billion of these events to handle every day, and I do this on one server (that is actually doing other things as well). It is not an option to “just” do anything at this scale if I want to keep costs under control.


Sorry, it's a bit weird not sharing a lexicon. I get the impression Q is a whole different genealogy of programmers.

In Python, although you can just use shelve to store data on disk, in practice this is considered a bad idea beyond very simple cases. Valuable data wants the guarantees that real databases provide, like ACID. shelve doesn't provide this, and IIUC nor does kdb+.

So if you're handling 50 billion events a day, live, and you need these to persist, you'd use SQL or something similar. That would then ultimately determine how you add and manipulate records.

If you don't care that much if you lose the data on a crash, that's when we're talking about temporary calculations. In Python, rather than having two tables like (e.g.)

URLs:

    url_id  url          count
       1    google.com    300
       2    python.org    400
       3    example.net   200
Requests:

    request_id   data   url_id
        1       'spam'     2
        2       'eggs'     2
       ...
       900       'ham'     1
you would make a custom type, say ‘Url’, containing ‘url’ and ‘count’ as object fields, and then store your requests as a list containing references to those ‘Url’s. Filling in minimal class definitions for illustration:

    from dataclasses import dataclass

    @dataclass
    class Url:                # one row of the URLs table
        url: str
        count: int

    @dataclass
    class Request:            # one row of the Requests table
        data: str
        url: Url              # a reference instead of a url_id

    urls = {
        "google.com":  Url("google.com",  300),
        "python.org":  Url("python.org",  400),
        "example.net": Url("example.net", 200),
    }

    requests = [
        Request("spam", urls["python.org"]),
        Request("eggs", urls["python.org"]),
        ...,
        Request("ham",  urls["google.com"]),
    ]


> ... like ACID. shelve doesn't provide this, and IIUC nor does kdb+. So if you're handling 50 billion events a day, live, and you need these to persist, you'd use SQL or something similar. That would then ultimately determine how you add and manipulate records. …

ACID is overrated. You can get atomicity, consistency, isolation and durability easily with kdb as I'll illustrate. I appreciate you won't understand everything I am saying though, so I hope you'll be able to ask a few questions and get the gist.

First, I write my program in g.q and start a logged process:

    q g -L
This process receives every event in a function like this:

    upd:{r[`u?y`url;y`metric]+:1}
There's my enumeration, saved in the variable "u". "r" is a keyed table where the keys are that enumeration, and the metric is whatever metric I'm tracking.

I checkpoint daily:

    eod:{.Q.dd[p:.Q.dd[`:.;.z.d];`r] set r;.Q.dd[p;`u] set u;r::0#r;system"l"}
This creates a directory structure where I have one directory per date, e.g. 2020.03.11, which has files (u and r) holding the snapshots I took. I truncate my keyed table (since it's a new day), and then I tell q the logfile can be truncated and processing continues! To look at an (emptyish) tree right after a forced checkpoint:

    total 24
    drwxr-xr-x  4 geocar  staff  128 11 Mar 16:59 2020.03.11
    -rw-r--r--  1 geocar  staff    8 11 Mar 16:59 g.log
    -rw-r--r--  1 geocar  staff  206 11 Mar 16:55 g.q
    -rw-r--r--  1 geocar  staff  130 11 Mar 16:59 g.qdb

    geocar@gcmba a % ls -l 2020.03.11 
    total 16
    -rw-r--r--  1 geocar  staff  120 11 Mar 16:59 r
    -rw-r--r--  1 geocar  staff   31 11 Mar 16:59 u
The g.q file is the source code we've been exploring, but the rest are binary files in q's "native format" (it's basically the same as in memory; that's why q can get this data with mmap).

If I've made a mistake and something crashes, I can edit g.q and restart it, the log replays, no data is lost. If I want to do some testing, I can copy g.log off the server, and load it into my local process running on my laptop. This can be really helpful!

I can kill the process, turn off the server, add more disks in it, turn it back on, and resume the process from the last checkpoint.

You can see some of these qualities are things only databases seem to have, and it's for that reason that kdb is marketed as a database. But that's just because management has a hard time thinking of SQL as a programming language (even though it's there in the name! I can't fault them, it is a pretty horrible language), and while nobody wants to write stored procedures in SQL, that's one way to think about how your entire application is built in q.

That's basically it. There's a little bit more code to load state from the checkpoint and set up the initial day's schema for r:

    u:@[get;.Q.dd[.Q.dd[`:.;last key `:.];`u];{0#`}];
    r:([url:`u$()]; req:0#0; imp:0#0; clk:0#0; vt:0#0);
but there's no material difference between "temporary calculations" or ones that will later become permanent: All of my input was in the event log, I just have to decide how to process it.


I mean, sure, if your problem is such that a strategy like that works for you, I'm not going to tell you otherwise. You can log incoming messages and dump data out to files easily with Python too. I wouldn't want to call that a ‘database’, though, since it's no more than a daily archive.


Yes! "databases" are all overrated too. Slow expensive pieces of shit. No way you could do 50 billion inserts from Python to SQL server on a single core in a day!

I'm so glad k isn't a "database" like that.


It's a bit unfair to compare the speed of wildly inequivalent things. RocksDB would be more comparable, but even there it is offering much stronger resilience guarantees, multicore support, and gives you access to all your data at once.

Calling them expensive is ironic AF. Most of them are free and open source.


> It's a bit unfair to compare the speed of wildly inequivalent things.

Yes, but I understand you keep doing it because you don't understand this stuff very well yet.

> RocksDB would be more comparable

How do you figure that? RocksDB is not a programming language.

If you combine it with a programming language like C++, it can, with only 4x the hardware, just about keep up with 1/6th of my macbook air[1].

RocksDB might be more comparable to gdbm, but it's not even remotely like q or k.

[1]: And that's taking facebook's benchmarks at their word here, ignoring how utterly synthetic these benchmarks read: https://github.com/facebook/rocksdb/wiki/Performance-Benchma...

> much stronger resilience guarantees,

You're mistaken. There is no resilience guarantee offered by rocksdb. In q I can backup the checkpoints and the logs independently. It is trivial to get whatever level of resilience I want out of q just by copying regular files around. RocksDB requires more programming.

> gives you access to all your data at once

You're mistaken. This is no problem in q. All of the data is mmap'd as soon as I access it (if it isn't mmap'd already).

> Calling them expensive is ironic AF. Most of them are free and open source.

If they require 4x the servers, they're at least 4x as expensive. If it takes 20 days to implement instead of 5 minutes, then it's over 5000x as expensive.

No, calling that "free" is what's ironic, and believing it is moronic.


> How do you figure that? RocksDB is not a programming language.

I'm comparing to the code you showed. You're using the file system to dump static rows of data. All your data munging is on memory-sized blocks at program-level. Key-value stores are the comparable database for that.

> You're mistaken. This is no problem in q. All of the data is mmap'd as soon as I access it (if it isn't mmap'd already).

Yes, because you're working on the tail end of small, immutable data tables, rather than an actually sizable database with elements of heterogeneous sizes.

> In q I can backup the checkpoints and the logs independently. It is trivial to get whatever level of resilience I want out of q just by copying regular files around.

Yes, because you don't want much resilience.

---

What you're doing here is incredibly simplistic. It's not proper resiliency, it's not scalable to more complex problems, and it's not scalable to larger workloads. An mmap'ed table and an actual database are different things.

It works fine for you, but for many other people it wouldn't.


> You're using the file system to dump static rows of data

That's what MySQL, PostgreSQL, SQL Server, and Oracle all do. They write to a logfile (called the "write ahead log") then periodically (and concurrently) process it into working sets that are checkpointed (checked) in much the same way. It's a lot slower because they don't know what is actually important in the statement except what they can deduce from analysis. Whilst that analysis is slow, they do this so that structural concerns can be handed off to a data expert (often called a DBA), since most programmers have no fucking clue how to work with data.

That can work for small data, but it doesn't scale past around the 5bn inserts/day mark currently, without some very special processing strategies, and even then, you don't get close to 50bn.

> All your data munging is on memory-sized blocks at program-level.

That is literally all a computer can do. If you think otherwise, I think you need a more remedial education than the one I've been providing.

> What you're doing here is incredibly simplistic. It's not proper resiliency, it's not scalable to more complex problems, and it's not scalable to larger workloads. An mmap'ed table and an actual database are different things.

Yes, except for everything you said, nothing you said is true in the way that you meant it.

Google.com does not query a "database" that looks any different from the one I'm describing; Bigtable was based on Arthur's work. So were Apache's Kafka and Amazon's Kinesis. Stream processing is undoubtedly the future, but it started here:

https://pdfs.semanticscholar.org/15ec/7dd999f291a38b3e7f455f...

Not only does this strategy get used for some of the hardest problems and biggest workloads, it's quite possibly the only strategy that can be used for some of these problems.

Resiliency? Simplistic? I'm not even sure you know what those words mean. Putting "proper" in front of it is just weasel words...


> That is perhaps a factor of 5, not a factor of 1000.

I’ve personally found 20-30x compared to Java - admittedly not the most compact language.


I'd rather read 30 lines of Java (yes, Java!) than try to decipher one line of K!


Matter of taste; I cannot imagine having to program Java (give me Clojure/Scala any time if the JVM is required) for a living anymore; deciphering a lovely 200,000-class codebase where every call in every 30 lines brings you into a deep spelunking trying to figure out where/how/what, hoping it's not in some ancient undocumented .jar, etc. So I have very much the opposite of what you have. Then again, I have been doing k/j/apl for long enough that I would not call reading it deciphering, most of the time. While for Java (& C#) it is always deciphering, even after 20+ years (which was when I became a professional Java programmer), because, unlike with k, you simply cannot know all the libraries and files or the brilliant imagination of people who like to use design patterns 'just a bit' wrong for everything.


> the brilliant imagination of people who like to use design patterns 'just a bit' wrong for everything.

I assume you are comparing this to deciphering and maintaining K-code written by developers who use all the idioms just a bit wrong?


Well that would be an interesting test; it is not that easy to do that while it is absolutely trivial (I would say run-of-the-mill) to abuse design patterns.


It has recently been found that human speech conveys information at a roughly constant rate of 39 bits/second[1]. People speak faster in some languages but convey less information per word, and the converse holds true in languages where people speak slower.

In a 'code golf' language where you reduce a 1 million line program to 1000 lines, the 1000 line program is going to be just as hard to read. More concise syntax doesn't help as the same number and complexity of concepts are going to exist in both programs, and that's what the human has to understand.

There are languages where more powerful concepts are available for less code. Erlang for instance, with concurrency. In those cases, yes, readability is aided and maintainability is improved.

[1] https://www.sciencemag.org/news/2019/09/human-speech-may-hav...


Claim A: Human speech has a constant bitrate.

Claim B: The intelligibility of a computer program is proportional to the complexity of the problem it solves, not to the number of characters in its source code.

First off, I don't see how B follows from A.

Secondly, even if there does seem to be some similarity, one of the reasons any comparison breaks down is that human languages are evolutionarily adapted to our cognitive capacities, whereas programming languages are designed, for the most part without a very rigorous understanding of how they interact with our cognitive capacities (this is an active area of research, but not very advanced, I think.)

Finally, I reject claim B. The intelligibility of a computer program is heavily affected by how clearly the underlying concepts that define a solution to the given problem are mapped into the structures available in the given programming language, and that is obviously heavily affected by the choice of programming language. And I believe that in general more concise languages are more concise precisely because they make available more (and more straightforward) mappings from problem-space to language-structure space. So more concise languages make it possible to write more easily comprehensible programs once you are over the barrier of holding all the mappings they make available in your head.


Human speech having a constant bitrate seems to indicate that there's a limit to the bandwidth we have when receiving or giving information, where that bandwidth is measured in terms of the actual amount of information, independently of the medium. Therefore, in a computer program, what matters is not the number of characters but the actual ideas behind the code. In other words, the more complex the ideas are, the more time we need to process them, independently of the number of characters.

> The intelligibility of a computer program is heavily affected by how clearly the underlying concepts that define a solution to the given problem are mapped into the structures available in the given programming language, and that is obviously heavily affected by the choice of programming language.

I don't think so. Nobody uses "just" a programming language. They use a generic programming language plus libraries (or DSLs) plus their own code that maps the concepts of their problem to code. Unless the problem to be solved is really generic and/or simple, a programming language alone is very, very far from having direct mappings from problem to language.

And you can't escape the complexity of the problem at hand. You can hide it, but in this regard there's no difference between hiding it behind a compiler, behind libraries or behind your own code. You still need to understand the problem and the underlying concepts.


> human languages are evolutionarily adapted to our cognitive capacities, whereas programming languages are designed [...]

That's what they're designed for, but why does that mean they aren't also evolutionarily adapted to our cognitive capabilities?


At some point you run into entropy. You can only compress code until it covers the requirements 100% and nothing else. You can't go past that point without losing features.


That depends.

When my assumptions about the underlying problem are wrong a million line program is much easier to fix than a thousand line one.

Or rather, a thousand line concise program in a quirky language will become a million line program in a quirky language when exposed to changing business requirements.


If the language allows you to simplify the program sufficiently it might be easier to just rewrite it from scratch when the requirements change. I doubt that this is the case for any language, but you know, in principle it could happen.


https://en.wikipedia.org/wiki/Write-only_language

>Languages that are often derided as write-only include APL, Dynamic debugging technique (DDT), Perl,[2] Forth, Text Editor and Corrector (TECO),[3] Mathematica, IGOR Pro and regular expression syntax used in various languages.


You've got me wondering about the utility of a genuinely write-only language.

As a thought experiment, would there be benefit to letting functions only be written once?

No-one could come along and break code by changing a function. If you wanted to fix a bug in a function you would have to write a new copy and explicitly update callers to use the new version.

A lot of maintenance overhead? Possibly, but tooling would take care of the majority of it. It would be useful to explicitly know not just when a function has changed but when the functions it calls have changed.

You would need a naming convention: instead of Main you would need something like Main#23145. It would tick over very regularly.

Library functions like CalculateTax#12 would tick over less frequently.

By calling the old revision you could call the old code if you didn't want to update a module or function.

Perhaps you could extend that name to be something like Namespace.Name#IndirectRevision.DirectRevision.Hash which would allow tools to more quickly extract when the change was made in the logic of the function itself or when the change was in dependents.

This would be an alternative maintenance strategy to IoC. Instead of treating dependents like they don't affect the code being executed, we can instead control the version of the code being executed.

By forcing code change in all callers, you build up an explicit picture of where bugs happen, but more importantly you also see the impact of those bugs on other places.

It would need great tooling to get over the paradigm shift of moving away from IoC but I think it would be interesting.


This is exactly the main idea behind the Unison language.[1] All functions are immutable and identified by a hash rather than name. When you make any change you are creating a new function with a new hash. It's definitely a very interesting idea.

[1] https://www.unisonweb.org/


What an absolutely fascinating idea! As the field of software engineering matures, how million line repos get maintained is going to become a subject of academic study.

> If you wanted to fix a bug in a function you would have to write a new copy and explicitly update callers to use the new version.

Finding all callers is the hard part though! And in modern languages, that needs to be addressed both at compile (or "compile") and run time.

Assuming you've got all the call sites with what constraints you were given and have imposed, the second part, where you call a specific version of a function, is where it gets tricky.

Is the bug in the calling code or the called code? Because huge software maintenance is obtuse: the original programmers have all long since left, so what we're left with is a smattering of random specific version calls, with only the tiniest tidbits of history, locked away in someone's email and saved in the previous ticketing system.

So it sounds interesting, but I worry (though I worry about a great many things), that the programmer three years later, ends up coming across calls to three different versions of some code, with no guidance on which ones can be upgraded, should be upgraded, and must not be upgraded. I can't say which one would involve digging up more ancient history though.


> Finding all callers is the hard part though!

Only because we made it hard by storing code in files instead of a database.


> explicitly update callers to use the new version

In a genuinely write-only language, how do you update those callers? You can't, you'd have to write new ones to call your new bugfixed function. And then you'd have to write new callers of those callers...

You'd have to rewrite the whole program every time you wanted to make a change. Which would make bug-fixing a much more cerebral activity since you'd want to work out as many as possible before the rewrite to save time.


The idea is interesting. However, it's worth noting that you would have to increment main() with literally every change, because when you fix a bug in foo#10 (by rewriting it as foo#11), you'll need to update the affected call sites to use foo#11 instead, which means they get incremented too, so you need to do the same to their callers, all the way up to main.

You'd absolutely need tooling to automate some of this (present a list of call sites; you select which should use the new function). It would also increase the size of your code base by a lot, although I'm not sure that's as much of a problem since we may be spending less time reading code and more just writing a new function.



You'd essentially be semvering little bits of your code.



I'm curious if this is used for anything…it sounds very blockchain-ish.


Only if it's written in a style which affords reading. K programmers seem to want to write their code in a mathematical style, but without the natural language prose which makes mathematical papers readable. In a paper by a mature mathematician, the equations only perform some of the work in expressing the idea. The rest of the work is done by prose written with an intent to be lucidly expository, to allow the notation to be terse because the high-level concepts are being expressed in a natural language.

We have this style in the programming world. It's called Literate Programming. How many K programmers write in that style?


>We have this style in the programming world. It's called Literate Programming. How many K programmers write in that style?

More than you would think. I had a very nice gig at an HFT firm years ago where we were using noweb+k extensively.

The only place where I edited source code by hand with paper, pen and paste.


> but without the natural language prose which makes mathematical papers readable

A mathematical paper introducing some meaningful worker function uses English because mathematical notation is insufficient for describing computation. That's one of the important goals of Iverson's notation.

> We have this style in the programming world. It's called Literate Programming. How many K programmers write in that style?

Quite a few! I would actually suggest there are more literate K programmers than there are literate C programmers! Every 4-5 lines of code probably has 2-3 lines of prose in a large application that I work on, but in another language that might be something like 500 lines of code for 2-3 lines of prose which puts the prose offscreen for most of the code it's describing.

Some C programs will try to keep the prose-to-code density higher than that, but even amongst Knuth's code that's rare.


I assume you enjoy reading gzipped versions of documents, from binary.


I find that in these discussions people think that complexity can be escaped. A program cannot be less complex than the problem it solves. Of course you can add even more complexity if the programmers are not good, but you can never go below that lower bound.

In your example, assuming that a million-line program is well done in its language, you'd have exactly the same complexity but now in a thousand lines, meaning that now each line does more, is more complex and changing it/understanding it is more difficult and prone to errors. It would be equally (if not more) impossible to understand.

If it were me, I'd choose the million-line program and keep the ability to understand parts of it and perform maintenance with just the program reference by my side, not the language reference too.


> In your example, assuming that a million-line program is well done in its language, you'd have exactly the same complexity but now in a thousand lines, meaning that now each line does more, is more complex and changing it/understanding it is more difficult and prone to errors. It would be equally (if not more) impossible to understand.

Disagreed. Reductio ad absurdum: It's not a good idea to force everyone to program in machine code, writing sequences of 1s and 0s. Why? Because the right abstractions (and associated notation) do make complexity more manageable.


Ok, you're right. It's not so clear-cut, but there's definitely a balance to find.

Of course, assembly is far better than machine code. C is almost always better than assembly, because it maps mental ideas to code better than assembly does. After that, you have higher-level languages, and then the gain is not that clear. High-level languages tend to hide the complexity of dealing with the computer: in Python, JS, Haskell or K you don't need to worry about memory allocations, or how objects are defined, or how to pass arguments to functions. But sometimes you do need to worry about that, and having a language that handles it for you makes your program harder to understand (how does JS deal with parallelism? When is an object a reference and not a copy? Suddenly you need to know how JS does certain things, and that complexity reappears in your program).

Now, for the example, when I said that the million-line program is well done, I was thinking about some JS, Python, C++, or Java application without too much unnecessary cruft that maps problem concepts properly to code concepts. If the 1000x line reduction (if that even exists) is achieved with smart one-liners and unreadable, language-specific constructs, it's not worth it. Say your application was a physics simulator for a certain situation. Most people would want the million-line Python code where each function and each part is understandable and lets you focus on the problem at hand, instead of a thousand lines where you not only have a difficult problem but also difficult code.


Chrome is well north of a million lines. Anyone familiar with C/C++ can at least meaningfully explore the codebase with the use of mechanisms like ctags and understand parts of it. If it were 1/1000th the lines but in K instead, I don't really think that would make grokking it any easier.


I would hope that someone familiar with K could probably figure it out, but the benefit of C++ is that someone familiar with Java or Python might have a hope of doing pretty well too.


I think this is something of a strawman: there are plenty of million line (plus) systems and software suites, but I think million line programs are relatively uncommon by comparison.

Especially so if you don't count library code, and I don't because otherwise you end up doing things like counting the number of lines of code that back the Win32 APIs in your program line count, which doesn't seem sensible. Most of the time we treat libraries as black boxes, except on rare occasions, so don't tend to think of them as part of "our" codebase[1].

Most million line plus systems are broken down into subsystems (applications, services, libraries, etc.) that in isolation are smaller, and in many cases much smaller.

Even with those rare million line of code monoliths, for the most part there's some level of organisation in the codebase: layers, subsystems, components, or a mixture of these and whatever else is appropriate.

This organisation, and particularly when coupled with modern tooling, facilitates the understanding of even very large systems.

I don't say that it's necessarily easy but, then again, neither is understanding a large, complex system implemented in K.

[1] Except when thinking about security because third party code has been shown to be a rich source of vulnerabilities.


The Kolmogorov complexity of the program, eh?

Alan Kay's VPRI STEPS project used PEG parsers to create math-like DSLs in their attempt to go "from the desktop to the metal in 20,000 lines of code."


How does error handling work in K? For example, what happens when you try to read a file that doesn’t exist? How does the read function indicate failure to the caller? How does the caller check and recover (show an error to the user and continue with some default values)?


If primitives fail, they raise a signal. There is also a primitive for unconditionally raising a signal. Most flavors of K have a primitive called "error trap" which captures signals emitted by a function application, should they occur:

      f:{11 22 33 x}
      .[f;,1;:]
    0 22
      .[f;,4;:]
    (1;"index")
If a signal is not trapped, K pauses execution and opens the debugger, where a user may interactively inspect the environment, evaluate code, and resume:

      f 4
    index error
    {11 22 33 x}
     ^
    >  x
    4
    >
Overall not that different from try...catch in a garden-variety interpreted language with a REPL.
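
For comparison, a rough Python analogue of that trap shape (just a sketch, nothing built in):

  def trap(f, *args):
      # Mimic the (success?;value) pair: (0;result) on success,
      # (1;error) if the application raises.
      try:
          return (0, f(*args))
      except Exception as e:
          return (1, str(e))

  f = lambda x: [11, 22, 33][x]
  print(trap(f, 1))   # (0, 22)
  print(trap(f, 4))   # (1, 'list index out of range')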


Thanks! That's fairly straightforward and, I guess, avoids having to pass a closure.


Looks like the author figured it all out at the very beginning:

> If each punctuation character in the K is a separate part of speech, and you reverse their order

     +      /     !    100
     plus reduce range 100

This seems to be how this works. Compare that to math equations like these ones:

https://en.wikipedia.org/wiki/Maxwell%27s_equations#Formulat...

You would not expect to read them like a paragraph. You are expected to read them symbol by symbol and unpack from the inside out.

K seems to be a dense language. Expect to read it symbol by symbol, not word by word or line by line.

It will become more readable if you (1) slow down and (2) get used to the programming patterns.

Not saying K is a good language or anything. But the frustration of the author seems to have more to do with impatience and mismatched expectations than with flaws in the language itself.


>You would not expect to read them like a paragraph. You are expected to read them symbol by symbol and unpack from the inside out.

As someone who has a bunch of letters before my name because of physics, no. You don't read physics symbol by symbol. You expand those symbols to pages and pages, meditate on those pages and achieve enlightenment on one small aspect of the problem. There is so much implicit state in those equations they are basically meaningless.

A nice book that deals with all the implicit state and terrible unclear notation used in physics: https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...


The author is a professional K developer who works with K on a daily basis and has implemented a K interpreter in JS (oK). He is not frustrated; he is making a point. It looks like you perfectly got the point.


Thanks for the clarification.


I've seen a few posts here and there over the years about array languages, but I can't recall any (or any discussion in their comments) that have mentioned any disadvantages beyond more human factors.

So for the k/q/APL/array language gurus out there: in what situations would I not want to use k or another array language, if the relative obscurity of the syntax and mental model were not a factor (e.g., assuming you, your team (if any), and anyone who would ever look at your code would be very familiar with the language and idioms)?

I may not want to choose Go or Java because of latency concerns, or Rust because of compilation speed/iteration time concerns, or C/C++ because of safety issues, or Python/Ruby/Javascript because of performance issues. Why would I not want to pick k?


Performance depending on your use case. APL/J/K can be very fast for an interpreted language, but they won't beat C, C++, Fortran, and Java in most cases.

Also, think about distribution. With most of the array languages, it is commercial, so I'd be cautious about building a business around it. With other languages, it is free to use and deploy as you see fit and some have binary executables that don't need a runtime. K is very expensive. Dyalog APL is pretty reasonable, but still costs money to use and royalties in certain situations.

I think any decent developer can pick up array languages, but many won't want to, as they'd rather have something more mainstream (Java, JS, C++, Python...etc) on their resume. So it might be difficult to hire.


> APL/J/K can be very fast for an interpreted language, but they won't beat C, C++, Fortran, and Java in most cases.

Do APL/j/k/q work well for some performance-sensitive situations but not others? Some of the comments here by people who appear to be very familiar with the language make it seem like it performs quite well compared to the traditional high-performance languages.

Distribution is definitely something I hadn't considered; I guess I've been spoiled by the variety of free/open languages available these days.

I've always wanted to try to pick up an array language to expand my horizons a bit (e.g., that one bit of advice that programmers should learn one of the four main types of languages --- ALGOL-based, Lisp, functional, array-based), but I just haven't had the time.


Worth noting that most APL interpreters and the J interpreter are significantly slower than (most) k/q interpreters. I'm not sure if the performance argument is relevant for all of them.

Arthur Whitney's a biased source, of course, but the kparc site has a bunch of little programs in which he beats the equivalents from rather influential people (like the UNIX/Plan 9/Inferno authors http://www.kparc.com/z/bell.k ) for bragging rights.

k losing to some really nicely-written C and most Fortran seems plausible, Java and C++ not so much.


Is there publicly available information on what makes k/q interpreters so much faster? Or is that part of the secret sauce that the companies maintaining those interpreters sell? I assume it's the latter, but hope for the former, since good descriptions of the technical work that goes into high-performance systems usually makes for fascinating reading.


So the language isn't really amazingly fast in certain benchmarks I've seen, like Fibonacci, but the entire integrated database solution is very fast for analysis purposes. I wouldn't do high frequency trading or high performance scientific computing in it, but it's great for data analysis.


> With most of the array languages, it is commercial, so I'd be cautious about building a business around it

K/Kdb/q seems very happy with this state of affairs, which is really strange: if they O/S'd it I could see a lot of goodwill/mindshare going their way, but I think their Morgan Stanley roots view O/S as commie nonsense. Meanwhile, it's definitely viewed as legacy in the bulge-bracket banks that use it b/c it's so hard to hire for, not to mention its limited applications for teams (kdb is NOT a good multi-user database). However elegant it may be, it's doomed to be a COBOL if this is where its main userbase stays.

J is open source, but I can't get a handle on how battle-tested it is. It always seemed like Kdb was the real product that got used in anger.

Personally I think array-heads should spend more time with Haskell, since it has the ability to be massively terse and point-free, has type inference, loves infix gobbledygook, can be tuned for great performance, and is thoroughly open-source.


> K/Kdb/q seems very happy with this state of affairs, which is really strange: if they O/S'd it I could see a lot of goodwill/mindshare going their way

It's not too hard to grok: if they freed it, they'd lose money. $200,000,000 in revenue last year from Kx alone. Goodwill compared to cash, executive picks cash every time.

I have no idea what Shakti's doing, but they, too, are probably making a substantial amount of money.


> I have no idea what Shakti's doing, but they, too, are probably making a substantial amount of money.

I'd think AW would be more concerned about legacy and impact than just $$$ at this point -- tradetech has long stopped being a significant source of innovation in computer science as it's essentially iterating on the same problemset from the 00s.

OTOH I suppose having a small cult has its benefits, as opposed to actually giving something to the larger world, O/S isn't exactly a cakewalk.


This is the annoying thing about this software niche. I have zero information on Shakti. Their website just says some buzzwords and has a contact link. You can download it as an anaconda package or something, but I don't understand the license.


The license is "By using this software you surrender your freedom," which is pretty standard.


I'm not an expert in either language, but have played with both, and APL is much easier to experiment with and easier for me to grok than Haskell. Both have REPLs, but there is a lot of additional ceremony with Haskell (for good and bad). I don't see a huge amount of the APL crowd going to Haskell. Yeah it has terse operators and point-free style, but it is more a sum-of-the-whole kind of thing.


I agree and don't mean to imply Haskell has an out-of-the-box K/APL experience. More that with its syntactic flexibility, one can imagine offering a k-like DSL/library in Haskell which would then give you access to its ecosystem, freely distributable apps, etc.


No problem, and I like your thinking, but honestly I just want a free and open source APL or K language that is part of a lightweight download (a few MB) that can also create zero-install executables that bundle the interpreter. There are a lot of toy projects out there, but nothing really close to what I want. I'd build it myself if I had the time and was significantly more talented :).


Only because earlier today I so definitely thought about the whole "code is a liability (debt)" reality we all live in... that is to say, I'm spit-balling here:

but for once K makes me go "oh shit!" and not because of the craziness of its syntax... rather, if that syntax is so precise/terse, and even if to the outsider it looks fucking insane, does it have a wicked lower TCO because you literally have insanely less lines of liability in your code, libraries, dependencies, etc?

Is that the reason finance uses it, they maybe empirically know what the rest of us don't seem to grasp? Or just because it's so domain specific? Or... something else entirely?


> does it have a wicked lower TCO because you literally have insanely less lines of liability in your code, libraries, dependencies, etc?

Yes. q (a language and distribution of k4) has a built-in webserver with websockets, messaging, all types can be serialised over IPC (except pointers to C code) and most data structures have an efficient on-disk representation.

You might recall that a few days ago someone announced a specialised database for doing "have-i-been-pwned" lookups in 49µsec. I wrote[1] one line of q that is about 10x faster:

[1]: https://news.ycombinator.com/item?id=22467866

It's just a single operation, and it's built-in, so maybe you think Python could just have mmap-efficient variables and a bin operator and you'd be there, but those things don't exist in Python. If I had the same problem as the author, it certainly wouldn't materialise as "custom database" of any kind, because I'd already be done.

In fact, one of the biggest challenges with "big data" is choosing an efficient representation. The author spent a lot of time on this, but my language just had one built-in. How much time? What's the target? I mean, if I've already paid for q it's clearly a no-brainer, but q isn't free either (even if you get it for no cost: you still need to learn it!)

> Is that the reason finance uses it, they maybe empirically know what the rest of us don't seem to grasp? Or just because it's so domain specific? Or... something else entirely?

Back in the day, k was the only thing you could put tick data into. Now there's lots of systems, and computers are a lot faster, so quants can use Python. Their code doesn't need to be "fast", but "fast enough".

My code still needs to be fast though.


Finance uses it because it's great for sorting out medium-big time oriented data. The IQ test aspect of it is probably considered a bonus in some shops.

There are non-terse APLs out there; the most obvious one floating around is Nial[1]. Culturally though, once you've figured out how to read and write in the compressed ascii runes, it's kind of hard to go back to the other way. J is actually denser because of the hooks, forks[2], rank and full tacit aspect of things. Personally whenever I've composed an algorithm in J I feel like I really understand it in a way that pretty much nothing else gives me with all the extra typing.

[1] https://github.com/danlm/qnial7

[2] https://crypto.stanford.edu/~blynn/c/apl.html


The "amount" of code is the same as in any functional language though. It's just written in a smaller text file.


Arthur tries to choose primitives carefully so that the amount of steps is smaller than other languages (see the 'product' quote in another comment). So even if you give each symbol an English name, the code will often end up smaller.


In which case it seems like two things are being conflated a lot here: terseness and better primitives. I'm opposed to the first, but very interested in the second. Is there a language you think focuses on approachability and readability but keeps the good parts of K?


k9 (from Shakti) will have optional word equivalents for some symbols (or at least common compositions) by default.

Have you looked at q?

numpy has a lot of k/APLish primitives built in.

fml was also interesting but never made it past the spec stage:

https://www.reddit.com/r/apljk/comments/etpbbf/fml_an_optimi...


> +/!10

I personally find the given example harder to read than the equivalent in python/matlab/javascript with lodash/probably many other languages

sum(range(10))

sum(1:10)

With "K", you have to know a few new conventions. In the second case you just have to read, and the data flow is more obvious.


You still need to know that range starts at 0 and the end is not inclusive. Also, the second example is not the same as the first one, unless the second is non-inclusive.
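
Concretely, in Python:

  assert sum(range(10)) == 45   # range(10) is 0..9, matching +/!10
  # MATLAB's sum(1:10) is 1..10 inclusive, i.e. 55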


Yes, but as you multiply that difference 100 or 1000, the advantages of the concise notation become obvious.


This 100 or 1000x claim is extraordinary. Do you have any factual basis for repeatedly asserting it?


Contrived Debian shootout benchmarks:

http://www.kparc.com/z/comp.k

https://web.archive.org/web/http://shootout.alioth.debian.or...

Note that the relevant k program here is contained entirely on the last line of code and denoted by a comment telling you what it does:

http://www.kparc.com/z/fun.k

http://c2.com/cgi-bin/wiki?WardNumberInManyProgrammingLangua...

Most Joy people are also k people, and Joy appears to be the only one similar, although still very large.

The C++ program is lost to link rot now, but by the author's own admission, was substantially larger than even the substantially large examples within that page.

      1    2   29 k
     23   70  498 scheme
     25  111  768 otherhaskell
     44  135 1096 erlang
     53  125 1043 rb
     67  190 1419 ocaml
     74  346 1693 haskell
     80  357 2586 java
Keep in mind that by their own admission, everyone in the c2 thread's code was hard to read (outside of the Joy program), and many were buggy. Meanwhile, the k program is quite simple to read.

These are just some basic examples, though.


You should probably label your columns, but that seems to be wc output.

The Scheme and Haskell versions are actually shorter than you imply, since they include the pairings in the count (when the pairings in the code are examples). Additionally, the k version is one line longer than you suggest: the definition of a, which is on its own line, is used in the solution. Still only 2 lines, but if you want to persuade you should probably have correct numbers.


Fair crit! Running off of few hours of sleep last night (spent way too long defending k in this thread; need to set sensible noprocrast settings or get some flavor of self-control); accuracy is impaired. Another error I see in retrospect is that I included comments in a few of them but not all of them.


So maybe 40x in line count if you compare an extremely verbose language like Java to K in this contrived example? That's not close to 100x and especially not close to 1000x.


I'm sorry, but which one are you looking at? (All comments stripped:)

k mandelbrot line count: 1

c++ mandelbrot line count: 148

Java line count: 157

Modern PL example (found on the modern shootout page, which I just found out they changed the link on):

Rust mandelbrot line count: 116

Ada mandelbrot line count: 532

And that's not even the one where k comes off the best.


As I understand the author's point, your example with "sum" is only valid because the word "sum" has very few meanings, especially in the software engineering world.

Even the word “product” would introduce certain ambiguity in context of real world codebase: is it “math” product or some “domain” product?

It’s even worse with most other words.

So if in any given codebase you still have to know the specific meaning of every word, how is it different from having to know specific meaning of every K symbol?


> So if in any given codebase you still have to know the specific meaning of every word, how is it different from having to know specific meaning of every K symbol?

So you never encountered namespaces then? Because that's why you use namespaces. The number of symbols is limited, which is the reason why the whole "a symbol is more explicit"-argument simply doesn't hold in general. It breaks apart pretty quickly. That's why you'll find the following sentence pretty much verbatim in most maths and cs publications: "Throughout this paper, unless otherwise specified, we use the following notations," followed by half a page of definitions. Plot twist: different authors often use different notation even within the same field of research.


I didn’t say I buy that argument, it’s just my understanding of it.


> “sum” word has very few meanings, especially in software engineering world

Sum types, checksums, …


That’s why I said “few”, not “one”?


I'm not sure that sum and product differ significantly, which is why I brought up a couple of examples.


I went through a similar transformation when I finally understood LiSP. I knew Python and had programmed in it for several years but it was breaking into Scheme that changed my approach. I wonder if learning new paradigms always changes how you use other languages.


According to Peter Norvig, you were half-way to Lisp using Python:

https://norvig.com/python-lisp.html


According to Guy Steele, if you're a C++ programmer using Java, you've been dragged halfway to Lisp.

https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...

So by that estimate, if you switch to Python you're another halfway to Lisp (three quarters of the way from C++).


I could never figure out how Java is in any sense halfway to Lisp.

Python, maybe.


The reference point is C++.

What makes Java nearer to Lisp than C++?

Mostly the Java runtime, the JVM. C++ basically comes with some kind of manual memory management and raw data layout.

Lisp comes with Garbage Collection and managed memory. In Lisp one passes references to arrays and other objects around. The runtime tracks which objects are of what dynamic type and of what size. The garbage collection may be a simple mark&sweep or a more complex, say, generational GC. The first garbage collection was developed for Lisp around 1960 and has then slowly spread to other languages. C++ was object-oriented, but with a more traditional non-GC approach. Java changed that.


Those are certainly huge similarities, but I wonder whether the veracity of that statement could be tested by comparing how difficult a seasoned Java programmer finds learning Lisp with how difficult a seasoned C++ programmer finds it.


what is `<<` ("ordinal") supposed to be doing? i understand how `<` works, but how is it useful to apply it a second time?

the closest i got to finding some pattern is this:

  >>> xs
  '34210'
  >>> grade(xs)
  [4, 3, 2, 0, 1]
  >>> grade(grade(xs))
  [3, 4, 2, 1, 0] # :O
where

  grade = lambda xs: (
     [i for (i,_) in sorted(enumerate(xs), key=swap)]
  )
  swap = lambda p: (p[1], p[0])
no clue what it means though.

(i guess "ordinal" really is too ambiguous...)

EDIT

alright, i see it now - `<<xs` is "for each item x of xs, where does x land when you sort xs?"
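
a quick python check of that reading (the rank of each element; `.index` works here because the items are distinct):

  grade = lambda xs: [i for (i, _) in sorted(enumerate(xs), key=lambda t: (t[1], t[0]))]
  xs = '34210'
  assert grade(grade(xs)) == [sorted(xs).index(c) for c in xs]  # [3, 4, 2, 1, 0]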


< grades upwards: for each element, the element of the returned array is its position if the array is sorted.

    x: 10?A:"abcdefghijklmnopqrstuvwxyz"
     x
    "ysyselacwl"
     +`x`y`z!(x;<x;<<x)
    x   y z
    --- - -
    "y" 6 8
    "s" 7 5
    "y" 4 9
    "s" 5 6
    "e" 9 2
    "l" 1 3
    "a" 3 0
    "c" 8 1
    "w" 0 7
    "l" 2 4
So <x gives a sort index, <<x tells you how far each element is from the minimum in that sort index


I still don’t understand what the numbers in column y mean. I understand column z and that’s also what grade up is returning in John Earnest’s oK:

    x:"baced"
    <x
  1 0 2 4 3
    <<x
  1 0 2 4 3
That is, for example, when sorting, the item at position 1 in the original string (“a”) moves to position 0 (the beginning of the sorted string). So in the end you get “abcde”.

What does column y mean? If I sort the original with it, I get “wllaysysce”.

Has this changed between versions of K?


Nevermind. I've figured it out with the help of the reference manual at http://www.nsl.com/k/k2/k295/kreflite.pdf. I knew it was a permutation but I applied it incorrectly.
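
For anyone else who trips on the same thing, the application in Python terms: you index the original by the permutation, not the other way around.

  x = "baced"
  p = [1, 0, 2, 4, 3]                           # <x, the grade
  assert "".join(x[i] for i in p) == "abcde"    # x indexed by p gives the sorted string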


yeah, i figured it out eventually. since you seem knowledgeable about K, would you mind taking a look at my second comment in this thread (about inverting permutations) and seeing if it makes sense? thanks!


This may help - I found it confusing at first too: https://twitter.com/kcodetweets/status/1236904643684270080?s...


one cool explanation i found is that grading a permutation list effectively gives you its inverse:

  xs = "cab"

  <xs = [1,2,0] =
  {
    0 → 1
    1 → 2
    2 → 0
  }
  
  <<xs = [2,0,1] =
  {
    0 → 2
    1 → 0
    2 → 1
  }
which is exactly what "ordinal" is supposed to do! it kinda makes sense - grading the indexes given by `<xs` gives a permutation list that maps each index back to its actual position (i.e. [1,2,0]==>[0,1,2]). or at least it seems that way if you stare long enough, would be good if someone could confirm!
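
here's a self-contained python version of that check, in case anyone wants to confirm without staring (grade is a plain python stand-in for `<`):

  grade = lambda xs: [i for (i, _) in sorted(enumerate(xs), key=lambda t: (t[1], t[0]))]
  p = grade("cab")                            # [1, 2, 0]
  q = grade(p)                                # [2, 0, 1]
  assert all(p[q[i]] == i for i in range(3))  # q undoes p, so q is p's inverse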


People are so quick to reject K and APL-style languages for superficial reasons that they never get to the deep and interesting reasons! I am mostly familiar with APL, but I think the things I appreciate and dislike are about the same in K.

One interesting philosophical difference, at least among some APL programmers, is that building abstractions should be avoided. TFA has a hint of that philosophy, in its suggestion that perhaps naming (the root of abstraction) hinders clarity. I think this is definitely worth considering, and it doesn't really have anything to do with the (supposedly) cryptic syntax.

One reason I don't like K/APL-ish vector programming is performance. I once spent some time working with others on a GPU-targeting APL compiler, and I found that some common programming patterns are quite antithetical to good performance. In particular, the use of "nested arrays" (not the same as multidimensional arrays) induces pointer structures that are difficult to do anything with. Such nested arrays are typically necessary when you want to do the equivalent of a "map" operation that does not apply to each of the scalars at the bottom, but perhaps merely the rows of a matrix. Thus, control flow and data are conflated. This is fine conceptually, but makes it difficult to generate high-performance code.

Another concern is that encoding control flow as data requires a lot of memory traffic (unless you have a Sufficiently Smart Compiler; much smarter than I have ever seen). Consider computing the Mandelbrot set, which is essentially a 'while' for each of a bunch of points. An idiomatic APL implementation will often put the while loop on the outside of the loop over the point array, which means it will be written to memory for every iteration. In other languages, it would be more idiomatic to apply the 'while' loop to each point individually, which will then be able to run entirely in registers. You can also do that in APL, but it is normally not idiomatic (and sometimes awkward) to apply complex scalar functions to array elements.
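
A rough Python/numpy sketch of the two styles (illustrative only, not tuned):

  import numpy as np

  def mandel_array_style(c, iters=256):
      # Iteration loop outside the point array: every pass re-reads and
      # re-writes the whole z array from memory, as described above.
      z = np.zeros_like(c)
      counts = np.zeros(c.shape, dtype=int)
      for _ in range(iters):
          alive = np.abs(z) <= 2.0
          z[alive] = z[alive] ** 2 + c[alive]
          counts[alive] += 1
      return counts

  def mandel_scalar_style(c, iters=256):
      # Per-point while loop: in a compiled language this can stay in
      # registers; plain Python won't, but the shape is the point.
      z, n = 0j, 0
      while abs(z) <= 2.0 and n < iters:
          z = z * z + c
          n += 1
      return n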

Just look at this Mandelbrot implementation from Dyalog; they do it in exactly the way I described: https://www.dyalog.com/blog/2014/08/isolated-mandelbrot-set-...

Specifically, the conceptual 'while' loop has been turned into an outer 'for' loop (this is OK because the 'while' loop is always bounded anyway):

       :For cnt :In 1↓⍳256                      ⍝ loop up to 255 times (the size of our color palette)
           escaped←4<coord×+coord               ⍝ mark those that have escaped the Mandelbrot set
           r[escaped/inds]←cnt                  ⍝ set their index in the color palette
           (inds coord)←(~escaped)∘/¨inds coord ⍝ keep those that have not yet escaped
           :If 0∊⍴inds ⋄ :Leave ⋄ :EndIf        ⍝ anything left to do?
           coord←set[inds]+×⍨coord              ⍝ the core Mandelbrot computation... z←(z*2)+c
       :EndFor


There's a q Mandelbrot here:

https://github.com/indiscible/qmandel

And Arthur's version in k (also below):

http://www.kparc.com/z/comp.k

I should convert it to k9 (Shakti)...

    +/~^+/b*b:49{c+(-/x*x;2**/x)}/c:-1.5 -1+(2*!2#w)%w:10


My K is weak, but I think that implementation is also putting the fixed point iteration on the outside, isn't it?


> One interesting philosophical difference, at least among some APL programmers, is that building abstractions should be avoided

Humans need abstractions to think. What programmers need are abstractions that are not black boxes. I.e. can be opened.


What is TFA?

> the use of "nested arrays" (not the same as multidimensional arrays) induces pointer structures that are difficult to do anything with

J doesn't allow nested arrays by default; if you want to create such a structure, you have to use boxes (and then unbox the values inside of them to get at them), which makes it explicit and reduces its usage. AFAIK, k doesn't have nested arrays at all.


This looks cool, seems like it's proprietary and expensive though. Is there anything similar that's free?


https://www.gnu.org/software/apl/

Most likely available through your package manager.


Better:

Free software K interpreter:

https://bitbucket.org/ngn/k/

Arthur's K interpreter isn't actually that expensive for casual use anymore (it's free for most non-commercial use and you can download it off the Shakti website pretty easily), though. Not that I'd recommend it: Arthur's a genius, but nothing's worth using proprietary software for.

What I usually recommend for learning the paradigm is J, largely because it has far better resources than anything else in the space. J's labs are wonderful, and the books Ken Iverson wrote before he died (all available for free) are great both for learning the language and also for learning many other things.

GNU APL is nice, and by no means am I trying to diss it: it's just not the best for learning, in my opinion. Its info pages are really nice though, like most GNU info pages.


There are a lot of books, tutorials, and papers on APL and APL2, which are applicable to GNU APL.


I don't believe so? I started out using APL2; GNU APL seems distinct in ways that are noticeable. Certainly not in a bad way, but I think enough to cause confusion for any newcomer reading said books, tutorials and papers.


The system functions and included workspaces are different, but I haven't had any surprises with the operators so far.


Using / for reduction reminds me of the Bird–Meertens formalism.


The problem is also that you encounter K/APL in these circumstances: hardcore math functions or ad hoc queries. I'd say that most languages look their worst in these circumstances; deeply nested for loops for e.g. 3D math in C aren't exactly instantly clear either.

Once you see e.g. input verification or other more "tedious" parts of programs, things get decidedly less cryptic.

And less optimized/concise, of course. A bit like the people who complain about GUI hello world programs taking 10 lines.


I have the same question for K aficionados as I have for Forth ones: what real-world, human-usable, important software has been written in it? A GUI framework, a web browser, a window manager, a text editor, etc. Anything?

I'm asking because the article is poking at C-like languages, i.e. implying that K is good for general-purpose use. Yet all I hear about is that K is good at multiplying numbers and matrices, which is a pretty limited playground.


Forth:

OpenFirmware (previously seen in Power Macs, pretty much every Sun device until Sun died, and so on).

Canon Cat (greatest text editor of all time, by Jef Raskin, the guy who made the Macintosh)

Forth Has Been to Space (multiple times, but who's keeping score?): https://www.forth.com/resources/space-applications/

OKAD, which was used to create the processor with the lowest power usage per instruction in the entire world.

k:

Important, no, but a substantial volume of stuff. Text editors, GUIs, window managers, operating system completely independent of any other, a pretty important and very expensive database, so on.

k, unlike Forth, came after software was seen as IP. Software as IP has made a lot of people very angry and been widely regarded as a bad move.

k's ancestor, APL, was used in quite a few things quite elegantly, and while k is different, it's not different enough to be a different paradigm. Some examples: first practical electronic mail system, first widely-used electronic mail system (used for Carter's successful Presidential campaign, for example), first worldwide computer network, I could go on.

That k hasn't seen as much groundbreaking work done in it is less because of the language itself and more because of the insane costs, trigger-happy lawyers of kx, and money.

Of course, when you asked this of Forth the other day, you were trolling ( https://news.ycombinator.com/item?id=22319154 ), so I imagine I might be wasting my time.


Sorry, but I'm not the one trolling. Reading up on "Canon Cat", I find

> It had a text-based interface without a mouse, icons, or menus

sigh

The Forth in space thing is just some drivers/firmware for spaceships. I.e. once again nothing that an ordinary user would care about. The APL part is devoid of links to real contemporary software. Not even something of Notepad++ quality...

Please don't waste your time as you don't seem to want to understand my question. Thank you.


Don't know about K, but SimCorp is a financial software company that has its original software built in APL. Of course these days they are also doing a lot in other languages.


Arthur wrote an OS in k with some c. Not public, but geocar can probably talk more about it if he shows up.


How do people do error handling in K? Or is the data typically well-formed in real programs? Not bashing anything, I'm just genuinely curious.


> and you can add 5 to an entire matrix at once as easily as a single number. Why you would want this property remains elusive...

Why would that be elusive, I wonder? MATLAB has been doing it since the 80s, as an example, and that exact "elusive" feature has been used countless times.


Those words were spoken "in character". The idea was that some readers would chuckle at the naivety.


And many times I shake my head at how people miss humor/sarcasm on HN. Got me.


Also Fortran and C++ Eigen matrix math library. Very useful for anything that is non-trivial math.


I totally buy that this is elegant for small number- or list-based things, but since a financial database is named as the flagship product: what does I/O look like in K? Both user input and disk access. How do you make a user interface? How do you do networking?

Is it still as elegant?


My problem with K isn't its syntax, it's its culture: People who are so self-satisfied with their ability to read their own code they refuse to give variables meaningful names, aping mathematical tropes they half-understand at best, because those mathematical equations are only tolerable in the context of a paper where the rest of the lifting is done by natural language prose. They eschew comments without realizing that those comments enable terse expressions by showing you have the basic maturity to express yourself in a way others can understand.

Knuth tried to move us to that state with Literate Programming. How many K programmers use Literate Programming? One? Two? Any?


Elsewhere in this comment section, someone mentioned a HFT firm that extensively used noweb and K.


My hat is off to the people who can handle this kind of thing; I myself cannot.


Do you really think you could not, even after taking a few weeks to study and practice?


Probably not, no.

I could easily see myself becoming dejected after printing out a few cheat sheets and buying some relevant books. I would have attempted some exercises and done whatever toy projects with an increasing feeling of dread that I was simply aping what was before me only to find what I thought I had learned had slipped away in a weekend. Should I get much further than that, I would then try to solve an actual problem I had with it and that would be the end of it as I would stagger into realms of development environments and dependencies to try to get the right libraries in order, I would look for advice online and tentatively ask questions only to be blasted for using the wrong dialect of whatever, or have my questions misconstrued until I could be informed that what I really have is an XY problem, I don't really want what I asked for after all. What a relief to me.

At that point I throw in my hat and go back to what I am used to.

I know myself well enough to know that I just haven't the grit or patience to torment myself with trying to find an application for the befunge du jour and, as such, learning it becomes a Glass Bead Game for me. If my life depended on it, perhaps, but this reminds me too much of the "executable line noise" I escaped: Perl, with a kind of sly terseness I associate with any number of obfuscations as a kind of puzzlebox monument to the "cleverness" of packing something into a single line I have had to slam into while programming.

I lack the vim to tilt at these windmills.


That's a pretty good set of things to visualize if you want to feel defeated before you even get started.

I write K for a living, and J for fun. J is WAY crazier, and I learned it specifically because it was so weird and crazy. I kinda wanted to remember what it was like to be a beginner again.

In all these languages, you can always fall back to doing things the "old way"... They all pretty much support structured and functional programming with loops and functions and whatnot. It's just there are lots and lots of shortcuts, and you get used to them over time.

(Also for what it's worth, the communities around vector languages do tend to be pretty friendly and welcoming...)


> I write K for a living, and J for fun.

Do you have a clear preference for either? If you could choose J at work, would you? Would you write K for fun?

> (Also for what it's worth, the communities around vector languages do tend to be pretty friendly and welcoming...)

If you've got a site / mailing list / IRC channel in mind, I'd like to know about it.


Some context often missing from these "code golf" showcases:

Domain specific languages have syntax optimized towards specific tasks. This often means very compact symbolic expressions which looks completely inscrutable to outsiders but are highly efficient when you know the language. String processing in Perl, pointer arithmetic in C, selectors in jQuery. Nobody can guess what the code does by looking at it, you have to learn the language. But it is worth the investment when the task in question is often used.

So the question is, are you working in a domain where the investment in learning a particular DSL will pay off?

But this context is completely absent in the article. By comparing the K syntax for "reduce" with a for-loop, it assumes the competition is systems languages like C or Go where a for-loop is idiomatic. But then the question is, how often do you use "reduce"-like operations in this domain anyway? Is it really worth optimizing the language towards?

If on the other hand the article admitted K is a domain specific language optimized for numerical processing, then a reasonable comparison would be against Numpy or Matlab, perhaps Haskell, not for-loops.


I don't think he was actually arguing that K is domain-specific, he was just observing that it seems that way to outsiders.

If anything, the most successful applications seem to be with databases (not math), but it's also pretty great for interactive graphics:

https://github.com/JohnEarnest/ok/tree/gh-pages/ike

I work at the same company as the author, and we use K3 for all kinds of things -- system administration, network servers, interactive web apps.

It's a pretty general purpose language. And you can even do old-school loops if you want. It's just very concise.

Also, in K, there's a whole set of one-or-two-character symbols that allow you to express other uses of loops in traditional languages.


author ended article with

> Leaning back in your chair, you think to yourself:

> |/

what does this translate into?


I reversed each of my list of objections.


Roughly:

    let max = list[0];
    for (let i = 0; i < list.length; i++) {
        max = Math.max(list[i], max);
    }
With a|b, you get the maximum between a and b. / is reduction. So, |/list, will be equivalent to list[0]|list[1]|list[2]|...|list[n], i.e. the maximum value in the list.


dyadic | is or, or max. So e.g. 4 = |/1 2 3 4


Seeing this article and the top comment, is this that contentious an issue? I wasn't aware people care that much about Algol vs. APL-ish languages.


His diff was lame, I'm sure the coworker would have gone along with

  Math.max(...list);


That hack isn't guaranteed to work with long lists, since the standard allows a fixed limit on argument count. For example, in Node 12:

  > Math.max(..._.range(0, 1000000))
  Thrown:
  RangeError: Maximum call stack size exceeded


I have fallen in love with this language. How do I replace Java(Script) with this?


I think J is better equipped for web programming.


On the web? Perhaps WebAssembly. Please don't write your JavaScript this way, though.


Disingenuous and unconvincing, like a sales pitch for a product that is actually bad for you.

Yes, that kind of syntax can be wonderfully terse when doing simple arithmetic operations on lists of integers.

Most programming work does not consist of doing simple arithmetic operations on lists of integers.

If you want to convince me that this language is usable for anything but toy problems specially chosen to demonstrate the language's strengths, show me how it can be used to handle an HTTP request with some special cases for certain headers, or to parse a custom date format with variants, or to implement a game with realtime user input.


> Most programming work does not consist of doing simple arithmetic operations on lists of integers.

This^^^.

And even when doing simple arithmetic operations on lists of integers it's hard to read unless you've become very familiar with it.

This is no good.

We actually have an in-house DSL with a symbolic representation that looks very similar to K for our reporting system. It's one of my many missions to gradually phase it out in favour of something with either Python or SQL syntax. This will ease onboarding and drive adoption.

The advantage of a language like Python, for example, is that although I wouldn't say I know it, I can read it and understand what's going on most of the time. Same with SQL or Ruby or various other languages.

This makes it easy for people to understand what something is doing just by reading it, and opens up the possibility of learning along the lines of "How did Susan do that?", look at Susan's code, and see something that's fairly easy to read, understand, and remember, as opposed to a screed of arcane looking symbols.



This is what - I presume - is their implementation of sending an e-mail

E:[u:();e:{a::?[a;x;y];J(x)+#y};cz:{$[#u;e/_`u;]};kx:{u,:,(j,j+#x;k_a);e[k]x};kb:{$[=/k;J j-1 0;];kx""};cx:{kx cc`};cv:{kx@9'`}] A:[w::_W%F;j::k;k:0 0;J:{k::2#0|x&#a};lx:{J j+x};y::V+F0,j;z::1'(V;w[1]$a)],E

U:{(`Z,'!Z).'+x};V:0;i:0;I::`Z,(!Z)(#Z)!i;y:{I .`y};zf:{$[`kt=x;i+:1;^`W`F?x;I . x;. x]};u::F[0]!#Z;W:{U`V,,u,'0;U`W,,(-':1_u,x),'x 1};z:{U`z;1'(V;U[`a]{$[#x;"";y]}'$!Z;3)} Z:`to`cc`subj`!A$/:4#()

This looks like an exercise in obtuseness.


A bit more convincing: http://www.kparc.com/edit.k

Properly formatted and with comments, it almost looks readable.

The insistence on using one- or two-letter identifiers that you then have to look up the meaning of in a comment makes it look pretty childish, though. "Look ma, it's still super terse even when doing something real!" Yes kid, it is, when you refuse to do the one obvious thing that could actually make it readable.


Single-letter identifiers are a natural thing to object to when coming from other language paradigms, i.e. nearly every programming background out there, but this is a category error. What seems ridiculous in one context can be sensible in another. The objection turns out to be parochial.

It reminds me of how people think that parentheses are a significant aspect of Lisp, when in practice they're not. The parens look grotesque at first, but once past the novice stage they fade completely into the background and impose no cognitive overhead. Because of their regularity, the parens free you to program without thinking about syntax, allowing you to think more about the problem at hand—a highly liberating experience—yet to someone who hasn't worked with the language long enough to get that experience into muscle memory, they look absurd.

Short identifiers are preferred in APL-style languages because longer ones obscure the code. As you get familiar with the idioms of the language, you can pick them out in visual chunks to grok what a program is doing. You're not reading the program operator-by-operator, but phrase-by-phrase.

That wouldn't work if the programs used longer identifiers, because then the programs would consist mostly of identifiers, making it harder to scan the phrases, making it harder to grok the code. That's why people don't do it. You can simulate that in more familiar languages:

  for (ridiculously_long_name = 0; ridiculously_long_name < 100; ridiculously_long_name++)
No one writes like that because beyond a certain length, the identifiers distort the code. That's what's going on in APL languages too; it's just that the "certain length" turns out to be 1 or 2.

In other words, the reason why APL-style programs adopt this style is not because the programmers are childish, but because the ergonomics of the language make it optimal. This is hard to understand, but only because mainstream languages have completely different ergonomics.


Writing APL (or J/K) with the ascii character set is like coding C in Morse code.

I wrote plenty of J code for my master's in particle physics, and the only way I could use it without going insane was to write a noweb filter for literate programming, using LaTeX symbols for the functions. The woven document was close to mathematics; the actual source code was editable source code.


One or two characters for identifiers is optimal? What kind of programs are being written with APL/K? At some point you'll have more concepts in your program than available letters, unless you're dealing with tiny programs.


The more experienced programmers in this thread can answer this better, but I believe top-level functions tend to get longer names.


Are APL programs normally clumped together, as they are commonly presented to show off their terseness, or do they normally have comments and line breaks to indicate structure?


I tend to agree: falls into the category of "just because you can, doesn't mean you should."

Actually, that's perhaps a bit harsh.

I have some pretty obscure hobbies, and there's certainly nothing wrong with doing this sort of thing if you want to. Still, having done so it's a bit rich to then look askance at the rest of us like we're all idiots for not choosing to go down a similar route with our own systems.


I suspect everyone would love K if they had an extra 30 IQ points to work with.

The reason we have Python, Javascript, and Java is because most people (including me) just aren't bright enough to work fluently in these terse APL-alike idioms.

They fail because it's hard for most people to keep that many terse symbols and symbolic relationships in memory at the same time, without English labelling and all the other usual memory aids.

(In fact basic functional programming is a bit of a stretch for the average commercial developer.)

An interesting hypothetical is what a language would look like if it needed an extra 60 IQ points...


In the 70's, in high school, I taught myself APL (mostly TOPS-10 APL SF and a bit of IBM APL SV) well enough to grasp the fundamental functional approach and write real programs (and the obligatory game of life in 1 line :-)) At the time I was also using PL/I and the difference was mind expanding.

I'm quite sure I don't have an extra 30 IQ points. Mostly I was just mildly obsessed (and these days I find persistence is a workable substitute).

It is incredibly valuable to have experience across the range of programming paradigms.


IQ points aren't really anything to do with it: I suspect most programmers, and certainly those who are average or better, could learn to be proficient with K if they invested the time.

The real issue is knowing your userbase.

Example: we have this in-house DSL that's been around for years and uses a symbolic representation not a million miles from K that we use for automating reports. The problem is it's a pain to onboard people who, whilst they're bright enough, aren't programmers and in any case have many other responsibilities to attend to. Giving them something more readable rather than arcane makes life easier for everyone.

And fundamentally we want to make it easy for everyone, not just the top quartile or the top 10%, to quickly crank out high quality automated reports across different market sectors, geos, etc., as well as to modify and maintain these reports into the future.




So how would this look if it also needed to handle triangles? And we want textures, so we need texture coordinates and a way to specify per-object materials. Also, the original code just uses bounding spheres; we probably want something better like a kd-tree.

In the original code[1] these changes are trivial to make, even for someone new to the code base. How would they look in k? Like others, I struggle to see past the terseness, but maybe it would make more sense if I could see the program evolve.

[1]: https://web.archive.org/web/20070822025008/http://www.ffcons...


Graphical spreadsheet application:

http://www.nsl.com/papers/spreadsheet.htm


With 5 to 10 lines of comment per one line of code, it starts to become readable. You're still left constantly wondering "what do a and x mean in this context, again?"

3/10, would definitely not want to use for any kind of real work.


John Earnest links to his tutorial in the Stages of Denial post; if you click it and give it a read, you'll probably understand much more easily.

But you shouldn't feel pressured to use k: people who advocate for k play the role of a Cassandra who can drown her tears in dollar bills.


A Cassandra... or a victim of Stockholm Syndrome.


Is this like a code golf example, or what you'd expect "good" K to look like?


Which part do you not understand? In any case, raya.k is quite splendid. Did you check that one out? It's very clean and quite simple. (I linked to it right below that one.)


The question doesn't imply failure to understand. I'm also curious whether this is considered a good example of k code, which is clear and readable to people proficient in k.


Literally any of it. I don't know Perl, but I can read and modify well-written Perl scripts in a pinch with maybe a little googling.

I know several major languages and this program looks like noise to me. Is it standard to give functions terrible names, and to not pull out constants to make things understandable? I'm unclear what semicolons, colons, and commas do here, or why the lines are so long.


> I know several major languages and this program looks like noise to me.

That's because it is. As pointed out repeatedly, one of the main goals of the program is to minimize the program size, in terms of bytes.

Thus the source code is essentially compressed code and, as we know, compression makes things more random and hence more noise-like.


> Thus the source code is essentially compressed code and, as we know, compression makes things more random and hence more noise-like.

This is wrong. k programs aren't really random at all. It's not even that complicated; it has fewer than thirty primitives. It's a notation for representing ideas, and it excels at that.

Dyalog APL is similarly terse, although less so, and the goal of it isn't to be terse. This criticism still gets thrown at it, and is representative of an unwillingness to accept something that mathematics has known for years: notation isn't golfing, it's a convenient way to express thought.


> This is wrong. k programs aren't really random at all.

The k source code posted around here is certainly more random than a similar byte count of, say, Java code. Try zipping 1000 bytes of either.

> notation isn't golfing, it's a convenient way to express thought

Sure. But up to a point. And it seems to go beyond pure notation. Check out the interpreter posted here: https://bitbucket.org/ngn/k/

It _requires_ a readme file to explain the file names. That's not convenient by any sensible measure. And the same goes for variable and function names. Take the xml parser[1], "xd". How would a new user know what this did without checking the definition?

[1]: https://a.kx.com/a/k/examples/xml.k


It doesn't "require" a README file to explain anything. cc has a man page; when was the last time you checked it?

For example: https://news.ycombinator.com/item?id=22009876

> How would a new user know what this did without checking the definition?

Why would somebody want to know what it does without checking the definition? The definition isn't complex, and the program can be read and kept in your head in the span of a minute. It's also well-commented.


"one of the main goals of the program is to minimize the program size, in terms of bytes." -> so, you're claiming that code-golf is actually the intended goal?

There seems to be a fundamental disconnect of values here. You're optimizing for something that's nearly insignificant (to me) at the expense of qualities that actually matter to me and almost everyone else.

In essence, if that code could be modified to take twice as many bytes but allow other programmers to save 10 seconds when figuring out what exactly 'k' (the first symbol introduced in that raya.k example) means in this context, then it should and even must be done. If there's a tradeoff between program size in bytes and readability, then there's no question that program size should be sacrificed for even small improvements in readability.

That raya.k is an excellent example - it's a nice, terse proof of concept, but it's not finished until it's been "ungolfed" for maximum readability (and likely at least twice the code size), and it's currently not usable for illustrating the language unless you intentionally want to pick a bad, suboptimal (i.e. heavily optimized for the wrong metric at the expense of important things) example as an illustration. Terseness is nice to have if all other things are equal, but sacrificing readability to improve terseness is ridiculously maladaptive.


Indeed, my feeling is also that K optimizes for the wrong things, at least as a general purpose language.


    K       +/!100
    Python  sum(range(100))
    Haskell sum [0..99]
    Rust    (0..100).sum()
    Ruby    (0..99).sum
There are plenty of ultra-concise languages for toy problems like these; they are called ‘golfing languages’. Ultimately it's a bit pointless since most of the time people aren't playing code golf.

(The issue with setting everything to obscure ultra-compressed keywords isn't that it's not possible to learn those keywords, but that most of the time programming is done over specific and customized domains with larger and problem-specific vocabularies.)


Except in K all the idioms are laid flat. As pointed out in the article, the K code is the equivalent of range(100).reduce(operator.add), it encapsulates the entire meaning, there is no need for an intermediate function definition because that would be longer than the code itself.
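
(As a concrete check, reduce lives in functools in Python 3; a runnable spelling of the same fold, my sketch rather than the article's:)

  from functools import reduce
  import operator

  total = reduce(operator.add, range(100))  # fold + over 0, 1, ..., 99
  assert total == sum(range(100)) == 4950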


Well I guess I agree with this statement, except that I don't think K is primarily used for solving toy problems.


That's what I'm wondering with K! What happens once I want to abstract over the basic operations?


The article's point is a meta-point. Congratulations on actually being part of the meme in the article.


The article's point is dishonest smugassery. Congratulations on convincing people to avoid K for its community, if not the language itself.


If you are stuck in a boring language and want something more concise, you don’t have to give up legibility by going to K. There are plenty of languages where you are not expected to use “for” loops.

Python/numpy is most accessible and probably closest to “instant gratification”. Mathematica is great for, you know, mathematics, but you’ll need a license. And there is always Haskell.


k is perfectly readable. That's like, one of the biggest points of the post. The post mocks the mindset featured within your comment.

None of the languages you listed are quite as readable as k to someone who knows it.

All of the languages you listed are significantly slower and feature none of the benefits of a concise notation, even with fast.ai's Python style guide made to imitate Arthur Whitney.

http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...


> k is perfectly readable. That's like, one of the biggest points of the post. The post mocks the mindset featured within your comment.

It mocks it, but it doesn't present a convincing argument. Switching from an imperative language to K is making several changes at once, so we can't really conclude anything about the details of which changes are good or bad. The closest I've seen to a controlled experiment is something like: on a Haskell codebase, switch between using symbolic names like "^>>" and alphabetic names like "promap". And my experience has been, in a real-world mixed-ability team, that overall the alphabetic names had the advantage; if this isn't so for K, then we should try to understand the reasons why. The OP gestures at this, but unconvincingly: why yes, "ordinal" is much clearer than "<<", thank you very much. "Ordinal" may mean different things in different contexts, but so too does "<<" for anyone who knows other programming languages; a name that can help with intuition is a good thing even when it isn't perfect.

I'm very much a believer in concise notation (more than most people I've worked with), but the advantages of having a name that can be unambiguously spoken are large enough to outweigh a small constant factor. I sketched a mainstream-FP implementation of the famous 1-liner APL game of life, which became something like three lines (not counting implementing the equivalents of the APL operators, which require slightly fancy polymorphism to do the right thing on vectors and matrices but are otherwise not that special) that could actually be read and talked about. That seemed like a good tradeoff.
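
(For illustration, a numpy sketch of such a step; this is a reconstruction of the idea, not the sketch mentioned above. The APL one-liner's rotations become np.roll:)

  import numpy as np

  def life_step(board):
      # board: 2-D boolean array on a torus (edges wrap, as in the APL version).
      # Count the eight neighbors by summing shifted copies of the board.
      neighbors = sum(np.roll(np.roll(board, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0))
      # Alive next step: exactly 3 neighbors, or alive now with exactly 2.
      return (neighbors == 3) | (board & (neighbors == 2))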

One can - and should - write code that's much denser than the current mainstream style. I do think the industry massively underestimates the value that comes from fitting a function or class on a single page. But I remain unconvinced that it's worth going all the way to code that can't be communicated verbally.


> I'm very much a believer in concise notation (more than most people I've worked with), but the advantages of having a name that can be unambiguously spoken are large enough to outweigh a small constant factor.

I don't believe this holds up to scrutiny. APL as a language was made to be spoken, and it seems to have succeeded in that. Alan Perlis briefly touched on that in a great paper,[1] and just about everyone who's worked with the language and its derivatives agrees that your statement isn't true.

The idea that APL and its derivatives can't be "read and talked about" is frankly ignoring reality.

For example, here's [2] [3] [4] some videos disproving your statement, the first two being APL, the last being k itself.

For extra bonus points, see Aaron Hsu's talks [5]; pick any of them, they're all pretty good, and most have a segment in them explaining how that viewpoint is wrong.

[1] https://www.jsoftware.com/papers/perlis77.htm

[2] https://www.youtube.com/watch?v=a9xAKttWgP4

[3] https://www.youtube.com/watch?v=DmT80OseAGs

[4] https://www.youtube.com/watch?v=kTrOg19gzP4

[5] https://www.youtube.com/results?search_query=aaron+hsu


> I don't believe this holds up to scrutiny. APL as a language was made to be spoken, and it seems to have succeeded in that. Alan Perlis briefly touched on that in a great paper,[1] and just about everyone who's worked with the language and its derivatives agrees that your statement isn't true.

The people who have worked with the language are a small and highly specialised minority of programmers, unusual in many respects. I haven't worked with APL, but I have worked with symbol-heavy Haskell-style code, and I've found that even if these operators theoretically had standardised, pronounceable names, in practice being able to talk about them was a genuine problem. If there's some special difference between APL and symbolic Haskell libraries, what is that difference?


Alan Perlis was a Turing Award winner who helped standardize the most influential language of all time, ALGOL, from which Go, Python, C, and most other languages descend. In many respects, he was closer to a Go programmer than a Haskell programmer.

Haskell's symbols aren't notation, they're just scribbles. Miranda is a better comparison, but Miranda too doesn't lean hard enough.

Ken Iverson literally developed APL as a way to give better lectures at Harvard. It was a verbal and handwritten language long before it was typed. It excels at it, because it was designed for it.


> But I remain unconvinced that it's worth going all the way to code that can't be communicated verbally.

Symbols can be communicated verbally. You can make up new symbols and give them a verbal meaning.

I think the big issue here is that programmers are lazy and whine about anything they aren't able to pick up within a weekend with their prior experience and some hard staring. "I looked and I didn't get it so it's unreadable." By the same token, Greek, Russian, Japanese, Korean, and many many other languages must be unreadable, along with mathematics.

Meanwhile, I'm happy that we have concise syntax for mathematics, even if you have to go out of your way to learn to read all these fancy symbols. Notation as a tool of thought and all that.


> I think the big issue here is that programmers are lazy and whine about anything they aren't able to pick up within a weekend with their prior experience and some hard staring. "I looked and I didn't get it so it's unreadable." By the same token, Greek, Russian, Japanese, Korean, and many many other languages must be unreadable, along with mathematics.

Language design would be easy if it weren't for those pesky programmers :). I don't think you're wrong, exactly, but I'd note that few companies fully support people learning any of those (rather they rely on hiring people who studied them in university). Most programmers, quite rightly, aren't willing to commit a significant chunk of personal time to learning something with little immediate benefit, and most employers won't adopt a language that requires a significant training programme, and I'm not convinced they're wrong either.


> Language design would be easy if it weren't for those pesky programmers :).

What kind of a language would you make if time-to-learn-it were not a factor in it?

I've been thinking about language design for years but I still haven't arrived at anything specific. There are ideas floating in the air, but I haven't had the time to try them out in practice.

I think it'd be nice to see efforts like this, where language designers drop on the floor every concern about familiarity or similarity to existing languages and just find out what results in the most powerful language (in general, or in a particular domain).

It's a very difficult problem.

> Most programmers, quite rightly, aren't willing to commit a significant chunk of personal time to learning something with little immediate benefit, and most employers won't adopt a language that requires a significant training programme, and I'm not convinced they're wrong either.

They might be pragmatic, but I feel like we're kinda stuck in local minima on too many fronts, thanks to such shortsightedness.

But don't you think it's mind-boggling, and perhaps even a little hypocritical, that we spend up to around two decades of our lives in education (with a bunch of fluff that somebody always argues might be useful some day), and then after that, learning a new tool that would require more than a few weekends is a no-no? I find it quite absurd.

But yeah I'm not really expecting companies to care, even though I think they should. To be honest I don't care too much about companies; from what I've seen, the vast majority of them are just crap ;) Thankfully there's open source.

Long term, I think something about the system needs to change. We should have more flexibility to train people on demand without putting all the burden and cost on the individual alone (or their company). For all the talk about lifetime learning, I don't think we've done much yet.


> I think it'd be nice to see efforts like this, where language designers drop on the floor every concern about familiarity or similarity to existing languages and just find out what results in the most powerful language (in general, or in a particular domain).

To my mind the purpose of a programming language is to bridge the gap between what a human can understand and what the computer can do. So I don't think you can draw a clear line between ease of learning and the value of the language; there will always be cases where you need to talk to a domain expert, so it needs to be easy to translate between the programming language and the human domain, which means there's a lot of value in familiarity.

I think the ideal language would look much like the way people explain what they do to other people (kind of an extension of the idea that the best programming languages look like pseudocode). Most fields of human endeavour rely on a limited amount of domain-specific jargon, but not a completely different alphabet, mathematics being a notable exception - if notation were truly a powerful tool for thought in general, wouldn't we expect to see more of it in other complex professions? Indeed I think a lot of existing language design has gone wrong by sticking far too closely to mathematics; for example, operator precedence massively complicates a language, but is really only used to support arithmetic, which is really quite a niche case. (And so it makes sense that APL-family languages might be successful as domain languages for mathematics, but I don't ever expect them to break out beyond that niche).

Python rightly has a good reputation syntax wise, though it persists with mathematical-style function application, overly cumbersome keyword constructs, and awkward special-casing of operators; the code I've seen that's looked most like English sentences (in shape) is probably Haskell or Scala when written in an operator-light style, with lines formed of space-separated words (even if the words themselves are unfamiliar) - Scala supports more of a "subject verb object" style, but sometimes requires brackets.

In terms of what's missing, I think a lot of the tools we use to structure text on a larger scale aren't there - imagine having to write an instruction manual using only plain text with no headings or chapters or the like. But on the small scale, yogothos has got to me; I don't think we should quite go all the way to lisp, but we do need less syntax rather than more. I'm thinking Python-style grouping (indentation and colons), Haskell-style function application (spaces), Smalltalk-style control flow (none). Most syntactic constructs take expressions (including e.g. method bodies), a block acts as an expression (evaluating to its last line). A few syntactic shortcuts, only where they're generally applicable enough to be worth the overhead: Scala's _ for lambdas, maybe Haskell's $, but not a lot else; I think it's worth using symbols for syntax, stuff that breaks the normal rules of the language, precisely to distinguish those things from regular well-behaved functions.

Sounds pretty uninspired, middle-of-the-road? Yeah, it is; I really don't think the current state of the art in syntax is that far away from optimal. If I were trying to design a custom language I'd be a lot more concerned about implementing a good record system, about effect systems, about stage polymorphism.

> But don't you think it's mind boggling, and perhaps even a little hypocritical, that we spend up to around two decades of our lives in education (with a bunch of fluff that somebody always argues might be useful some day). And then after that, learning a new tool that would require more than a few weekends is no-no? I find it quite absurd.

Yeah. TBH our whole industry's attitude towards experience and career history is a mess; I highly doubt that an unbroken chain of programming jobs is the best possible way to gain skills, but your CV will get the side-eye if it shows anything else. I wonder how much of this is just path dependence, accidents of history, and no-one wanting to be the first employer to do something radical.

> But yeah I'm not really expecting companies to care, even though I think they should. To be honest I don't care too much about companies; from what I've seen, the vast majority of them are just crap ;) Thankfully there's open source.

Open source relies pretty heavily on companies, in my experience - either because a company decides to outright work on open source as a part of its primary business, or because it tolerates an employee who wants to contribute in doing so. You get some contributions from students, which are, uh, variable, and very occasionally a government grant, but most programming is being funded by corporations, by one route or another.


> The post mocks the mindset featured within your comment.

Actually, the post pretends you know nothing about functional programming and then presents K as the solution–but, as the grandparent points out, it’s not the only one. The post actually has no comment on this.

(I should also note that smugly mocking a mindset by reducing it to a caricature might help explain why people are exasperated by the post’s patronizing tone.)


The post sort of comments on that by claiming that K's somewhat cryptic-looking operator-style functions are more legible than named counterparts.

> Does giving a K idiom a name make it clearer, or does it obscure what is actually happening?

  sum:     +/
  raze:    ,/
  ordinal: <<
> The word “ordinal” can mean anything, but the composition << has exactly one meaning. (That one took a while to click, but it was satisfying when it did.)

(Didn’t click for me but I also didn’t spend a while on it, whatever.)
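
(For anyone else it didn't click for: << is grade applied to its own result, which turns sort order into each element's rank. A rough Python sketch, with function names of my own choosing:)

  def grade(xs):
      # Indices that would sort xs ascending; K's < ("grade up").
      return sorted(range(len(xs)), key=lambda i: xs[i])

  def ordinal(xs):
      # Grade of the grade: each element's rank in sorted order; K's <<.
      return grade(grade(xs))

  print(grade([30, 10, 20]))    # [1, 2, 0]
  print(ordinal([30, 10, 20]))  # [2, 0, 1]: 30 is largest, so it gets rank 2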

However, one should note that it's not like other functional languages don't have their share of cryptic operators, intuitiveness up to debate of course. Here's a list for Mathematica: https://mathematica.stackexchange.com/a/25616

At the end of the day this approach doesn't scale very well, and it looks best on very simple examples like +/!100.

Anyway, I'll take my sum(range(100)).


everything any of us work with is a cryptic operator, if you think about it...


> k is perfectly readable [...] to someone who knows it

A great example of a tautology. Every language in existence passes this test...

The real test is a language that is readable by those who don’t know it.


Lots of OOP boilerplate makes absolutely no sense to me because I have never had to write such code. I would consider Java and C# unreadable compared to an array language.


Boilerplate can be stripped away, though, while you can't pull meaning out of a symbol without knowing what the symbol is.


Sure you can — from repeated usage of said symbol in context. That’s how language acquisition works in toddlers.

Not saying that it’s a preferred way, though.


I would compare trying to pick up K as trying to figure out one of the clicking languages when you know Spanish, while going between most languages is like going between Spanish and Italian. You can kind of guess your way through in the latter case, while it's significantly harder to do the former unless you actually sit down and learn the language formally.


Chomsky has an idea called "The poverty of the stimulus", where he attempts to prove his ideas about deep syntax (or whatever he calls it) by handwaving that toddlers can't possibly learn language by merely being immersed in it for a few years.

I'm not impressed with the notion as applied to toddlers, but I think it applies fairly well to learning what variables mean in a codebase. Especially one written to be terser-than-terse.


Having used C++ for 25+ years, I know it better than I wish I did. I have also used Clojure professionally for 5 years now, and I find it much more readable, as in there's much less cognitive load when figuring out what something does.

It is not just about abstraction level either: Python gives me intense discomfort.

I have not used K, but quite like the way it looks.


> Every language in existence passes this test...

That's not true. Languages vary tremendously in how readable they are to people that know them. Especially if you set the bar for "knowing" at a consistent number of hours invested.

A language that's total noise at hour zero could also be the easiest to read at hour 20, and 40, and 100, and 1000...

When a language is something you're supposed to be doing real work in, as a core part of your job, it's perfectly reasonable to require some amount of training before you're set loose.


If you don't know the language, you won't be writing, editing or improving code in it anyway. Your test is therefore irrelevant for programming languages.


If people don't know a language, yet find it readable, they are more likely to end up getting involved in doing those things.

Poor readability is not a good way to keep people out; you're not selecting for talent.


If a language is readable up front but leaves those proficient in it 30% less productive than they could be in one that wasn't, even the best programmer will be mediocre.

Further, the best have had a long history of finding APL and derivatives appealing: Ken Thompson? Liked APL. Alan Perlis? Loved APL. Rob Pike? Has spent years on an APL interpreter.

APL isn't lacking fantastic programmers.

k has shipped with a Python-like sugar layer for years now (q). Readable to most, but without the notational value, much is lost.

Q'Nial (one of my favorite programming languages) is readable by most. It's very verbose, but by all means an APL-derivative. It's not as useful a tool as one with notational benefit, and never attracted much interest.

Anyone close-minded enough to ignore something entirely because they don't understand it at first isn't likely "top talent."


You're overestimating how readable any programming language is to non-programmers.


With decent identifiers you can kind of guess the purpose of things without understanding anything else about it. A file with words such as "latitude" and "coordinate" and "distance" in it is probably something to do with maps, for example.


Non-programmers making sense of code in this way is extremely rare. I don't think this is a priority in programming language design, the same way that making dashboards accessible to non-pilots is not a priority in airplane design. That would certainly be a poor reason to sacrifice major improvements in airplane efficiency or safety. Designing simpler dashboards for student pilots as a learning device is of course a different matter.


Perhaps, but passing this test implies a lower learning curve, which is extremely relevant for programming languages.


It really doesn't. k's ancestors were taught to high school students and administrators in incredibly short spans of time. Here's an anecdote from Kenneth Iverson about a man who learned APL in two weeks, completely alone, and the results after he taught it to his students:

My daughter Janet attended Swarthmore High School, and recommended Rudy Amann (head of the math department) as an excellent teacher. I therefore approached him with a proposal that we put an APL terminal in his school as a tool for teaching mathematics, suggesting that he first spend the summer with the APL group to assess the matter.

Rudy responded that he could spend only two weeks, which he did. I gave him an office with a terminal (and the Calculus text in APL that I had written after our earlier experiment with high school teachers), and invited him to come to me or anyone in the group with questions. Since he never stirred from his office, I despaired, but at the end of the two weeks he announced that he wished to go ahead with the project.

Rudy was pleased with the results, and told me of canvassing those of his students who went on to college, finding that they were pleased with the preparation he had given them. One thing he had done was to use some of the final two “review” weeks to show them the translation from things like +/ to the sigma notation they would encounter in college.

Here's another where they taught it to high school teachers, with a direct explanation as to how students responded:

I believed that APL could be used in teaching, and Adin said that to test the point we must take a text used in the State school system, and try to teach the material in it. He further proposed that we invite active high school teachers.

We hired six for the summer, with the plan that two (nuns from a local school, who could provide a classroom in which we supplied a computer [typewriter] terminal) would do the teaching, while the other four (with a two-week head start) would write material.

To our surprise, the two teachers worked at the blackboard in their accustomed manner, except that they used a mixture of APL and conventional notation. Only when they and the class had worked out a program for some matter in the text would they call on some (eager) volunteer to use the terminal. The printed result was then examined; if it did not give the expected result, they returned to the blackboard to refine it.

There were also surprises in the writing. Although the great utility of matrices was recognized (as in a 3-by-2 to represent a triangle), there was a great reluctance to use them because the concept was considered to be too difficult.

Linda Alvord said to introduce the matrix as an outer product — an idea that the rest of us thought outrageous, until Linda pointed out that the kids already knew the idea from familiar addition and multiplication tables.

Finally, it was this interest in teaching that led us to recruit Paul Berry, after seeing his Pretending to Have (or to Be) a Computer as a Strategy in Teaching when it appeared in Harvard Educational Review, 34 (1964), pp. 383-401.

(From KEI's delightful but sadly unfinished autobiography: https://www.jsoftware.com/papers/autobio.htm)


So then, why is K less popular than pretty much any other language? Because people are stupid? Uninformed? There's a lack of PR-minded people working with K? What exactly is your theory? It's funny that a language this flawless is slightly less popular than S or D or pretty much any other one-letter language. There must be something that makes it unappealing for the masses of programmers - what is it?


Masses love javascript. Just saying.

More seriously, core language quality is not the primary driver of language choice.

Otherwise we'd all be using something with S-expressions, Hindley-Milner, dependent and linear types. ducks


Javascript must've done _something_ right, no? Yes I know it's bad from "language design" perspective, but we all know what it did right: it was embedded out-of-the-box on a very popular platform (the browser). So, there is an explanation.

Do note that I didn't claim "popularity == quality". I'm not even claiming K is bad!!! Just that it's strange for something so clearly superior to everything else to be such a niche language. Surely it must have downsides...


Why is it strange? Think of languages like Lisp, Smalltalk, Haskell. You may not find them on top of TIOBE, but their innovations do trickle down to mainstream, indicating that the designers of mainstream languages find them worthwhile.

Array languages are very much poised to do the same, if they did not already: numpy is essentially a poor (and verbose) man's array language embedded in python. Or consider Matlab.
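
(A small illustration of that point, with made-up numbers: typical numpy code already works on whole arrays with no explicit loops, just more verbosely than an APL-family language would:)

  import numpy as np

  prices = np.array([101.2, 101.7, 101.5, 102.1])  # illustrative data
  returns = np.diff(prices) / prices[:-1]          # elementwise, loop-free
  print(returns.cumsum())                          # running sum, an APL-style scan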


> "A LISP programmer knows the value of everything and the cost of nothing."

Here, a very old quote that tells you directly what is wrong with FP - the performance on old hardware sucked. For a very long while compilers/interpreters were not good enough.

The cost/benefit equation changed in recent years, and sure enough, FP ideas are becoming mainstream.

Again, I'm not saying that array languages don't have fundamentally great ideas. I'm saying that articles like this one do them no favor - they're just smug rants. Show the world you understand the downsides, and you get a better chance of promoting the upsides.


(k is competitive with C by most measures, and most k programs are faster than the equivalent C programs.)


k costs $20,000 per-core (less accurate now than it used to be but I'm pretty sure it's still true for commercial uses). It has a billion-dollar company based entirely around it (Kx/FD) and a smaller multimillion one, so it worked out well, I guess.

J is free, but J has never had an advertising budget, and was only freed recently.

APL's current leading implementation is really bad in comparison to how nice the language used to be, and just as proprietary. It was also significantly nicer to input back when APL keyboards existed and typeballs did as well.

Very few people strike gold and then have the willingness to not sell it at market-rate. Unfortunately, most people implementing APL are very aware of the rate. And it's high.


A language doesn't cost... the environment might. Why did nobody pick up on the ideas? C# is not inspired by K, Go is not, Swift is not, Rust is not, Clojure is not... I mean, take any big or small company that decided "we need a new programming language" - I'm not aware of many taking inspiration from K.

I get it, there are people who love K, and are productive in it. And I'm not even claiming the ideas of K are inherently bad! But you know, when the rest of the world "doesn't get it", _maybe_ the reason is not that the rest of the world is dumb? Articles like this one do K no favors, IMO; they might "feel good", but they don't prove that K programmers are superior beings who reached enlightenment - they prove the exact opposite: a lack of understanding of the rest of the world. It's fine to say "I'm quirky and it's fine, this makes me special". It's wrong to say "The rest of the world is quirky, they don't share my niche preferences".


Go was, by the author's own admission, a language for the lowest common denominator (the computer science undergrad). In many ways, the computer science undergrad is less competent than someone with no formal backing at all: it takes less time to teach a new skillset to a blank slate than to repair the damage done to undergrads. Rob Pike's hobby is working on an APL interpreter.

C# was Microsoft's answer to Java after getting sued for cloning Java.

Rust is by C++ devs, for C++ devs.

Swift was Apple's successor to Objective-C.

Clojure is another language doing the "Lisp but with a bootstrapped ecosystem" thing, by a guy who had been active in the Java, .net, and CL communities.

All languages build on something, and when the mainstay languages are all ripping off ALGOL-60, ALGOL-60 clone after ALGOL-60 clone is what you will get.

Programming languages are just now starting to be influenced by Prolog, despite the benefits of Prolog for certain tasks being apparent; the field's current results can be traced roughly to UT Austin.

The point of the article was not to claim anything about the superiority of k programmers. It was to point out, in a comical way, that the rest of the world is extremely close-minded toward anything with different syntax, and to demonstrate that a common criticism (of which there are many, 99% of them unfounded, as you can see by checking any thread on Hacker News that even briefly touches upon k or APL) is hypocritical. I think it did a great job at that.

It was a reference to the Kübler-Ross model, which does seem to apply to people who find APL/J/K at some point.


> But you know, when the rest of the world "doesn't get it", _maybe_ the reason is not that the rest of the world is dumb?

Look at the kind of replies you get on every hn thread on APL/J/K - the majority of the negative comments around readability are emotional/gut level responses.

People aren’t trying to understand K examples for a couple of hours and failing... they are making snap judgements based on their experiences in other languages.

That is hard to overcome.


> So then, why is K less popular than pretty much any other language?

K is pretty polarizing and goes against many established “best practices” we get hammered into us every day as software devs.

Just look at the knee-jerk negative reactions on hn posts every time the APL family is mentioned.

We tried it in my team and a lot of people who were pretty productive in it still hated the experience.


Chance.

Or, to put it another way: history.


Uh, no? I read a lot of code, much of which is in languages I cannot write. But I can often get the gist of what it's doing, and this in itself is quite a useful thing to be able to do.


For you, not for anyone actually working with the language. Which again points to the test being entirely irrelevant for the stated purpose.

Some random high school student probably can't read your Swift. Sure, it might help them out somehow with the Excel functions they're needing, but it doesn't change whether or not Swift is a good or bad language, or whether the test actually means anything significant.


I think a random high school student who has just finished AP Computer Science should be able to read my Swift, at least when the task itself is not complicated. Maybe not all of it, but I think it's a failure if they can't get the gist of it. (Interesting anecdote: I have gotten emails from people who have translated some of my Swift code, which talked to an API endpoint I reverse-engineered, to their platform of choice. They knew what a GET request is, and what they were familiar with was JavaScript or Kotlin or whatever; but they were able to make use of code written in a language they couldn't write. I think that's really great.)


You added more criteria to this high school student. Almost every one of them will have to use and program spreadsheets, few learn to program using an ALGOL-like language.

The idea you put forth disqualifies any programming language that isn't derivative of ALGOL: it's exactly the sort of thing used against Lisp dialects, and many other languages that have shown themselves to be quite wonderful ("What's cdr? That's stupid!" "You're telling me you use color instead of operators? Malarkey!" "What in Hell is Reverse Polish Notation?! This is America!").

APL (and derivatives, but using APL as it's the chief example of the paradigm and because if you know APL, k is readable) doesn't require a semester in a classroom, it requires a week or two and a book.

(Question: when did they start teaching AP Computer Science?)


Somewhat; there's a number of things about K which make it uniquely difficult to understand. One is of course that it is unpopular and nobody has really seen it before: while this isn't an inherent issue with the language, it is a practical one, since K exists in a world where other programming languages exist and people with experience in those languages exist. Something that can be understood by more of them is generally a nice feature to have. K's style of short identifiers makes it even harder to be accessible: while many non-programmers can kind of guess the purpose of some programs from these, K makes this essentially impossible. I would have to disagree with your assertion that APL's derivatives are fundamentally better and/or faster to learn actual computer science (note: AP Computer Science–which I think started in the '80s?–does not actually teach this very well, either). I think they're probably comparable, although it might be easier to get some of the numerical stuff because many students might be familiar with math.


> I would have to disagree with your assertion that APL's derivatives are fundamentally better and/or faster to learn actual computer science

Have you tried learning any of them? I think this is an unfair thing to say without having at least given it a try.


"a random high school student who has just finished AP Computer Science"

This probably assumes your conclusion, since AP computer science probably teaches one style of programming rather than another.


"Mathematica is great for, you know, mathematics, but you’ll need a license."

Incidentally, if you have a relatively recent Raspberry Pi you already have a Mathematica license.


What does "legibility" really mean? Presumably it means something like the ease of going from looking at code to understanding what it does and how, and having a mental model to reason about the code and what modifying it will do.

How many times have you opened a Java/C#/C++ project and opened up class file after class file trying to understand how the whole thing is pieced together, faced with a monstrous system and, days later, still no understanding of how the system works at a macro level and how everything is connected.

How many times have you started browsing a GitHub repository looking for where something is implemented, opening file after file only to give up an hour later with absolutely no understanding of the organization of the system or being able to predict where any particular functionality is implemented.

Most people would say Python is a fairly legible language. You can write a class or function and most programmers can understand how it works fairly easily. However, with Python (or any mainstream language) you are only looking "down". You see what the function uses and how. You cannot see how the function itself is used. The other part of a function's meaning is how it's used idiomatically and what problems are solved by it. This is why, when figuring out how to use a library, most people prefer to look for an example than to browse the code. I believe any codebase of sufficient size effectively becomes a sort of library for solving its specific problems in its specific ways. There is a sort of hierarchy of increasing domain-specificity from language to library or framework to codebase, where each level has its own idioms and preferred ways of doing things.

Even ignoring the seemingly unnecessarily obtuse notation (which you really do get used to reading, but I recognize it's a hard sell to say "use it regularly for a few weeks/months and it'll all be perfectly clear trust me"), K (and J, and APL which I'm more familiar with) are legible in a way that other languages are not. When the entire system fits in a page of code you can understand everything about it from the top down. APL allows you to look "up". From any function you can see, instantly, without endless scrolling and switching between dozens of files, everywhere the function is used and how and what functions it's used in conjunction with. You do not need to press "switch to header file" ever. You do not need to press "go to definition" ever. The definition is right in front of you. Often since APL prefers idioms to building giant towers of abstraction, the definition is the name.

Not using "for" loops is not really the point of APL. The point of APL is that being really, incredibly, seemingly unnecessarily concise fundamentally changes how you read code.

APL is very legible.


> You do not need to press "go to definition" ever.

Do these languages have a module system, or just copy-paste?

What do you do when you get more than a page of code? I'm skeptical of a claim that every program fits on one page.


> However, with Python (or any mainstream language) you are only looking "down". You see what the function uses and how. You cannot see how the function itself is used.

You could find cross references…


> Mathematica is great for, you know, mathematics, but you’ll need a license.

Do you have an opinion of Octave?


Octave is a Matlab clone, not a Mathematica clone.


http://maxima.sourceforge.net/ is closer to Mathematica, and is open source.


Some people find these extremely compact-syntax languages entertaining. I have a hard time understanding them, but such is life.


Regular expressions are a more concise way of writing string matching and extraction code, often exponentially more efficiently than without. They’re also usually more compact than spoken English can represent. For example, this proofreading regex detects possible issues in spelling:

/(?<!c)i(?=e)/
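
(A quick Python probe of what the pattern matches; the replies below discuss whether that's the right thing:)

  import re

  # An "i" not preceded by "c" (lookbehind) and followed by "e" (lookahead).
  pattern = re.compile(r"(?<!c)i(?=e)")
  print(bool(pattern.search("believe")))  # True: this "ie" doesn't follow "c"
  print(bool(pattern.search("science")))  # False: this "ie" follows "c"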


That's a really bad proofreading regex, though, because it will be wrong more than a third of the time--which is much, much higher than the expected real spelling error rate. The "i before e except after c" rule only has a high success rate when you add the caveat that it only applies to words where the ie or ei makes the sound [i:] (like in "believe", for example), and that caveat isn't expressible via regex.


Doesn't that regex detect correct spellings instead of wrong ones? I.e. it matches tier, tie, etc.


Yes. Good eye :) The other comment is also quite accurate: it doesn’t properly grasp phonetics.


FWIW, I agree with your argument (I guess).

Regular expressions are complex, and oftentimes a tool might be necessary to get a better idea of what an expression will match. However, making the language that describes them more verbose still wouldn't solve this issue. Having a concise way to describe them is a good thing. Those who grok them can do so quickly, without having to read several lines of a more descriptive language, and those who need assistance can still plug them into a tool.


K is a gift from God. That is all.


Dlang:

  list.reduce!max


If you want to sell me a language

do:

- Explain what particular use cases the language is optimized for

- Show its advantages compared to the alternatives

- Tell me honestly what tasks it is not suited for

do not:

- Use blatant strawmen to make the alternatives look worse.

- Tell me I am an idiot


I think the simple explanation here, per POSIWID, is that people who do the latter things do not in fact want to sell you on a language. It reminds me of the people who liked a band until it got popular. The point isn't the thing itself, it's having a reason to feel superior to other people.


Ultimately, you’re the one responsible for your opinions about the language, not the author.


It seems like you're inferring a lot which wasn't said, which is quite consistent of you! :)


Are reduce and lambdas really that cryptic/complex to people? I think I need an adult. -.-


I guess the point is that while many say what you do here, many of those also say "No! Using lambdas and reduces is great but saying `+/!10` is impossibly terse".

Essentially that's just the Blub Paradox in action.

Sadly, at the moment I'm on this side of the Blub Paradox in a weird way. I do acknowledge the power of specialist syntax and short notation (studied Maths which is full of this) but haven't dedicated sufficient time to it for it to be fluid to me.


range(100).reduce(plus, 0) isn't cryptic; what's cryptic is single-character symbols. Looking at that line, unless you know K specifically, it's very difficult to figure out it's a reduce. Whereas `sum(range(100))` is pretty self-explanatory to most programmers who don't know Python.


Not to mention it's a lot easier to search for `reduce` than `+`.


I don't know K but "+/" immediately stood out to me as "add over", so I think they made a good choice in using "/" for fold.

"!", on the other hand...


There are no good (or bad) choices for mapping abstract concepts like ranges, mapping, reducing, or combining to common (ASCII) symbols. Every choice is inherently arbitrary and carries no meaning whatsoever (besides maybe ergonomics). One could just as well argue that "/" is the ISO 80000-2:2009 2-9.6 recommended symbol for division. The same source also shows why symbols alone are inadequate for an explicit representation of abstract concepts: the nabla symbol alone represents 5 different operations depending on context, according to the ISO standard.


What would be a more obvious symbol choice for range (aka til or seq)? Pound (#)?


It's not commonly taught in college. They're probably simpler than your typical for loop.


I’d be quite disappointed in your degree if you were never asked to take a course that was centered around a functional language.


Many smaller CS departments in the US have no such a course in their curricula.

There simply aren't enough professors to maintain that breadth of courses in those departments.


Really? That’s quite surprising; I would expect all of them to have some sort of “programming languages” course or similar…what classes do they have then?


AFAIK most or all of those curricula do include a survey course for programming languages.

IIUC the GP was talking about courses that specifically focused on functional programming.


Fair, but from the context of the top comment I was talking more about being able to recognize and use basic functional constructs.


My CS program (class of 2007) was mostly c++. There was one semester of Java, but it was optional.

We learned fundamentals, discrete math, symbolic logic, and also "databases" (read: MS Access), object-oriented design, and I forget what else.

With the benefit of hindsight, I feel like I overpaid for that education :)


I’m a math student minoring in CS and I covered all of that before I’d finished second year. I also covered functional programming in first year.

I’m really surprised your CS degree didn’t cover operating systems, compilers, or algorithms. CS students at my school are required to take all of that stuff (and a lot more, including everything you’ve mentioned) before they even finish 3rd year. In 4th year they’ll be studying things like networking, real-time programming, computer graphics (physically-based rendering), machine learning, and computational math (for simulations, numerical solvers, linear and nonlinear optimization, etc).


Ah yes, operating systems - file that under "I forget what else".

Like I said, I feel that I overpaid for that education. The degree itself has paid for itself many times over, but I needed to learn basically everything important for a CS career on my own.


"Write an interpreter for a functional-like language of your choosing with XYZ requirements". In Haskell. As a 1st year uni project. I was just getting over Fortran. And prove it! "Can I do it in Turing please?" "No. Haskell." I still curse that exercise to this day, but it enlightened thinking.

A CS or Maths course should certainly involve some functional language element.

For someone reading thinking of CS at university, whether there's a requirement or option in such would be a good question to ask (you're interviewing them as much as they're interviewing you).


It's taught in bootcamps...


It's definitely more cognitive load to x people where x = 'some' or 'most'. It does not matter what x is, your change will be rejected.


Sounds like a recruiting problem, or possibly a training one.


> When the entire system fits in a page of code you can understand everything about it from the top down.

I'm sorry, but that sounds like the juvenile fantasy of someone who is still in their programming puberty.

Good luck fitting an operating system, web browser or air traffic control system into a page of code.


Please don't descend into name-calling.

What PeCaN said there is no different from what masters like Chuck Moore and Arthur Whitney say. Maybe you disagree, maybe not everyone is like those guys, but "juvenile fantasy" and "programming puberty" is an extremely shallow dismissal—which https://news.ycombinator.com/newsguidelines.html specifically ask you to avoid.

We detached this subthread from https://news.ycombinator.com/item?id=22523047.


Maybe I should've said a "screen" instead of a "page" (if you have two files open side-by-side and they both fit on your screen it's the same benefit). And I admit it is slightly hyperbolic (to make a point about the virtues of being able to see a significant portion of your project at once). Large projects will be more than one screen of code, although perhaps surprisingly few. (This actually influences how I organize code, in that the "screen" becomes a logical unit, and I try to keep the API surface of each screen really small.) I'm working on a programming language project to try to get even denser.

>Good luck fitting an operating system, web browser or air traffic control system into a page of code.

Aaron Hsu's co-dfns compiler (optimizing compiler for a subset of APL to the GPU) is 3 screens https://github.com/Co-dfns/Co-dfns/tree/master/cmp (I can fit the entire codebase on my three monitors).

This K text editor is way less than one screen https://kparc.com/edit.k (Funny you should mention OS. I'm eager to see how many pages kOS is. AW notoriously hates scrolling.)

Web browsers are a lost cause, I'll give you that one.


> This K text editor is way less than one screen

Where's the handling of encoding when loading and saving? Where's the handling of different line-endings? Where's the error handling?

Would have been nice to see something approaching real-life code rather than these toy samples.


> I'm sorry, but that sounds like the juvenile fantasy of someone who is still in their programming puberty.

For larger applications what PeCaN was saying still mostly holds true.

I work on large (1-10 million LOC) enterprise Java codebases and write some side projects in k as a hobby.

I can get the equivalent of around 30-40 java class files on screen at once with k.

Even on poorly written enterprise code bases code is clustered and your “working set” of code files is quite small and is not strewn all over the codebase.

Of course you still need to understand the core system abstractions + base platform + common library code.


...and in idiomatic K code, you apparently still need to constantly wonder what 'a' or 'X' means in your current context, with likely 5 different usages on your screen at once.


I have not experienced that.

If you see many different usages of the same identifier then it’s probably a common construct/pattern you aren’t aware of.

In the same way as ‘i’ tends to be used as the current array index in a for loop - but devs don’t get confused by seeing ‘i’ used in lots of different loops.

Or kotlin devs seeing ‘it’ or class based language devs seeing ‘this’.


So you claim it never happens that two people working on different parts of the code decide to use the same identifier for completely unrelated things?

Seems to me that is bound to happen pretty frequently when the expectation is to use only one- and two-letter identifiers.


There almost certainly is an encoding such that every program any human will ever write fits in 128 bytes, though I doubt we'll ever design one. To convince yourself of this, notice that you don't expect to ever produce two programs with the same blake2 hash.

There's a lot of room for improvement in the conciseness of code. I would still be surprised if it were meaningfully possible to write a full-featured modern OS in one page of APL.


I think you'd have to write the "Do what I mean." interpreter before you have any chance of making programs that compact.


Hashes do not encode; to convince yourself of this, try to recover a program from its blake2 hash.


Is this C in Russia?


The irony of claiming to represent a readable language, and that others are in denial of reality about this, while presenting your thesis on a '90s-looking website with typographic lines spanning almost 400 characters at 100% width...

ABTASTTSBMR than the whole sentence. OK.


I very much liked the design. Perhaps I'd slap a max-width on it, but other that that it was a delight to read. (And my browser's reader mode really liked it, too.)


Displays fine for me on my desktop.

I'm happy with just text.


That objection seems trivial.


The whole article is trivial and trite, yet dressed up as something novel.


I don't think so. Even if that's true though, you could do better.


what is this orange website the post mentions?


Let me think.........


He might not be joking, I've always browsed HN with dark mode enabled. For me it's a light and dark grey website :)


It took me several hours to get it. I'm not sure if this was meant to be sarcasm, but thanks.


The javascript example of list.reduce((x,y)=>Math.max(x,y)) should have been just Math.max(...list)


Suggested already, with a rebuttal: https://news.ycombinator.com/item?id=22523151


> list.max(Math.ordering)


The name (and editor extensions) are taken by another 'K framework': http://www.kframework.org/index.php/Main_Page


k spawned in 1992. When did this framework come about?



