GitHub suggests reviewers to PR authors based on who's been modifying nearby code recently (OK, I don't know whether that's a general policy, but it happens to me all the time). And for the past year or so I have been getting tagged to review more and more AI slop from newcomers to the project that we chose to maintain in public. I just immediately nope out of all reviews now if I don't recognize the submitter, because I don't scale enough to be the only actual human involved in understanding the code coming at me. This sucks for the newcomers who actually wrote the patch themselves, but I can't always tell. Put some misspellings in your comments and I'm actually more likely to review it!
I suppose we shall amend to "The determined Real Programmer will fix FORTRAN" ;)
But, for the folks who didn't grow up with the Real Programmer jokes, this is rooted in the context of FORTRAN 77. Which was, uh, not famous for its readability or modularity. (But it got stuff done, so there's that.)
I wrote a lot of F77 code way back when, including an 8080 simulator similar to the one Gates and Allen wrote to build their BASIC for the Altair. I don't know what language they wrote theirs in, but mine was pretty readable, just a bit late. And it was very portable: DEC-10, VAX, IBM VM/CMS with almost no changes.
I think F77 was a pretty well designed language, given the legacy stuff it had to support.
It was well designed. Hence the "it got stuff done".
But it was also behind the times. And, if we're fair, half of its reputation comes from the fact that half of the F77 code was written by PhDs, who usually have... let's call it a unique style of writing software.
Indeed. Two PhD students came to see me when the polytechnic I worked for switched from a DEC-10 to two IBM 4381s.
[them] How can we get our code to work on the IBM?
[me] (examines code) This only looks vaguely like Fortran.
[them] Yes, we used all these wonderful extensions that Digital provides!
[me] (collapse on the floor laughing) (recover) Hmm. Go see Mike (our VAX systems programmer). You may be able to run on our VAXen, but I can't imagine it running on the IBMs without a major rewrite. Had they stuck to F77 there would have been few problems, and I could have helped with them.
Portability is always worth aiming for, even if you don't get all the way there.
I think of the 6809 as a 16-bit microprocessor, myself (pace Wikipedia). It has 16-bit registers, load/stores, and add/subtracts. A nice clean architecture for its day.
The discussion there badly misunderstands the nature of ELEMENTAL procedures in Fortran and their relevance to parallel execution.
ELEMENTAL is relevant to DO CONCURRENT only indirectly. The ELEMENTAL attribute matters there only because it implies the PURE attribute by default, and PURE is required for procedures referenced in DO CONCURRENT. (Which is not a parallel construct, but that's another matter.)
ELEMENTAL in array expressions (incl. FORALL) should not be understood as a way for one procedure invocation to receive and return entire arrays as arguments and results. That would require buffering during the evaluation of an array expression. Instead, ELEMENTAL should be viewed (and implemented) as a means of allowing a function to be called as part of the implementation of unbuffered array expression execution.
ELEMENTAL has its roots in compilation for true vector machines. It once caused a function to have multiple versions generated: a normal one with scalar arguments, and a vector one with vector register arguments. This would allow a user-written ELEMENTAL function to be called in a vectorized DO loop, just like an intrinsic vectorizable function like SIN. A compiler for today's SIMD "vector" ISAs could implement ELEMENTAL in a similar fashion.
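To make that concrete, here's a minimal sketch of the user-facing side (the module and function names are made up for illustration; nothing here comes from flang itself):

    module demo_m
      implicit none
    contains
      ! ELEMENTAL implies PURE by default (unless declared IMPURE),
      ! which is what DO CONCURRENT actually requires of any
      ! procedure referenced inside it.
      elemental function scaled_sin(x, a) result(y)
        real, intent(in) :: x, a
        real :: y
        y = a * sin(x)
      end function
    end module

    program demo
      use demo_m
      implicit none
      real :: v(1000), w(1000)
      integer :: i
      call random_number(v)
      ! Array expression: the function is applied element by element
      ! as part of evaluating the expression, not by materializing
      ! whole-array arguments and results.
      w = scaled_sin(v, 2.0)
      ! Legal only because scaled_sin is PURE (via ELEMENTAL).
      do concurrent (i = 1:size(v))
        w(i) = scaled_sin(v(i), 0.5)
      end do
      print *, sum(w)
    end program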
I can see the subtle distinction you make. The flang notes on array composition (https://flang.llvm.org/docs/ArrayComposition.html) provide a good introduction to the way array expressions are treated.
But in practice it looks like the elemental function must be in the same translation unit for vectorization to occur with compilers popular today. Explicit options like !$omp declare simd are a different matter (and have different pitfalls).
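For comparison, the directive route looks roughly like this (just a sketch; the function is made up, and whether a vector clone is actually generated and used depends on the compiler, on flags such as -fopenmp-simd, and on the call site seeing the directive):

    module simd_demo_m
      implicit none
    contains
      function axpy1(a, x, y) result(r)
        !$omp declare simd(axpy1)
        real, intent(in) :: a, x, y
        real :: r
        r = a*x + y
      end function
    end module

    ! At a call site like
    !   do i = 1, n
    !     z(i) = axpy1(2.0, x(i), y(i))
    !   end do
    ! the directive asks the compiler to generate SIMD clones of axpy1,
    ! rather than relying on inlining within one translation unit.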
Range reduction for trig functions is basically a real remainder of division by 2*pi, and real remainders can be computed precisely if you want to, even when quotients can't be.
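A toy illustration of that point, using IEEE_REM from the intrinsic IEEE_ARITHMETIC module (a real libm carries 2*pi to extra precision rather than using a single rounded constant, which is where the hard part actually lives):

    program reduce_demo
      use, intrinsic :: ieee_arithmetic, only: ieee_rem
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      real(dp), parameter :: twopi = 8.0_dp * atan(1.0_dp)
      real(dp) :: x, r
      x = 1.0e6_dp
      ! The remainder operation itself is exact; the residual error
      ! comes from twopi being a rounded approximation of 2*pi.
      r = ieee_rem(x, twopi)
      print *, sin(x), sin(r)
    end program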
> There are thousands of contributors and the distribution is relatively flat (that is, it’s not the case that a small handful of people is responsible for the majority of contributions.)
This certainly varies across different parts of llvm-project. In flang, there's very much a "long tail": according to "git blame", 80% of its 654K lines are attributed to the 17 contributors (out of 355 total) who are each responsible for 1% or more of them.
That was ambiguously phrased. The point I was trying to make is that we don't have the situation that is very common for open-source projects, where a project might nominally have a hundred contributors, but in reality one person does 95% of the changes.
LLVM of course has plenty of contributors who only ever landed one change, but the thing that matters for project health is that the group of "top contributors" is fairly large.
(And yes, this does differ by subproject, e.g. lld is an example of a subproject where one contributor is more active than everyone else combined.)
I have one on my desk that I often use for quick estimations. It boots up in zero seconds.