
The same reason people got to choose JavaScript or PHP years later: the platform's adoption, in this case UNIX.


I don't think that's true. When I came of age in programming there were lots of choices already, but none that offered the balance of control and speed that C gave. It was a very easy choice: do I invest my next two decades in Assembly, Pascal (in several flavors), Modula-2, Forth, LISP (which required unobtainable computers at the time), Basic (compiled or interpreted) or C?

UNIX had very little to do with it; only very few people were lucky enough to have access to UNIX machines, but untold hundreds of thousands had access to PCs or 8-bit micros. The first time I saw a UNIX machine it was an Acorn 'Unicorn', and it was so far ahead of what I could afford that it might as well not have existed.


Sure it was, there were zero reasons to use C on CP/M, MS-DOS, Atari, Amiga, Mac.

It was just another programming language fighting for developer eyes.

On Windows and OS/2, although IBM and Microsoft decided to go with C for the underlying low level layers, C++ was the way to go for high level coding, C Set++, MFC. With Borland having Turbo Vision, OWL and VCL.

Macs were Object Pascal territory, and when MPW got C and C++ support, PowerPlant C++ framework was the way to go.

EPOC, BeOS and Symbian were also C++ territory.

MS-DOS games were also adopting C++ via Watcom and its DOS extender.

UNIX and the rise of FOSS, based on UNIX culture, were definitely the only reason.


IME C++ wasn't really a mainstream option until the second half of the 90's. BeOS choosing C++ for its operating system APIs was an extremely exotic choice at the time (same level of "exotic" as NeXT choosing Objective-C).

And IMHO, at that time, before or around C++98, C++ didn't fix a single problem of C, but instead just added a lot of new ones (one could argue that this is still the case even today).


> IME C++ wasn't really a mainstream option until the second half of the 90's

I was getting paid for teaching C++ on commercial training courses in 1990 - the C++ courses were probably the most popular after C and UNIX

> before or around C++98, C++ didn't fix a single problem of C

Of course it did, or why would people like me have transferred wholesale from C to C++?


It was so exotic that at my university, FCT/UNL, starting in 1992 first-year students were switched to learning Pascal followed by C++.

C was never taught as such, as any student was expected to know it from their C++ classes.

The professor was a great teacher, showing all the ways that C++ fixed C's problems by providing his own data structures for strings, arrays, vectors, linked lists and hash tables, all with bounds checking enabled by default in his implementation.

Other issues that C++ fixed over C: taming implicit conversions, a proper way to allocate memory (malloc() with sizeof, really?), and the ability to ensure valid pointers via references.


C++ fixes many of C's problems and introduces so many more that are not fixable. ("But why don't you write const-correct code? Why don't you use move semantics? Use the rule of three!")

> malloc() with sizeof, really?

One tiny macro to solve 20% of all the problems people are whining about.

    /* assumed helpers; needs <stdio.h>, <stdlib.h>, <stdint.h> */
    static void fatal(const char *msg) { fputs(msg, stderr); exit(EXIT_FAILURE); }

    static size_t safe_multiply(size_t numElems, size_t elemSize)
    {
        if (elemSize != 0 && numElems > SIZE_MAX / elemSize)  /* overflow guard */
            fatal("allocation size overflow\n");
        return numElems * elemSize;
    }

    void _alloc_memory(void **ptr, size_t numElems, size_t elemSize)
    {
        size_t numBytes = safe_multiply(numElems, elemSize);
        void *p = malloc(numBytes);
        if (p == NULL)
            fatal("OOM!\n");
        *ptr = p;
    }

    /* cast via void ** so the macro accepts a pointer to any object pointer */
    #define ALLOC_MEMORY(ptr, numElems) _alloc_memory((void **)(ptr), (numElems), sizeof **(ptr))

    int *myArray;
    ALLOC_MEMORY(&myArray, 25);
I've been using this for years without a problem.


That is the thing: C++ doesn't require developer boilerplate for something that even Algol supports properly.

Plus all C workarounds for "safe" code tend to fall apart when teams scale above 1 team member, as it keeps being proven by endless industry and academic reports.

Now the Android NDK is FORTIFY-enabled, with hardware memory tagging planned for all new ARM-based models.


> boilerplate

There is a difference between "explicit code" (which is mostly a good thing) and "boilerplate". It's saying what you mean vs saying what the platform requires you to say (or repeat, involuntarily). C definitely leads to the former, but not to the latter.

That said, it's a one-line macro! You are being ridiculous. The amount of insanity we have to go through in so many other languages constantly, not just for setting a good baseline, is something completely different. It's not measured in handfuls of lines, but in the number of hairs pulled out.

Compare:

    #define ALLOC_MEMORY(ptr, numElems) _alloc_memory((ptr), (numElems), sizeof **(ptr))

    https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/stl_vector.h
(Not a fair comparison, but I think it makes a very good point)

> for something that even Algol supports properly.

Probably with an allocation builtin, not allowing for custom allocators?

> supports properly.

There is nothing in this simple macro that isn't "proper". There are no ways it can break (although I still feel it would be nice to have expression macros, not only token macros). The only requirement is that you write it yourself. C in general doesn't like to give you canned things. It has made such mistakes in the past (see large parts of libc) and has actually learned from them.

> Plus all C workarounds for "safe" code tend to fall apart when teams scale above 1 team member, as it keeps being proven by endless industry and academic reports.

You could absolutely implement C with managed memory. It just wouldn't be a good idea. Use other languages if you want these tradeoffs.

Some of the most massive codebases in the world are C. (Often disguised as "C++ by experienced devs"). They are maintainable, protected investments, many decades old, and still in working order, precisely because a minimalistic language approach leads to modular APIs. It scales very well, and the "problem" might be mostly that the defect rate per line doesn't go down as the lines go up.

But the best feature of these codebases is that they exist, because C enables independent development of subsystems much better than the intertwingled messes and dead ends that most "statically-systematic" approaches lead to on non-trivial scales.

It's a huge boon that I don't have to think about rewriting interfaces using multiple inheritance or virtual inheritance or template insanity or SFINAE or unique_ptr or move semantics or rvalue references or static assertions or compile time evaluation, or the next fad around the corner, every 5 years.


For a slightly improved version, place a sentinel (say a 4-byte magic number) just before and after the allocated segment, and check in FREE_MEMORY that they're still there. Not perfect, but pretty good as an early warning system.
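A minimal sketch of that idea. The names (`checked_alloc`, `checked_free`, `GUARD`) are mine, not any standard API: allocate a few extra bytes, write a magic value before and after the user region, and verify both on free.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define GUARD 0xDEADBEEFu  /* arbitrary magic value */

/* Layout: [guard][user bytes][guard] -- returns a pointer to the user bytes. */
static void *checked_alloc(size_t numBytes)
{
    unsigned char *base = malloc(sizeof(uint32_t) + numBytes + sizeof(uint32_t));
    if (base == NULL)
        abort();
    uint32_t g = GUARD;
    memcpy(base, &g, sizeof g);                    /* leading sentinel  */
    memcpy(base + sizeof g + numBytes, &g, sizeof g); /* trailing sentinel */
    return base + sizeof g;
}

/* Frees the block; returns 1 if both sentinels were intact, 0 on overrun. */
static int checked_free(void *p, size_t numBytes)
{
    unsigned char *base = (unsigned char *)p - sizeof(uint32_t);
    uint32_t before, after, g = GUARD;
    memcpy(&before, base, sizeof before);
    memcpy(&after, base + sizeof g + numBytes, sizeof after);
    int ok = (before == g) && (after == g);
    free(base);
    return ok;
}
```

A real version would also stash `numBytes` in the header so the caller doesn't have to pass it to free.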


Yes. Another possibility: record __FILE__ and __LINE__ with each allocation if there are memory leaks to debug. (Never did that myself, but it should help.)


C was (and still is) a fairly obvious choice for programming when you need full control over the memory layout of an application (which is becoming all the more important with the growing CPU/memory gap). I chose C 15 years before I came into contact with UNIX (first on the Amiga, after that on Windows, and only fairly recently macOS and Linux).


Nothing special about C, plenty of other languages offer similar features.

ISO C doesn't offer full control over anything beyond the abstract memory model of the standard.


So what? Chill out.

We all know that C is not without flaws. I would use other languages if there were good alternatives. For example, I liked some parts of Delphi, but it has too many show stoppers. For example, all local variables still have to be declared in the variables section before the function body, right? And are we still required to make type aliases to use pointer types in important places, such as function signatures?

C is special in that it is the only language that I've known that has a minimalistic attitude, resulting in a shitty language (every language is shit!) whose problems we can actually work around in practice.


Just like C required variables to be declared at the top of a block until C99.

Delphi is not Go, you can declare variables at the point of use nowadays.

Just like you can declare pointer types in function signatures, which would fail my code review, as those things tend to get out of hand.

A minimalist attitude leads to write-only code bases that are impossible to maintain on long-term projects with regularly rotating team members, as usually happens in most multinationals.

A workaround done today is a security exploit waiting to happen tomorrow.


> Delphi is not Go, you can declare variables at the point of use nowadays.

Oh, it seems they introduced it in 2018, some time after I quit my 6-months Delphi stint. http://blog.marcocantu.com/blog/2018-october-inline-variable...

So, given the age of Delphi (and Pascal!), Go still has plenty of time to be quicker. Not sure what's missing from it, though.

Of course, I'm sure you knew all of that, and could have just mentioned it. But maybe you just want to convince people of unrealistic propositions, and claim that some obscure technologies were more practical than they really are.

> Just like C used to declare variables until C99.

1. C99 was 20 years ago, 19 years before Delphi got this feature.

2. What you say is wrong. You could declare variables at the start of any block since forever (I think it's standardized in C89).

3. What matters is compilers in practice, and I'm pretty sure they allowed you to declare variables anywhere, and also "for (int i..." (which is C99) since forever (as an extension).

> write only code bases

such as C++ code bases?

> regular rotating team members

I've just never seen that not become a mess

> A workaround done today is a security exploit waiting to happen tomorrow.

I'm still waiting for my code review. https://news.ycombinator.com/item?id=21290314 . For a start, where are my terrible workarounds?


It's funny, when I'm not on Unix, C is still my go to language which lets me get shit done. I won't have to deal with performance problems or painful FFIs. Yesterday I was dealing with Webassembly, and having it interoperate with WebGL made me pull out the last few hairs I still had on my head. Go figure.


C is not a fad; it's an outlier among languages in the sense that basically it's a portable assembler very close to the metal. If you change the basic architecture of the machine, you can create a better language. For now however it's unlikely to beat C.


I'm not sure that this high-level-assembler assumption still holds for SIMD-capable CPUs. C compilers are asked to perform quite drastic code transformations, like autovectorization, on these architectures. With these, the tight relationship between the high-level C code and the generated machine code is lost.
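For instance, a pedestrian element-wise loop like the following says nothing about vectors in the source, yet gcc and clang will typically turn it into SSE/AVX or NEON instructions at -O3 (exact behavior depends on compiler version and flags):

```c
#include <stddef.h>

/* Plain scalar code; at -O3 the compiler usually rewrites this loop
   to process several floats per instruction (autovectorization). */
void scale(float *dst, const float *src, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```

Nothing in the C semantics changes, but the generated machine code no longer maps one-to-one to the source.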


You can still treat it that way even with SIMD. I quite enjoy using NEON (ARM SIMD) intrinsics in C, and the like.


When you do that, you write pretty much the equivalent of platform-specific assembly code (not exactly, but the differences don't matter for what I want to say). What I am saying is that modern compilers also take your "dumb" code that is not SIMD, just a pedestrian implementation of something, and still turn it into SIMD or apply other very drastic rewrites that are hard to reason about. And these optimizations and transformations tend to stack: something that had inlining, tail-call optimization and autovectorization applied to it may end up bearing no resemblance to what was actually written as C code.

Most of the nice properties of C as a low-level language come from the fact that you can map the code to assembler in your head, as long as the compiler is not trying to get too clever. Otherwise the intuition becomes merely an illusion and the whole thing becomes harder to use. Take, for example, strict ordering requirements for accesses to hardware registers in a device driver: C has come to a point where you have to pull stunts to prevent the compiler from reordering your memory accesses.
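The classic stunt is volatile. A sketch with fake registers (in a real driver these would be fixed MMIO addresses like `(volatile uint32_t *)0x40001000`; the names here are made up):

```c
#include <stdint.h>

/* Stand-ins for memory-mapped device registers. */
static uint32_t fake_data_reg, fake_ctrl_reg;
static volatile uint32_t *const DATA = &fake_data_reg;
static volatile uint32_t *const CTRL = &fake_ctrl_reg;

/* Each volatile access is an observable side effect, so the compiler
   may neither elide these stores nor reorder them with each other:
   DATA must be written before CTRL triggers the device. */
void start_transfer(uint32_t value)
{
    *DATA = value;
    *CTRL = 1;  /* "go" bit */
}
```

Note that volatile only constrains the compiler; on weakly ordered CPUs a real driver additionally needs a hardware memory barrier between the two stores.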


I don't have experience with SIMD, and for the things I do I couldn't care less. I like C as a super-productive language that doesn't get in my way. The output from unoptimized code is magnitudes faster than what I get from higher level scripting languages. And much more efficient than with GC languages for any non-trivial stuff. And I can write that code almost as quick as Python or Java code, and with very little debugging time (after some years of experience).

I might be in the minority, but to me, C is as high-level as we should go, for many many problems. If you really care about registers and SIMD stuff, then your concerns are architecture specific, and that's not really what C does well. What C does is mostly abstracting registers. Why blame it for that? The few places where you need SIMD, well, just insert architecture specific code there.
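"Just insert architecture-specific code there" usually means compile-time dispatch with a portable fallback. A sketch (the NEON path is illustrative; on non-ARM builds only the scalar path compiles):

```c
#include <stddef.h>

/* Portable fallback; the compiler may still autovectorize this. */
static void add_scalar(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

#if defined(__ARM_NEON)
#include <arm_neon.h>
/* Hand-written NEON path: 4 floats per instruction, scalar tail. */
static void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    add_scalar(dst + i, a + i, b + i, n - i);
}
#else
static void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    add_scalar(dst, a, b, n);
}
#endif
```

The architecture-specific part stays quarantined behind one #ifdef, and everything else remains plain C.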

Is there a way to write portable code that can be better optimized?


> they still turn it into SIMD or do other very drastic rewrites to it that are hard to reason about

Maybe it's just me, but I haven't seen this being a major problem in C. Most optimizations are local and fairly easy to reason about. C++ is a whole different story.


Any language can have intrinsics, in fact the first systems language with intrinsics support appeared 10 years before C was created.


They did change the architecture of the machine. C didn't have concurrency and SIMD built-in.


Portable assembler for an abstract machine modelled on what PDP-11 processors used to be.



