I'm curious about that decision given that optparse is deprecated. It says it's because argparse doesn't allow nested commands, but I don't really see why that feature is so desirable as to warrant using a deprecated module. I probably need to have a play with it to find out.
Click can be extended lazily, which helps execution time a lot when working with many, many plugins. It can also be extended at runtime based on configuration.
argparse requires the parser to have full knowledge of everything up front, which makes it slow if you add many commands. The biggest problem, though, is that its parsing system is a bit broken when it comes to escaping: options with arguments cannot have values starting with dashes, which is problematic when delegating subcommands to other programs.
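A minimal sketch of the escaping problem (stock argparse behavior; the `--opt=value` form is the usual workaround):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--opt')

# A value starting with a dash is mistaken for an option flag:
#   parser.parse_args(['--opt', '-x'])  # error: argument --opt: expected one argument
# The '=' form does get through:
args = parser.parse_args(['--opt=-x'])
print(args.opt)  # -x
```

That workaround only helps when you control the caller, which is exactly the problem when forwarding arbitrary argv to a subcommand.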
For what I wrote Click for, I could not find any alternative that worked that way besides optparse itself, and optparse is hard to use.
Can you provide an example of what you're saying? I've found that argparse.ArgumentParser.parse_known_args() and sub-parsers can do everything I can see click doing. Including lazy loading of plugins and the like.
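For instance, here's a hypothetical sketch of the pattern I mean (the `myplugins` module name is made up): parse only the command name eagerly, then import the plugin module on demand and hand it the leftover argv.

```python
import argparse
import importlib

parser = argparse.ArgumentParser()
parser.add_argument('command')
# parse_known_args leaves everything it doesn't recognize untouched
args, rest = parser.parse_known_args(['build', '--target', 'x86'])
print(args.command, rest)  # build ['--target', 'x86']

# A plugin would then be loaded lazily, e.g.:
#   plugin = importlib.import_module('myplugins.' + args.command)
#   plugin.main(rest)
```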
Unofficially, CBOR was designed as a protocol base for CoRE-related protocols. CoRE stands for Constrained RESTful Environments and is an alternative to HTTP in restricted environments like microcontrollers. See also https://ietf.org/wg/core/charter/
Just take a look at Appendix D of RFC 7049, where they present reference implementations in C and Python for half-width float decoding: http://tools.ietf.org/html/rfc7049#appendix-D
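A sketch in the spirit of the Python decoder in that appendix (this version takes the 16-bit value directly rather than a byte string):

```python
def decode_half(half):
    """Decode an IEEE 754 half-precision float from its 16-bit integer form."""
    exp = (half >> 10) & 0x1F      # 5-bit exponent
    mant = half & 0x3FF            # 10-bit mantissa
    if exp == 0:                   # zero / subnormal
        val = mant * 2.0 ** -24
    elif exp != 31:                # normal number
        val = (mant + 1024) * 2.0 ** (exp - 25)
    else:                          # infinity or NaN
        val = float('inf') if mant == 0 else float('nan')
    return -val if half & 0x8000 else val

print(decode_half(0x3C00))  # 1.0
```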
The approach of Bedrock Linux is very interesting. It makes use of Linux-specific features like bind mounts and tries to unify several Linux distributions into one meta-distribution, which provides the framework for multi-distribution operations. They could also use namespaces for a more strict separation of clients, but that's a detail.
The idea of moving completely to musl is a little utopian, because musl libc is still in a very early phase if you want to compile arbitrary base-system software against it. It's mostly C99/C11 and POSIX compliant, but there are several GNU-specific libraries missing, and in a world that uses a GNU userland on Linux it's not simple to overcome that limitation.
The mentioned /etc problem seems to be the same problem solved by ip-netns(8). Take a look at the source if you need further information; it's based on other bind mounts.
But I don't think Bedrock Linux is the next-generation approach for Linux distributions, or rather software distributions in general, though they don't claim that. I've been working on my own Linux distribution for about a year; it's based completely on a ports tree, as known from FreeBSD, but with a simpler code base and simpler Makefiles. At the moment I'm trying to create a stable commit that will build without issues in several configurations, but that's hard work, especially because Linux, or rather the Linux userland, is mostly a ghetto™ (you won't find documentation for much of the low-level software).
But I think, as you should have noticed, that Bedrock Linux has a right to exist; it just won't be the next-generation approach.
> The approach of Bedrock Linux is very interesting. It makes use of Linux-specific features like bind mounts and tries to unify several Linux distributions into one meta-distribution, which provides the framework for multi-distribution operations.
Not just "several" - we're being a bit more ambitious and aiming for most, if not all.
> They could also use namespaces for a more strict separation of clients, but that's a detail.
We're purposefully avoiding separation of clients. Everything should interact as closely as possible to how it normally does in other Linux distributions. If a user wants isolation, something like LXC or OpenVZ could be utilized on top of Bedrock Linux just as it could on top of any other distro.
> The idea of moving completely to musl is a little utopian, because musl libc is still in a very early phase if you want to compile arbitrary base-system software against it. It's mostly C99/C11 and POSIX compliant, but there are several GNU-specific libraries missing, and in a world that uses a GNU userland on Linux it's not simple to overcome that limitation.
We want the core to be fairly minimal; it really should be just enough to bootstrap the other clients. The vast majority of the system should, ideally, be acquired from clients, which can use whatever libraries they want. The missing GNU-specific libraries and other limitations of musl aren't hitting anything we need in the core. If we find we need something that can't be provided by musl, we can always switch tracks or double back in a future release.
> The mentioned /etc problem seems to be the same problem solved by ip-netns(8). Take a look at the source if you need further information; it's based on other bind mounts.
At the moment we've got another plan for remedying the /etc issue, and it is well underway. I'm using an early version of it right now with surprisingly little trouble. A deep investigation of possible uses for namespaces/cgroups is planned for the future.
> But I don't think Bedrock Linux is the next-generation approach for Linux distributions, or rather software distributions in general, though they don't claim that. ... But I think, as you should have noticed, that Bedrock Linux has a right to exist; it just won't be the next-generation approach.
I'm not really sure what you mean by that, or how "generations" work with operating systems. Bedrock Linux solves a certain set of problems, and it has a certain set of limitations.
The fact that returning memory to the kernel is hard is supported by the circumstance that most allocators use brk/sbrk to resize the data segment of the executing process, at least when they allocate small amounts of memory.
The other claim, that allocators have to lock global data structures, is also not true. Most modern operating systems support thread-local storage, so you don't need locking: you can keep per-thread allocators, and only when you want to release memory belonging to a foreign thread do you have to lock (which is also bad practice in most cases).
So this article is great if your horizon ends at the default allocators tcmalloc, ptmalloc, and jemalloc, but reality is much more complex. That such a thing doesn't exist isn't because it's hard to implement; there is simply no need for such an allocator, because most well-written software will allocate large chunks of memory.
> (...) that most allocators use brk/sbrk to resize the data segment of the executing process, at least when they allocate small amounts of memory.
The other commonly used backend for malloc() is mmap() without an underlying file:
Handy both when allocating large chunks of memory and when allocating pools for smaller suballocations. It has the additional benefit of being zeroed out at low cost (or no cost at all, for example via hardware DMA), and of playing nicely on systems with a constrained or fragmented address space, since the kernel is free to place the mapping at any address visible from userspace.
> Most modern operating systems support thread-local storage, so you don't need locking: you can keep per-thread allocators
The article explicitly points this out, and points out the problem with it: it means wasting memory on per-thread pools, and the more threads you use, the larger the pools need to be if you want to prevent contention, compounding the problem.
> because most well-written software will allocate large chunks of memory.
1. Most software is not well written.
2. Most large, well-written pieces of software that allocate only large chunks of memory have some sort of custom allocator (or horrible abuses of arrays) embedded somewhere to work around exactly the problems noted in the article. In many cases people end up wasting time writing the same types of specialised allocators over and over.
I've seen plenty of large C and C++ apps that would have benefited greatly from a simple arena allocator, for example... And I have also seen countless implementations of arena allocators, various pool allocators, and tons of other variations.
In other words: these things do exist. They're common, to the point where they're often covered in books on C/C++, especially C++, which has specific built-in (though weak) support for custom allocators.
That library is in many ways outdated and broken: first, it uses only old-style classes, because it doesn't explicitly inherit from object. Furthermore, it uses print in a method; it would be more "Pythonic" to return a str object formatted with str.format.
I think the future is Python 3, and new implementations in Python-2-only syntax are simply unnecessary. I would suggest using Python-3-style syntax that is also valid in Python 2.7 (which isn't hard).
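To illustrate both points with a made-up class (the names here are hypothetical, not from the library in question):

```python
class Greeter(object):  # explicit object base -> new-style class on Python 2
    def __init__(self, name):
        self.name = name

    def greeting(self):
        # Return a formatted str instead of printing inside the method;
        # the caller decides what to do with it.
        return "Hello, {0}!".format(self.name)

print(Greeter("world").greeting())  # Hello, world!
```

This runs unchanged on both Python 2.7 and Python 3.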
That's simply not true. There is a rule that NULL must be equal to (void *)0. I've also had several discussions about that topic; it's part of at least C99 and C11.
Yes, (void *)0 gives a null pointer. I mentioned that. However, that doesn't mean all the bits of a null pointer are required to be all 0. The actual machine representation of a null pointer is implementation-defined. The only requirement C99 places on the actual value of a null pointer is that it compares unequal to a pointer to any object or function, and any two null pointers compare equal. See section 6.3.2.3 paragraphs 3 and 4:
An integer constant expression with the value 0, or
such an expression cast to type void *, is called a
null pointer constant. If a null pointer constant
is converted to a pointer type, the resulting pointer,
called a null pointer, is guaranteed to compare unequal
to a pointer to any object or function.
Conversion of a null pointer to another pointer type
yields a null pointer of that type. Any two null pointers
shall compare equal.
If you do this:
int pv = 0;
char * p = (char *)pv;
you are not guaranteed to have p set to a null pointer, because the zero you are casting and assigning is not an integer constant zero. The behavior is implementation-defined, as is going the other way. Section 6.3.2.3 paragraphs 5 and 6:
An integer may be converted to any pointer type. Except
as previously specified, the result is implementation-defined,
might not be correctly aligned, might not point to an entity
of the referenced type, and might be a trap representation.
Any pointer type may be converted to an integer type. Except
as previously specified, the result is implementation-defined.
If the result cannot be represented in the integer type, the
behavior is undefined. The result need not be in the range of
values of any integer type.