This is actually the same thing as regular Python scoping rules; there's not even any fancy OOP logic behind it. Here's the same thing, but using global scope and functions instead of classes and inheritance.
I think there is an argument to be made that classes are special and "reaching upwards" into the superclass scope should not occur - a unique copy should be made - but I also think that Python's way of doing it makes enough sense that it is not confusing. The Python devs are at least consistent about having their own way of doing things.
That's an interesting point, that the behaviour of an inherited class variable is consistent with a case you show where inheritance plays no part at all.
So from that point of view, it comes down to whether we expect that an inherited class variable really is just some variable in an outer scope that we can shadow with a local variable of the same name (per your example), or whether we expect that inheritance provides some stronger notion of ownership of the inherited variable.
I dislike the former case, largely because I dislike the idea that the location at which a variable is stored can appear to change merely by assigning to it. But then, I dislike Python's implicit declaration of local variables for exactly the same reason. So you're right, there IS some consistency there. ;-)
He means immutability when he says "const-ness". There are four possibilities for mutability of a single pointer:
1. The pointer is mutable, but its contents are immutable.
2. The pointer is immutable, but its contents are mutable.
3. The pointer is immutable and the contents are immutable.
4. The pointer is mutable and its contents are mutable.
Right now, for owned pointers, Rust gives us (3) and (4), but no obvious way to achieve (1) and (2). You might argue that the borrowing semantics give us these powers, just not directly with owned pointers - and if we're asking for that kind of control, we shouldn't be using owned pointers directly anyway; we should be lending them out in a well-controlled manner.
You can use privacy for (1); admittedly it's a little hokey, but I feel it's not worth the added complexity to add field-level immutability directly into the language since you can use other language features to effectively achieve it. `std::cell::Cell` and `std::cell::RefCell` give you (2).
It is, but sometimes you have to do it for practicality. Usually this comes up when people use `Rc<T>`, as that only supports immutable types (so any mutability you want needs to be in the form of `Cell`/`RefCell`).
I don't think so, because then you'd get an overconservative iterator invalidation checker. For example:
    struct Foo {
        a: Vec<i32>,
        b: Vec<i32>,
    }

    let foo = Rc::new(RefCell::new(Foo::new(...)));
    for x in foo.borrow_mut().a.iter_mut() {
        for y in foo.borrow_mut().b.iter_mut() {
            // ^^^ FAILURE: `a` is already borrowed mutably
        }
    }
The failure happens because the RefCell is checking to make sure there are no two `&mut` references at the same time to `Foo`, to prevent iterator invalidation. But this is silly, because it only needs to prevent access to `a` while you're iterating over it, not both `a` and `b`. Changing the definition of `Foo` to this fixes the problem:
Makes sense. Though Cells would really stand out in the code as a potential for race conditions (unless you get a run time failure?). Thanks for the insight.
There is a special bound--Share--that is required on thread-safe data structures and forbids Cell/RefCell. The thread-safe equivalents of Cell and RefCell are Atomic and Mutex, respectively.
/dev/urandom is a perfectly fine source of machine-generated randomness. To break /dev/urandom (but not /dev/random), you would need to find a corner case where you can break the cryptographically secure pseudo-random number generator (CSPRNG) only when you can make certain guesses about its seed values. Since recovering those seed values from the CSPRNG's output is a hard problem, and predicting what the CSPRNG will eventually output is likewise a hard problem, this is pretty unlikely.
But let's quote the kernel source on the matter[1], just to be clear:
> The two other interfaces are two character devices /dev/random and
> /dev/urandom. /dev/random is suitable for use when very high
> quality randomness is desired (for example, for key generation or
> one-time pads), as it will only return a maximum of the number of
> bits of randomness (as estimated by the random number generator)
> contained in the entropy pool.
> The /dev/urandom device does not have this limit, and will return
> as many bytes as are requested. As more and more random bytes are
> requested without giving time for the entropy pool to recharge,
> this will result in random numbers that are merely cryptographically
> strong. For many applications, however, this is acceptable.
The real point to be made here is that, yes, /dev/random is theoretically better - but for many applications, letting /dev/random hang to wait for entropy is worse than having /dev/urandom use a CSPRNG in a way that is generally recognized to be secure.
I would like to add that the original article is talking about using /dev/urandom to generate long-lived keys, not session keys or the like. In that case, blocking is sometimes acceptable in exchange for well-sourced entropy, since the fact that the key is long-lived implies that you don't do this very often. The argument for /dev/urandom only carries weight when you are trading for non-blocking behavior (which is 99% of the time). As such, there is nothing wrong with being slightly paranoid and using /dev/random if you can afford the time spent collecting entropy.
While this is true, many package managers will still insist on pulling in Qt when installing Gimp anyway, because one of the suggested dependencies requires it. Lots of GTK apps will, because the OS team isn't thinking in the same terms you are.
And unless you feel like telling your package manager no (an inordinately painful experience that involves tracking down the errant dep, blocking it, and staying forever vigilant against future insistent prods from your OS), you'll just grumble a bit and put up with that nonsense. Life's too short to spend it keeping Qt off your system.
He is saying that, on average, more people will respond more directly and immediately to punishment than to reinforcement.
The context is that influencing behavior via punishment is a short-term tactic for organizations: in the long run, we would like to believe that reinforcement yields a net gain. However, cultural influence means the short-term behavior-control tactics prevail, with little heed paid to the tradeoff.
One might also argue that it is cheaper in the short term to punish than to reward, and this further perpetuates the downward cycle as a staple of organization culture.
I haven't messed around with x86 enough to know whether it is actually somehow easier than the ARM stuff, but I certainly agree with you that CPUs are damn complicated.
I loaded up the datasheet for the M3 while typing up this reply: 384 pages. Contrast that with the TI 74181 ALU datasheet from the days of old: 18 pages, most of which are just detailed diagrams of the distances between the pinouts. The logic diagrams fit on a single page. You could build a simple CPU around one of these chips in a few hours in your basement.
Hardware is only going to get more complicated. At what point does it become so complicated that no one person can reasonably understand how a computer works "under the hood", even from an abstract level?
The relationship f(x)=x/x is only defined for x /= 0 and thus is not equivalent to 1. We can see this in [a] when we ask for the domain of the function. We can then redefine some function f' as a piecewise function with f'(0)=1, but in proofs we must first show that using f' as a substitute for f does not affect our result.
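Written out, the piecewise repair looks like this (LaTeX for clarity):

```latex
f'(x) =
\begin{cases}
  \dfrac{x}{x} & x \neq 0 \\[4pt]
  1 & x = 0
\end{cases}
\qquad \text{so that } f'(x) = 1 \text{ for all } x.
```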
In one of the below posts we have the suggestion
> But what if you're in a context where you're not reasoning about continuous functions at all? Why would you have to be subject to reasoning that doesn't apply to your situation?
In this case you could do the above, if you have to concern yourself with, e.g., a domain of real numbers arbitrarily close to the undefined location. Alternatively, you could define our original function f only for real numbers greater than 0, in which case you escape the need to redefine functions to make them easier to work with.
> The relationship f(x)=x/x is only defined for x /= 0 and thus is not equivalent to 1.
If that's the case then:
x * f(x) = x cannot be equivalent to f(x) = x/x, which breaks algebra in pretty fundamental ways (since the former would certainly be defined for 0 but the latter would not).
The second major problem is that it also breaks calculus. Let's start with a straight line: f(x) = 2x.
Now let's take the first derivative of this: f'(x) = 2x/x.
Does the line at the point where x = 0 have a slope or not? If this is discontinuous, then you have also broken calculus.
This gets at why 0/0 is undefined: when you cannot express it as a limit, and have no idea how the two zeros were derived (and hence what they mean), you cannot give a specific number. You can come up with equations which for some value reduce to 0/0 but whose limits range from negative infinity all the way to positive infinity. But that doesn't mean that x/x is undefined where x = 0. x/x reduces to 1. Always. Anything else breaks higher mathematics generally.
> "x * f(x) = x cannot be equivalent to f(x) = x/x"
Sure it's equivalent, over a domain not including x=0. This does not break algebra any more than, say, restricting the domain of the square root (when working in the reals) to non-negative numbers. We work in restricted domains in mathematics all the time.
> " f'(x) = 2x/x"
f'(x) = lim (h->0) [2(x+h)-2x]/h. Since h is approaching (and therefore not equal to) zero, there is no problem. Any appearance of 0/0 in the problem is a result of an attempted (but unsuccessful, that is, indeterminate) evaluation -- it's not actually 0/0, it's 2h/h where h is close to but not equal to 0.
We don't need to define 0/0=1 in order to have either algebra or calculus work. We choose to define 0/0=1 in certain circumstances which make certain calculations go more smoothly, and we choose not to define 0/0 in other circumstances where it's either unnecessary or potentially misleading.
> This does not break algebra any more than, say, restricting the domain of the square root (when working in the reals) to non-negative numbers.
Sure it does, because if that is the case, you restrict your domain when you divide by a variable expression. If you divide both sides by x-1, then you effectively rule out 1 from the domain.
That's the problem.
Now this is not the same as 0/0. The point is that 0/0 is only undefined when it persists after simplification and only because you can't define a relationship between the two zeros.
I.e. 0/0 is undefined because 2x/x, 52x/x, and x^2/x give you three different answers as x->0. That doesn't mean that every function which has not been reduced and can transiently evaluate to 0/0 is treated as non-continuous.
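Written out, the three limits in question (each naively of the form 0/0 at the point itself, yet with three different values):

```latex
\lim_{x \to 0} \frac{2x}{x} = 2, \qquad
\lim_{x \to 0} \frac{52x}{x} = 52, \qquad
\lim_{x \to 0} \frac{x^2}{x} = 0.
```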
Regarding derivatives, this highlights the problem: to take the derivative of a variable raised to a simple exponent, you multiply by the exponent and divide by the variable (x^2 becomes 2x, 2x becomes 2, and so forth). What this means is that you may be dealing with a limit, but the limit defines a function: something like 2x^2/x for the derivative of x^2, and 2x/x for the second derivative.
Unless you allow simplification before determining whether the function is continuous, these things don't make sense. If you allow reduction first, then x/x^2 is undefined where x = 0, but x/x is not, because you can reduce it to 1 before applying any further logic. Both may appear to evaluate to 0/0 however.
There are a huge number of things that seem to break in algebra and calculus if one treats x/x as non-continuous and undefined. The simpler solution is to allow reduction to 1 before determining that it is undefined. (of course 52x/x would reduce to 52 instead, again showing why 0/0 is oversimplifying the problem).
> "you restrict your domain when you divide by a variable expression"
Why is this a problem?
Whenever you perform an operation that has a restricted domain, you restrict your domain. This may result in an actual "not defined at x=1" result, or simply "the value at x=1 is found through an alternative method" result.
> "0/0 is only undefined when it persists after simplification"
When you're working in the context of limits, it wasn't an actual 0/0 to begin with; it was near-0/near-0, which is perfectly OK to simplify. The limit defines a function that already has a restricted domain -- h->0 means h is not actually zero. The expression naively evaluating to 0/0 simply tells you that you need to do more work to properly evaluate it -- 0/0 is not the actual result.
Note that using the limit to find the derivative gives you a function that you'd like to be continuous in x, but the divide-by-zero is in h. Consider f(x)=x^2. The derivative is
lim h->0 [(x+h)^2 - x^2] / h
= lim h->0 [x^2 + 2xh + h^2 - x^2] / h
= lim h->0 [2xh + h^2] / h
= lim h->0 [2x + h] * (h/h)
since h does NOT equal zero, we can treat h/h=1, and the limit trivially collapses to 2x. Note that we never had the variable x in the denominator of our fraction; we never placed a restriction on x or suggested anything about a discontinuity relative to x. We only restricted h, which was already restricted by the limit itself.
The expression 52x/x has a restricted domain as well (x /= 0). But that's normally not what you mean when you write it. It isn't often you really care about expressions like 52x/x; they are generally just intermediate steps in getting to a real solution.
For example: I have done a lot of work on some equation that is interesting to me, and finally I have reduced it to 5+yx=52x+5. Now obviously the rules of algebra let me subtract 5 from each side and be left with yx=52x, and this subtraction also has no effect on the domains for which our variables may be defined. All is well.
But dividing out the x is what we are concerned with now. Surely y=52 is a solution to the equation - why can this not be true for all values of x?
Well, for nonzero x we have y=52 and nobody will complain. For x=0, though, solving for y is problematic. Note that if x=0, y could be 1, or 33, or any number. If there is some function f such that y=f(x), then it follows that f(x) holds a unique value y for each input of x/=0, but for x=0 we cannot know what y might be; this is what we mean by undefined. Thus we say the domain of f(x) is the set of all real numbers x, such that x is not equal to zero.
If you have been told otherwise, or even gotten away with doing algebra or calculus under the assumption that the domain of our function f may include zero, you are taking a mathematical shortcut rather than performing formal analysis. It is not calculus nor algebra that is broken by saying f is undefined for x=0, but rather your (albeit practically useful) misconception of these systems.
I'll finish with some formal rules of algebra, to hammer this in:
- [P6] Existence of a multiplicative identity: a * 1 = 1 * a = a ; 1 /= 0.
- [P7] Existence of multiplicative inverses: a * a^(-1) = a^(-1) * a = 1, for a /= 0.
These are taken from page 9 of Spivak's Calculus, 3rd edition. He goes on to build the foundations of all of calculus from rules like these. Surely he would not present this as a fundamental axiom of his system, only to immediately (and silently) reject it and build a flawed calculus instead!
Indeed, on pg. 41, when defining functions, Spivak later writes (emphasis his):
> It is usually understood that a definition such as "k(x) = (1/x) + 1/(x-1), x /= 0, 1" can be shortened to "k(x) = (1/x) + 1/(x-1)"; in other words, unless the domain is explicitly restricted further, it is understood to consist of all numbers for which the definition makes any sense at all.
> Sure it does, because if that is the case, you restrict your domain when you divide by a variable expression. If you divide both sides by x-1, then you effectively rule out 1 from the domain.
> That's the problem.
That's the problem that a mathematician must handle. The solution to the equation
x * f(x) = x
is very simple: x is either 0 or such that f(x) = 1.