There's another factor at play in these discussions: Cynicism tends to look smart to outside observers.
Cynicism is often mistaken for competence by outside observers. If a piece of software has a specific problem, the person chanting "This software is garbage" is going to look smarter than the people saying "Hold on, maybe this software is this way for a reason".
There are various (somewhat weak) studies on the topic. For example: https://www.ncbi.nlm.nih.gov/pubmed/29993325
> Four studies showed that laypeople tend to believe in cynical individuals' cognitive superiority. A further three studies based on the data of about 200,000 individuals from 30 countries debunked these lay beliefs as illusionary by revealing that cynical (vs. less cynical) individuals generally do worse on cognitive ability and academic competency tasks. Cross-cultural analyses showed that competent individuals held contingent attitudes and endorsed cynicism only if it was warranted in a given sociocultural environment. Less competent individuals embraced cynicism unconditionally, suggesting that-at low levels of competence-holding a cynical worldview might represent an adaptive default strategy to avoid the potential costs of falling prey to others' cunning.
In toxic and mismanaged environments, being a cynic from the sidelines can be a better political strategy than being the positive person in the trenches trying to find solutions and compromises.
Cynicism also has the advantage that no solution needs to be provided, so you are never wrong.
> In toxic and mismanaged environments, being a cynic from the sidelines can be a better political strategy than being the positive person in the trenches trying to find solutions and compromises.
Yes. In the best places I have worked, there were great constructive discussions on how to solve problems.
If you are in an environment where cynicism is rewarded, run. There are better places to be.
I quit the app as a user. It wasn't a platform thing in the technical sense, although there is probably something to be said about how a platform shapes its community; it was the community itself.
Interesting. In my anecdotal experience cynics tend to speak in a very confident, matter-of-fact tone that suggests others are fools for not embracing the cynical perspective. I imagine this gives cynicism an edge because people don't like to be perceived as too gullible or naive.
I have found that this can be a dangerous gamble on the part of the cynic. Often the cynic's argument can't even withstand a single clarifying question. The first person to ask "can you explain in detail why this won't work?" can often elevate themselves above the level of the cynic, sometimes humiliating the cynic in the process. I have experienced this from both perspectives and as a bystander. Today I try as much as I can to be the problem-solver instead of the cynic. I try to do it in a way that gives the cynic a way to save face, though - it's almost never a good strategy to humiliate a team member.
> The first person to ask "can you explain in detail why this won't work?"
Unfortunately, in some forums, that often leads to that person suffering a barrage of negativity whilst the cynic ignores the question in favour of blustering elsewhere.
I think the same thing is going on with the commentary on the economy at the moment. Whenever I browse around Reddit, for example, the comments that get upvoted are the cynical ones, "markets are irrational", "the Fed is pumping up stocks", etc., but they do not put any real thought into what is happening with the markets and why.
I don't think starting from the EMH is correct either; I suspect nobody really knows why the US stock market doesn't reflect the deepest recession since the 1930s, though of course everyone has opinions. The market may simply be wrong in its assessment of risk at this time (it tends to swing wildly around the true future returns), or the recession might be very short (as Mr. Market currently believes).
How long do you think the market is predicting the recession to last?
Since March there have been predictions of 30% unemployment for Q2, and also predictions of a 50% GDP drop for Q2.
So the current market should have these predictions priced in.
I think one major thing people fail to consider is that this recession is not like other recessions: it's not a failure of economics or a bubble popping, it's a temporary natural disaster, and the economy has been deliberately shut down. So it shouldn't be compared to other recessions, which were failures of how the economy worked. Right now there's no evidence that the economy or the system is failing by itself. Of course it might, but there's no evidence.
I think the current market price stems from a rational strategy by investors. This is why I think markets are rational. The current recession doesn't have implications 5 years into the future. We might lose some potential developments we could've had right now, but they just get delayed a few years, so it's not a big deal.
Maybe it's hindsight on my part, but to me it seems clear why the stock market is priced where it is. Maybe I'm somehow "overfitting" my theory, but it seems to make sense to me and I'm not seeing a clear logical error in it. I've argued a lot on this topic on this forum and on Reddit, and I haven't seen good retorts yet; either I get no responses, or somebody says "I just disagree and that's it".
If it lasts long enough, a temporary natural disaster becomes an economic disaster.
I think the rough market consensus at the moment is as you state it: this is a natural disaster which will be over soon. That is a possible outcome at this point, but a second wave in the US, for example, would have significant impacts and cause entire industries to fall into bankruptcy, wiping out shareholders and leading to significant second-order effects on other industries and supply chains, plus causing a consumer-led recession. It seems increasingly unlikely to me that this will be over as soon as you expect. Just to pick one example: that 20-30% unemployment will not go away quickly unless this is over for the US in the next month or two and those people are rehired. Companies will not rehire unless they are certain it is over.
> Current recession doesn't have implications 5 years into the future
Remind me in 5 years! I don't think it will impact behaviour massively in 5 years, but I do think there is a danger this recession could trigger significant other events which have an impact for more than a few years; for example, unrest in the Middle East after 2008 led to regime change in many countries, and to the ongoing war in Libya. Economic depression, unrest and wars are a common occurrence after smaller crises than this one.
When people say the market is irrational it is more to point out that the market valuation swings about the future expected returns quite wildly - for example if what you say is true, the market should not have crashed quite so strongly when coronavirus first spread then bounced back quickly again, but it did, because people panic. It does the same in the opposite direction based on incomplete or incorrect information about drugs/vaccines. There's a nice graph in the above book of the expected return from stocks (pretty much a straight line up and to the right, with small bumps on the way), and stock market valuations (massive swings around the expected return based on sentiment in that moment).
How do you determine whether it was panic or just uncertainty?
Uncertainty about the future could be a rational reason for markets to swing wildly. It might take time to gather enough data to assess the risks properly, and while the risks are unknown it is rational for the market to be valued lower.
Each bit of knowledge makes the uncertain a little less uncertain.
This lecture contains a similar graph on asset prices - he uses returns from the S&P for the period 1871-2013. The book really is worth a read too, it's more accessible than the lecture or slides might indicate.
> The comparison is between the actual stock prices against the constant discount rate of subsequent real dividends.
The EMH states that the market incorporates all currently available information, but future dividends are known only in hindsight. Prices that fluctuate around future earnings therefore are not in conflict with efficient markets at all.
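For concreteness, the comparison in question (from Shiller's work, as I understand it) is between the actual price $P_t$ and an ex-post "rational" price built from the dividends $D_{t+k}$ actually realized afterwards, discounted at a constant rate $r$, roughly:

```latex
P^{*}_{t} = \sum_{k=1}^{\infty} \frac{D_{t+k}}{(1+r)^{k}}
```

Since $P^{*}_{t}$ depends on dividends realized only after time $t$, the fact that $P_t$ swings around it is not, by itself, a violation of the EMH.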
I'd recommend the book if you haven't read it; I don't think my poor summaries do it justice. From memory it does address some of your objections, and it also doesn't attempt to throw out the EMH, just questions whether it fully explains market behaviour.
Yes, the chart compares measured returns (dividends) against projected returns (prices), so it is not a prediction machine, but I think it does illustrate well that market prices are usually far from true value.
If you think there's a significant chance the market is wrong in its assessment of risk, you can make a significant amount of money by betting against it. For example, as of the time of this writing, put options at $310 for December 31 on the S&P 500 (current price $293.94) cost $33.09. That means that if there is at least a 50% chance of a 17%[1] drop in value of the S&P 500 at any time between now and December 31, you can make money on average by buying those puts and selling them as soon as prices are down by 17%. The possibility of making this bet means that the market as a whole is 50% confident that stock prices of the S&P 500 will not drop by at least 17% before December 31.
Likewise, option prices imply:
- A 75% chance[2] prices won't drop by 26%+ (which would be back around the low from late March 2020) by the end of the year
- A 90% chance[3] prices won't drop by 48% or more (around the movement from when it became clear that something was deeply wrong in 2008 to the bottom)
- A 95% chance[4] the price won't drop by 58% or more (which is around the peak-to-trough movement from the 2008 financial crisis)
- A 99.5% chance[5] that the price won't drop by 90% or more (like the great depression).
While it's _possible_ that the market is wrong about its assessment of risk, that would also mean that it's wrong about its assessment of its assessment of risk, and people who are better at risk assessment could clean up.
-----
[1] OK, technically a 50% chance of a drop of 17.05%. ($310 - ($293.94 * 0.8295)) / $33.09 == 2.0000, i.e. if you buy now and sell as soon as the price drops by 17.05%, you make a 2x return on investment.
[2] $20.77 for $280 puts expiring Dec 31
[3] $4.62 for $200 puts expiring Dec 31
[4] $1.83 for $160 puts expiring Jan 15. I actually have a small position here, because "95% chance this isn't at least as bad as the 2008 financial crisis" seems a bit overly optimistic to me.
[5] $0.16 for $75 puts expiring Jan 15. Note that there is extremely low volume this far out on the tail.
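To make the breakeven arithmetic above concrete, here is a minimal Python sketch using only the prices quoted in this comment (treating the put as sold for intrinsic value once the target drop happens; the helper name is mine):

```python
# Sketch of the put arithmetic above, using the quoted prices:
# S&P 500 at $293.94, $310 puts expiring Dec 31 costing $33.09.

def payoff_multiple(strike: float, premium: float, spot: float, drop: float) -> float:
    """Multiple of the premium recovered if the underlying falls by
    `drop` (e.g. 0.1705 for 17.05%) and the put is then sold for its
    intrinsic value, strike minus the new price."""
    price_after_drop = spot * (1 - drop)
    return (strike - price_after_drop) / premium

multiple = payoff_multiple(strike=310, premium=33.09, spot=293.94, drop=0.1705)
print(f"{multiple:.2f}x payoff")              # ~2.00x

# A 2x payoff breaks even when the drop has a 50% chance of happening,
# so the market-implied odds of the drop are roughly 1/multiple.
print(f"implied odds ~ {1 / multiple:.0%}")   # ~50%
```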
The market is usually wrong, but it's very hard to know when it will stop being wrong in one direction and start being wrong in the other (as in the great depression say).
"The market is wrong but not in any way that's possible to make concrete predictions about ahead of time" and "the market is correct relative to available information" seem like functionally identical statements to me.
Unless you mean something stronger than "today's price usually isn't the same as next year's price after taking into account the discount rate" by "the market is usually wrong".
Just because you can't predict it in the short term doesn't make it correct or efficient in the short term; it is only correct in some sense in the very long term.
What does 'irrational in important ways' even mean?
What Shiller has shown is simply that the idea of a perfectly efficient market is self-contradictory.
Efficiency is just a very good approximation in practice.
I believe he shows in that book it's not a very good approximation - the market oscillates from overshooting to undershooting actual returns, and I don't think he believes market movements are always based on rational decisions, hence the title of the book.
> I believe he shows in that book it's not a very good approximation
But what does that mean? What is a good predictor of future returns and what is not? If markets are not, what is?
> the market wildly overshoots and undershoots actual returns
Well, the actual returns are only known retrospectively, but the current prices have to reflect all future returns. So, of course, current prices will always be wrong, that's not surprising at all.
It's simply easier to dismiss the potential legitimacy of actions which challenge one's existing notions than to reconsider those notions.
What's interesting is that "markets are irrational" cannot be a failure of the market, only a failure of the rational theory being used to interpret markets. That is, current economic theory falls short in its attempts to explain market phenomena; the extent to which the market is "irrational" is precisely the degree to which it disagrees with the theory. It was for this reason that von Neumann readily decided to eschew economic theory.
Quite contrary to common belief, "rationality" has nothing to do with knowledge or degree of skill in achieving some objective measure. For instance, someone with perfect pitch has an ability which is irrational, but one could hardly call it invalid.
Rationality is merely the quality of propounding some intellectual framework to which others can subscribe, sharing that understanding through the symbols and operations embedded in our global culture.
A thing is said to be rational when it can be interpreted completely within one of these pre-existing frameworks that claim to circumscribe the phenomenal sphere of that thing.
It's simultaneously not that complicated and more complicated.
The idea that "markets are rational" is an extension of the idea that "economic self-interest is rational." That's a political and moral position - essentially equivalent to "greed is good" - and not an empirical argument. You can certainly argue strongly against it, and many people have.
The second-order question is whether markets are genuinely rational in the sense of fulfilling the contract implied by ideas like rational price discovery, accurate prediction of future prospects ("pricing in", etc.), and so on.
You can agree with the first position and still argue strongly against the second.
For markets to be "rational" in the sense that's usually meant, both positions have to be true. IMO this is simply nonsense. There's no empirical evidence that markets behave any more rationally or are any better at predicting the future than a herd of animals or a school of fish is, and plenty of evidence - not least regular crashes - that markets are actually very bad at fortune-telling.
The third order question is whether markets are "rational" in the sense that they create a political and economic reality distortion field which benefits their own interests at the expense of the wider economy. This is another political position, but it implies agency - almost a form of sentience - which predicts national and international policy because it influences it - rather than being influenced by it.
IMO this third position is closest to truth. Markets are politics by other means, and the successes come from having access to political and economic leverage that other classes don't have. The idea that markets are "wise" or even good at price discovery is questionable at best. But market morality is clearly a very influential thing, and it's easier to look like a winner when you have your fingers on the scales.
I think the current market rally is an example of this effect. "Markets" are hoping they have enough influence with the Fed and Trump's government to keep a privileged position above the carnage that's going to spread through the rest of the economy. And this is what has really been "priced in."
That's a bit off-topic, but what do you think is happening? I'm considering buying some S&P 500-based stock, but I'm a bit afraid. What do you, anonymous stranger on the interweb, think?
I have many thoughts on this topic. I'm not an expert, and many of these are theories I've come up with myself, so they may also be flat-out wrong, but I just disagree with the sentiment that markets are irrational or that it's just the Fed pumping up stocks. I think saying those things is dismissive and anti-intellectual.
When I'm trying to think about the subject and whether I should invest myself, I try to figure out what the actual long-term consequences of coronavirus are, and what investors' motivations are in general.
Here are a few assumptions I make:
1. Stock price is determined by an umbrella of possible consequences and the chance of each consequence. These are just examples, but there might be a 30% chance that we can manage coronavirus while reopening the economy, a 50%+ chance that we will have a vaccine we can mass-produce within a year, etc. For each of those consequences we can think of an appropriate SP500 price (I'm taking the SP500 here as an example, but it could be any other index as well). If the economy goes back to normal, the SP500 could go to 3400 again. Investors will try to predict the odds, and in total the stock price should reflect what investors on average think the odds of each outcome are.
2. The way investors react to these possible consequences comes from each investor's expectations and long-term strategy, so it's important to know the average investor's expectations and risk tolerance. If the SP500 is at 2200 (which it was when people were expecting it to go to 1700) and you expect a 90% chance of the SP500 going back to 3400 within 3 years, you can determine that you have a 90% chance of making 54% returns within 3 years, which is crazy good (a toy calculation follows this list). So at that point it makes sense for most investors to buy in, because even if it goes lower, you will still be making very good returns. Now that the SP500 is at 2800-2900 I think it's still better to buy in than not, but you obviously can't expect returns as fantastic. I bought in, even with a little bit of margin, and I have enough margin left to buy more if the stock should fall. I'm investing and looking at least 5-10 years ahead. So it makes sense for investors to buy in now that they are not crippled by fears of what coronavirus might do; it's already quite clear it's not the end of the world, even if it might shake up the economy further throughout this year.
3. There are possibly some positive long-term effects of coronavirus: increased productivity, tech and automation from having been forced to experiment with WFH, etc. I mention these not just to sound optimistic, but because they are something a lot of people don't bring up.
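As a toy illustration of the expected-return reasoning in point 2: the 2200/3400/90% figures are from the comment above, and pairing the remaining 10% with the feared 1700 level is my simplifying assumption, purely for illustration.

```python
# Toy expected-return calculation for point 2 above.

buy_price = 2200        # SP500 level at the assumed entry point
p_recover = 0.90        # assumed chance of recovery within 3 years
recover_level = 3400    # pre-crash level
downside_level = 1700   # the level people feared at the time (assumption)

upside = recover_level / buy_price - 1     # ~54.5% return
downside = downside_level / buy_price - 1  # ~-22.7% return

expected = p_recover * upside + (1 - p_recover) * downside
print(f"expected 3-year return ~ {expected:.1%}")   # ~46.8%
```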
These are just rough thoughts I have right now, but in conclusion I have decided to buy in, since I have determined that over 3-7 years there is a very high chance of making very decent returns. I doubt corona can affect us over that long a timeframe.
Of course it's more risky than usual to buy in right now, so if you have a low risk tolerance it's up to you. If you might need that cash within 1-3 years, then maybe don't buy in; whether you should buy in right now should be determined by when you want to use that money and how much you are willing to risk. Usually more risk means more reward, since only people who can handle that risk will be buying (so stocks will be more underpriced when there's more risk and uncertainty). And since I don't want to use that money before 5 years anyway, for me it definitely makes sense to take that risk. Also, even if stocks fall just after you buy in, you must be able to handle that and not think you did the wrong thing and sell. You likely wouldn't be able to sell in time anyway.
Many people are fearful right now, which means a lot of people are holding cash or shorting. Once fear subsides, stocks will start to climb faster again, so you want to be in before the general fear subsides.
Thank you very much for that lengthy explanation.
I have experienced the fear of loss before, and I was surprised how badly I handled it.
It short-circuits all rational decisions.
I love this short post. Software development is fundamentally hard, despite frequent statements to the contrary by the "everyone should be a programmer" folks.
It requires uncommon persistence, and a love of (if not an aptitude for) abstract reasoning in pure form.
I think the thought process goes like this:
"What's he doing?"
"He's doing THAT!?"
"Why?!"
"I don't understand why."
"This software is garbage."
Some people claim to be able to glance at code and know what it's doing quickly, but it's just not possible to ingest a few thousand lines of self-referencing logic and understand it quickly if it is doing much more than shuttling data between a database and some kind of display rendering.
If you've learned to program, you must have learned patience for yourself. You have to show that same patience to the authors of other programs if you want to understand their work. There are many thoughts embedded in their code. It is as deeply complex as the person who wrote it.
Another vector for unwarranted criticism to watch out for is self-insertion, not only in software. All of us (probably) constantly do self-insertion and imagine fictional situations where we are victims of some strange injustice.
Hacking mostly in the evening, and at latitudes where the sun doesn't shine for much of the year, I was recently momentarily mad at LibreOffice Calc for having lackluster dark-mode support. But the vast majority of people who use spreadsheets, I assume, do so during the day in well-lit offices, so I was inserting myself into a context which basically didn't concern me.
There are a lot of things which trigger us to self-insert, and not all of it is unwarranted or negative. Maybe one situation which causes problems is when people do it without an invitation...
Empathy and patience are underrated qualities in developers. The empathy to not immediately blame the developers and the patience to find the actual reason behind the problem.
Blaming developers reminds me of the Design of Everyday Things where Norman discusses human error. He makes the point that human error should be an extraordinarily rare cause of an incident. Instead one can always go further and find the root of the human error. Why was the code quality poor? Were the developers being rushed? Were proper standards not put into place? Did we hire the wrong people? Did we misunderstand the problem? Was there a miscommunication between management and developers?
Social stuff has gotten complicated by living in a world with a global internet. We each individually bring our personal experiences and prejudices to the table and often have no idea what the context is.
I got my first experience as a moderator on an email list with about 300 members and they were all "gifted." It was a parenting list for people with gifted kids, basically, and every single person there was used to being the smartest person in the room. So if you disagreed, obviously, you must be an idiot who had no idea what you were talking about.
It took a long time to lay some groundwork and convince people they were, for the first time in their lives, hanging out with 299 other people just as smart as they were who had read just as much as they had on the topic and so forth and, in some cases, drawn other conclusions.
That felt like a long, hard haul to me, and then I quit after about six months, because I was relatively new to the forum and my boss basically told me I was stabbing her in the back for doing my job, and I was all "I ain't being paid enough for this", because it was a volunteer position.
I quit and let her know what she was doing wrong as a going away present, because I'm generous like that. I'm sure my analysis wasn't appreciated. It was most likely interpreted as sour grapes and as me pissing all over her as I left.
HN is better than most places about this kind of thing, but it's just hard communicating on the internet and not knowing the same things about the people we are speaking with that we tend to know about people physically in the same room. We haven't gotten all that sorted out yet and we may never sort it out.
If we don't sort it out, the result may be massive die back of the human race to a level we know how to cope with. The internet is our opportunity to sort a bunch of things that aren't working well, but I don't know if we are successfully tackling those questions.
Perhaps this post by Colin is one small crumb that can move that forward. And perhaps not. Perhaps it will get lost in the crowd and make no real difference.
What the author is really railing against is thoughtless critique. Declaring software garbage is usually a useless contribution because it only identifies that the software can be improved, which is almost always true. However, it fails to provide any concrete advice for improvement that could be debated. There's the saying "it's not science if people can't disagree with you." Seeing how engineering involves the application of science, it's not engineering either. Therefore, these kinds of comments have no place in a professional community.
Being fast to critique _is_ good, provided the critiques are thoughtful. A thoughtful critique takes a clear stance, meaning it can be debated and (in an engineering or scientific setting) experimentally measured to verify it produces the intended effects. A thoughtful critique also poses questions not answered by the original body of work.
In the example described here, fixing the software would have been ideal. However, we can imagine scenarios where this isn't possible (e.g. you have a binary from a now-defunct entity that you must continue using). The original story should have explained this. If it didn't, the critique "fix the garbage software" isn't unreasonable, albeit badly phrased (better: "why didn't they fix the software?").
I think what the author is railing against is the tendency to criticize something before becoming deeply familiar with the problem.
There's a tendency I've observed nearly everywhere I've worked where a developer wants to totally rewrite some piece of software because "it is garbage." In actuality, their motivation tends to be a combination of "other people's code is hard to read and this is not how I would have done it" and not having the depth of familiarity with a problem that you attain by doing implementation work.
In any case, asking why something is a particular way is a good thing to do.
Saying that some piece of software is "bad" (or any synonym of "bad") of course isn't helping anyone. It's demoralizing and counterproductive. I suspect a lot of the bad blood is because so much of the software world is like the faceless behemoths we all love to hate:
- The process of reporting bugs is onerous for most projects, especially old and/or big ones, which tend to have behemoth issue-tracking systems and a bajillion rules (written and unwritten) about what should be in a report, how it should be structured, where it should be reported, and so on. Bonus points for having completely inscrutable fields (domain-specific tags like "T+" or "BZTM" being popular), a manual sign-up approval process, no way to upload files, or not responding to reports for anything older than HEAD even though it's a stable package in major distros.
- The chance of success of a bug report (that is, it's fixed by the time I move to some other alternative) is effectively random. Even heavily funded projects like Firefox leave popular bug reports by the wayside for years, while lots of tiny projects fix even minor issues within days.
- As users become more sophisticated they realize that almost all software is garbage by today's standards. If it's not garbage today it'll be garbage in a year, when it no longer works with any of the surrounding software ecosystem, doesn't implement the latest version of the relevant standards, doesn't use secure crypto primitives, emphasizes [old way of working] over [new way of working], you name it. Software can only stay ahead of the curve by being tiny and actively maintained, or by being really actively maintained.
That's fascinating ... can you provide references for that research? We've found the careful controlling of criticism to be invaluable in getting ideas up front quickly so they can then be dissected, mutated, co-evolved, and their best points combined.
Iterating quickly and ejecting bad ideas quickly has its place, yes, but not always. I'd be interested to see the underlying context for your statement here.
This doesn't support the point, but the conversation reminded me of something related that I read a while ago. Maybe you will find it interesting.
The researchers found that one problem with brainstorming is the verbal component: due to social factors and limited oral and aural bandwidth, fewer ideas get shared, and people forget what they were about to say.
They decided 'brainwriting' is more effective, particularly 'asynchronous brainwriting', which alternates between writing your own ideas in a group and reviewing others' ideas. This method would support what you want: getting all the ideas out quickly, because people are less likely to hold back, then iterating on them.
That's from the paper? I wasn't sure from the article - they mentioned 50+ individuals but then said they had various conditions (group first or individual first) so I assumed they ran them in different groups. I'll agree a single group of fifty is ridiculous - 10 would be pushing it for me. You'll probably get 2 people posturing/dominating in most places.
My bad. I just mis-read the article. Just now I went and downloaded the paper, and I see no group size specified, nor indeed any details of the procedure they followed. That might just be the house style of the journal. However, they mention that models predict that creativity increase with the size of the group (to a point). But they don't specify this point.
The recommendations they draw are:
1. Combining group and solitary brainstorming
2. Having group brainstormers interact by writing instead of speaking (“brainwriting”)
3. Using networked computers on which individuals type their ideas and read the ideas of others (electronic brainstorming)
They nicely give some recommended readings:
- Paulus, P.B., & Nijstad, B.A. (Eds.). (in press). Group creativity. New York: Oxford University Press.
- Brown, V., Tumeo, M., Larey, T.S., & Paulus, P.B. (1998). Modeling cognitive interactions during group brainstorming. Small Group Research, 29, 495–526.
Many institutional brainstorming sessions I have attended are a complete waste of time, for all the obvious reasons (mostly ego-dominance and poor leadership). The paper refers to this as 'production blocking'. However, I find that, managed correctly, there are few better ways to introduce diversity of thought into a process.
Well the lack of numbers is a bit disappointing. I admit I'm not interested enough to really dig in but I do find the results interesting. And they did shoot down the previous advice I was given, which is to have everyone set aside time to write down some ideas before the meeting. It seems the presence of the group itself is a strong motivating factor.
That may be, for genuinely bad ideas, but that is not really the issue here, which is the summary dismissal of something without due consideration, and, more generally, the predisposition to act that way.
I have seen good things dismissed because the critic has some arbitrary standards of style, or is disinclined to make an effort to understand the implementation, or has failed to learn enough about the problem domain to understand all the complications, or wants to rewrite from scratch because "I'm a software engineer, not a maintainer", or "not invented here", or to lower expectations, or to intimidate and dominate... To be fair, at some time or other I have been guilty of most of these things.
I suspect that college sometimes gives people the mistaken impression that analysis and critical thinking (and being smart) is all about picking apart someone else's thoughts, but it takes more than that to create something worthwhile.
Creativity is a process. The function of a brainstorm is to initiate that process with open-ended thinking. Further into the process, more closure is required.
Being slow to criticise, particularly with colleagues you have to work day in and day out with, is critically important.
That said, much of the FOSS currently available is either itself crap, or built on libraries and/or toolkits that are crap. I cite specifically anything of the GTK flavour, many popular things built in C/C++, and to a lesser degree Qt.
Why is this? Writing capstone technology (compilers and languages) is very hard. Many industries start life fragmented, but looking now at the tech landscape, sadly there is more hegemony in the proprietary world than the open source one.
If FOSS is ever to be adopted by a wider community that is inviting to newcomers (both users and hackers), this has to be addressed. It is not being addressed, and the power and elegance offered by the tech giants (Swift, C#/.NET, etc.) continue to lap the open source equivalents.
One example I'll cite is Inkscape. I now honestly believe it would have resulted in better software if it had been written once per platform (WinForms/WPF + Cocoa) rather than with GTK. It's years between minor bug releases, and it gets slower with each release. The (just released) 1.0 is unbearably slow (i.e. unusable). I've spent many hours trying to get familiar with the codebase so I could contribute to it. There's no documentation to speak of for developers, and the little that I've grasped so far leads me to think it's better left to rot. The core is based on Sodipodi, a project abandoned in 2003.
FWIW I have started writing a clean room replacement in Swift/AppKit, but I'm considering altering this to Swift/UIKit for iPad so it'll be cross platform (albeit Apple only).
I'm going to guess you're pretty exclusively an Apple user. I think a big part of the problem is that a lot of the people developing FOSS don't have access to Apple's platforms for doing development, and in many cases even testing. Because of the walls that Apple has set up around its platform, doing so is often impractically expensive.
I think this is part of what is meant by being slow to criticize. Yes, FOSS often sucks on Apple's platforms, but it's also useful to understand the factors involved and what motivates people.
P.S. Best of luck in your endeavors! I think the world of SVG editors could use some solid competition.
I'm guessing you mean something other than FOSS (free / open source software) in your critique of FOSS. Reason being, Swift is FOSS (https://github.com/apple/swift), both the compiler and the standard library. Likewise, the Microsoft C# compiler is also FOSS (https://github.com/dotnet/roslyn).
You seem to enjoy closed-source platform libraries (WinForms, AppKit), and want to promote developing software that relies on them. The survival characteristics of this approach don't seem very strong. Good luck!
Yes, that's true, but unfortunately these libraries only work on a closed-source platform, namely Microsoft Windows. My apologies if this wasn't clear in my earlier post.
> The (just released) 1.0 is unbearably slow (i.e. unusable).
I've been playing with it on the work laptop (32GB, 16-core i9) and yeah, it's an unpleasant experience even there. But here's hoping they'll get to tweaking, because it's pretty much the only way I can create embroidery files easily (via InkStitch).
FOSS seems to be adopted by most Internet companies, where Swift, C# and .NET are fringe languages.
clang and gcc produce faster code than Visual Studio and are more pleasant to use. IBM is now using clang in its terrible xlc compiler.
Mathematica uses gmp (in (gasp) C!); there's no commercial alternative to gmp or mpfr.
If you are writing apps for iOS or Windows, sure, the cited languages are better. That is quite a narrow field though and I'm not sure how FOSS could enter that field.
> the power and elegance offered by the tech giants (Swift, C#/.NET etc.) continues to lap open source equivalents
Ignoring for the moment that both of those are open-source, I've used both of them for several years on major projects, and I'm not sure I see this "power and elegance" you refer to.
Did you mean to say "old concepts dressed in the newer fashion, and backwards compatibility"?
> That said, much of the FOSS currently available is either itself crap...
I understood the article was about criticizing persons and their reasons for doing something. What you say is an opinion on results, an opinion that could be correct. But you can't make a convincing criticism of behaviour based on results, except with better results.
I think the Linux kernel is crap, for what it's worth. That's not to say there aren't very smart people solving very complicated problems with it. But it is a monolith with an egomaniac in charge. This view is not solely my own.
On reflection, I have kind of an interesting relationship with this thesis. Whether you should be quick to criticize kind of depends on the purpose of the discussion. Maybe in the context of the person writing the code that kills webservers, it was a reasonable idea. It could well be true at the same time that the software creating that context is, in fact, garbage.
I tend to think from the perspective of how to build systems tomorrow that are not garbage, so I lean towards labeling as garbage things that cause annoying code to have to be written; it's an optimistic perspective in a certain sense, that things can be better. But if you're going to bother judging past projects, which is already something to be careful with, it seems pretty clear, as the OP states, that you'll learn more by looking for the reasons why the way they did things made sense at the time. With all the pesky humans involved, it may well be a sociological lesson rather than a technical one.
Counterpoint: there is a practically infinite supply of possible projects to look at. Maybe we should be more eager to lay a program aside at the first major bug - there may well be five other programs that do the same thing - just as we should probably be more eager to give up on a bad book or movie these days. Maybe such an attitude would lead towards better priorities in software development - currently far too many projects prioritise advertisable features at the expense of working reliably.
> Additionally, there might be time pressures, political pressures, engineering constraints, access problems, and more.
All of those (except possibly "engineering constraints") are merely excuses for why it ended up being garbage, not reasons why it's not garbage.
This conversation goes both ways. When someone calls my work garbage, I can sit there and try to come up with excuses, or I can say "yes it is!" and we can move on towards a solution. Just because someone takes a swing doesn't mean you need to fight them.
Curiously, I only see the author's response in software. In every other field I've worked in, an experienced person is free to call something garbage (or worse), and the recipient of this criticism is expected to not waste everyone's time by making excuses. You'd think the internet would give us a thicker skin, not a thinner one.
> When someone calls my work garbage, I can sit there and try to come up with excuses, or I can say "yes it is!" and we can move on towards a solution.
When someone calls my work garbage, but isn't willing to sit and discuss what is garbage about it, nor willing to discuss the cost of fixing it, I will ignore him/her.
Talk is cheap. It's trivial for someone to come up with a surface reason to complain. It's harder to take all factors into account.
I was once doing a project where I was solving something in a brute-force, inefficient way, but it was simple to code and we had the computer time available. One developer kept loudly complaining in meetings about how crappy and dumb my method was. I finally set up a meeting with him and asked him how he would do it. His solution was far more efficient, but would also take up to a week to implement. I asked "Given our deadline, do you think it is worth losing a week of computer time to implement your method?" He thought about it and said "No, my method won't make up for the time lost."
If you're not willing to sit in the trenches with me, your feedback will have lower weight. I'm in the business of solving problems, not in the business of perfection. At some level, my software is always going to be garbage. You want it not to be? You better have a stake in the outcome, then.
You turn this into a question about "having thick skin" or "making excuses". But the article is not at all about how the receiver is supposed to take the criticism; the point is that shallow dismissals (like "garbage") typically indicate the critic does not understand the full picture.
This is really about maturity I think? With children we do something like:
thoughts > filter > communication > thoughts
With grown ups it should work like:
thoughts > communication > filter > thoughts
You want the full spectrum of gradients of terribleness. If I wrote a "steaming pile of crap" I don't want to hear "it is not very good". "Not very good" is reserved for things that are "not very good".
food half frozen != undercooked
The alternative is teaching people not to give you feedback because you are emotionally unstable and might take it the wrong way.
Just because someone is a dick doesn't mean they are wrong. You have to adjust for their lack of social skills. You want to own the filter, not require others to maintain it for you. That would be a terrible idea! To them the trash talk is treasure.
With the former I want to know why immediately; with the latter I will eventually ask for clarification. It is also an invitation for critical examination.
I do some accounting of who says what. How much do they normally complain? How often are they right? How do they deal with similar feedback?
There is this hilarious place few get to visit where everyone is complaining about everything all of the time while the mood is just wonderful. The work delivered is of exceptional quality.
On a related note: suppose that you have already spent too much time trying to really understand the code, and have asked the developers for the motivations behind their strange design decisions. How do you present criticism in a constructive way then?
By showing you understand the constraints the developers were under, and acknowledging their frame of mind. Criticism is fine, giving it without understanding the context is not.
If you are able to summarize the problem in such a way that the developers say 'That is right', and they are not correcting you further on your understanding of the problem, then any criticism from your side is not as much of an attack. By framing it like 'I might be missing some context here, I think X, Y, and Z were your reasons for choosing this approach, but I think there is a factor F you haven't considered' you are having an intelligent discussion instead of just burning the developers to the ground for missing some piece of information.
You might still come to the conclusion the devs are dumb, but at the very least you have made a good impression by not barging in like you know better than the people that have probably spent a lot more time with the problem. Step one is showing that you understand the domain, and realistically, quite a few times you won't even get to the next step of asking whether they considered a factor that went unmentioned, since you've missed critical information in your assessment of the problem.
To summarize: Interview the devs about their decision processes and the domain. Show you understand, and repeat the problem back to them in your own words. If they agree with your version, then and only then ask about things you think they missed, or decisions you still can't comprehend given your information. Reduce your ego, as hard a task as that might be.
As I said, I had already spent too much time trying to understand their reasoning. In the end I just couldn't, though I really tried, avoid the conclusion that they were incompetent. And I failed, even though I really tried, to present my criticism in a way that didn't make that conclusion obvious.
I really, really don't like making people feel stupid.
Good, that is already half the battle. Not jumping to the conclusion that the others must be incompetent can be more difficult than it sounds, and judging by your comment it looks like you have that down.
Were you able to ask the devs directly about their reasoning? Asking them about certain decisions, or trying to get them to explain the thought process and repeating it back to them in your own words is an incredibly important step. You might not agree with them at all, but the part of what your own ideas are is parked at that stage. Then, once you've confirmed their thought process, and they agree to your version of it, is the point to ask about improvements or different strategies that you see. 'Would there be any drawbacks to [your approach], because I think it would provide benefit [x], [y], and [z]' is a great point to start off from once you have that confirmation on your version of their thought process.
Now, if you don't have any access to the original devs it will become much more difficult to try this process. The best you can probably do is try to find a person who is willing to play devil's advocate, or work backwards from your reasoning to see if there is any piece of knowledge that, when eliminated from the thought process, makes you arrive at the conclusion of the original devs.
I did interview some of the devs about their reasoning, and they kind of admitted that they had probably screwed up. The problem is the "tech lead" who claims he has 20 years [sic] of .NET experience but can't implement the strategy pattern correctly, and, most importantly, doesn't understand when and how to use it, which is evident from him hardcoding the "strategies" in the calling methods. He also has six layers of classes above Dapper (no exaggeration, I have counted them) for a simple stored procedure call. One of the problems is that five of those classes don't add any functionality; they just pass through the string with the name of the stored procedure and the arguments. "This is how it's done" is the answer I get when I question the wisdom of all the empty classes, and then he walks around telling other people that I don't understand OOP.
After failing in my attempts to be transferred to another team, I'm sending out my résumé.
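For reference, a minimal sketch of the strategy pattern being described, in Python for brevity (the original context was C#/.NET, and the shipping-cost domain here is purely a hypothetical example): the caller depends on an interface and receives the strategy, rather than hardcoding each variant inside the calling methods.

```python
from abc import ABC, abstractmethod

class ShippingStrategy(ABC):
    """Interface the calling code depends on."""
    @abstractmethod
    def cost(self, weight_kg: float) -> float: ...

class FlatRate(ShippingStrategy):
    def cost(self, weight_kg: float) -> float:
        return 5.0                      # same price regardless of weight

class PerKilo(ShippingStrategy):
    def cost(self, weight_kg: float) -> float:
        return 1.2 * weight_kg          # price scales with weight

def quote(weight_kg: float, strategy: ShippingStrategy) -> float:
    # No if/else over concrete types here: swapping strategies
    # requires no change to the caller.
    return strategy.cost(weight_kg)

print(quote(10, FlatRate()))   # 5.0
print(quote(10, PerKilo()))    # 12.0
```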
I agree with this but I have to admit it is hard as hell to practice. If you can become slow to criticize though, when you do it people tend to pay more attention, a great upside to the effort required.
> I agree with this but I have to admit it is hard as hell to practice.
You may want to self-analyze your need to criticize. What is criticizing satisfying in you? It must be something; otherwise you wouldn't have the urge to do it.
It varies from person to person, but for a lot of academic (and perhaps tech) folks, there is a strong history in their training to be correct (in both research and classes) - therefore they are hyperaware of things that seem off.
In the work environment, though, the focus is on being useful, which has overlap with correctness, but the two occasionally differ. A lot of times such people tend to be correct but not useful. Pointing out a flaw in something where there would be no significant impact were the flaw absent is the opposite of being useful.
Unfortunately, people in the audience often get fooled by this and give the comment more weight than it deserves.
The time necessary to understand why the fence was put up is usually greater than the time necessary to push down the fence, test the system, and put the fence back up if necessary.
Except that you don't always know all the ways the system needs to be tested. I've come across code that was broken, seriously broken, multiple thousands of pounds broken, because someone replaced apparently pointless code with something simpler, tested every case they thought of, and deployed it.
Are your tests really complete? Have you tested everything?
Perhaps we simply work in different fields, and there are things that matter to me that you would shrug off and not care about.
However, what I am trying to convey is that the burden of explaining and testing the code, and why the code is needed, should be on whoever put it out first.
If we are not brave and decisive when we decide to cut code, the complexity of the codebase will just explode in no time.
It is already extremely difficult to simplify and remove code; if the person doing this job is also the one who needs to figure out, from scratch and without documentation, why the code was put there in the first place, the simplification is never going to happen.
> ... the burden of explaining and testing the code, and why the code is needed, should be on who put it out first.
Yes. But what happens when that code is there, live and working, the originator is no longer available, and you need to enhance it? You come across a complex section and you can't really see why it's written like that. It feels like you're suggesting that one should just replace it with the simpler version you think will do the job just as well, and see what breaks.
Due to my usual context, that approach fills me with horror. I've seen it go wrong too many times ever to be comfortable with it. Admittedly, if you're producing simple web sites then usually it won't be a problem, but when there are potentially millions of dollars of revenue at risk when a condition happens that you didn't anticipate and which was part of the reason the original code was so hairy, you become a little more wary of the "move fast and break things, then fix them" approach.
> If we are not brave and decisive when we decide to cut code, the complexity of the codebase will just explode in no time.
And if you are brave and decisive and cut code, and it turns out that the complexity was there for a reason you didn't understand, then that can be a company-terminating move.
Again, our contexts are different, and yes, I do understand the need for controlling complexity in code, and simplifying where possible.
Properly understanding and analysing the risks is something I rarely see.
I suspect we'd actually agree in a given context, I'm just horrified to watch people unthinkingly advocating the approach in places where it doesn't apply. I'm not claiming you're one of them, quite possibly you'd do the risk analysis and understand the context, but I've seen it too often to take for granted that otherwise clever people won't do it.
"We'll just re-write it, it won't take long, how hard could it be?"
I do actually work on quite low-level code. My mistakes cost neither millions of euros nor are they company-ending scenarios. Just a lot of complaints.
I am not advocating for "move fast and break things". Not at all.
But in my environment I find the need to find a reasonable compromise.
That compromise can also take the form of a set of test suites that allow you to test things and roll back.
I am the one advocating for "clean code is the one you never have to look at, because it works, not the one that looks nice in the editor".
I understand your point, and I understand that we work in different environments, but I also think that a codebase is a living creature that, if left unchecked, will just explode in complexity.
In some environments it makes sense to tame that complexity; in others it makes sense to reduce it!
Colin is talking about criticising implementation decisions in software, in particular unconstructive criticism bereft of context.
Colin is criticising your idea, but is also trying to understand how you formed that idea in the first place, in an effort to hedge against the possibility it has good reasons he can't see. He's not just saying, "your idea is garbage".
This comment was flagged dead, and I have vouched for it. Please don't downvote it ... it clearly shows an underlying misunderstanding, and as such deserves to be answered.
My comment that you reference was an attempt to find the merit in the idea that I couldn't see, but I assumed would be there because I was assuming the proposer thought it was a good one. I was starting from a position of "Well, this person thinks it's a good idea, but I can't see why, so let's see if we can find how it would work, and what problems it solves. Then we can address any potential drawbacks."
It was intended to be an invitation to get some discussion about how it would work, what it would do, what problems it would solve, and where the value lay. I was surprised and disappointed that no reply was forthcoming, and the thread died.
I guess I just wasn't clear enough, and for that I am saddened.
I liked the reply you posted; I'd be happy to get such feedback / replies. And I think your reply could be edited a little bit, to start in a little bit more positive tone,
by changing from this:
> There are many, many problems with the idea, ...
Nope. I think it's better to start with "This is interesting.", then the questions. If you really see problems, state them. Lead with engagement, not criticism.