Extreme Programming is not an arbitrary desire to turn all the knobs you can find to 11. It started as a question: what happens if we take certain practices that are good and do them more intensely? E.g. if some testing is good, what if we test pretty much everything? That team found that they really liked turning particular practices way up.

Well, of course they're entitled to their opinion, but that's all it is: an opinion. An argument that if some testing is good then test-driving everything must be better, or that if code review is good then full-time review via pair programming must be better, has no basis in logic. And those kinds of arguments go right back to the original book by Kent Beck, and they have been propagated by the XP consultancy crowd from the top right on down ever since.

IMHO, if a trainer is going to go around telling people that if they don't program a certain way then they are unprofessional, then that trainer had better have rock solid empirical data to back up his position. Maybe as you say, I do have a giant misunderstanding, and in fact Object Mentor do make their case based on robust evidence rather than the sort of illogical arguments I've mentioned. In that case, I assume you can cite plenty of examples of this evidence-based approach in their published work, so we can all see it for ourselves. Go ahead; I'll wait.

I think a lot of the rest of your points are similar misunderstandings along with some cherry-picking. E.g., the OM blog post on pairing. He said that people sometimes asked him for basic background materials, so he posted some links. To go from that to "Brett Schuchert apparently advocates pair programming based on..." is either very poor reading comprehension or the act of somebody with an axe to grind.

This is a consultant who presumes to tell others how to do their job, openly posting to ask for any source material from others to back up his predetermined position, and then claiming in almost the very next sentence to favour material based on research or experience. He says that the links he gave (the ones where much of the original research is either clearly based on flawed-at-best methodologies or simply not there at all any more) are things he often cites. And he gives no indication, either in that post or anywhere else that I have seen, of having any library of other links to reports of properly conducted studies that support his position. I don't think criticism based on this kind of post is cherry-picking at all, but of course if it is then again you should have no difficulty citing lots of other material from the same consultant that is of better quality and supported by more robust evidence, to demonstrate how the post I picked on was an outlier.

The same goes for any of my other points. If you think I'm cherry-picking, all you have to do to prove it is give a few other examples that refute my point and show that the case I picked on was the exception and not the rule. If you can't do that -- and whether or not you choose to continue the debate here, you know whether you can do that -- then I think you have to accept that I'm not really cherry-picking at all.

As to not doing TDD being unprofessional, I'd generally agree. I tried TDD first in 2001, and have worked on a number of code bases since. For any significant code base that's meant to last and be maintainable, I think it's irresponsible to not have a good unit test suite. I also think there's no more efficient way to get a solid suite than TDD.
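
For concreteness, the red/green cycle I mean looks something like this (a minimal, hypothetical sketch in Python using the standard unittest module; the leap-year function is invented purely for illustration):

    import unittest

    def is_leap_year(year):
        # Written *after* the tests below were seen to fail ("red"),
        # then made just good enough to pass ("green").
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class TestIsLeapYear(unittest.TestCase):
        def test_year_divisible_by_four_is_leap(self):
            self.assertTrue(is_leap_year(2004))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_fourth_century_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()

Each behaviour gets pinned down by a failing test before the code exists, so the suite you end up with covers everything the code was written to do.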

Please note that I'm not disputing that an automated unit test suite can be a useful tool. On the contrary, in many contexts I think unit testing is valuable, and I have seen plenty of research that supports such a conclusion more widely than my inevitably limited personal experience.

On the other hand, I don't accept your premise about TDD. For one thing, TDD implies a lot more than merely the creation of unit tests. Among other things, I've worked on projects where bugs really could result in very bad things happening. You don't build that sort of software by trial and error. You have a very clear statement of requirements before you start, and you have a rigorous change request process if those requirements need to be updated over time. You might have formal models of your entire system, in which case you analyse your requirements and determine how to meet them at that level before you even start writing code. At the very least, you probably have your data structures and algorithms worked out in advance, and you get them peer reviewed, possibly by several reviewers looking from different perspectives. Your quality processes probably do involve some sort of formal code review and/or active walkthrough after the code is done, too.

If you came into an environment like that, and claimed that the only "professional" thing to do was to skip all that formal specification and up-front design and systematic modelling and structured peer review, and instead to make up a few test cases as you went along and trust that your code was OK as long as it passed them all, you would be laughed out of the building five minutes later. If you suggested that working in real time with one other developer was a substitute for independent peer review at a distance, they'd just chuck you right out the window to save time.

TDD is not an alternative to understanding the underlying problem you're trying to solve and knowing how to solve it. A test suite is not a substitute for a specification. Pair programming is not a substitute for formal peer review. They never have been, and they never can be.

I haven't gone into it here, but of course there are other areas where TDD simply doesn't work either. Unit testing is at its best when you're working with pure code and discrete inputs and outputs. It's much harder to TDD an algorithm with a continuous input and/or output space. Tell me, how would you test-drive a medical rendering system, which accepts data from a scanner and is required to display a 3D visualisation of parts of a human body based on the readings? Even if this particular example weren't potentially safety-critical, how would you even start to test-drive code where the input consists of thousands of data points, the processing consists of running complex algorithms to compute many more pieces of data, and the observable output is a visualisation of that computed data that varies in real time as the operator moves their "camera" around?
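
To make the difficulty concrete: about the best you can do with a continuous output space is assert loose properties and tolerances rather than exact values. A sketch (hypothetical Python, with project_point standing in for one tiny slice of a real rendering pipeline):

    import math
    import unittest

    def project_point(point, camera_distance):
        # Hypothetical stand-in for one step of a renderer: a simple
        # perspective projection of a 3D point onto the z=0 plane.
        x, y, z = point
        scale = camera_distance / (camera_distance + z)
        return (x * scale, y * scale)

    class TestProjection(unittest.TestCase):
        def test_point_on_the_plane_is_unchanged(self):
            x, y = project_point((1.0, 2.0, 0.0), camera_distance=10.0)
            self.assertAlmostEqual(x, 1.0)
            self.assertAlmostEqual(y, 2.0)

        def test_farther_points_shrink_toward_the_centre(self):
            near = project_point((1.0, 1.0, 1.0), camera_distance=10.0)
            far = project_point((1.0, 1.0, 5.0), camera_distance=10.0)
            # Only a coarse property can be asserted here; nothing in
            # this test says the image the operator sees is right.
            self.assertLess(math.hypot(*far), math.hypot(*near))

    if __name__ == "__main__":
        unittest.main()

Even this toy case only checks crude invariants of one isolated calculation; scaling that up to "the rendered visualisation of the patient's anatomy is correct" is exactly the part that test-driving gives you no help with.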

If you (or anybody) want to discuss this further, it's probably better to email me; my address is easy to find from my profile.

I appreciate the offer, but I prefer to keep debates that start on a public forum out in the open. That way everyone reading can examine any evidence provided for themselves and draw their own conclusions about which positions stand up to scrutiny.



An argument that if some testing is good then test-driving everything must be better, or that if code review is good then full-time review via pair programming must be better, has no basis in logic.

That's not the argument at all. That was, as I just said, only the reason they decided to try it. Their reasons for continuing to do it and further to recommend it are entirely different.

[...] better have rock solid empirical data [...]

You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right? And that you're privileging an arbitrary historical accident by saying that new thing X has to have evidence when the common practice doesn't?

If you came into an environment like that, and claimed that the only "professional" thing to do was to [...] make up a few test cases as you went along and trust that your code was OK [...]

That is not something I have ever heard any Object Mentor person say, and it's not something I said. It's so far from anything I've ever heard somebody like Bob Martin or Kent Beck say that I have a hard time believing your misunderstanding isn't willful.

I prefer to keep debates that start on a public forum out in the open.

Well, I'm not trying to have a debate. If you'd like to have one, you'll have to do it without me.


Their reasons for continuing to do it and further to recommend it are entirely different.

So you keep saying. The problem is, almost everything Object Mentor advocate does seem to be based on some combination of their personal experience and pure faith. I object to someone telling me that my colleagues and I are "unprofessional" because we happen to believe differently, particularly when we do have measurable data that shows our performance is significantly better than the industry average.

You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right?

That may be so, but most people in the industry aren't telling me how to do my job and insulting me for not believing the same things they do.

That is not something I have ever heard any Object Mentor person say, and it's not something I said.

Good for you. XP consultants have been making essentially that argument, in public, for many years. TDD without any planning ahead is inherently a trial-and-error approach, which fails spectacularly in the absence of understanding, as Jeffries so wonderfully demonstrated. Plenty of consultants -- including some of those from Object Mentor -- have given various arbitrary amounts of time they think you should spend on forward planning before you dive into TDD and writing real code, and often those periods have been as short as half an hour. You may choose not to believe that if you wish. I'm not sure even they really believe it any more, as they've backpedalled and weasel-worded and retconned that whole issue repeatedly in their recent work. But I've read the articles and watched the videos and sat in the conference presentations, and I've heard them make an argument with literally no more substance than what I wrote there.

You keep saying that I'm misunderstanding or cherry-picking evidence. Perhaps that is so and I really am missing something important in this whole discussion. However, as far as I can see, throughout this entire thread you haven't actually provided a single counterexample or alternative interpretation of the advice that consultants like those at Object Mentor routinely and publicly give. You're just saying I'm wrong because you say so, and there's not really anything I can say to answer that.


That's because you're trying to have a debate with the Object Mentor guys rather than a discussion with me. Your problem with them isn't my problem, and neither are your misunderstandings. It is not my job to argue you into a better understanding of something you clearly can't stand.



