Very interesting poll. The sample size is rather small, but so is the RoR community. With so many gems and plugins available now, I'm not sure how one would ask that question, but I think it would be some interesting information to have.
What surprised me the most was the number of respondents working on teams with more than 5 developers. I'm still in the advanced learning stages of this framework, but I can't imagine trying to coordinate any kind of project in RoR with more than 2 or 3 people.
Now, it may be that when they say 8 or 9 developers, what they mean is 2 or 3 people working on the view/controller code, maybe one person on the models and database, and everyone else doing the functional, unit, and final user testing. That would make more sense.
The test/no-test divide is very roughly 50/50. I'd like to know what percentage of testers found that tests actually helped catch bugs, and what percentage of non-testers wished they had tests because they're confident tests would have caught eventually-discovered bugs sooner.
I ask this because the (Rails) testing koolaid still tastes funny to me. I wonder if any practice that focuses attention on code quality, whether TDD, BDD, unit testing, pair programming, cleanroom, etc., would catch the same bugs.
I'd venture to guess nearly all developers using unit tests would say they are helpful; otherwise they wouldn't be using them. This isn't meant to be snide. I'm just saying that if you already suspect koolaid, then any feedback is basically worthless, because the only people responding are the ones who have seen measurable results and the idiots who drink koolaid, and there's no way to tell the difference. The feedback from non-testers might be a little more useful, but it's still a little suspect, considering it comes from people who are confident more testing would improve their software yet still don't do it. Furthermore, unit tests are only as good as the test cases themselves, and a survey can't account for the quality of the tests. I guess my point is that while surveys can be interesting, at the end of the day they're just a popularity contest and not very good for determining what works and what doesn't.
Exactly. I'd still love to see a real study on whether unit testing is worth it (which our survey doesn't even begin to address, since it's very hard to measure). I think some good studies on that would be invaluable, but it's nearly impossible to control for all the variables, so I'm not holding my breath.
Failing that, most of the "evidence" one way or the other comes from asking people who test or don't test - and probably most testers would say that testing is great, while non-testers would say testing wouldn't catch the kinds of bugs that matter. On the flip side, you have some testers saying it's not valuable (because their test quality is bad) and some non-testers saying they're sure tests would help, but only because they imagine tests to be a silver bullet.
In any case, our survey was just supposed to get a rough feel for how many people were doing testing (and what tools they used). What surprised me most was that even small teams (1 and 2 people) often had a test suite. This flew in the face of the anecdotal evidence we had previously gotten from other startups (almost none of the ones we've talked to use tests).
If you look at page 470 of the latest Code Complete, it shows that unit tests have a very low defect-detection rate. Cleanroom software engineering has the best record of quality, but there's no buzz for it in the Ruby community right now, maybe because it doesn't sell books :-P
I'm not entirely sure that I understand where you are trying to go with this comment. What is the difference between 'testing koolaid' and TDD in your mind, for example?
Personally, I use TDD. My tests are pretty much just a simple way of forcing me to think about the functionality that my app needs, and making sure that that functionality works. I find that it's the quickest way to actually get something up and working, which is a rewarding feeling that I find quite motivating.
I almost never test edge cases, etc., but if I do find a bug or receive a bug report, I always write a test that demonstrates the bug before fixing it.
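For example, if the bug report were something like "users can be saved with a blank email", the test written first might look roughly like this - a sketch only, with the model, field, and message invented for illustration, and assuming a standard Rails test_helper:

    # test/unit/user_test.rb -- hypothetical regression test for a reported bug:
    # users could be saved with a blank email. All names here are made up.
    require File.dirname(__FILE__) + '/../test_helper'

    class UserTest < Test::Unit::TestCase
      def test_does_not_save_user_with_blank_email
        user = User.new(:name => "Jo", :email => "")
        assert !user.save, "saved a user with a blank email"
      end
    end

The test fails until the fix (say, a validation) goes in, and then it sticks around to catch the same regression later.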
So, does that mean I drank the testing koolaid? (Sorry if that comes across as snarky; it's not meant to be, it's an honest question.)
Almost every class I develop needs something to run it while I build it. I just make that thing Test::Unit in Ruby.
I started off with simple scripts for test harnesses long ago, and realized there was no reason not to use a test as my harness. The tests, for me, catch the most stuff initially. The value of those basic test cases later on is smaller, but that's all bonus since they're already written.
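A minimal sketch of what I mean, using Test::Unit as the harness for a class while it's being built (the PriceCalculator class and its API are made up purely for illustration):

    # price_calculator_test.rb -- Test::Unit as the harness for a class in progress.
    # PriceCalculator and its methods are hypothetical.
    require 'test/unit'
    require 'price_calculator'   # the class being developed alongside this test

    class PriceCalculatorTest < Test::Unit::TestCase
      def setup
        @calc = PriceCalculator.new(0.08)   # 8% tax rate
      end

      def test_total_includes_tax
        assert_in_delta 10.80, @calc.total(10.00), 0.001
      end

      def test_rejects_negative_prices
        assert_raise(ArgumentError) { @calc.total(-1) }
      end
    end

Running the file directly (ruby price_calculator_test.rb) takes the place of a throwaway driver script, and the cases are still there for free once the class is done.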
NetBeans is nice, but I am actually in the process of leaving it for straight Emacs. NetBeans stalls out, is a memory hog, and crashes nearly daily for me right now. I also found that I really only use it as an SVN GUI and file manager. The code completion is slow and never useful to me, and many of the auto-complete features are just annoying. My Ruby tools like autotest, irb, the debugger, and rake tasks are just more responsive on the command line (some don't really work at all inside NetBeans).
I really liked it for a while, but I started realizing that I didn't use or really like most of the advanced features, and all the work it does trying to provide those features makes it slow.
Does anyone else's NetBeans seem to index things for the first hour after the program is opened?
Go vim! :) Nice to see that "we" are #2. BTW, I started with Ruby on Aptana and tried NetBeans, but they required too much mouse movement (the mouse kept puking and then quit).
Perhaps the most commonly used gems or plugins?