
I'm considered slow (ish), but I have very few bugs returned. Most (but not all) of the faster devs have many bugs (often trivial, but sometimes show-stoppers) and spend time going back and forth with QA.

My thinking is, QA isn't there to find my bugs. They're there to make sure I don't have any. I seem to be in the minority on that thinking.



I also consider myself on the slower side (I've never objectively measured it, but that's how I feel), but I try not to let any errors leave my local environment. Sometimes I wonder if it all depends on how you came up programming. My early jobs were all in finance, dealing with money and payroll systems. Mistakes meant people didn't get paid properly, and they led to lots of clean-up work for everyone involved. This led me to think hard about every change, and about multiple ways to test each change.

Even today it would never occur to me to make a change and push without testing said change, although I see people do it all the time.


> Even today it would never occur to me to make a change and push without testing said change, although I see people do it all the time.

How is this even a thing? I don't understand it, yet I come across it time after time.

If you weren't self-educated, did you just throw together assignments without testing them? At work, how can you just write code without testing the functionality?

I've seen people work on webapps who didn't bother to navigate to the page they changed. I'd pull in code, go to test my change, and find that the page looked like a GeoCities atrocity. Then I'd start rolling back my changes, only to find that the code I pulled was the culprit.

I can understand writing a unit test that doesn't cover the change as well as one thinks. Not taking 15 seconds to visually inspect a visual change? That's unacceptable.
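To make the unit-test point concrete, here's the kind of coverage gap I mean (a made-up Python example, not anyone's real code):

    def parse_amount(s: str) -> int:
        # Parse a dollar string like "10.50" into integer cents.
        dollars, _, cents = s.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    def test_parse_amount():
        # Passes, but never exercises single-digit cents or empty
        # input: parse_amount("10.5") returns 1005, not 1050, and
        # nothing here would ever catch it.
        assert parse_amount("10.50") == 1050
        assert parse_amount("3") == 300

The suite is green, so the author feels tested, but the interesting inputs never got exercised.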


> At work, how can you just write code without testing the functionality?

Consider this scenario: a developer is tasked with updating a report. He doesn't have reporting services available locally (long and complicated setup), nor does he have access to the data he'll be reporting on (because of data privacy regulations). He makes the change to the report, checks in the code without any testing, and it then blows up in QA. This is a common scenario I've seen many times, especially when developing for systems or with data that developers don't have full access to.


As professionals we should demand the tools to do a proper job. That is NOT too much to ask.


There are only two kinds of scenarios where I've seen people do this and it might be vaguely acceptable:

1. Where it is impossible for the developer to test the change himself. For instance, if the bug is not reproducible by the developer due to some difference between the production and development environments that he lacks concrete information about (e.g. external customer). In that case, the developer should test that it didn't break his own system in some additional way and should at least warn the customer that there's no guarantee it'll work.

2. If you are under some contractual obligation to ship on a particular date, and the thing you are shipping is going to be a giant bug-ridden turd anyway due to time constraints caused by poor project management. This is not ideal at all, and not something to take any pride in doing, but it may be passably acceptable to cut corners like this if management tells you to, because otherwise your employer is going to lose money for not delivering. Especially if you have a "defect release" planned in the contracted schedule later on anyway (e.g. because all parties to the contract already know the schedule is going to produce a turd and planned accordingly).


There are edge cases where that could make sense.

I've made live changes in production systems without testing them. If the production system is down, it's probably not going to get more down. Some of those changes are made on the live system and then back-ported into the release process. (Sometimes we aren't even sure which supporting index would help enough to bring the site back, or which query to "neuter" to restore most site functionality. In cases like that, you might need to do development in production.) I've approved someone else shipping binaries built on a developer desktop to get a production site functioning again more quickly, then committing the changes and re-releasing from the build/deployment pipeline. We've pushed changes to prod without QA review. There are times to follow the measured, careful, prudent approach to development (most times), and there are other times when a meter running at $10K/minute suggests that a lower-latency change process is more appropriate and higher EV for the company.

That said, I've seen far more instances of no-reasonable-excuse events: code checked into master that couldn't possibly compile, people doing an "svn resolve; svn commit" without actually resolving anything and checking in the <<<<<<< / ======= / >>>>>>> conflict markers along with both conflicted sections, etc.
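For anyone who hasn't had the pleasure, the checked-in file ends up looking something like this (made-up example), which obviously can't even parse:

    def total(items):
    <<<<<<< .mine
        return sum(i.price for i in items)
    =======
        return sum(item.cost for item in items)
    >>>>>>> .r1412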

"There, I fixed it!"


I'm with you - QA should be a double check, a fall-back; if you haven't checked it yourself, how is a double check even possible? The risk with coders leaning on QA is that they've outsourced their conscience, and then the only people really testing and checking the code don't know it very well. Not good.

FWIW, I think dropping QA for periods to shock those who are starting to lean on it is a good idea; but it should still be there most of the time.


I have "slowed down" a fair bit, especially at the start of projects where i don't "do" a lot except think through scenarios. I write a lot better code than I used to.


I struggle with this sometimes. There's a line to walk between getting stuff done quickly and getting it done carefully. If you stray too far to either side of that line, you're going to drag everyone else down.



