> And then you have open source software which is heavily used in a lot of commercial products that might get attacked. With this kind of thing there's a big incentive not to report bugs to the project but to sell them to a company who has no incentive to see them fixed...
Then a lot of people need to revisit exactly why they use open source, and what the consequences would be if it went away.
> For example, will we see these companies hiring ex-developers and testers from software product companies, as they might have inside knowledge of where products are weak.
That might happen, but it depends on intent. A product may only be recognized as weak after the fact, due to a changing software landscape; in that case, the company hiring them is a good thing - the software gets fixed in some way, and a need is fulfilled.
Then the question becomes: how likely is it that someone purposefully builds a flawed product in order to profit later - after the project has been completed and released, after they have left the job, after they have been hired by one of these testing companies? That sounds about as likely to me as people purposefully putting back-doors into software out of future self-interest. These companies offer another outlet for unethical intent, but in my opinion, the root of the problem is ethics in the first place.
Software is too complex to perfect; there will always be bugs. Regulating industry, and regulating the regulatory bodies themselves, is additionally very complex. The idea of regulating where knowledge is allowed to flow on top of that horrifies me: i.e., deciding whether a developer is eventually allowed to work at one of these companies, in judgment of their prior work.
I wish people had greater incentive to maintain a shared standard of ethics, but this is all theoretical to begin with, at least from my direct observations. There is nothing to judge until it happens, and then the best one can do is act.