Why do we generate software bugs?

I have a simple question, but a difficult one for you.
What do you think are the main causes that generate software bugs?

  1. Overconfident or lazy programmers who assume instead of thinking and testing.
  2. Lack of abstract thinking: focusing on low-level problems and losing track of the overall concept while programming.

Maybe not perfectly worded, but those are the two I encounter most often, with lack of documentation as a third, often mixed in with the other two.


I agree. Mostly lack of testing and incomplete mental models. The complexity of the tech stack and/or the problem domain also exacerbates the problem.

And according to philosophy, a “why” question can never truly be answered by others :slight_smile:

Theoretically, it is possible to have bug-free software for a few lines of code, provable by mathematical models; but the proofs grow in complexity with each added line, and hence in practice it is not possible.

I think that bugs can have different origins. Sometimes we have a clear idea but we fail to represent it in code, so these bugs are due to limited technical skills. But other times it is our understanding of the problem that is incomplete. In that case a bug represents something we need to learn, a refinement we need to make to our understanding.


And could this second issue, the “cultural” problem, be mitigated somehow, using tech skills or different testing methodologies?

I think it is actually surprising when stuff works. Bugs are in the very nature of building software (due to the problem not being well understood, due to the solution not actually addressing the problem, due to composing solutions for two different problems that interfere with each other etc). Looking hard enough, almost every piece of software will have bugs. Ensuring software has no obvious bugs requires a lot of skill, energy, process and tools, but in most cases, one or more of those things will be lacking. Personally, the best I hope for is to end up with bugs that only happen in edge cases that are not important to users.


Most software has just so many variables (in multiple senses) and use cases that covering them all is often humanly impossible. I agree with what Rafael said, that it’s natural to make bugs, and the trick is to learn to live with that fact by using robust design, exception handling, error recovery, etc.
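The “living with failure” idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names are mine, not from the thread): retry a flaky operation a few times and fall back to a safe default rather than letting one transient fault crash the program.

```python
import time

def call_with_retry(fetch_fn, retries=3, default=None):
    """Retry a flaky operation, then degrade gracefully to a default."""
    for attempt in range(retries):
        try:
            return fetch_fn()
        except OSError:        # a recoverable, environment-level failure
            time.sleep(0)      # real backoff elided for brevity
    return default             # error recovery: degrade gracefully

# Usage: a stand-in operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient failure")
    return "ok"

print(call_with_retry(flaky))  # "ok" on the third attempt
```

The point is not the retry loop itself, but the design stance: failures are expected inputs, so the program plans for them instead of assuming they won’t happen.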

The other thing is bugs caused by misspellings, syntax, and features that behave unexpectedly. Due to inexperience, complex syntax, and/or language obscurity, bugs easily come up. You may need to stare at a line of code for a long time to see the bug, or you just don’t understand what it actually does. For example, if you write a long regular expression (regex), it might be difficult to see later on why it doesn’t work as expected.
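A tiny concrete case of such a stare-at-it-for-a-while regex bug (my own illustration, not from the thread): an unescaped `.` in a version-number pattern matches any character, so the pattern silently accepts garbage.

```python
import re

# BUG: '.' is unescaped, so it matches ANY character, not a literal dot.
buggy = re.compile(r"\d+.\d+.\d+")
# FIX: '\.' matches only a literal dot.
fixed = re.compile(r"\d+\.\d+\.\d+")

assert buggy.fullmatch("1x2x3")            # accepted by mistake
assert fixed.fullmatch("1x2x3") is None    # correctly rejected
assert fixed.fullmatch("1.2.3")            # correctly accepted
```

The two patterns look almost identical on screen, which is exactly why these bugs survive code review.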


Languages that raise the level of abstraction can help with that.


Abstraction and modularity help to reduce complexity, the “moving parts” in the program. Actually, modularity could be seen as increasing the abstraction level inside the program. Well-defined modules reduce the complexity of the components that use them. That makes it easier to comprehend the functionality of a component, which leads to fewer bugs. The bigger the software is, the more important a role the software architecture plays in bug-proneness.


This phrase reminded me of Rust, with its concepts of ownership, borrowing, and lifetimes, as a way to help the programmer avoid errors at compile time, which has increased its popularity among developers. Microsoft expects Rust can help them avoid some bugs in their projects.

Because, in the age of the Internet, programming errors are often associated with security problems, which can have very harmful consequences.

That’s why I find it very interesting to develop tools that, by their very design, are capable of avoiding as many errors as possible.

The experience of the last two years in a cloud environment that includes in-home and mobile devices – a mix of systems, configurations, and hardware – has changed my opinion about bugs. There are so many “moving parts,” including system updates, user updates, OS updates, new hardware, and so forth. I am close to the conclusion that “this is the new normal”: that we should expect bugs, failures, changes, security updates, and an otherwise constantly moving environment. Beyond the usual unit testing that should cover the standard and corner cases for methods, it is not possible to reproduce the production environment so completely that any runtime bug could be predicted. Instead, the emphasis is on metrics and forensics.

Microservices are failing all the time, being updated regularly, and updates are occurring on devices. Of course this is not the environment for all software, but increasingly it is. Even classic overrun/underrun issues can be caused by updates to dependent libraries. Now, we program more toward measuring performance and capturing failure, while we create better test environments and production rollout/rollback, along with the automation needed to integrate patches very quickly. Hopefully, functional, stress, security, and other pre-production testing completely covers the production environment, but of course it never will. The focus has become the entire process that integrates changes, fixes, configuration, and updates.


I liked this audiobook a lot:

Meltdown. Why Our Systems Fail and What We Can Do About It

The book mentions that complexity and coupling between systems are a cause of errors.


As a grizzled veteran of the software industry, I have studied this issue very carefully over the years. Human beings are subject to making mistakes, and so our software will reflect our limitations. There are actually only a small number of categories or kinds of bugs that occur. The most common bug is incorrect order of calculation, which has multiple manifestations, such as using an undefined value, overwriting a value before it was about to be used, or calculating something whose constituent parts aren’t ready yet. Then there is the off-by-one error. Then there is drawing something offscreen, or underneath something instead of on top.
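The off-by-one category is easy to show in miniature. Here is a hypothetical example of my own, not from the post: a loop that starts at 1 instead of 0 silently drops the first element.

```python
def sum_first_n(values, n):
    total = 0
    for i in range(1, n):   # BUG: starts at 1, so values[0] is skipped
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    total = 0
    for i in range(n):      # correct: indices 0 .. n-1
        total += values[i]
    return total

data = [10, 20, 30, 40]
print(sum_first_n(data, 3))        # 50 (the first element was dropped)
print(sum_first_n_fixed(data, 3))  # 60
```

Both versions run without any error, which is what makes this category so persistent: the bug only shows up as a slightly wrong answer.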

Then there are a whole bunch of syntactical errors which hopefully the compiler will catch, such as misspelling a name, or forgetting to include all the required parameters to a function. Some languages are much better at catching errors at authoring time vs. at runtime.

There are also mistakes related to legibility, usability, and lack of beauty, but those are not so much bugs as design errors. The computer program is running as expected, but maybe the users don’t like it much. Those types of errors can be just as grievous as the computational bugs.
