Is Technical Debt Creating Risk?
At a recent Tech Vancouver meetup, a speaker presented on the idea of technical debt: the practice of sacrificing quality for speed or convenience. For coders, this means that certain parts of the code are not as clean or stable as they should be. Each shortcut carries both risk and a commitment to revisit the code later. As software takes on a larger role in our lives, the implications of this practice will only become more severe.
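A minimal sketch of what such a shortcut can look like in code. The currency-conversion scenario and function names here are hypothetical, chosen purely to illustrate the idea, not an example from the talk:

```python
# A hypothetical illustration of a technical-debt shortcut:
# a hard-coded value ships faster, but leaves a debt behind.

def usd_to_cad_quick(amount_usd):
    """The shortcut: the exchange rate is hard-coded at release time.
    TODO: fetch a live rate -- this value silently drifts out of date.
    """
    return amount_usd * 1.35  # risk: wrong whenever the rate moves

def usd_to_cad(amount_usd, rate):
    """The debt paid down: the rate is an explicit input, so stale
    data is a visible caller responsibility, not a hidden assumption.
    """
    return amount_usd * rate
```

The first function is faster to write and works today; the TODO comment is the IOU. The second costs a little more effort up front but removes the hidden assumption, which is what "paying down" the debt looks like.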
Agile methodology, a now-common iterative approach to software development, tends to encourage these practices. Agile breaks work down into more manageable chunks, but those chunks are intended to be iterative: there is no assumption that the work product will be complete on the first or even the second pass. The software goes through numerous iterations before it is anywhere close to finished. This removes the burden of perfection, and of time pressure, from the delivery of a viable product, but it also increases the risk of errors. Agile is not a bad practice; it is very common. It does, however, demand stricter quality control.
Patch, Patch, Patch
In today's connected world, patching is a given. In the early days, before the internet's wide reach, you could not afford to ship faulty software: there was no effective means of patching it if the quality or reliability of the code proved to be a problem. A poorly released product could mean a failed product, or even a failed company. That risk still exists, but the bar for what is acceptable in a release seems much lower than it used to be.
A beta release used to go to a select group of customers, to catch bugs before general release and to confirm that the features worked in a way that suited users. Today the role of a beta release seems to have expanded: it often acts as a wide release that captures issues which arguably should have been ferreted out in quality control. There is an old ideology in IT that you never adopt version one of a product. Let others test it, let the software company fix the major bugs in an x.1 release, and then adopt it. Some software now seems to be in a perpetual state of beta, where nothing is ever complete: if no patches are being released, new features are, and those will then need patching in turn. In some cases this reality is driven by the marketing department. Delivery dates for a product or a release are set well in advance, and every effort is made to hit that date, even if an inferior product is the result. After all, we can just patch it, right?
One of the arguments for open source development is that you cannot hide bad code. Everyone can read, review, comment on, and correct it. The argument is that the risks are lower because the code has already been vetted, so it should contain fewer errors and vulnerabilities. In private sector development, public code is not practical, so how will this problem be managed in the future? With the rise of cybercrime, the need for lower-risk software is growing. In a recent interview with Lachlan Turner of Arcinfosec, we talked about the growing risk of cyber terrorism. Losing your data to encryption is a serious problem for a business, but losing control of pressure in a gas pipeline could be catastrophic. We may see more legislation to ensure that some standards are in place for code: for example, a standards guide for code implementation when dealing with high-value assets. No one should assume that such controls would eliminate risk, but standards for software are a likely progression from the manufacturing practices that ensure our physical products do no harm. The future is a connected world where everything is hackable. Physical risk will become a more prevalent issue as software plays a more visible role in our physical world, beyond the digital one.