The Panama Papers. The largest data leak in history. How did such a massive breach occur at a law firm dealing with high-profile politicians, celebrities, and sports stars? Was it a sophisticated attack? Did it require months of planning and a secretive team of elite hackers? The truth is a shocking negligence in managing IT basics.

In most cyber-security breaches, the attack vector is actually a known vulnerability. In the case of Mossack Fonseca, the firm the data was pulled from, a hacker would have had a wide range of vulnerabilities to choose from. As noted in this Wired article, their Exchange server hadn't been patched since 2009, and their corporate portal was poorly configured and not being securely maintained. Mossack Fonseca has confirmed that the attack was not an inside job and that the likely attack vector was the poorly maintained Exchange server. Their corporate portal hadn't been updated in months, and its configuration allowed you to browse the backend folders if you guessed a folder name.

Small and mid-size businesses often do not give sufficient thought to what would happen in the event of a security breach on their infrastructure. Some do not believe they are a target; others simply do not understand the level of risk they face. It's shocking that the law firm at the center of the Panama Papers was not more aware of the risk posed by its lack of due diligence in managing its IT infrastructure. A law firm deals with a laundry list of private information, and not taking effective action to defend that information is inexcusable.

Businesses that are mindful of their security risk often think too big about their needs. As the Panama Papers demonstrate, the risk is often much more elemental than people think. Having sophisticated intrusion detection, advanced digital rights management, and encryption doesn't address a simple issue like patching your systems regularly.
It's like installing laser trip wires and steel-reinforced doors on your house but leaving the garage door open. Fancy measures won't protect you when you ignore the basics.
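The folder-guessing weakness described above is a classic directory-listing misconfiguration: the web server happily renders an index of a folder's contents to anyone who stumbles on its URL. As a rough illustration (not Mossack Fonseca's actual setup), here is a minimal Python sketch of how a defender might audit a site they own for exposed listings. The folder names and HTML markers are assumptions for the example:

```python
import urllib.request

# Hypothetical folder names an attacker might guess; not from the actual breach.
COMMON_FOLDERS = ["backup", "admin", "uploads"]

def looks_like_directory_listing(html: str) -> bool:
    """Heuristic: auto-generated Apache/nginx listing pages carry these markers."""
    markers = ("Index of /", "<title>Index of")
    return any(marker in html for marker in markers)

def audit_site(base_url: str) -> list:
    """Probe a site you are authorized to test for exposed folder listings."""
    exposed = []
    for name in COMMON_FOLDERS:
        url = "%s/%s/" % (base_url.rstrip("/"), name)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read().decode("utf-8", "replace")
                if looks_like_directory_listing(body):
                    exposed.append(url)
        except OSError:
            pass  # folder absent or unreachable: nothing exposed here
    return exposed
```

The fix is usually one configuration line (for example, turning off automatic indexing in the web server), which is exactly the kind of basic hygiene this article is about.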
At a recent Tech Vancouver meetup, a speaker presented on the idea of technical debt. Technical debt is the practice of sacrificing quality for speed or convenience. For coders, this means that certain parts of the code are not as clean or stable as they should be. Each shortcut leaves you indebted: you carry both the risk it creates and the commitment to revisit it later. The implications and risks of this practice will only become more severe in the future.
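A tiny, hypothetical Python example of what this looks like in practice: a config-line parser written under deadline pressure, next to the version that pays the debt down. The function names and the key=value format are purely illustrative:

```python
def parse_setting_quick(line):
    # The shortcut: assumes exactly one "=" and no stray whitespace.
    # TODO: handle values containing "=" (technical debt, noted and deferred).
    key, value = line.split("=")
    return key, value

def parse_setting(line):
    # Paying the debt down: split only on the first "=", trim whitespace,
    # and fail with a clear message on malformed input.
    if "=" not in line:
        raise ValueError("not a key=value setting: %r" % line)
    key, value = line.split("=", 1)
    return key.strip(), value.strip()

# Both work on the happy path...
print(parse_setting_quick("mode=fast"))  # ('mode', 'fast')
# ...but the shortcut blows up the moment a value contains "=":
# parse_setting_quick("url=https://example.com?a=1")  -> ValueError
print(parse_setting("url = https://example.com?a=1"))  # ('url', 'https://example.com?a=1')
```

The quick version ships faster and seems fine in testing; the hidden failure mode only surfaces later, which is exactly why the debt metaphor fits.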
Agile methodology, a progressive form of software development, tends to support these practices as well. Agile breaks the work down into more manageable chunks, but those chunks are intended to be iterative. There is no assumption that the work product will be 100% on the first or even the second pass; the software will go through numerous iterations before it is anywhere close to complete. This removes the burden of perfection and the time pressure on delivering a viable product, but it also increases the risk of errors. Agile is not a bad practice; it is very common. However, it does require stricter quality control.
Patch, Patch, Patch
In a connected world, patching is a given. In the early days, before the wide reach of the internet, you couldn't ship a faulty piece of software: there was no effective means of patching it if there was a problem with the quality or reliability of the code. A poorly released product could mean a failed product, or even a failed company. That is still a risk, but the bar for what is acceptable in a release seems far lower than it used to be.
A beta release used to be run with a select group of customers to ensure bugs were caught before launch and that the features worked in ways that suited users. Today the role of the beta seems to have expanded: beta releases act as wide releases that capture many of the issues that should arguably be ferreted out in quality control. There is an old adage in IT that you never adopt version one of a product. Let others test it, let the software company fix the major bugs in an x.1 release, and then adopt it. Some software seems to be in a perpetual state of beta, where nothing is ever complete. If there are no patches being released, there are new features, which will then need to be patched. In some cases this reality is driven by the marketing department: delivery dates for a product or release are set well in advance, and every effort is made to hit that date, even if an inferior product is the result. After all, we can just patch it, right?
One of the arguments for open-source development is that you can't hide bad code. Everyone can read, review, comment on, and correct issues, so the argument goes that the risk is lower because the code has already been vetted and should contain fewer errors and vulnerabilities. In private-sector development, public code is usually not practical, so how will this problem be managed in the future? With the rise of cybercrime, the need for lower-risk software is growing. In a recent interview with Lachlan Turner of Arcinfosec, we talked about the growing risk of cyber terrorism. Losing your data to encryption is a serious problem for a business, but losing control of the pressure in a gas pipeline could be catastrophic. We may see more legislation to ensure some standards are in place for code, for example a standards guide for code implementation when dealing with high-value assets. No one should assume that such controls would eliminate risk, but standards for software are a likely progression from the manufacturing practices that ensure our physical products do no harm. The future is a connected world where everything is hackable. Physical risk will become a more prevalent issue as software plays a more visible role in our physical world, beyond the digital one.