How to write more secure code?
March 20, 2019 8:35 AM

No matter how fancy the password checking scheme is, or how many encryption steps it involves, isn't there usually a single "if" statement somewhere? If so, other security vulnerabilities could allow that to be overwritten pretty simply, to skip the check. How could code be written more securely?

I don't usually write high-security stuff, so maybe there are already well-known solutions that are used by ssh or pam or whatever. If so, I'd like to know about them.

Certainly, a well-written program will have no buffer overflows, so the code will not be alterable in memory. But it seems like it'd be safer if the basic security mechanisms still functioned even if some other part of the system is written poorly.
posted by Galaxor Nebulon to Computers & Internet (12 answers total) 7 users marked this as a favorite
 
The short answer is: not really. A truly secure system, taking into account software, the network, and how users will be authorized, is slow and expensive to design, expensive to operate, and less convenient than a less secure one. A determined attacker only has to find the one weakest link, which could be a bum "if" statement but is just as likely a misconfiguration, or user Bob's willingness to give out their password when "IT" calls.

Buffer overflows are not a technical issue, in a sense, because languages without them have existed for a long time. But a modern software setup has a huge number of components, 99% of which you're not writing or even auditing, so even if your own code is perfect, the authentication framework you're using to let users across the enterprise get in is going to have a problem.

Compartmentalization helps a great deal. To use your password example, a high-security system will probably have a dedicated authentication server that deliberately exposes nothing except authentication. That way the application server, even if it has a severe 0-day vulnerability, can't be compromised except by an authorized user. But doing things this way dramatically increases the design and maintenance burden. It takes an incredible amount of effort to set up a server so that it does only one thing instead of the thousand things servers are normally set up to do to make life easy for Jane Sysadmin. It takes even more effort to verify that the other 999 things are not being done, and more effort on top of that to make sure they don't creep back in as system software is updated, applications add features, and things get "fixed" in the server's configuration.
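To make the compartmentalization point concrete, here is a minimal sketch of what the application side might look like when password checking is delegated to a dedicated auth service; the service URL, field names, and API shape are all made up for illustration:

    # The application tier never sees password hashes or the user database;
    # it only asks a network-isolated auth service "is this credential
    # valid?" and gets back a session token or a refusal.
    from typing import Optional

    import requests  # third-party HTTP client, assumed to be installed

    AUTH_SERVICE_URL = "https://auth.internal.example/api/v1/verify"  # hypothetical

    def login(username: str, password: str) -> Optional[str]:
        """Return a session token on success, None on failure."""
        resp = requests.post(
            AUTH_SERVICE_URL,
            json={"username": username, "password": password},
            timeout=5,
        )
        if resp.status_code != 200:
            return None
        return resp.json().get("token")

The point is less the code than the boundary: even if this application server is fully compromised, the attacker still hasn't touched the machine that holds the credentials.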

Bruce Schneier has a number of books, written for a general technical audience, that cover this in detail. He also argues, pretty convincingly in my opinion, that none of this will change until real liability is written into the law, such that the company itself is put in jeopardy.
posted by wnissen at 9:11 AM on March 20, 2019 [5 favorites]


Usually the operating system enforces permissions, so process A can't inject data into process B without elevated privileges. That doesn't mean there aren't bugs all the time that allow you to escalate privileges (there kind of are), but if everything is working correctly, you can't just pwn an unprivileged process and use it to subvert the SSH server.
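As a rough illustration of that separation, here's a sketch of the classic privilege-dropping pattern, assuming a POSIX system, a process that starts as root, and made-up uid/gid values:

    # Bind a privileged port while still root, then permanently drop to an
    # unprivileged account so a later compromise of this process can't touch
    # root-owned resources.
    import os
    import socket

    UNPRIV_UID = 1000  # hypothetical non-root account
    UNPRIV_GID = 1000

    def start_server() -> socket.socket:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(("0.0.0.0", 443))  # needs root (or CAP_NET_BIND_SERVICE)
        sock.listen(16)

        # Drop the group first, then the user; once setuid() succeeds,
        # the process can never get root back.
        os.setgid(UNPRIV_GID)
        os.setuid(UNPRIV_UID)
        return sock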

You may be interested in capability-based security.
posted by BungaDunga at 9:12 AM on March 20, 2019 [1 favorite]


If security vulnerabilities exist, they often allow injection and execution of arbitrary code. That means it doesn't matter what the original program looked like (or how many "if" statements it had); the attacker can run whatever they want.
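A tiny, hypothetical example of why: once attacker-controlled input reaches something that executes it, your own control flow no longer matters.

    import subprocess

    def archive_unsafe(filename: str) -> None:
        # Vulnerable: input like "report.txt; rm -rf ~" is handed to the
        # shell, so the attacker runs whatever they want.
        subprocess.run(f"tar czf backup.tgz {filename}", shell=True, check=True)

    def archive_safer(filename: str) -> None:
        # An argument list with no shell means the input can only ever be a
        # filename, never code ("--" stops tar treating it as an option).
        subprocess.run(["tar", "czf", "backup.tgz", "--", filename], check=True)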

For a more general overview of the issue, you might be interested in some of Bruce Schneier's writings; his blog is a good read as well.
posted by chrisamiller at 9:14 AM on March 20, 2019 [2 favorites]


Formal verification is one approach... it's really hard and an active research area. The idea is to write a system and prove that even if an attacker manages to subvert one part of it, they won't be able to subvert the rest, subject to the invariants you've proven. It's a kind of formally proven compartmentalization.

And it can all explode when your assumptions about the CPU or about the hardware turn out to be wrong.

If security vulnerabilities exist, they often allow injection and execution of arbitrary code. That means it doesn't matter what the original program looked like (or how many "if" statements it had); the attacker can run whatever they want.

This is undeniably true, but if you manage to execute unprivileged code, you may not actually have access to the important bits of the system without a second exploit, this time of the kernel or some other privileged process, that lets you muck about in other processes. Layer in a virtual machine and a hypervisor, and it gets harder still. That's why you can run code on a virtual private server and generally not worry about your co-tenants stomping all over you. Hypervisor breakouts absolutely exist, and SPECTRE-like attacks are terrifying, but it's not like there aren't ways to make this at least harder to do.
posted by BungaDunga at 9:21 AM on March 20, 2019 [3 favorites]


A corollary to "use open source libraries instead of writing your own" is to use only libraries and environments that are regularly maintained, to update when appropriate, and not to be afraid of replacing an entire library (and rewriting its usage) when necessary. Just because something was secure three years ago doesn't make it secure now.

Some examples:

Windows 7 is going out of public support next year.
Java Runtime Environment is out of commercial support.
The version of OpenSSL released in January 2015 has already had 18 patch releases.
TLS 1.1 is already on very shaky ground and will be unsupported next year.

All of this involves a significant maintenance burden.
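If it helps, one way to make part of that burden routine is to turn your minimum acceptable versions into a check that runs in CI. A toy sketch in Python, with purely illustrative package names and version floors (dedicated dependency-audit tools do a far more thorough job against real vulnerability databases):

    # Fail the build if any installed dependency is older than the floor the
    # team has agreed is still acceptable.
    from importlib.metadata import version
    from packaging.version import Version  # 'packaging' is a small, widely used PyPI library

    MINIMUM_SAFE = {
        "cryptography": "2.6.1",  # hypothetical floors
        "requests": "2.21.0",
    }

    def outdated() -> list[str]:
        return [
            f"{name} is older than {floor}; update it"
            for name, floor in MINIMUM_SAFE.items()
            if Version(version(name)) < Version(floor)
        ]

    if __name__ == "__main__":
        problems = outdated()
        for problem in problems:
            print(problem)
        raise SystemExit(1 if problems else 0)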
posted by meowzilla at 10:48 AM on March 20, 2019 [2 favorites]


In the corporate world, writing 'more secure code' for a regular development group basically means three things: 1) keeping all third-party software up to date, 2) writing (or outsourcing) basic access-authorization software at the company level rather than implementing it per app, and 3) using a code analyzer to identify the most common defects and fixing them. Expecting every developer to be an ongoing security expert is completely impossible. A #4 might be consolidating all hardware on a single platform and maintaining it.
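For #3, the analyzers mostly catch well-known patterns. Here's a hypothetical before-and-after of the sort of defect they routinely flag, SQL built by string formatting, and the standard fix:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # The classic flagged defect: user input formatted straight into SQL.
        return conn.execute(
            f"SELECT id FROM users WHERE name = '{name}'"
        ).fetchone()

    def find_user_fixed(conn: sqlite3.Connection, name: str):
        # The fix: a parameterized query, so the input can never change the
        # structure of the statement.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)
        ).fetchone()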
posted by The_Vegetables at 11:22 AM on March 20, 2019 [3 favorites]


Best answer: 1) Never write your own crypto.
2) Defensive programming: read the whole article and find some launching points for what you are trying to do.
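To tie #1 back to the question's password check: a minimal sketch using only vetted standard-library primitives rather than anything home-made (the iteration count is just an illustrative figure; check current guidance):

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A vetted key-derivation function with a random per-user salt.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # Constant-time comparison, so the check doesn't leak how many bytes
        # matched before it failed.
        return hmac.compare_digest(candidate, expected)

There is still an "if" at the end, of course; the point is that everything feeding into it comes from vetted primitives rather than hand-rolled crypto.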
posted by rhizome at 11:46 AM on March 20, 2019 [7 favorites]


(Oh, God, I was trying to push this very notion at $WORK the other day. Bless you for wanting to do the right thing, Galaxor Nebulon!)
posted by wenestvedt at 1:10 PM on March 20, 2019


Security is very much a spectrum, not a binary. Academics talk about a "threat model" in relation to a security scheme, which typically means "the capabilities you grant the attacker." For example, an end-to-end encryption scheme will not allow a man-in-the-middle attack but will still be vulnerable to, say, a denial-of-service attack. In general, when you design a secure scheme, you try to anticipate the kinds of things a black hat would do and build it to withstand them. Obviously you can't always do this with perfect accuracy, and even if you could, you will make mistakes, and you don't have infinite resources to implement your security scheme anyhow. Sometimes security really just means "hard enough to attack that the bad guys will find easier prey," not "100% immune."
posted by axiom at 5:29 PM on March 20, 2019 [1 favorite]


Developers do need to understand whether they are building the software equivalent of a luggage lock or a bank safe. To that end, a proper risk analysis should be done as a prerequisite: what is being secured, how much it is worth, who it is being secured from, what the most likely attack scenarios are, and what regulations govern the security of that information. The risks of over-securing a system (people won't use it, or they will write their password down next to the safe door, or you will simply waste money on development) are not quite as serious as those of under-securing it, but they are still significant.
posted by rongorongo at 3:03 AM on March 21, 2019 [1 favorite]


The risks of over-securing a system (people won't use it, or they will write their password down next to the safe door, or you will simply waste money on development) are not quite as serious as those of under-securing it.

Those rules should be provided to developers by their business owners; the types of data that must be secured are explicitly spelled out by the government.
posted by The_Vegetables at 7:56 AM on March 21, 2019


A beautiful object lesson in how security errors occur was revealed just today, when it was announced that Facebook had been logging usernames and passwords in plain text. This was not a design issue; the password database itself was correctly designed and secured (at least as far as anyone knows). But requests to the password database were being logged, because everything gets logged when you have a huge web app composed of a bajillion services. And nobody noticed, for years. It only came to light during a code review. And Facebook is much, much more hardened against attacks than a typical site; imagine how much money the Mafia or the dirty tabloids would pay to get into Facebook accounts.
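For what it's worth, one mitigation aimed at exactly that failure mode is to scrub credential-looking material before it ever reaches a log sink. A small, hypothetical sketch using Python's standard logging module (the key names in the pattern are illustrative):

    import logging
    import re

    # Keys that look like credentials; extend the list for your own app.
    SENSITIVE = re.compile(r"(password|passwd|secret|token)=\S+", re.IGNORECASE)

    class RedactingFilter(logging.Filter):
        """Scrub credential-looking key=value pairs before any handler sees them."""

        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = SENSITIVE.sub(r"\1=[REDACTED]", record.getMessage())
            record.args = None
            return True

    logger = logging.getLogger("app")
    logger.addFilter(RedactingFilter())
    # logger.info("login attempt user=alice password=hunter2")
    # would now be recorded as "login attempt user=alice password=[REDACTED]"

It doesn't fix the underlying "everything gets logged" culture, but it makes the default path less dangerous.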
posted by wnissen at 1:00 PM on March 21, 2019 [1 favorite]


This thread is closed to new comments.