Hyper-V a security risk?
November 1, 2022 1:16 PM

I was told Hyper-V was disabled at work because it and containers are a security risk. Is there any truth to that, or is it possible their monitoring software can't penetrate containers and they're using security risk as an excuse? I doubt I can get it enabled, as they're so paranoid they geo-restrict where we can log in with our laptops, but I'm genuinely curious whether it's an excuse or not.

I doubt this company and I will be together much longer: I have to ask permission for any software to be installed, there's so much security software that they somehow intercept SSL certs, and I can't turn off the antivirus, which sometimes deletes software I'm working on because it decides, for whatever reason, that it's a virus.

So no need to tell me to leave, but Hyper-V and containers being labeled a security risk has me baffled.
posted by geoff. to Computers & Internet (12 answers total) 2 users marked this as a favorite
 
I mean, running stuff in a container seems like a good way to ensure it's not scanned by anti-virus software or monitored by whatever other software they have running, except to the degree they monitor the container software as a whole. So from their perspective it seems like that would be a security risk, yes. (As you say you should just find a new job, though, and try not to spend too much mental energy on the weird stuff they do at your current job.)
posted by inkyz at 1:58 PM on November 1, 2022 [1 favorite]


It is, but it sits toward the sophisticated end of the continuum of risk. Hyper-V allows an entirely separate environment that doesn't show up in standard security tools and gives you root/Administrator inside the VM; with those privileges you can do a lot of potentially risky things that a regular user account cannot. There has been some work on escalation/escape attacks against the hypervisor, and some work on embedding rootkits with Hyper-V to avoid detection.

It is probably opaque to any sort of security software without heroic measures.

Containers are a different issue from virtualization. While they're frequently run in VMs under Hyper-V on Windows, they're a different set of technologies. From a corporate IT point of view, containers pulling down an image from Docker Hub means allowing mystery code from an untrusted source onto the machine. Same issues with tooling visibility by default.
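
To make the "mystery code" point concrete: the control corporate IT usually wants is "only pull images from registries we vet." Here's a rough sketch of that kind of check in Python; the registry hostnames are made up, not anything your company actually runs:

    import sys

    # Hypothetical allowlist; a real org would list its internal registry or
    # mirror here, not these made-up hostnames.
    APPROVED_REGISTRIES = {"registry.internal.example.com", "mirror.example.com"}

    def registry_of(image_ref):
        """Return the registry part of an image reference such as
        'registry.example.com/team/app:1.2'."""
        if "/" not in image_ref:
            return "docker.io"      # e.g. "python:3.11" is a Docker Hub official image
        first, _ = image_ref.split("/", 1)
        # Docker treats the first path segment as a registry only if it looks
        # like a hostname (contains a dot or a port) or is "localhost".
        if "." in first or ":" in first or first == "localhost":
            return first
        return "docker.io"          # e.g. "someuser/someimage" lives on Docker Hub

    def is_approved(image_ref):
        return registry_of(image_ref) in APPROVED_REGISTRIES

    if __name__ == "__main__":
        for ref in sys.argv[1:]:
            verdict = "OK" if is_approved(ref) else "BLOCKED: untrusted registry"
            print(f"{ref}: {verdict}")

Real setups usually enforce this at the registry/proxy layer rather than in a script, but the policy is the same: no mystery images.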

Anyway, all that said, developer workflow/experience counts for a lot. A good infosec/IT management team would have different rules for Jane the developer than for Bob the accountant. You're better off arguing need and productivity instead of risk, but you may end up needing to vote with your feet.
posted by theclaw at 1:59 PM on November 1, 2022 [4 favorites]


They almost certainly don't mean "malware/virus risk"; they mean "our SSL interception certificate, etc. won't exist in guest operating systems, so our security monitoring is at risk."
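
(For example, inside a fresh VM or container that doesn't have the corporate root CA installed, anything going through the company's TLS-inspecting proxy just fails certificate verification. A quick Python sketch of that check; the hostname and CA path are placeholders:)

    import os
    import socket
    import ssl

    HOST = "example.com"                     # placeholder; any HTTPS site reached via the proxy
    CORPORATE_CA = "/etc/corp/root-ca.pem"   # hypothetical path to the interception CA

    def tls_verifies(host, cafile=None):
        """Attempt a TLS handshake and report whether certificate verification passes."""
        ctx = ssl.create_default_context(cafile=cafile)
        try:
            with socket.create_connection((host, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    tls.getpeercert()
            return True
        except ssl.SSLCertVerificationError:
            # Behind an inspecting proxy the certificate is signed by the
            # corporate CA, which a bare guest OS doesn't trust.
            return False

    print("default trust store:", tls_verifies(HOST))
    if os.path.exists(CORPORATE_CA):
        print("with corporate CA: ", tls_verifies(HOST, cafile=CORPORATE_CA))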
posted by Nonsteroidal Anti-Inflammatory Drug at 2:11 PM on November 1, 2022 [5 favorites]


Response by poster: So without Hyper-V and just containers, my understanding is that while a container shares the kernel, etc., for the sake of simplicity, code executed within the container -- barring a future exploit, as with any software -- cannot be executed outside the container. As in, it shares hardware for performance reasons, but unless I explicitly open a port the code would remain isolated from the host OS, correct?
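
Roughly, the mental model I'm working from is something like the sketch below (using the Docker SDK for Python; the image and port numbers are just examples):

    import docker  # pip install docker

    client = docker.from_env()

    # Nothing published: the web server inside is not reachable from outside
    # the host on any host port.
    unpublished = client.containers.run("nginx:alpine", detach=True)

    # Explicitly published: container port 80 is now reachable on host port 8080.
    published = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},
    )

    print(unpublished.name, published.name)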
posted by geoff. at 2:18 PM on November 1, 2022


is it possible their monitoring software can't penetrate containers and they're using security risk as an excuse?

That is a security risk. Not only does it mean that monitoring software can't look for nefarious activity conducted by the user (including by an attacker who compromises your account, which, assuming Active Directory, is pretty easy if the attacker has gotten Domain Admin), but I'd have no idea whether you were running dangerously out-of-date and vulnerable software inside a VM or container (outside of images pulled exclusively from a trusted repository) that could lead to compromise.
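
(One small illustration of the visibility problem: even the developer often has no idea how stale their local images are. A rough self-audit sketch with the Docker SDK for Python; the 90-day cutoff is arbitrary, and this is nothing a security team could actually rely on:)

    from datetime import datetime, timedelta

    import docker  # pip install docker

    MAX_AGE = timedelta(days=90)  # arbitrary cutoff for "probably stale"

    client = docker.from_env()
    now = datetime.utcnow()

    for image in client.images.list():
        tags = image.attrs.get("RepoTags") or ["<untagged>"]
        # "Created" looks like "2022-10-25T01:21:25.123456789Z"; keep only the
        # seconds-resolution prefix so fromisoformat() can parse it.
        created = datetime.fromisoformat(image.attrs["Created"][:19])
        age = now - created
        if age > MAX_AGE:
            print(f"stale ({age.days} days old): {', '.join(tags)}")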

I recall that you just shifted from freelancing to working for a large org - everything you're describing is pretty common and largely necessary for them. In addition to just following generic best practices, many large companies are subject to regulation that requires these things, even if they didn't want to do them.

I can't turn off the antivirus, which sometimes deletes software I'm working on because it decides, for whatever reason, that it's a virus.

The solution to that is not to turn off AV but to find out why it's incorrectly identifying your software as malicious.
posted by Candleman at 2:19 PM on November 1, 2022 [7 favorites]


The issue isn't isolation from your computer, it's isolation from the rest of the network. By default, most containerization software doesn't restrict outgoing network connections, which means malicious software in your container is sitting inside your organization's firewall. I am amazed at what people install in containers while just assuming it's fine.
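
(To make that concrete with the Docker SDK for Python: the default bridge network gets outbound access to whatever the host can reach, while an "internal" network doesn't. The image name and URL here are just examples:)

    import docker  # pip install docker
    from docker.errors import ContainerError

    client = docker.from_env()

    # Default bridge networking: the container can reach whatever the host can,
    # which on a corporate network includes internal services behind the firewall.
    client.containers.run("alpine:3.16", ["wget", "-q", "-T", "5", "-O-", "http://example.com"])
    print("default network: outbound request succeeded")

    # An "internal" bridge network has no route out at all.
    sandbox = client.networks.create("no-egress-sandbox", driver="bridge", internal=True)
    try:
        client.containers.run(
            "alpine:3.16",
            ["wget", "-q", "-T", "5", "-O-", "http://example.com"],
            network="no-egress-sandbox",
        )
    except ContainerError:
        print("internal network: outbound request blocked")
    finally:
        sandbox.remove()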
posted by goingonit at 2:33 PM on November 1, 2022 [4 favorites]


Response by poster: Yes, I went from freelancing to a large org, and while we can debate the pros and cons of containers and Kubernetes, they greatly increase my ability to set up complex environments quickly, not to mention the ability to pass around containers and know that everyone is using more or less the same thing.

Using a makefile or scripting works with senior developers, but as we all know, everyone seems to have something that isn't installed and wasn't checked by the script, etc. Containers especially help in keeping your laptop "clean": they don't run the risk of leaving behind artifacts, you can switch versions easily, and so on. But it is what it is.

This is a software company, largely offshore. My previous experience was to assume my laptop could always be compromised, so I kept no sensitive data on it and used mock data when needed. Rarely is the code itself all that valuable -- it's not like I'm working on the next big thing -- but that is my view of security: just use common-sense practices. So this is all a bit new.

Another question: do more product-oriented tech firms like FAANG or Silicon Valley companies typically have this level of security, or do they trust developers to be professionals? My issue now is that I don't really code anymore per se, but I work with developers who are more junior than I'm used to, and I simply don't have time to mentor all of them, where in the past I could give them a container with all the services set up and they could just upload their code. Not ideal, but given my bandwidth and time constraints that's where I'm coming from. I'm still getting used to developers refusing to use new technologies because they haven't been trained on them, even if I provide resources and guidance. I just assumed that was part of being a developer, but maybe it's a cultural issue I'm not understanding.
posted by geoff. at 2:40 PM on November 1, 2022 [1 favorite]


I've worked at several of the FAANGs and yes, their computers are [generally] pretty locked down. Less so in their earlier days, but definitely once they're into tens of thousands of employees. Sometimes there's more freedom on developer devices, but certainly not complete freedom or a lack of security software. I don't know about this specific case because it's not something I ever tried to do while I worked there. But annoying security software and the problems it causes are an evergreen source of discussion at the virtual watercooler at every large-ish company I've worked at.
posted by primethyme at 4:26 PM on November 1, 2022 [1 favorite]


So there are a few different things to unpack here.

First off, your company should absolutely have some sort of reliable build and dependency management system that lets different people build artifacts and deploy software to test/staging/prod environments in repeatable ways. Virtualization can be a way to build such a system but it's emphatically not a way to circumvent the need for such a system.

For instance: at a large company, you can't have people just pick whatever base images they want for containers/VMs -- you want a centralized repo for these images and, in all probability, a dedicated team maintaining them. The last thing you need is for a big security vulnerability to turn up in software your security team didn't know was being deployed. This means there's some fixed overhead involved in setting up virtualization as part of a deployment strategy, and if your company picks a different strategy instead, so be it.
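
The enforcement side of that can be fairly mundane: for example, a CI check that rejects Dockerfiles whose base images don't come from the blessed internal registry. A sketch, with a made-up registry name:

    import re
    import sys

    # Hypothetical: whatever registry the platform/image team actually maintains.
    BLESSED_REGISTRY = "registry.internal.example.com"

    FROM_LINE = re.compile(r"^\s*FROM\s+(?:--platform=\S+\s+)?(\S+)", re.IGNORECASE)

    def offending_base_images(dockerfile_path):
        """Return base images in a Dockerfile that aren't from the blessed registry.

        Note: in multi-stage builds, FROM can also reference an earlier build
        stage by name; a real check would track stage aliases, which this
        sketch skips for brevity (so it may flag those too)."""
        bad = []
        with open(dockerfile_path) as f:
            for line in f:
                match = FROM_LINE.match(line)
                if match and not match.group(1).startswith(BLESSED_REGISTRY + "/"):
                    bad.append(match.group(1))
        return bad

    if __name__ == "__main__":
        failed = False
        for path in sys.argv[1:]:
            for image in offending_base_images(path):
                print(f"{path}: base image not from {BLESSED_REGISTRY}: {image}")
                failed = True
        sys.exit(1 if failed else 0)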

Secondly, VMs are not a good solution for deploying application software to customers. Servers, sure, but if this is software that will ultimately run on a client machine, it will have to play nicely with other things, so again, you don't get to dodge that problem.

And third, for all the reasons above, virtualization isn't a license to just run untrusted third-party software inside a shared network!

Also, yes, all large companies lock down their machines to a greater or lesser extent. It's not a question of trust, it's a question of attack surface. You can't have a security team that is auditing and patching whatever software 1000 different people decide to install, no matter how reasonable they are: limiting the set of things you let on the network is the only way a large company can manage and secure its environment. Companies can do a better or worse job of this, but they all have to do something.
posted by goingonit at 5:23 PM on November 1, 2022 [4 favorites]


My previous experience was to assume my laptop could always be compromised, so I kept no sensitive data on it and used mock data when needed.

In a corporate environment, it's not a matter of what's on your computer, it's that once an adversary gains a foothold (say through an out of date browser in your VM), then they can spread throughout the network where the tasty, tasty data is.

do they trust developers to be professionals?

It's not a matter of being professional. Developers are focused on development, not security (which, no offense, you're demonstrating -- and I say that as a former developer). It's not realistic to expect thousands of developers to all be security-minded and aware. They're incentivized to write code, create features, and fix bugs, so they'll do whatever they think will get the task done, even if it's not the most secure thing. So Security has to add some guardrails.

they greatly increase my ability to set up complex environments quickly, not to mention the ability to pass around containers and know that everyone is using more or less the same thing.

Have you asked for Docker rather than a full blown VM tool?

I'm still getting used to developers refusing to use new technologies because they haven't been trained on them, even if I provide resources and guidance.

You will definitely find this with onshore talent as well but this is generally held to be very common with some offshore cultures.

Have you sold senior management on using these new technologies? Direction coming from them is going to have more success than you on your own trying to bring in something new.
posted by Candleman at 6:39 PM on November 1, 2022 [3 favorites]


do they trust developers to be professionals?

Seconding Candleman. Yes, we do trust you to be professionals, but professional developers, not professional security experts. I work on the IT side of the fence, adjacent to and sometimes with security teams, and developers are considered one of the highest-risk groups of users exactly BECAUSE they think they can "just use common sense".

Developers are more likely to cause a security breach, or do something which risks a security breach, because as mentioned above they're interested in getting things done, and they generally believe they're too smart and too knowledgeable to do something stupid.

But doing something that causes or risks a breach is often not down to ignorance or stupidity; it's down to being in a rush, or tired, or under pressure, or distracted, or stressed, and developers are just as susceptible to those things as everyone else, except that developers are more likely to dive into something unknown, or ignore a warning sign, because they believe it couldn't happen to them. Plus they often have access to valuable IP (the code) and to tools that would let an attacker do things they couldn't do from a non-dev account, so developer accounts are higher-risk than most.

they're so paranoid they geo-restrict where we can login our laptops

This is incredibly normal. It's one of the most basic security controls you can implement for your estate, along with MFA and managed devices. Any org that has this facility and doesn't use it is borderline negligent, and in some industries not turning on geo-restriction would be considered malfeasance.

Security professionals aren't trying to stop you working; they're trying to keep the innumerable avenues of attack as locked down and monitored as possible. Security vs. usability is a massive, massive question in the industry for all types of users, and whilst your company sounds like it could do with a bit more usability, it's also generally following standard rules that I've seen in lots of places. And I've seen much stricter setups than what you describe.

no need to tell me to leave

You might have trouble finding somewhere to go if you want to be able to run virtualisation and containers without oversight, or expect to be able to log in from anywhere in the world without telling the security team. Try seeing this from the other side of the fence and understanding their perspective; it might ease your frustration and let you work towards some mutually acceptable changes.
posted by underclocked at 12:47 AM on November 2, 2022 [3 favorites]


My experience working in software dev is that there's always something of a tug-of-war between developer productivity (especially as a lot of tools assume you have admin on your box) and security. You'll likely need to make your case against this level of lockdown in terms of dollar signs, and depending on your specific environment you may or may not be successful. There are a lot of different strategies for mitigating harm, but locking down boxes is one of the easier and cheaper ones.

That said, I wouldn't necessarily say this level of lockdown is universal, even at large software companies, though I'd say the trend is towards more locked down rather than less, especially as remote work becomes more common.
posted by Aleyn at 12:11 PM on November 2, 2022

