Can someone explain Docker for Windows?
December 26, 2019 12:21 PM
I'm going to play a bit dumb here; I've been using Docker extensively. I have an environment that requires Windows and Linux containers, the tooling has been evolving very fast, and I want to reconfirm my understanding. Some of these questions can be answered more broadly, as in "how do large organizations like Amazon use microservices and prove their code works?" And I do not mean so much in an integration test/unit test way, but in the sense of presenting to someone in upper management or another stakeholder who doesn't realize that passing integration tests or using proxies should be proof enough. Sorry for the broad nature of the topic; it will make more sense inside.
I'd like to get a couple of questions out of the way. I'm using a proprietary product, Sitecore, which is a CMS and also a complex commerce platform. See the docker-compose file to get a simple commerce site up here. With the exception of a few services, they're all needed. You're building on top of an existing platform and need 13 services, all running, with a few exceptions, on Windows Server Core. Let's set aside the fact that this is incredibly resource intensive: Windows Server Core itself is ~5GB, and even with kernel sharing and Docker's layer reuse it is just not practical to run on a developer desktop. But let me take a step back for a moment.
1. Since this is an existing application, there are standard folders you'd expect, such as /bin and /views among others, which end up behaving like a traditional Linux bind mount over a non-empty directory (the existing content looks empty from the host). If you look at how this was developed, what is actually done is an empty volume with a fairly complicated watch folder: you publish your assemblies and whatever else, and a script tries to merge them into the existing folder. This has a lot of disadvantages; besides not being 100% reliable, I cannot see what's already there. My understanding of Docker (again, playing dumb here) is that a traditional Linux setup would solve this by populating a volume with a container's existing content:
$ docker run -d \
    --name=nginxtest \
    --mount source=nginx-vol,destination=/usr/share/nginx/html \
    nginx:latest
Is this a Windows limitation, or is there a reason this isn't done in Windows containers? During development, especially early development, seeing what's there and what you're copying over is extremely important. Production would be a more formal multi-stage setup: build, test, production layers. I guess I don't understand Windows containers well enough to see why I can't just see what's on the drive without running a container, opening a terminal into it, and using Vim on files that aren't otherwise exposed. Is this just a limitation of Windows? This is a huge leap for developers used to RDP'ing into a machine to see what's going on, and even with my extensive experience it is time consuming. It'd be much easier if I could just see what's on there. What I'm seeing now is similar to a Linux bind mount over a non-empty directory, which the documentation itself says obscures the existing content.
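For the inspection part specifically, my understanding (happy to be corrected) is that you can at least copy files out of an image without running it, by creating but not starting a container and using docker cp. A rough sketch, with a made-up image name:

# Create a container from the image without starting it
docker create --name cm-peek sitecore-cm:latest
# Copy the webroot out to the host so it can be browsed normally
docker cp cm-peek:C:\inetpub\wwwroot .\wwwroot-snapshot
docker rm cm-peek

That only gives a snapshot, not a live view, so it doesn't solve the mount question, but it beats starting a container and poking around with a terminal.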
2. The build scripts are incredibly complicated for building what would be considered a "base image" for development. There are PowerShell scripts referenced all over the place, and those don't get picked up as layers in Docker, or at least changes to the external scripts aren't reflected without a no-cache build. This seems to kind of defeat the purpose of Docker. Do large Linux projects have similar external bash scripts that are referenced? I guess I have not been exposed to enough large, complex build systems in Docker to know whether this is an anti-pattern, but it is already difficult to work through Dockerfile layer-reduction techniques (chaining commands with \ or ` escape/continuation characters) on top of a bunch of external scripts. It seems like having one script, with everything taking place inside Docker, is kind of why Docker exists. Is this normal or correct behavior? I'm not criticizing the repository, just trying to understand how I should proceed.
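To make the caching point concrete, here is the pattern I'd expect; a sketch only, with a made-up script name (the backtick escape directive is the Windows Dockerfile convention I mentioned):

# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# The script comes from the build context; editing it changes the COPY
# layer's checksum and busts the cache from here down
COPY scripts/Install-Thing.ps1 /setup/Install-Thing.ps1
RUN powershell -ExecutionPolicy Bypass -File C:\setup\Install-Thing.ps1

Scripts that run outside the Dockerfile and merely call docker build don't get that treatment, which I think is the behavior I'm seeing.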
3. I'm running on WSL2, the MSFT "fast ring" latest release, and the latest Docker release. My goal is to get as many of these Server Core services onto Linux as possible. What I'm confused about is that in Docker Desktop (which no longer supports Compose on the edge channel) I still have "Switch to Linux containers" / "Switch to Windows containers"; I assumed the whole idea of WSL2 was to get rid of this. I can also be in "Windows containers" mode and build Linux containers with the --platform linux switch, but I don't have K8s in this mode; I have to switch to Linux containers. I'm completely confused as to why I need to do this. Can I even run Windows containers and Linux containers on a local K8s instance? I have read the latest release notes from Docker Edge and the MSFT WSL2 releases, and they're very confusing. I think they assume you're using just Linux or just Windows. Again, I'm well aware I'm in uncharted territory, and there are known weird bugs even in "simple" environments. What I'm looking for is something like: "Yes, you should be able to do both (AKS supports this in preview), but you might have weird networking issues on ports or whatever."
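For reference, the cross-platform switches I mean look like this (at the time this relies on experimental/LCOW support in the daemon, and the image and tag names are just examples):

# While in Windows-container mode, explicitly target Linux
docker pull --platform linux alpine:3.10
docker build --platform linux -t myservice:dev .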
4. Last question. If you take a look at the build scripts for the Sitecore Docker repository, they're complicated. They're supporting multiple versions of a proprietary system. You do need all the services to run end to end in the most complex setup. In my case this also involves another layer talking to an external PIM (Dynamics), and I would need to write plugins or extensions for the various services to support it. In the past, before the e-commerce complexity, we'd run it all locally, which had its own issues, but we were dealing with an IIS site, SQL, and Solr. Even then, pre-Docker and pre-commerce complexity, we'd need to run a load balancer on our machines and replicate a production environment, because we'd see weird errors in the underlying system that would only show up in production. This actually helped create really bug-free and stable releases.
In any case, I must be missing something. A lot of Docker/K8s blogs talk about getting Alpine images down to 20MB and I'm stuck with 5GB+ images. Certainly this is a solved problem. I often need to demo to a client that a feature works, and running slim integration tests, while accurate in reproducing, say, a full order with tax and shipping applied, isn't what a marketing client wants to see. They want to see it "in context" with the site. Ignoring the fact that expectations should be set to avoid this, it has come up often enough with colleagues on completely different projects that there must be a way that places like Amazon (sorry, just picking a large company) run such a large microservice infrastructure and are still able to demo to non-technical stakeholders.
Right now my only viable options are to write proxy services for all the services (time consuming; no one would want to pay for that increased scope), rewrite the complex code so that the services all run in one environment on a dev machine (which doesn't reflect production and causes real errors), or just give everyone giant AWS/Azure dev environments to RDP/VNC into, which causes issues for remote employees or those in air-gapped environments.
Sorry for the rambling, but I do have what I think is a good build process; it is just based on having a couple of services like a Go server on Alpine, not giant Windows containers with possibly some Linux containers added simply to reduce the footprint. I'm trying to see if I'm on the right track. The only random idea I had was to ignore low-bandwidth and air-gapped employees and run a shared "dev Kubernetes" cluster where, while you're developing one service, you point the other services at the specific git tag/version of the image you need, so you don't have to run everything on your own dev machine.
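Roughly what I have in mind for that shared cluster (the deployment, registry, and tag names are all made up): while I iterate on one service locally, pin its neighbors to known tags:

# Point another team's service in the shared dev cluster at a specific image tag
kubectl set image deployment/commerce-engine commerce-engine=registry.example.com/commerce-engine:feature-tax-1234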
Response by poster: Actually, to clarify the number one question that's been vexing me: I think it has something to do with containers, Docker volumes, and how symbolic links work on NTFS.
I have something like "c:\inetpub\wwwroot\(bunch of existing files)" where I don't really build it from source but extract it from zip files, etc. In the old way of doing it, those files would physically be in that folder; I'd publish my custom assemblies from MSBuild and it would work. If I had a breaking build, I had a script that would basically re-copy the (bunch of existing files) and blow away my changes. It worked well, and I could see things like "oh, for some reason that did not get updated" or "that DLL is being copied by some weird NuGet thing and shouldn't be there" ... basically I could see the working directory; it was on my local file system.
If this were a completely new project from scratch, this wouldn't be an issue, because whatever is in my blank directory is really what's on the file system, as it starts as an actual empty directory.
I don't need anything complicated, just non-persistent data in a container that I can physically see during local development, mapped between host and container. So if there's a folder somewhere where the container exposes what's in the IIS folder but still maps back to the IIS folder, and I can push an update and it shows up as if I were pushing to a directory on a normal file system, great. If it goes away when the container is shut down, that's fine and actually desirable.
The end goal is that developers would use their usual process of publishing to a folder and seeing changes, while also seeing all the things already in that folder that they didn't create, with a symbolic link (or whatever fancy term replaces symbolic link) so that the IIS process picks up the changes.
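For contrast, my understanding of the existing watch-folder pattern is roughly the following (paths are made up): a loop inside the container mirrors an empty mounted folder into the real webroot.

# PowerShell sketch of the watch-folder approach (runs inside the container)
while ($true) {
    # /E copies subfolders; /XO skips files older than what's already in the webroot
    robocopy C:\deploy C:\inetpub\wwwroot /E /XO /NJH /NJS | Out-Null
    Start-Sleep -Seconds 2
}

That works for pushing changes in, but it's one-way: I still can't see from the host what was already in the webroot, which is the whole problem.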
For an actual build it would be multi-stage: the bunch of existing files from a base image, then a stage to build my custom changes, then testing, then the final image. That part isn't really relevant and works fine, but it would add a lot of overhead when you're just in development mode and not ready to commit and create a tagged image.
Again, sorry to make this seem complicated, but it really shouldn't be a hard problem, and it appears to be solved on Linux.
posted by geoff. at 3:02 PM on December 26, 2019
As of March or so: "Kubernetes v1.14 today, Windows Server node support has officially graduated from beta to stable!" But I'm sure they still need separate containers. I didn't think Windows had real symbolic links; oh, I'm wrong, as of Win10 it has them, but it looks more like a hack than something built into the filesystem (no, MS cannot just use ln -s, sigh).
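For what it's worth, from PowerShell it's something like this (needs an elevated prompt or Developer Mode; paths are made up):

# Create a directory symbolic link pointing at a folder elsewhere on disk
New-Item -ItemType SymbolicLink -Path C:\links\views -Target C:\dev\myproject\views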
But maybe distill these questions a bit, perhaps a concrete example that fails.
posted by sammyo at 3:35 PM on December 26, 2019 [1 favorite]
This question is really a bunch of questions, as sammyo suggested - distilling the question might help. I will try to answer a few of these as I understand them. TBH you may want to wander over to StackExchange or similar.
1) Have you tried `docker inspect`-ing the containers to see what's in the volumes you're trying to look at? I have no real hands-on experience with Windows container volumes via Docker, but my understanding is they should be handled similarly to Linux volumes.
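For example, something along these lines (container/volume names are placeholders; quoting shown for a Unix-style shell):

# Show each mount's host-side source and in-container destination
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' nginxtest
# For a named volume, show where its data lives on the host
docker volume inspect -f '{{ .Mountpoint }}' nginx-vol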
2) If I understand this you're asking why some of the build tools and such aren't shipped in layers of the final containers? If so, that's not an anti-pattern, it's pretty desirable to end up with a container in production that doesn't have developer tools or intermediate layers in order to (among other things) provide smaller images and prevent people from reverting to earlier versions of the container and using those things maliciously.
3) "I assumed the whole idea of WSL2 was to get rid of this." My understanding is that WSL2 uses an actual Linux kernel and that gives Docker some advantages, but WSL2 wasn't developed by MSFT strictly for that reason. It solves a lot of problems the WSL1 had with slow file system access, and a number of Linux apps not working quite right. I believe Microsoft's goal with WSL2 is to woo developers who really want a *nix environment at their fingertips and Docker-using folks are a subset of that audience, not the entirety of it.
The docker daemon is talking to a kernel to set up containers and manipulate them; presumably the Windows and Linux kernel calls are wildly different. I'm not shocked that Docker Desktop wants you to specify which you're looking for... I can't say whether you can do Windows and Linux together with Docker's distribution of k8s locally.
4) "In any case, I must be missing something. A lot of Docker/K8 blogs talk about get Alpine servers down to 20MB and I'm stuck with 5GB+ images."
What all is in your images? If you're pulling in IIS and SQL Server, for example, I'd expect the size to jump quite a bit. Alpine images often bloat up pretty substantially once people actually pile their software onto the images. To round back to #2, this is one reason to dump a lot of intermediate layers from an image once software is built and the build tools are no longer needed on it.
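As a rough illustration of the multi-stage idea on the Windows side (image tags and names are just examples, and I'm skipping package restore and tests):

# escape=`
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
WORKDIR /src
COPY . .
RUN msbuild MySite.sln /p:Configuration=Release /p:OutDir=c:\out\

# Final image: runtime only, none of the SDK or build layers come along
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY --from=build /out/ /inetpub/wwwroot/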
From sammyo: "As of March or so..." Yes, but Kubernetes expects that Windows and Linux pods will run on different hosts.
As far as demoing to customers when a facsimile of a production environment is just not practical on a laptop... I'd fall back to recorded demos. For developers in-house I'd provide a dev/test Kubernetes environment and registry server where each developer can push their test images and use blessed base images.
posted by jzb at 4:27 PM on December 26, 2019 [1 favorite]
Response by poster: Hey, sorry for the diary post of a question, but I spent this afternoon researching and I learned a lot of things:
- Docker for Windows updates don't really specify whether they pertain to Windows containers or Linux containers, but it is safe to assume Linux containers unless otherwise mentioned.
- Docker documentation regarding Windows containers should be ignored unless it specifically says that Windows container behavior is X; otherwise assume they mean Linux.
- It seems that, as a development setup, an all-Linux container environment works okay with Windows as the development platform.
Back to my original question. On Linux, if you mount a volume into a container over a directory from the image, Docker will take whatever is in that image layer and expose it read-write inside the container as a non-persistent volume, which is the behavior I was expecting and what is outlined in the documentation. If I mount over a non-empty directory that has text1.txt in an image layer, I will see that file on my host directory as if it were just a normal file. As far as I can tell this is due to the union file system used by Linux. I could be wrong about this, but NTFS has weird ACL issues that prevent this simple operation, which is why it appears all Windows containers (or Dockerfiles, more accurately) use an empty directory and copy files over from the source directory. Again, this isn't really documented as far as I can tell, and if you go through the GitHub issues there's a lot of back and forth between MSFT and Docker, each saying it is behaving per spec. From what I can tell, NTFS would need a lot of work to be compliant with Docker, and Windows containers will for the foreseeable future be second-class citizens.
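To spell out the Linux behavior I mean, on a native Linux host (the volume name is made up):

# A named volume mounted over a non-empty image directory gets seeded
# with the image's files the first time it is used
docker run -d --name seedtest \
    --mount source=html-vol,destination=/usr/share/nginx/html \
    nginx:latest

# Those files are then visible from the host under the volume's mountpoint
docker volume inspect -f '{{ .Mountpoint }}' html-vol
# typically /var/lib/docker/volumes/html-vol/_data, containing index.html etc.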
Again, one would assume "Docker for Windows" updates would encompass Windows containers, but really it specifically means running Docker on Windows with a Linux distro unless otherwise specified.
2) If I understand this you're asking why some of the build tools and such aren't shipped in layers of the final containers? If so, that's not an anti-pattern, it's pretty desirable to end up with a container in production that doesn't have developer tools or intermediate layers in order to (among other things) provide smaller images and prevent people from reverting to earlier versions of the container and using those things maliciously.
Yes, I may have stated that wrong; I'm well aware of building images where the build tools and intermediate artifacts aren't copied into the final image.
Kubernetes v1.14 today, Windows Server node support has officially graduated from beta to stable!
Sort of, but technically yes. Again, as far as I have read this afternoon and had time to play with, there are a lot of "gotchas" here, not least that Helm charts don't seem to play well with Windows containers, and there was an MSFT initiative called Draft meant to help that appears to be abandoned. If I can't run docker-compose, volume mounts, or kind (local K8s) with Windows containers the way I can with Linux, this is a proof of concept in my opinion.
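And even once Windows nodes are in the cluster, you have to pin Windows pods to them yourself; a minimal sketch (the label key moved from beta.kubernetes.io/os to kubernetes.io/os depending on the version, and the image is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: iis-test
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019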
To their credit, MSFT is making great strides, but the documentation and terminology are very confusing right now.
Worst of all, the GitHub issues go back and forth. Docker says they can't solve MSFT's file system problems (true), but then there's an open debate as to whether the issue should remain open even though Docker can't solve it directly. Windows containers are in a weird state right now where they work, but not according to the documentation.
As far as demoing to customers when a facsimile of a production environment is just not practical on a laptop... I'd fall back to recorded demos. For developers in-house I'd provide a dev/test Kubernetes environment and registry server where each developer can push their test images and use blessed base images.
That's what I have set up, so thanks for affirming I'm doing that right.
I should mention that I ran into a bunch of hacks that get around various issues, like recompiling kernels or building Windows containers and then pointing the file system at something Linux can understand, all of which sound like horrible, time-consuming interim fixes.
posted by geoff. at 12:22 AM on December 27, 2019
Response by poster: I will go further and reference this GitHub issue that thoroughly explains NTFS behavior. In short, it appears to be a combination of factors, ACL permissions being one of them. Keep in mind that the issue is about Node.js development on Windows using containers, and Node uses symlinks (or a version of them) internally for package management. What I wanted was a simple use case: a folder the IIS worker process picks up, where that folder already has files in it from the image layer and those are exposed as non-persistent files per the docs. Obviously I'm hardly an expert in filesystem behavior, but it appears that junction points/symlinks/whatever behave differently enough internally (as opposed to externally, when calls like COPY are made on the host machine via Docker) that it is easier, or only possible, on NTFS to do what the referenced Docker image does: watch for file changes in an empty folder and attempt to copy them internally (within the OS) during development.
4) "In any case, I must be missing something. A lot of Docker/K8 blogs talk about get Alpine servers down to 20MB and I'm stuck with 5GB+ images."
I completely understand that data is data: if you have a 50 GB database, Docker won't help. But if you have a microservice that only requires 2MB of compiled files to run, putting that on a giant Windows Server image seems unnecessary, unless those 2MB rely on the beast that is IIS for their functionality. Again, I think this is the intention of .NET Core.
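That's also where the 20MB-ish blog numbers come from; the same shape of service on an Alpine-based .NET Core runtime image is tiny by comparison. A sketch, with placeholder paths and DLL name:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine
WORKDIR /app
# Assumes the app was already published in an earlier stage or on the build machine
COPY ./publish .
ENTRYPOINT ["dotnet", "Assets.dll"]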
Looking further, this is just to get the base system up and running for a specific legacy product. To get your custom code in there (and keep in mind this would just be the main web server instance), there's a whole other ball of confusion where they break everything into components in a ReactJS-type manner, then use another build process (Cake) to merge them all into a WebDeploy package that gets added to a Dockerfile that updates just one of the several web services necessary.
FYI, I was asking here instead of Stack Exchange because there you need to specify a concrete problem, whereas here I often get better answers to questions like these, such as "change your process."
posted by geoff. at 7:13 AM on December 27, 2019
4) "In any case, I must be missing something. A lot of Docker/K8 blogs talk about get Alpine servers down to 20MB and I'm stuck with 5GB+ images."
I completely understand that data is data is data, if you have a 50 GB database docker won't help. If you have a microservice that only requires 2MB of compiled files to run, putting that on a giant Windows server would seem unnecessary unless those 2MB rely on a beast of IIS to do their functionality. Again, I think this is the intention of dotcore
Looking further this is to just get the base system up and running for a specific legacy poduct. To get your custom code in there, and keep in mind this would just be the main web server instance, there's a whole other ball of confusion here where they break up all the things into components in a ReactJs type manner then use another build process (cake) to merge all of them together into a webdeploy package that would be added to a dockefile that would update just one of the several web services necessary.
FYI I was asking here instead of StackExchange because there you need to specify a problem set where here I often get better answers to questions like these such as "change your process."
posted by geoff. at 7:13 AM on December 27, 2019
Response by poster: I'm going to keep going in a "lessons learned" vein, in case anyone stumbles on this and is confused like me. Or worse, they use VS and think they're doing Docker and K8s the "right" way, or at least in a way where they could hand off the Dockerfile to anyone. Here is the Dockerfile generated by VS:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Assets/Assets.csproj", "Assets/"]
RUN dotnet restore "Assets/Assets.csproj"
COPY . .
WORKDIR "/src/Assets"
RUN dotnet build "Assets.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Assets.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Assets.dll"]
To anyone who doesn't know Docker and probably didn't read the first comment: you wouldn't know from this that I can compile and run it on .NET Core 3.1 natively on Windows, run unit tests natively on Windows, and, if I wanted to modify the Dockerfile, run the unit tests on a native Linux (Alpine/Ubuntu) VM or bare metal and have the application run fine.
Conveniently this runs really well on Azure, so expect a lot of people who think they know Docker but don't realize what's going on underneath. I think MSFT bought Docker for Windows, but what I did was set up WSL2 in Docker for Windows as Alpine. In VS I set the csproj to "Linux" (again, no specific distro, just the word "Linux"); VS/MSBuild, or whatever they're calling the compilation toolchain now, has a targeting system that looks at the csproj, sees Linux, somehow goes through Docker Desktop for Windows, looks at the specific Linux OS/kernel I specified, and builds a valid image/container from that.
It works so well I feel guilty using it, and it ties you pretty hard not only to the MSFT toolchain but now to their cloud platform, and it will probably break during an upgrade at some point, which is one of the things Docker is specifically designed to avoid.
I'm certainly not blaming MSFT for making a very smart business decision: use Linux's scalability and small footprint to run their expensive cloud services while enabling easy development and debugging. But I sort of feel we're back where we started, tied to a fragile ecosystem. I'm not sure whether this would work if I gave it to someone on OS X running VS Code; my guess is it would, so it sort of solves that problem.
This is less of a rant than it sounds, but as someone who often needs to switch platforms and cloud providers, a lot of what they say versus what they do still amounts to the same vendor lock-in that has apparently proved viable for some 30 years. I still can't write against a standard protocol for blob storage on AWS/Azure/whatever the way I could for SFTP, so in some ways it has gotten worse.
But again, I completely understand that if everything were completely interchangeable, it'd be a race to the bottom on price.
I'm part of the problem too: as part of the POC I'm doing, I'm using Azurite (a local Azure blob storage emulator) to show the client they can emulate Azure locally for devs and not have to give every dev their own expensive Azure storage account. I'm going to go out on a limb and say Azurite does not emulate real Azure performance problems, and it has enough setup differences that local storage (or, if I needed to show remote transfer for some reason, an FTP server, Docker or otherwise) would be just as easy to set up, if not easier. At least junior devs wouldn't be fooling themselves into thinking they're using the real thing and then be shocked that their poorly implemented code isn't thread-safe, or whatever other problems come into play moving from a mock setup to production. But that is just how sales works.
In any case, as of today Docker is made for Linux and Linux file systems. WSL2 + Visual Studio + the Windows preview release + .NET Core 3.1 gives the best of both worlds and avoids the footprint of running multiple VMs. I was expecting all kinds of socket and file system issues but haven't hit any; I'm sure the further you move from a stock Linux setup the more it'll break down. There are some localization issues I've encountered because I went looking for where it breaks, but I don't really need to address those now. If you need to do a POC because a client wants it on Azure and "not on your local machine," this solves that, even if that mindset is out of date and the app runs cross-platform anyway. I don't think I've ever seen a vendor change cloud platforms, and I wouldn't develop apps around that possibility, but I would still modify the K8s YAML files and Dockerfiles to be more realistic, and at least make sure everything builds and runs its tests on the target OS kernel/platform.
Also be aware that if you're new to .NET Core, the changes come fast, and blog posts from even a year ago might be completely out of date. I kept finding things like: .NET Core 2.1 introduced a feature as a core component and the next iteration completely blew it away? That's not how Microsoft platforms typically behave. I still know former clients who have clung to WebForms while vNext/.NET Standard/.NET Core all happened while they slept. I'd rather have the fast-moving platform without the marketing name changes.
Again, weird rant, but I hope all this helps someone who has been straddling the Linux and Windows worlds. Like anything, problems are being solved and new ones are being created.
posted by geoff. at 9:59 AM on December 29, 2019 [1 favorite]
Introducing the Docker Desktop WSL 2 Backend - which basically says no Hyper-V is needed and doesn't address Windows containers, but then at the end says WSL2 will maybe, possibly, serve as a fallback.
And:
WSL2 Release Notes, which is a lot more technical and fixates on obscure bugs.
I feel as if the push is to get everyone onto .NET Core, which isn't practical in my case as I do not have access to the source, but I understand that supporting legacy environments is not productive and forcing companies to make the move is necessary.
This isn't navel-gazing at new toys; I'm just trying to build software in a way that development promotes to production reliably. I feel as if ignoring some of this Microsoft/Docker scramble to develop something reliable across platforms is fine. It just feels like someone else has figured this out, and a lot of it relies on clouds that aren't possible for secure on-premise environments.
posted by geoff. at 12:50 PM on December 26, 2019
This thread is closed to new comments.