Help me dive into the Cloud (Computing)!
January 10, 2017 9:09 AM

I've been tasked with moving one of the programs we rely on to "the cloud". This program is BOTH a server AND contains the clients (that is, our computers do not have the client installed on them; instead, it runs over the network). Is there a cloud solution that would work well for this?

My current idea is to move the server to an IaaS provider, spin up clients as needed, and then remote into the client from our office. Remote work shouldn't be a problem for this (our Internet was recently upgraded, so we should have plenty of bandwidth). The biggest concerns for this idea (in my mind) are 1. Client costs (not sure how much that will balloon things, given we need 5 or 6 clients all running at basically the same time for 8+ hours a day) and 2. Having a way to ensure that the clients are all on the same physical infrastructure as the server, so that speed is maximized (ideally, hosted on the same iron, frankly, but that may be asking too much).

To be clear about the infrastructure of the program, there is NO installed local client. Everything is just a shortcut that points back to an EXE on the server. But the client RUNS locally. And I wish to put both the server AND the clients into the "cloud". Oh, and it runs on Windows, so the "cloud" machines will need to as well.

Anyone have any experience with this or suggestions to offer (or even preferred VPN providers)? The program in question is pretty niche (Mail Manager), and they do not provide any sort of cloud-based equivalent (hence this question :D).

Thanks!
posted by TrueVox to Technology (11 answers total)
 
If I understand correctly, it sounds like you have an application (.EXE), where the actual file itself lives on a file server, but your users run it normally from their desktops. This is not ordinarily a setup that lends itself well to IaaS-style computing, where you have (roughly speaking) a bunch of servers running some applications remotely, and the users connect to them via a web interface (for instance). By forcing your application to run remotely, you're going to add several layers of complexity here. The only way I know of to achieve what you're asking is to use a virtual desktop setup, like Amazon WorkSpaces. In that scenario, Amazon would be running all of your client desktop machines, and your users would "remote" into them: using their physical keyboard, mouse, and screen as virtual keyboards, mice, and screens for the Amazon servers, running Windows and your application.

But this is usually only useful in two scenarios. The first is when your users need portability across, say, different physical sites while connecting to the same "desktop". Think of branches of a bank, or different hospitals, where employees routinely change locations but need to access the same computer. The other scenario is when your users only have "thin clients" instead of real desktop/laptop computers: basically just a keyboard + mouse + screen with network access to the virtual servers.

I think it would be helpful if we understood a bit more about the specific problem you're trying to solve here. Why do you think you need to move this application "to the cloud"? If it's because of one of the virtual desktop scenarios above, something like Amazon WorkSpaces may be a good option.
posted by tybstar at 9:27 AM on January 10, 2017 [1 favorite]


I'm not sure that I understand the necessity behind this requirement:

2. Having a way to ensure that the clients are all on the same physical infrastructure as the server, so that speed is maximized (ideally, hosted on the same iron, frankly, but that may be asking too much).

It sounds like the client is executing on the user's machine (even though the .EXE lives on the server). The way I understand this, the client is currently not on the same physical infrastructure as the server, so the question is: does it really need to be?

What's the driving force behind going "cloud" with this? Is it because users want to be able to access the software from any device? Or is it just that the powers that be don't want to maintain the server anymore?

If it's just the latter, it might be easier to assess what performance is like if the server is moved remote (to the cloud) while the clients basically still function as they currently do (maybe loading the .EXE from a different location on the local network).

If it's the former, then the remote desktop shenanigans might be your best bet.
posted by sparklemotion at 9:42 AM on January 10, 2017


then remote into the client from our office

This is the problematic detail. I think you're describing running a Windows program from a Windows file share over the Internet. This is not a typical SaaS setup. The only safe way to do this is via VPN or some sort of secure tunnel.

I would investigate accessing the program via Remote Desktop instead of executing the program locally on users' computers.
posted by LoveHam at 10:40 AM on January 10, 2017


Best answer: Yeah, if you used something like RemoteApp, you could just have the server sitting there, and it would create mini-clients within it as required - no need for the headache of spinning up client machines at all. Here's Microsoft's checklist on hooking up RemoteApp to be accessible via the Internet.
posted by Static Vagabond at 11:22 AM on January 10, 2017


Response by poster: Wow! Great responses, thank you everyone! From top to bottom:

Tybstar: What you describe is about accurate, and AWS WorkSpaces IS one of the options I stumbled upon just after posting this, so you're on the right track. And you're quite right that this does not normally lend itself to IaaS. The driving force behind this is the same as in all great endeavors: because the boss says so (he's set a three-year "Locally Serverless" goal - we'll see how that goes... ;D).

Sparklemotion: You're RIGHT! It's not currently running on the same hardware, BUT we ARE all connected to one another by gigabit Ethernet, which is speedy enough to be workable. I don't see a way to do this in the cloud unless A. I have a HUGE pipe or B. we put the clients in the cloud too. Hence this question. :) The clients would want to be on the same local infrastructure to minimize latency. What I DON'T want is for the server to live in (for example) New York City and the clients to live in San Francisco. As for the "why", you're right - it's the second one. But that won't work (I don't believe) due to latency between client & server in that case (going from a gig connection in-house to a 30+ meg up/down fiber likely won't feel "good" anymore).

LoveHam: NO, that is what I'm trying to AVOID doing (as you're right - that would suck). To get around that (since that is the default solution to what I want to do), I'm proposing having BOTH the server AND the clients in the cloud (with just thin clients of some sort on-site). As to why it must be executed on the users' computers, that's just a quirk of how they built it - god only knows why.

Static Vagabond: Is that something that can work with ANY Windows program, or does it need hooks coded into it?

Again, thank you all! :)
posted by TrueVox at 12:17 PM on January 10, 2017


But that won't work (I don't believe) due to latency between client & server in that case (going from a gig connection in-house to a 30+ meg up/down fiber likely won't feel "good" anymore).

Forgive me for pushing on this (it's bringing up memories from my previous professional life), but do you know that it "won't feel good" anymore?

The reason that I ask is because, if the client is running locally and is reasonably well written, then in a lot of applications it shouldn't be waiting on data from the server often enough, or for long enough, for users to feel it.

Don't get me wrong, there are definitely poorly written clients out there, and there are situations where the client ends up waiting on disk reads/writes all the time, so the latency is unavoidable. But I wonder if you're sure that this is the case here, or if it's more that running ancient software on newer hardware has sped everything up, and so users/admins just believe that "the faster network" is what makes the tool more usable nowadays.
posted by sparklemotion at 12:36 PM on January 10, 2017


Best answer: Perhaps I am missing something here. Sounds like you are just launching an app off a Windows file server, is that right? Your users are running the client program from shared storage? And the same software also runs, separately, as the server application?

If so, this sounds like a job for Windows Remote Desktop Services (RDS). This used to be called Terminal Server. You set up RDS on a Windows instance, install your app, configure the environment as appropriate, and users connect remotely over RDP. Run the server on another instance, or even on localhost on the RDS server. So we are talking about 1 or 2 instances, easy peasy. If you need SQL Server or something, I would recommend running that on a managed service to save yourself some hassle.

You can provide a full Windows desktop, or just a single application (that's RemoteApp, as suggested above). Here's a blog post about running this sort of setup on Azure, but you can do the same shit on AWS or Google Cloud. You can use RD Gateway for your authentication layer (lots of security considerations here), or if that's too complicated for just a few users, you can just set up a site-to-site VPN from your office to the cloud.
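
Once the tunnel or gateway is up, a dirt-simple sanity check is just timing a TCP connect to the RDP port from the office. Here's a rough Python sketch of that (the hostname is a placeholder, obviously, and a TCP handshake time is not the same thing as how RDP will feel, but it's a cheap first look at the path):

```python
# Rough check that the RDS host is reachable over RDP (TCP 3389) and how long
# the TCP handshake takes. The hostname below is a placeholder -- use your own.
import socket
import time

RDS_HOST = "rds.example.internal"   # placeholder: your RDS instance or RD Gateway
RDS_PORT = 3389                     # default RDP port

def check_rdp(host: str, port: int, attempts: int = 5) -> None:
    for i in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=5):
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"attempt {i + 1}: connected in {elapsed_ms:.1f} ms")
        except OSError as exc:
            print(f"attempt {i + 1}: failed ({exc})")
        time.sleep(1)

if __name__ == "__main__":
    check_rdp(RDS_HOST, RDS_PORT)
```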

If the client application is very heavy and running an RDS server is not cost-effective, then I would agree with tybstar about investigating a managed VDI type of situation if you can. I wouldn't want to incur the management cost of running a bunch of extra Windows desktops if I could avoid it.

Because RDP is pretty efficient, I think latency shouldn't be too much of a problem for this type of application (you should test this with some pilot users, though). However, if this is a mission-critical application, it's a good idea to think hard about availability and connectivity.

Just off the top of my head, some random things to think about:

- What is the cost to your business if this app is not available? If you move this app to the cloud, will you need highly available routers, highly available internet connections, etc.? If you do a site-to-site VPN, can you ensure high availability of the tunnel?
- You need enough bandwidth and a reasonably low-latency connection. You should run this app in a datacenter that is as geographically close to your office as possible.
- Make sure you have good QoS rules. You don't want business to be interrupted because Steve from accounting is downloading the new Call of Duty from Steam and saturating your connection.
- If you lose your instance, or your infrastructure provider has an issue and you lose an AZ or even a whole region, how fast can you redeploy? What is your HA and backup strategy?
- Capex vs opex if people care about that.

Once you've got your connectivity and security stuff figured out, this sounds like it's definitely doable without too much hassle. Since this is the first service of (sounds like) many, I would recommend thinking carefully about which cloud you deploy on to avoid getting boxed in later. It's a non-trivial decision, so make sure you consider the full ecosystem of managed services that are available on each, and what you are going to need in the future.

Regarding your second concern, this is not really a problem anywhere. If you deploy in AWS, for example, instance-to-instance communication within the same AZ and VPC is usually very, very low latency, sub-millisecond. That statement is a bit of an oversimplification because AWS infrastructure is so complicated, and at large scale you have to consider it, but generally it's true. Bandwidth depends on instance type, and you can get up to 10G. Across AZs is also generally pretty low latency (inter-AZ connectivity can and does drop occasionally, though). Connectivity between AWS regions is over the internet, so definitely don't do that. In EC2 you can get a dedicated host if you really, really want to co-locate your clients and server on the same physical hardware, but I don't think this is necessary, and that constraint will impact durability.
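
If you'd rather verify that latency claim than take my word for it, a throwaway echo test between a pair of test instances is enough. Here's a rough Python sketch, nothing official, just run it as "server" on one box and "client" on the other (the port is arbitrary):

```python
# Crude round-trip latency test between two instances. Run "python rtt.py server"
# on one box and "python rtt.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 9999  # arbitrary test port; open it in your security group first

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            print(f"client connected from {addr}")
            while True:
                data = conn.recv(64)
                if not data:
                    break
                conn.sendall(data)  # echo straight back

def client(host: str, rounds: int = 100) -> None:
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        samples = []
        for _ in range(rounds):
            start = time.perf_counter()
            conn.sendall(b"ping")
            conn.recv(64)
            samples.append((time.perf_counter() - start) * 1000)
        samples.sort()
        print(f"median rtt: {samples[len(samples) // 2]:.3f} ms, "
              f"worst: {samples[-1]:.3f} ms")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```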
posted by tracert at 5:31 PM on January 10, 2017


Also, upon further inspection, I agree with sparklemotion. If you can move only the server (and still be supported by the vendor), that's way easier. I'd suggest trying that first, too.

To help you do some estimation, you could examine how much bandwidth the server uses during production; then you'll know how feasible that is. I don't know if you can do this with Windows, but if you can induce some artificial high latency on a client, that would be a handy quick test too.
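
Task Manager or Performance Monitor will show you the NIC throughput on the Windows server, but if you want something you can leave running and log, here's a rough Python sketch (assumes the psutil package is installed, and that this app is the main thing talking on the machine, so treat the numbers as an upper bound):

```python
# Rough per-second network throughput sampler for the server. Assumes the psutil
# package is installed (pip install psutil) and that this app is the main thing
# talking on the machine, so treat the numbers as an upper bound.
import time
import psutil

def sample_throughput(seconds: int = 60) -> None:
    prev = psutil.net_io_counters()
    for _ in range(seconds):
        time.sleep(1)
        cur = psutil.net_io_counters()
        sent_kbps = (cur.bytes_sent - prev.bytes_sent) * 8 / 1000
        recv_kbps = (cur.bytes_recv - prev.bytes_recv) * 8 / 1000
        print(f"out: {sent_kbps:8.1f} kbit/s   in: {recv_kbps:8.1f} kbit/s")
        prev = cur

if __name__ == "__main__":
    sample_throughput()
```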
posted by tracert at 6:11 PM on January 10, 2017 [1 favorite]


Response by poster: Sparklemotion: Don't feel bad for pushing! You've no way to know that the program is... can we just say poorly written? Is that too harsh? ;)

Sadly, no, I'm pretty sure. Unmentioned in all of this is the years of experience I have with the program and trying to optimize it to run well for our uses. But I'm still glad you asked. :)

Tracert: I mean, generally that's what seems to be happening, I think, but it may be more complicated than that. Our vendor is quite insistent that we follow their user install directions, even though all that SEEMS to do is put a shortcut on my desktop pointing to the server file, and install a minimal directory on my C:\ drive (populated entirely with one of 3 things - an empty folder, a folder with PDF manuals, and a third folder with two files that MIGHT be config files, but the names are not clear to me, and they have no extension). That said, assuming it IS what you say, would RDS work for ANY app, or does it have to be specifically written to function with RDS? If it could work for ANY app, then that MIGHT be just the ticket!

General availability and connectivity are considered (we've got fiber to the building with cellular LTE backup), but thanks for mentioning them - goodness knows they're easy to forget. :)

As for your point-by-point:

1. You're right, we do need high availability, but not 100% availability (Amazon's SLA seems reasonable, for example).
2. Bandwidth is a concern, but I THINK we can manage (and geo-availability is a great consideration - thanks for reaffirming that)
3. You're referring to traffic shaping and whatnot on our local router, right? It should be sorted when it comes up - I made the higher-ups buy a pfSense firewall/router, so we should be covered. :)
4. I'm new at this, what would losing an AZ be? But yes, you're right more generally. Are you suggesting having a local backup, or having several cloud providers (a "main" one and an "emergency" one)?
5. Not my department specifically, but we have people who would care, so I'll have to bring that up.

Hopefully not too many - we only have three things beyond Office and whatnot that we use, and the others already have drop-in cloud replacements to some degree (basically, file server, Active Directory, and QuickBooks Enterprise). My hope is that this will be the biggest bugbear to deal with. That said, your point about choosing wisely is well taken - do you have any direct suggestions of providers who would meet that ideal nicely (assuming they're active in the northeast United States, datacenter-wise)?

And to your last point, good! I had hoped it would be the case, but I didn't want to take any more for granted than I had to. :)

Thank you all again!
posted by TrueVox at 9:18 AM on January 11, 2017


Best answer: About RDS. RDS is just multi-user Windows, and you connect with Remote Desktop. So this means applications do not need to be specifically written for, or hook into, RDS in any way. However, applications that run in a multi-user environment need to be written well. That is, they should behave correctly and not require full administrator rights to run properly. User settings should be stored in each user's application settings folder (it has been a long, long time since I've had to deal with Windows, so my terminology might not be so accurate here). With enterprise software written by small companies, this is often not the case.

So you will need to test this out. From your description, it sounds like those config files getting written to the root directory may be an issue. When users run the program, do they need to write to these files? If so, even if you give them write permission in the first place, users will trample on each other and shit will get fucked up. But if those files just contain some config settings and are read-only, this is fine. A similar problem may exist with privileged registry keys needing to be set on every run. There is a lot of garbage software in the world.

To debug this sort of thing, you can examine the process behaviour using Process Monitor, which'll show you filesystem activity and registry access attempts. Launch it and see what this program does.
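
Process Monitor is the real tool here and will catch the registry stuff too, but as a rough cross-check you can also dump which files the running program has open and flag anything outside the user's own profile, which is where the trampling tends to happen. A quick Python sketch along those lines (assumes psutil is installed; the process name is just a placeholder for whatever the client .EXE is actually called):

```python
# Rough cross-check alongside Process Monitor: list the files a running process
# has open and flag anything outside the current user's profile directory.
# Assumes psutil is installed; the process name below is a placeholder.
# Note: this only shows open files, not registry access -- Process Monitor does both.
import os
import psutil

PROCESS_NAME = "mailmanager.exe"   # placeholder: whatever the client .exe is called

def check_open_files(name: str) -> None:
    profile = os.path.expanduser("~").lower()
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() != name:
            continue
        print(f"pid {proc.pid}:")
        try:
            for f in proc.open_files():
                flag = "" if f.path.lower().startswith(profile) else "   <-- outside user profile"
                print(f"  {f.path}{flag}")
        except psutil.AccessDenied:
            print("  (access denied -- try running elevated)")

if __name__ == "__main__":
    check_open_files(PROCESS_NAME)
```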

About resiliency in the cloud. I will go off on a brief tangent now.

AWS and Google Cloud are divided into Regions, which are geographically separated facilities, and Availability Zones (Google just calls these Zones), which are separate failure domains within a Region. So, for example, you can deploy an application into the AWS US-EAST-1 region, which is in Northern Virginia. Within that region, you may deploy in one AZ, or several, depending on your needs. Each AZ is an independent failure domain: they each have their own power, are in different buildings, have separate connectivity to the internet, etc. AZs are connected to each other with high-speed links, but these can go down, so you need to consider this. Azure doesn't have the concept of a zone, just regions; you deploy to multiple regions to get higher resiliency.
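
If you want to poke at this yourself, listing the AZs in a region is a one-liner against the EC2 API. A quick Python sketch (assumes boto3 is installed and AWS credentials are already configured; us-east-1 is just the example region from above):

```python
# Quick look at the availability zones in one AWS region.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # Northern Virginia, as an example
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```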

There can be events which take out just one instance, or all your instances, or an entire AZ, or several AZs, or just the links between AZs, or an entire region, or multiple regions, or the entire service globally and the whole internet is fucked, or something else I haven't thought of. How you deal with these scenarios and how much redundancy you need is a function of how much the business wants to spend on disaster recovery, including management overhead and whatever. Build the solution which fits your business.

So I can't really tell you the answer here about what's appropriate, or whether you should keep that local backup around. From the things you have said, it sounds like you probably just need something pretty low-key. Multi-cloud has a lot of overhead because you have to learn two systems. I would just try to get the best uptime I reasonably could on one provider, in a single region, and be ready to redeploy my application if things get fucked. And then if you need additional redundancy, keep going.

About choosing which cloud to deploy on. To be honest I don't feel comfortable making a recommendation here. Because I don't know your business I don't know what to say. I think maybe you are a pretty small Microsoft shop, and maybe Azure is a good bet for you because they are going to have a lot of managed services you can use. But what do I know? I am just some asshole from the internet. This decision requires a lot of context.

Finally, I think perhaps you might be theorycrafting a little too much here. Just sign up for some stuff, click around, and actually do it. Once you have set up a test environment, done some testing on each cloud, and fucked around with the solutions suggested here, I think you will have a lot more context and things will be much clearer. It has been my experience that the best way to explore new technology is to just commit some time and jump in.
posted by tracert at 8:13 AM on January 13, 2017 [1 favorite]


I don't know why I said fuck so many times I guess it's because it's Friday.
posted by tracert at 8:17 AM on January 13, 2017 [1 favorite]

