• I usually try not to compare PaaS/IaaS providers directly, because they serve different audiences and all have their strengths and weaknesses. That said, Azure's love of ceremonious ordeals and enigmatic rituals, multiplied by poor UX and a general lack of sensible design, is sometimes just infuriating. The frustration intensifies as I remind myself just how easily some of these goals are accomplished in AWS (and, presumably, Google Cloud as well? I don't yet have sufficient expertise in it to draw informed conclusions).

    Allow me to illustrate.

    Today we have a (seemingly) simple task at hand. We’d like to offload TLS termination for our app to the cloud provider’s load balancer, and go about our lives. Will it work? Let’s find out!

    Easy enough, right?

    This is how the two platforms stack up in helping you protect your HTTP services with TLS, here in the year 2018:

    Step 1: Get a certificate.
    AWS: You may request and issue up to 100 SSL certificates for free for use with AWS services, via the web console. You may then reference them with Terraform / boto / aws cli. Certs are signed by AWS's own CA.
    Azure: There's no free cert option. Azure has partnered with GoDaddy to bring encryption to the masses (at a notable discount of 0%), because clearly running a CA for their customers is just too fucking much to ask of the Redmond giant.

    Step 2: Check where you can use it.
    AWS: Such a cert is usable anywhere in AWS, including the Application Load Balancer (ALB) - anywhere except vanilla EC2 instances.
    Azure: The cert you can buy in Azure is only deployable on Azure App Services, i.e. Web Apps - the Azure PaaS product. The Compute suite of products, including load balancers of any kind, is not covered. You do have the ability to export the cert for use with other services (after all, you paid for it), but the process is complicated and requires PowerShell. If you're on a Mac or Linux, sucks to be you (j/k). Also, as part of the process you have to create a Key Vault, for no good reason whatsoever. This does, however, mean that you can use it with VM instances, which is an advantage over EC2.

    Step 3: Make it available to the platform.
    AWS: The cert, once provisioned, is immediately available for use with AWS services. Once again, use the console / Terraform / boto / aws cli / whatever else tickles your fancy.
    Azure: So you either bought a cert from Azure, or brought your own. Either way, the cert, once provisioned, still isn't ready for use. You still need to upload it to Azure. Yes, I know you bought it from Azure. I know. That's just how it is, ok?

    Step 4: Store and reference it.
    AWS: To use the cert with an ELB, you pass its ARN into your scripts. Or, if automation ain't your thing (it should be), you may pick the cert from a list when setting up a load balancer or another service.
    Azure: You'd expect Azure to have something like Certificate Manager, yes? Well, you're in luck - Key Vault to the rescue! Remember, you made one earlier? Yes, you do have to create an actual "key vault" resource via either the Portal or the CLI to store your certs. It's not just a platform service offered by Azure. It's an object you have to create and maintain, because you love micromanaging this shit. (Worry not, one vault can hold many keys - thanks, Obama Nadella!)

    Step 5: Terminate TLS on the load balancer.
    AWS: You're done. In fact, you were done after step 2 - that was the entirety of the setup required to terminate SSL on an AWS load balancer. Feel free to move on to more meaningful work.
    Azure: We still want to use the cert with a load balancer^H^H^H^H^H App Gateway[1], right? Guess what. That's like the only fucking Azure service that CANNOT get your cert from the goddamned Key Vault.

    Step 6: Wrap up.
    AWS: You're done, remember?
    Azure: Yes, you guessed correctly. You need to upload the cert to each App Gateway separately. Yes, we love manual work SO MUCH that we'll happily do this in Azure and thank them afterwards, even though we pay them good money to have a platform that doesn't suck balls.
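
    For the CLI-inclined, the whole saga compresses to roughly the following. Treat it as a sketch rather than gospel: every resource name, ARN, and password is made up, and the az commands assume a reasonably recent Azure CLI.

    ```sh
    # AWS: request a free ACM cert and, once DNS validation completes,
    # attach it to an ALB listener. That is the entire story.
    aws acm request-certificate --domain-name example.com --validation-method DNS
    aws elbv2 create-listener \
        --load-balancer-arn "$ALB_ARN" --protocol HTTPS --port 443 \
        --certificates CertificateArn="$CERT_ARN" \
        --default-actions Type=forward,TargetGroupArn="$TG_ARN"

    # Azure: create a Key Vault (a resource you now own and maintain),
    # import your PFX into it...
    az keyvault create --resource-group my-rg --name my-vault --location westeurope
    az keyvault certificate import --vault-name my-vault --name my-cert \
        --file my-cert.pfx --password "$PFX_PASSWORD"

    # ...and then upload the very same PFX to each App Gateway anyway,
    # because the gateway will not read it from the vault.
    az network application-gateway ssl-cert create \
        --resource-group my-rg --gateway-name my-appgw --name my-cert \
        --cert-file my-cert.pfx --cert-password "$PFX_PASSWORD"
    ```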

    Any questions?

    Do you think I’m doing it wrong on Azure? Please share your perspective.

    P.S. I haven't touched on end-to-end SSL here, just for the sake of wrapping up this rant; otherwise we'll be here all night. Suffice it to say, it's super easy in AWS and a huge fucking pain in the ass in Azure. Are you surprised?

    [1] Few conversations can get as confusing as those where you keep saying "load balancer", but really mean "app gateway".

  • Wed, Jan 3, 2018

    “Ohai Azure Portal, how I’ve missed you!” – said no one ever.

  • Fri, Oct 13, 2017

    Azure Functions can look at blob storage and react to things.

    But actually not really all that well.

    Excerpt from the Documentation:

    When you’re using a blob trigger on a Consumption plan, there can be up to a 10-minute delay in processing new blobs after a function app has gone idle. After the function app is running, blobs are processed immediately. To avoid this initial delay, consider one of the following options:

    Use an App Service plan with Always On enabled.

    Use another mechanism to trigger the blob processing, such as a queue message that contains the blob name. For an example, see Queue trigger with blob input binding.

    Let’s deconstruct this a bit.

    The important parts are the "Consumption plan" vs the "App Service plan", and how those relate to the Always On mode.

    See, Azure Functions have two methods of operation ("plans"). The "Consumption" plan executes the function only when triggered, so if nothing is calling it, the function goes to sleep. A function runs ephemerally, and you need not think about its underlying resources whatsoever, aside from paying per invocation.

    The App Service plan, on the other hand, launches a VM that will host your functions, and that VM remains running. You don’t need to directly manage it (nor can you), but you are being charged for all the minutes it’s humming away. Also, unlike the Consumption plan, you need to manage autoscaling yourself.

    Only on the App Service plan are you given the option to enable "Always On", which prevents your function apps from going to sleep.
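
    In CLI terms, the two plans look roughly like this (hypothetical names; the flags reflect the current az CLI, so verify against your version):

    ```sh
    # Consumption plan: no plan resource to manage, billed per execution,
    # and the app goes idle when nothing triggers it.
    az functionapp create --resource-group my-rg --name my-func \
        --storage-account mystorageacct --consumption-plan-location westeurope

    # App Service plan: a dedicated, always-billed plan, but Always On is available.
    az appservice plan create --resource-group my-rg --name my-plan --sku S1
    az functionapp create --resource-group my-rg --name my-func \
        --storage-account mystorageacct --plan my-plan
    az functionapp config set --resource-group my-rg --name my-func --always-on true
    ```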

    So, in contrast to the probably familiar pattern of AWS Lambda being triggered by a change in an S3 bucket, Azure Blob storage doesn't immediately trigger your function on a change unless the function app is already awake. Otherwise, you're waiting for the scheduled wake-up window (feel free to correct me on Twitter if I'm misunderstanding something). I personally find this behaviour super confusing, and inferior to what the rest of the cloud has come to expect of "serverless" patterns.
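
    For completeness, here is a minimal sketch of the queue-message workaround the docs suggest: whatever writes the blob also drops its name onto a storage queue, the queue message triggers the function immediately, and the blob itself arrives through an input binding. The queue, container, and binding names below are made up.

    ```json
    {
      "bindings": [
        {
          "name": "blobName",
          "type": "queueTrigger",
          "direction": "in",
          "queueName": "incoming-blobs",
          "connection": "AzureWebJobsStorage"
        },
        {
          "name": "inputBlob",
          "type": "blob",
          "direction": "in",
          "path": "uploads/{queueTrigger}",
          "connection": "AzureWebJobsStorage"
        }
      ]
    }
    ```

    The {queueTrigger} token expands to the text of the queue message, so the message only needs to carry the blob's name.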

  • Sun, Oct 1, 2017

    Good morning. Today we will take the terms “domains”, “fault”, and “update”, and make it sound more sophisticateder because competitive advantage.
    - Azure marketing people, probably

    I mean, it's good that they have thought of this. It's even on the exam. But really, as a user of Azure, I don't need to care about how they power their racks and in what order they restart them. I care about the stability of my VMs, but it's fine for the mechanics of fault tolerance to remain a black box. For the most part, it would suffice for me to know that if I launch a group of 3 machines, I'll have almost 3 machines running most of the time. I don't have any control over this anyway, so those "domains" are trivia and implementation details.

    That aside, Microsoft’s general aversion to visual presentation of data rears its ugly head here once again. They could have designed the UX around this as a nice grid, with current status of each slot in the fault/update domain, etc. Could’ve even put this next to each VM. But no. Everything must look like a spreadsheet.

    The important takeaway from the entire feature: for the best balance of availability and cost, try to horizontally scale your VMs in multiples of 5 (N % 5 == 0), because 5 is how many update domains exist by default. With N < 5, you're not utilizing the full fault-tolerance potential; with 5 < N < 10, you're loading some update domains more heavily than others.
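
    Those counts surface when you create an availability set. A quick sketch with hypothetical names (3 fault domains and 5 update domains are the usual defaults):

    ```sh
    # 3 fault domains cover rack/power failures; 5 update domains cover
    # planned maintenance reboots.
    az vm availability-set create --resource-group my-rg --name my-avset \
        --platform-fault-domain-count 3 --platform-update-domain-count 5
    ```

    Azure assigns the set's VMs to update domains round-robin, which is why multiples of 5 come out even.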
