Archives: September 16, 2022

Configure HTTP checks in vROPS

This week I am playing around with vROPS. I was trying to set up an HTTP check against one of the APIs we host internally, which took me a bit longer than I would have liked. In vROPS 8, VMware switched to the Telegraf agent for agent-based monitoring, and it took me a while to figure out that HTTP checks are done through that agent. Let’s install the agent on one of our servers, so we can configure HTTP checks in vROPS!

Cloud proxy

Before we can start deploying the agent, we need to deploy a cloud proxy. This appliance handles all communication to the agents. Typically, you only need one cloud proxy per geographic region, but this can vary based on your requirements.

To deploy a cloud proxy, go to Data Sources and click Cloud Proxies.

Cloud proxies

On the next page, click the New button at the top. You will be given a link to download the cloud proxy OVA, as well as a one-time key. You need to enter this key when deploying the OVA; the cloud proxy uses it to authenticate to vROPS.

The OVA deployment is pretty straightforward. Make sure you enter the one-time key correctly, give the proxy a friendly name so you can recognize it in vROPS, and enter the correct network details for your environment.

After the deployment is done, it can take a few minutes before the proxy shows up in vROPS. Once it does, you should see something like this.

Cloud proxy overview

Telegraf agent

The Telegraf agent is an “open source, plugin-driven agent for collecting and reporting metrics”. It’s a small agent that you can push out to your servers using vROPS or the scripts that VMware provides.

To start the installation, go to Environment and click Applications. On the next page, click on Manage Telegraf agents.

Manage Telegraf agents

In the pane on the right, select the VM(s) to which you want the Telegraf agent to be deployed. Click on the three dots at the top and select Install.

Install telegraf agent

In the install wizard, you can choose between common credentials for all VMs or specific credentials per VM; in my case, I’m using the same credentials everywhere. On the next page, enter the credentials. If you’re deploying to domain-joined servers, make sure the account you provide has local administrator privileges.

Next, you’ll be taken to a summary page where you can review all the servers you’re deploying to. To start the installation, click Install agent. vROPS will now push the agent to the servers you specified. Wait a few minutes until the status looks like my example.

Installation successful

After the installation is complete, it will take a couple more minutes until the collection status is shown as green and you can start configuring the agent.

Configuring HTTP check

Once the collection status is green, you will see an arrow in front of the VM name. Expand it, click the three dots in front of HTTP check, and click Add.

add HTTP check

A new window will pop up on the right-hand side. Give your check a meaningful name; this name will show up everywhere in vROPS. Enter the details of your check and click Save when you’re ready.
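Under the hood, vROPS writes the check into the agent’s Telegraf configuration as an http_response input plugin. As a rough sketch of what the generated config could look like (the URL, timeout, and match string below are illustrative placeholders, not values from my environment):

```toml
# Illustrative sketch of a Telegraf http_response input for an HTTP check.
# All values here are placeholders standing in for your own check details.
[[inputs.http_response]]
  urls = ["https://api.lab.local/health"]      # endpoint to probe
  method = "GET"                               # HTTP method for the check
  response_timeout = "5s"                      # fail if no response in time
  follow_redirects = true                      # whether to chase 3xx redirects
  response_string_match = "ok"                 # optional body content match
```

Knowing this is plain Telegraf config also makes it easier to reason about what the check can and can’t do: anything the http_response plugin supports is fair game.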

HTTP Check

Wait a few minutes for vROPS to configure the check and for a couple of collection cycles to pass. Once the check is healthy, you’ll see the icon change.

HTTP Check running

Now you can use the global search, or go to the VM object, to check on your HTTP check.


Automated template builds with Packer

Over the past few months, I’ve worked a lot with Packer in my day job. It’s been quite a journey moving all the server templates used by my clients over to Packer. I’ve also been rebuilding my lab for a few weeks now, and the Packer templates and associated Azure DevOps pipeline were among the first things I set up.

Setting up Azure Devops Agent

I’ve been getting familiar with Azure DevOps over the past few months, so using it in my lab was the fastest way to get going again. Over time, I will probably move this over to another product to broaden my horizons.

After you’ve set up your ADO tenant and project – which is out of the scope of this blog post – it’s time to set up our ADO agent.

To set up the agent, go to your project settings and click Agent pools. Next, click the pool you want to add the agent to, and at the top right click New agent. Follow the steps as outlined by the instructions and set up the agent. Microsoft has excellent documentation on this; you can find it here.

Once the agent is set up, it’s time to set up Packer. Go to https://www.packer.io/downloads and download the Packer build for your OS. I’m using a Windows agent, so I went ahead and downloaded the AMD64 build of Packer for Windows and put the executable in a c:\packer folder on my disk. Next, we need to make sure Packer is able to run. To do this, we will add the folder we just created to the %PATH% variable.

Open your Advanced system settings, go to Environment variables, and edit the PATH system variable. Next, click New and add the folder where you put the Packer executable. In my case, it’s c:\packer.
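If you prefer to script this instead of clicking through the GUI, the same change can be made with a few lines of PowerShell. A sketch, assuming Packer lives in c:\packer and you’re in an elevated session:

```powershell
# Append c:\packer to the machine-wide PATH if it isn't there already.
# Run from an elevated PowerShell session; new sessions pick up the change.
$packerDir   = 'c:\packer'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if (($machinePath -split ';') -notcontains $packerDir) {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$packerDir", 'Machine')
}
```

This persists the change in the registry the same way the GUI does, so the agent service will see it after a restart.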

Editing the path variable

Setting up the repo

In your ADO project, hit the + sign at the top left and add a new repository. Name it to your liking and click Create.

I’ve published my Packer repository to GitHub, so you can download the files and upload them to your ADO repository or git tool of choice.

Setting up the pipeline

This is where we finally bring everything together. By setting up the pipeline to trigger on new commits, your Packer templates will be rebuilt every time a commit is pushed to the master branch.

The pipeline YAML file is also uploaded to my GitHub repository. To set it up, open Azure DevOps and go to Pipelines – Pipelines. Next, click the New pipeline button and select Azure Repos Git (if you’ve followed along, that is).

Select the repository you created and hit Next. If you’ve uploaded the YAML file that I posted, you can select Existing Azure Pipelines YAML file; if not, select Starter pipeline.

In the end, your pipeline should look like this:

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - VMware/Windows/*

resources:
- repo: self

pool:
  name: Default

stages:

- stage: RunPacker
  jobs:
  - job: CreateImage
    timeoutInMinutes: 0
    steps:
    - task: PackerTool@0
      displayName: 'Use Packer Latest'

    - pwsh: |
        packer init .
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      displayName: "Packer init"
      name: "PackerInit"

    - pwsh: |
        packer validate .
      displayName: "Packer validate"
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      name: "PackerValidate"
   
    - pwsh: |
        packer build .
      displayName: "Packer build"
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      name: "PackerBuild"

A couple of things to note: the pipeline uses the PackerTool task to download the latest version of Packer to the agent. You can add it to Azure DevOps from the marketplace. If you add templates for other operating systems in the future, make sure you add them to the paths include section at the top.

Packer templates

My Packer templates are all built from the same windows.pkr.hcl file. In there, I’ve defined three sources: one for Server 2019, one for Server 2022, and one for Server 2022 Core. The three operating systems are defined as separate sources, but the actual config of those sources is largely the same.
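As a rough sketch of that layout (the source names and builder type below are illustrative, not copied from my repo):

```hcl
# Illustrative layout: three near-identical sources feeding one shared build block.
source "vsphere-iso" "server2019"     { /* ISO path, autounattend folder, ... */ }
source "vsphere-iso" "server2022"     { /* ... */ }
source "vsphere-iso" "server2022core" { /* ... */ }

build {
  # All three operating systems run through the same provisioning steps.
  sources = [
    "source.vsphere-iso.server2019",
    "source.vsphere-iso.server2022",
    "source.vsphere-iso.server2022core",
  ]
}
```

The per-OS differences (ISO, image edition, unattend file) live in the source blocks, while everything that is shared lives in the build block.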

This structure made it easier for me to support multiple operating systems that share the same config. For every OS, there’s a separate subfolder that contains the autounattend.xml file. I just found out you can use Packer variables in there as well, which will allow me to consolidate everything into one autounattend file.
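One way to do that consolidation is to render a single shared template per source with Packer’s templatefile() function. A hedged sketch, where the template file name and the os_image_name variable are my own inventions standing in for whatever actually differs per edition:

```hcl
source "vsphere-iso" "server2022" {
  # Render one shared autounattend template for this OS. "os_image_name" is a
  # made-up variable standing in for the Windows image name that differs
  # between editions in the unattend file.
  floppy_content = {
    "autounattend.xml" = templatefile("${path.root}/autounattend.pkrtpl.hcl", {
      os_image_name = "Windows Server 2022 SERVERSTANDARD"
    })
  }
}
```

Each source then passes its own values into the same template, and the per-OS subfolders can go away.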

PowerShell scripts

To be able to run the PowerShell scripts, you will need to set up a webserver that’s reachable from the machine being built. When I first got started, I looked at a lot of examples. While trying to wrap my head around the concepts of Packer, I didn’t want to spend a lot of time learning an additional config management tool.
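The idea is simply to have a provisioner pull each script from that webserver and run it inside the guest. A minimal sketch using Packer’s powershell provisioner, where the hostname and script name are hypothetical:

```hcl
provisioner "powershell" {
  # "web.lab.local" is a hypothetical internal webserver hosting the scripts;
  # the guest downloads each script over HTTP and then executes it.
  inline = [
    "Invoke-WebRequest -Uri 'http://web.lab.local/scripts/Install-App.ps1' -OutFile 'C:\\Windows\\Temp\\Install-App.ps1'",
    "& 'C:\\Windows\\Temp\\Install-App.ps1'",
  ]
}
```

Adding an application to the template then comes down to dropping another script on the webserver and referencing it in the provisioner, with no extra tooling on the build machine.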

I noticed in Mark Brookfield’s examples that he had a simple structure for getting applications installed through PowerShell. I liked the simplicity of it and decided to follow his example. Check out his blog and Packer repo on GitHub. They helped me a lot and I’m sure they’ll be a great resource for you!