Becoming a VMUG leader

As many of you already know, last year Erik Schils decided to step down as a VMUG leader after 15 years. When I saw the announcement, I immediately thought to myself that I don’t want to see this community disappear. I started talking to other folks in the community, and we quickly had a bunch of enthusiastic people ready to step up and keep building this community.

I won’t be flying solo: my good friends Jens Herremans (blog / Twitter) and Kristof Asaert (blog / Twitter) will be joining me as VMUG leaders. We also have Stijn Depril (LinkedIn), Maarten Caus (blog / Twitter) and Harold Preyers (blog / Twitter) joining in a supporting role.

Why I want to become a VMUG leader

Ever since I discovered the vExpert program and VMUG, I’ve wanted to be a part of those communities and help build them out into something more. I’ve been doing this as a vExpert since 2016. That’s also the year I did my first VMUG session. I was instantly hooked!

So coming back to the title of this paragraph: why do I want to become a VMUG leader? I want to help build the VMUG in Belgium into something greater than it is today. There is enormous potential for growth; a lot of people are only now discovering the VMUG community.

VMUG is all about users and I want to bring it back to the users. Not only the veterans who have been coming to VMUG for 5, 10, or 15 years, but also the people just starting out or attending their first VMUG. I first started doing public speaking at the VMUGBE in 2016. It was a great experience for a first-time speaker, and I want more people to take that first step into public speaking. VMUG is the ideal platform for it.

Another reason for wanting to become a leader is that it gets me way out of my comfort zone. I’m usually one of those people in the back who just sit there quietly, and I’ve been wanting to do something about that for a while. By becoming a VMUG leader, I can work on that for myself while also building out the VMUG community. That’s a win-win if you ask me! 😀

What is VMUG / vBeers?

Officially launched in August of 2010, the VMware User Group (VMUG) is an independent, global, customer-led organization, created to maximize members’ use of VMware and partner solutions through knowledge sharing, training, collaboration, and events.

In Belgium, VMUG takes the shape of an annual event hosted in Mechelen, providing a space for partners, vendors and customers to meet, network and attend sessions hosted by industry and community experts.

On the other hand, we have the concept of “vBeers”, a community-based initiative that focuses on connecting like-minded people with a passion for technology and virtualisation. This is a monthly event (every 3rd Thursday of the month) with 1 or 2 short sessions. These sessions can be technology-related, but that’s not required. The goal of vBeers is to provide a low-barrier platform for inexperienced public speakers to try out public speaking among fellow vGeeks, but that doesn’t mean we never invite experienced speakers. 🙂

If you are interested in one of those, be sure to head over to the vCommunityBelux slack workspace for updates on events and meet the other vCommunity folks.

vCommunityBelux Slack workspace

What are our goals for 2022?


Building on the foundations we have today, we want to fuse the VMUG and vBeers concepts into one: an event like VMUG with the atmosphere and energy of a vBeers. We want to return to the basics of VMUG, an event by users, for users, focused on the practical value you can get out of it. As leaders, we are only a small part of how the VMUG concept will grow in Belgium.
We also need YOUR input on how we can bring VMUG to the next level. You can help us by filling in a short survey. It will only take a few minutes and provides essential information on what we can do to make VMUG the perfect format for you.


We are currently working hard on getting things rolling and hope to host a physical/virtual VMUG later this year, in June!


We’re also planning an online pre-VMUG session. This will be a 2-hour event with lots of community speakers presenting shorter sessions.


For the vBeers, I’ve already mentioned that we host these every 3rd Thursday of the month. We’re also thinking about organizing a bigger vBeers two or three times per year, where we would invite a partner to dive deeper into a specific area.

Stay tuned, and we’re looking forward to seeing you soon at our next vBeers or VMUG!

Configure HTTP checks in vROPS

This week I am playing around with vROPS. I was trying to set up an HTTP check against one of the APIs we host internally, which took me a bit longer than I would have liked. In vROPS 8, VMware switched to the telegraf agent for agent-based monitoring, and it took me a while to figure out that HTTP checks are done through that agent. Let’s install the agent on one of our servers, so we can configure HTTP checks in vROPS!

Cloud proxy

Before we can start deploying the agent, we need to deploy a cloud proxy. This cloud proxy is an appliance that handles all communication to the agents. Typically you only need one cloud proxy per geographical location, but this can vary based on your requirements.

To deploy a cloud proxy, go to data sources and click on Cloud Proxies.

Cloud proxies

On the next page, click the New button at the top. You will be given a link to download the cloud proxy OVA, as well as a one-time key. You need to enter this key when deploying the OVA; it is used by the cloud proxy to authenticate to vROPS.

The OVA deployment is pretty straightforward. Make sure you enter the one-time key correctly, give the proxy a friendly name so you can recognize it in vROPS, and enter the correct network details for your environment.

After the deployment is done, it can take a few minutes before the proxy shows up in vROPS. Once it does, you should see something like this.

Cloud proxy overview

Telegraf agent

The telegraf agent is an “open source, plugin-driven agent for collecting and reporting metrics”. It’s a small agent that you can push out to your servers using vROPS or the scripts that VMware provides.

To start the installation, go to Environment and click Applications. On the next page, click on Manage Telegraf agents.

Manage Telegraf agents

In the pane on the right, select the VM(s) to which you want the Telegraf agent to be deployed. Click on the three dots at the top and select Install.

Install telegraf agent

In the install wizard, you can choose between using common credentials for every VM you’re deploying to, or specific credentials per VM. In my case, I’m using the same credentials for all VMs. On the next page, enter the credentials. If you’re deploying to domain-joined servers, make sure the account you provide has local administrator privileges.

Next, you’ll be taken to a summary page to review all the servers you’re deploying to. To start the installation click on Install agent. vROPS will now go out and push the agent to the servers you specified. Wait a few minutes until the status is shown like in my example.

Installation successful

After the installation is complete, it will take a couple more minutes until the collection status is shown as green and you can start configuring the agent.

Configuring HTTP check

Once the collection status is green, you will see an arrow in front of the VM name. Unfold the menu, click the three dots in front of HTTP check, and click Add.

add HTTP check

A new window will pop up on the right-hand side. Give your check a meaningful name; this name will be shown everywhere in vROPS. Enter the details of your check and click Save when you’re ready.

HTTP Check

Wait a few minutes for vROPS to configure the check and for a couple of polls to pass. Once the check is healthy, you’ll see the icon change.

HTTP Check running

Now, you can just use the global search, or go to the VM object to check on your HTTP check.
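As a side note, the check you configure here is backed by telegraf’s http_response input plugin. You never edit this config yourself in vROPS, but for the curious, a hand-written check of the same kind might look roughly like this (the URL and timeout below are placeholders, not something vROPS generates verbatim):

```toml
[[inputs.http_response]]
  # endpoint(s) to probe; replace with your internal API
  urls = ["https://api.internal.example/health"]
  method = "GET"
  response_timeout = "5s"
  follow_redirects = true
```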

Automated template builds with Packer

Over the past few months, I’ve worked a lot with Packer in my day job. It’s been quite a journey moving all the server templates used by my clients over to Packer. I’ve also been rebuilding my lab for a few weeks now, the Packer templates and associated Azure Devops pipeline was one of the first things I set up.

Setting up Azure Devops Agent

I’ve been getting familiar with Azure Devops over the past few months. So using that in my lab was going to be the fastest way to get going again. Over time, I will probably move this over to another product to broaden my horizon.

After you’ve set up your ADO tenant and project – which is out of the scope of this blog post – it’s time to set up our ADO agent.

To set up the agent, go to your project settings, Agent pools. Next, click on the pool you want to add the agent to, and on the top right click New agent. Follow the steps as outlined by the instructions and set up the agent. Microsoft actually has excellent documentation on this. You can find the documentation here.

Once the agent is set up, it’s time to set up Packer. Go to the Packer downloads page and download the Packer version for your OS. I’m using a Windows agent, so I went ahead and downloaded the AMD64 build of Packer for Windows. I put the executable in a c:\packer folder on my disk. Next, we need to make sure Packer is able to run. To do this, we will add the folder we just created to the %PATH% variable.

Open up your Advanced system settings, go to Environment variables and edit the PATH system variable. Click New to add a new entry and add the folder where you put the Packer exe; in my case, that’s c:\packer.

Editing the path variable

Setting up the repo

In your ADO project, hit the + sign at the top left and add a new repository. Name the repository to your liking and click on Create.

I’ve published my packer repository to Github, so you can download the files and upload them to your ADO repository or git tool of choice.

Setting up the pipeline

This is where we finally bring everything together. By setting up the pipeline to trigger on new commits, your packer templates will be rebuilt every time a new commit is pushed to the master branch.

The pipeline YAML file is also uploaded to my GitHub repository. To set it up, open Azure DevOps and go to Pipelines – Pipelines. Next, click the New pipeline button and select Azure Repos Git (if you’ve followed along, that is).

Select the repository you created and hit Next. If you’ve uploaded the YAML file that I posted, you can hit Existing Azure Pipelines YAML file. If not, hit Starter pipeline.

In the end, your pipeline should look like this:

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - VMware/Windows/*

resources:
- repo: self

pool:
  name: Default

stages:
- stage: RunPacker
  jobs:
  - job: CreateImage
    timeoutInMinutes: 0
    steps:
    - task: PackerTool@0
      displayName: 'Use Packer Latest'

    - pwsh: |
        packer init .
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      displayName: "Packer init"
      name: "PackerInit"

    - pwsh: |
        packer validate .
      displayName: "Packer validate"
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      name: "PackerValidate"

    - pwsh: |
        packer build .
      displayName: "Packer build"
      workingDirectory: '$(build.sourcesDirectory)/VMware/Windows/'
      failOnStderr: true
      name: "PackerBuild"

A couple of things to note: the pipeline uses the PackerTool task to download the latest version of Packer to the agent. You can add it to Azure DevOps from the marketplace. If you add templates for other operating systems in the future, make sure you add their paths to the paths include section at the top.

Packer templates

My Packer templates are all built from the same windows.pkr.hcl file. In there, I’ve defined 3 sources: one for Server 2019, one for Server 2022, and another for Server 2022 Core. The 3 operating systems are defined as separate sources, but the actual config of those sources is largely the same.

This structure made it easier for me to support multiple operating systems that share the same config. For every OS, there’s a separate subfolder that contains the autounattend.xml file. I just found out you can use Packer variables in there as well, which will allow me to consolidate everything into one autounattend file.
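As an illustration of that layout (the names and values below are made up for this sketch, not my exact template), windows.pkr.hcl roughly follows this pattern:

```hcl
# One source block per operating system; most settings are shared.
source "vsphere-iso" "server2022" {
  vm_name      = "tpl-server2022"
  iso_paths    = ["[datastore1] ISO/SERVER_2022.iso"]
  floppy_files = ["Server2022/autounattend.xml"]
  # ...shared connection, hardware and storage settings...
}

# The build block ties all the sources together.
build {
  sources = [
    "source.vsphere-iso.server2019",
    "source.vsphere-iso.server2022",
    "source.vsphere-iso.server2022core",
  ]
}
```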

PowerShell scripts

To be able to run the PowerShell scripts, you will need to set up a webserver that’s reachable from the machine being built. When I first got started, I looked at a lot of examples; while trying to wrap my head around the concepts of Packer, I didn’t want to spend a lot of time learning an additional config management tool.
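Any webserver will do for this; for a quick lab setup, a throwaway option is Python’s built-in HTTP server (the folder name and port here are arbitrary examples, not part of my actual setup):

```shell
# Serve the folder containing the install scripts over HTTP on port 8080.
# The machines being built can then fetch them during provisioning.
python3 -m http.server 8080 --directory ./scripts
```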

I noticed in Mark Brookfield’s examples that he had a simple structure to get applications installed through PowerShell. I liked the simplicity of it and decided to follow his example. Check out his blog and Packer repo on Github. They helped me a lot and I’m sure they’ll be a great resource for you!

Why YOU should become a vExpert

The end of the year is one of my favourite times: not just because of my birthday or the holiday season, but also because it’s time to fill in your vExpert application. It’s a good moment to reflect on the past year and what you accomplished.

So what is the vExpert program all about

Unlike what some people may think, the vExpert program is not a certification you get for passing an exam, nor is it necessarily a reflection of your skill as a VMware engineer. At its core, the vExpert program is a marketing and advocacy program, designed to help you, as an individual contributor, get your message about VMware and its products out the door.

What’s more important, though, is that it’s a tight-knit community of like-minded individuals who all share the same passion.

Okay that sounds cool but what’s in it for me?

If you’re awarded the vExpert title, you get a lot of benefits. One of the best is that you get licenses for pretty much all the software VMware has to offer, which makes it easier to try out new stuff in the lab and get new content out the door.

You also get access to NDA content and briefings by VMware and their partners.

But by far the best resource you get access to is the vExpert Slack. There are a lot of great conversations about VMware topics, but also about certification, homelabs, careers and just random chit-chat. It gives you access to people in the community who can help you out when you’re troubleshooting an issue in the lab, or who can offer a second opinion on something you’re proposing.

I’ve been in the program since 2016 and I can honestly say that I’ve made a lot of new connections this way and even built out new friendships. You can usually find a lot of the vExperts in the blogger zone at VMworld, which is great for getting those real world connections.

Speaking of VMworld, as a vExpert, you get an invite to the vExpert party at VMworld US & EU. When I went to the vExpert party in Barcelona in 2019, Pat Gelsinger popped in. A memorable moment for sure!

Belgian vExperts with Pat Gelsinger

That’s great but how do I become a vExpert

In order to become a vExpert, you need to put yourself out there. There’s really a lot you can do:

  • Write blogs
  • Speak at VMUG/events
  • Build a training course
  • Organise a community event
  • Respond to posts on VMTN
  • Contribute code
  • Publish videos

As you can see, there are a lot of things you can do, with a very low barrier to entry. By far the easiest way to get started is blogging. Did you come across an issue that you spent hours figuring out, and you couldn’t find any good resolution online? Write about the problem you faced, what you tried to get it resolved, why those attempts failed, and what resolved the issue in the end.

Or just start writing about things you do at your job or in your homelab. Use your blog as your personal documentation of how you deployed a certain product. This helps not only yourself but also others who might want to do the same thing.

I want to apply but I need some help

Filling out your application for the first time can be daunting, but don’t worry, the vExpert program has your back. A few years ago, the vExpert PRO title was introduced. A vExpert PRO is someone in your local community who actively works on building out that community. They can also help you fill out your application and give you pointers on how to improve it.

I hope this gives you some good information about what the vExpert program is and how you can apply to get in. Still have more questions or need help filling in your application? Don’t hesitate to contact me!

Deploying vIDM through vRealize Suite Lifecycle Manager

One of the reasons I got my homelab was to test out stuff that I don’t necessarily have access to at work. Recently, vIDM has piqued my interest, so I decided to deploy it. I had already deployed vRealize Suite Lifecycle Manager, so why not use that for the deployment!

Getting everything ready

Before we can actually start installing vIDM, we need to get the binaries onto vRealize Suite Lifecycle Manager (vRSLCM). This can be done in three ways: connect your MyVMware account and download everything onto the appliance, copy the binaries to the appliance yourself through WinSCP, or connect vRSLCM to an NFS share. We’ll be using the last option here.

When you first log in to vRSLCM, click on the Lifecycle Operations card, which will bring you to your environments, datacenters, etc.

Click on the Settings link on the left-hand side.

Then, near the bottom, click on the Binary Mapping card. On the next screen, just hit the Add Binaries button.

Hit the NFS radio button and enter the path to your NFS share. I ran into some issues here because I didn’t realize my Synology also includes the volume name in the NFS share path. As soon as you hit the Discover button, you’ll get a list of all the binaries that vRSLCM detects. Import the ones you need; in our case, it’s the OVA file for Identity Manager.

Creating a new environment

Now that we have the vIDM binary imported, it’s time to prepare for the actual deployment. On the left-hand side, click the Create Environment link. On the next screen, flip the toggle to tell vRSLCM that we’re going to install vIDM. You’ll notice that the environment name changes to globalEnvironment.

Select a default password from your locker, or create a new one and select it. This password will be set on the default admin user in vIDM. Select the datacenter where you want to deploy vIDM to or add a vCenter connection if you haven’t already done so.

On the next page, you can select what product is going to be installed. Notice that you can only select vIDM here. Select the correct version and deployment type. The only two deployment types you can choose from are Standard and Cluster. For my lab, I’m selecting a standard install.

You’ll be presented with the EULA on the next page, which you’ll of course read through entirely before accepting… 🙂 Now it’s time to select a certificate. If you’ve got one imported already, you can just select it; I don’t have one available yet, so I will generate a new one for the lab and replace it later, once my PKI has been set up.

The next page is all about the infrastructure itself, select your vCenter, datastore, network and all the usual stuff for your environment and hit next.

Enter the network settings for your environment and hit the Edit server selection button.

By default, the DNS servers that are configured on the appliance will be shown. If you want to add additional ones, you can do so by clicking Add New Server in the previous screen. Select the DNS servers you want and put them in the correct order.

The final page ties everything together and lets you set the last bits of configuration. Verify that the certificate and password are the correct ones. You can also select the node size here; I selected medium, which deploys the VM with 8 CPUs and 16 GB RAM. I also entered an admin email address and default username.

At the bottom of the page, I entered the VM name, FQDN and IP address.

Review and deploy vIDM

Now that all the details have been entered, you can review everything and run the prechecks. Several checks are done here: whether the DNS records exist, whether the IP address is actually free, etc. Correct any errors you get and read through the warnings to make sure you can safely continue.

As soon as you hit Deploy, you will be taken to the request details, where you can follow along every step of the way. I actually quite like watching this page as the deployment progresses.

Once everything is done, you can go to the environments again. You’ll see you have a completed globalEnvironment with vIDM in it.

You can just open up a new browser window and go to the FQDN you entered during setup. You should see the following screen, where you can log in with the credentials you set during setup.

That concludes this guide, I hope you found it useful! If you have any more questions or comments, just hit me up on Twitter or in the comments below!

How to change iLO IP from ESXi

I ran into an issue at work, where the iLO lost all its settings after a hardware intervention. The host was running fine and the iLO NIC was connected, but the IP configuration was missing. Because this is a remote site, it would be easy if the configuration could be set from within ESXi. Luckily, this is a possibility!

Installing the tools

The first thing we need to do is download the VIB from the HPE website. You can find the tools if you search for “HPE Utilities Offline Bundle”. Download the latest zip file for your specific version of ESXi and upload it somewhere on your ESXi host.

Next, install the bundle using this command:

esxcli software vib install -d /tmp/

The result will look something like this

Installation result

Reboot the host after the installation. Once the host is back, you can verify that it is installed by running the following command. The output should look like the picture below.

esxcli software vib list | grep hpon*
Verify if software is installed

Listing iLO configuration

Now that we have the tool available, we can check the current configuration of the iLO. First, change directory to /opt/tools, which is where the tool is installed. Next, run the following command to export the active config to a txt file.

./hponcfg -w /tmp/ilocfg.txt

Looking at the exported configuration, you can immediately see what’s wrong: in our case, the iLO configuration is completely empty.

iLO configuration

Changing the iLO configuration

To change the configuration, we can simply edit the exported configuration file with vi and make the required changes.
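For reference, the exported file uses HPE’s RIBCL XML format. The network section you would typically fill in looks roughly like this (the addresses and names are placeholders; when hponcfg runs locally on the host, the login credentials aren’t actually verified):

```xml
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="unused">
    <RIB_INFO MODE="write">
      <MOD_NETWORK_SETTINGS>
        <IP_ADDRESS value="192.168.10.50"/>
        <SUBNET_MASK value="255.255.255.0"/>
        <GATEWAY_IP_ADDRESS value="192.168.10.1"/>
        <DNS_NAME value="esx01-ilo"/>
      </MOD_NETWORK_SETTINGS>
    </RIB_INFO>
  </LOGIN>
</RIBCL>
```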

Once you’re satisfied with the new config, you can apply it by running this command:

./hponcfg -f /tmp/ilocfg.txt

You should now be able to reach the iLO on the configured IP address. There’s just one more step to do, and that is to reset the administrator password.

To do this, we’ll create a new XML file with the config to reset the admin password. Create a new file and give it any name you like; here we’re using admin.xml. The contents of the file look like this:

<RIBCL VERSION="2.0">
    <LOGIN USER_LOGIN="Administrator" PASSWORD="SuperS3curePass!">
        <USER_INFO MODE="write">
            <MOD_USER USER_LOGIN="Administrator">
                <PASSWORD value="SuperS3curePass!"/>
            </MOD_USER>
        </USER_INFO>
    </LOGIN>
</RIBCL>

Now we just apply the config file, like we did before:

./hponcfg -f /tmp/admin.xml

Once the configuration is successfully applied, verify that you can access iLO with the new password. If it works, make sure you delete the admin.xml file as this contains the password in clear text!

Change NSX-T password with API

Last night, I logged into the NSX-T manager in my lab and was greeted with the following message.

In the past, I would SSH into the edge nodes and change the password like that. But since NSX-T is suggesting to use the API, I figured I would try it. This would be a lot easier for the 3 users on both my edge nodes than typing out all the commands.

Connecting to the API

Whenever I want to explore an API, I fire up Postman to see what I can learn. To connect to NSX-T, you can just enter the URL in the address bar. In this case, I’m going to explore https://s-bi-nsxmgr1/api/v1/node/users . Where s-bi-nsxmgr1 is the hostname of one of my NSX-T managers, and /api/v1/node/users is the API endpoint I’m querying.

Before we can hit the send button, we need to provide some credentials in the Authorization tab. Here in my lab, I’m just using basic auth and the admin user.

Getting the users

Once you’ve specified credentials to authorize against NSX-T, you can hit Send and get a list of all users that exist in your NSX-T environment. Here in my lab, only 5 users exist, 2 of which are not activated. This will probably be the same in many labs: I only have the root, admin, and audit users configured.

For each user, you also get some additional attributes like the last time the password was changed and the required password change frequency.

Changing the password

So let’s go ahead and change the password of the audit user. Note down the user id of the audit user from the previous API call; in my case, it’s 10002. Now that we’ve got the user id, we can verify it’s actually the user we want. Append the user id to the URL so it looks like this: https://s-bi-nsxmgr1/api/v1/node/users/10002 and hit Send again. You’ll see all the details of the audit user.

Now, to actually change the password, we need to send a body along with our API call. In the body, we define 2 attributes: password and old_password. password is the new password and old_password is, well, the old password of course! 🙂

In Postman, go to the Body tab and make sure the radio button is set to raw and the type to JSON. Also change the call type from GET to PUT, since we’re putting a new password.

In the body, you can set the following (change the values accordingly):

    "password": "UltraHighS3cur!ty123!",
    "old_password": "SuperSecurePassword2"

If all goes well, you should get the following output.

HTTP code 200 indicates success, and you can see that the last_password_change attribute has been reset to 0, indicating the password was just changed.

So as you can see, changing the password via the API is nothing to be scared of; in fact, it’s dead simple and can save you a lot of time. Just make sure to clean up the Postman tabs you used, so no one can get to the new password.
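If you prefer the command line over Postman, the same PUT can be sketched with curl (the hostname and user id are from my lab, the passwords are placeholders, and -k skips certificate verification for the self-signed cert):

```shell
# PUT the new password to the user endpoint; curl prompts for the admin password
curl -k -u admin -X PUT \
  -H "Content-Type: application/json" \
  -d '{"password": "UltraHighS3cur!ty123!", "old_password": "SuperSecurePassword2"}' \
  https://s-bi-nsxmgr1/api/v1/node/users/10002
```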

Homelab 2021

Homelabs… It’s a topic that gets a lot of interest everywhere. We’re all geeks at heart who like to tinker with hardware, play around, and break things. I haven’t had a homelab since I sold my Supermicro server back in 2016. Back then, I had access to a lab at my employer and felt I could no longer justify the cost of running that thing in my basement. Fast forward to 2020, and I found myself really missing that lab.

Why a homelab

Earlier this year, I decided I would start to learn some more about NSX. My entire VMworld schedule was built around getting as much NSX content as I could. With the VMworld VCP Exam voucher, I tried to get my VCP-NV but ultimately failed.

I ended up scoring 275, which caused a great deal of frustration because I was so close. With the questions that I got, I did feel that it would have been a pass if I had some hands-on experience with the product. So, I decided it was time to take the plunge and invest in a new homelab.

Whenever I talk to other people, in or outside of the industry, about homelabs, the cost is always a big issue that comes up. “Why would you run something in your basement that sits there eating power”, “You’re going to spend how much on some computers?”, … These are statements we’ve all probably heard before. But you need to look at it as an investment in yourself, remember that you have to always keep learning. If you stop learning, you’ll eventually miss out on opportunities that could mean the next step in your career.


Before just going out and buying random gear, I figured this exercise is no different than making a new design for a customer. Eventually, I would also like to give VCDX a shot, so I need the practice. So I decided that I would approach this like I would a normal customer interaction.


The first step in the design process is to ask myself what the requirements were. As this is a homelab, some requirements here are not what you would see during a typical engagement. The lab will be running in my basement so noise and power consumption are things that become important. I didn’t really want jet engines running in my basement.

I also wanted something that I could expand later on when other use cases or requirements present themselves. This is the list of requirements that I came up with;

  • Support nested virtualization
  • System must be silent
  • System must be relatively low power
  • System should support 10 GbE for future-proofing
  • System should support NVMe drives
  • Solution should be expandable
  • Solution should be able to support GPUs in the future


As with any design, there were some constraints that limited my choices;

  • Budget isn’t unlimited
  • Limited 1 GbE ports available on existing switch
  • Limited storage capacity on existing Synology
  • No 10 GbE capabilities present at the moment
  • Since this lab is being housed in my basement, WAF is definitely a thing


Since I could verify most things, I only had one real assumption left in my design:

  • Existing NAS capacity is sufficient for backups

This is both an assumption and a risk, the risk being that the capacity is not enough. In a normal customer engagement, I would try to mitigate this risk, but for this scenario I’m willing to accept it; the mitigation would be to purchase additional hardware to provide more capacity.

The design I wrote has a lot more information and decisions in it, but I’m going to keep those for some more blog posts about AMPRS 🙂

So what did I end up with?

In the end, I decided to build a 4-node all-flash vSAN cluster. While I’m mostly familiar with Intel CPUs, power consumption was also a factor, so I went with an AMD EPYC (Rome) CPU. This also helped with the future-proofing requirement, as this platform already supports PCIe Gen 4.

Each host has 8 cores, 128 GB of RAM and 2 NVMe SSDs (500 GB and 2 TB). This should be plenty to test some DCV stuff and to take my first steps with NSX.

Bill of Materials

This is the part most of you have probably been waiting for; the full BoM can be found below. All prices are in euros, and the links point to the Dutch website.

Quantity | Item                           | Price / item | Total price
4        | AMD Epyc 7252 Boxed            | € 375,66     | € 1.502,64
4        | Asrock Rack ROMED8-2T          | € 574,45     | € 2.297,80
4        | Fractal Design Define R5 Black | € 106,00     | € 424,00
4        | Samsung FIT Plus 64 GB         | € 16,95      | € 67,80
2        | Netgear Prosafe XS708T         | € 519,00     | € 1.038,00
4        | Noctua NH-U9 TR4-SP3           | € 67,91      | € 271,64
12       | Noctua NF-A14 FLX 140 mm       | € 20,18      | € 242,16
16       | Micron MTA36ASF4G72PZ-3G2J3    | € 188,38     | € 3.014,08
4        | Seasonic Focus-PX750           | € 161,14     | € 644,56
4        | Gigabyte Aorus Gen 4 2 TB      | € 309,00     | € 1.236,00
4        | Samsung 980 Pro 500 GB         | € 117,90     | € 471,60
         | Total                          |              | € 11.210,28

I hope this blog can help you in your search for the perfect homelab for your needs. Be sure to check out William Lam’s collection of homelabs on Github for more ideas and resources.

Clear DNS cache VCSA/PhotonOS

The last few weeks, I’ve had to do a couple of IP changes on ESXi hosts. This usually goes without much issue, but it can be annoying to wait for the new IP address to be updated in DNS and then become visible in vCenter. The quickest way to pick up the new DNS records is to clear the DNS cache on the VCSA. Since the VCSA is based on PhotonOS, this will also work on other PhotonOS VMs.

PhotonOS uses dnsmasq as a local DNS cache/proxy. So all we have to do is restart that service to clear the cache.

First, open up an SSH session to VCSA and enter the bash shell.

Run the following systemctl command to restart the service.

systemctl restart dnsmasq.service

Now, we just have to check that the service is up and running again. We can use systemctl for that as well.

systemctl status dnsmasq.service

If all is well you should see the output like it’s shown above. Run a quick ping to check that the DNS record is resolving to the correct IP and you’re done!

This was a quick post but I kept having to google it. Hope this helps !

vCenter update fails: vCenter server is non-operational

Today, while I was updating vCenter in my lab, I ran into a strange issue. The update I was installing failed, and when I wanted to try again, this ominous error message popped up.

vCenter being non-operational left me with a bit of a doom-and-gloom feeling, but thankfully the fix is rather easy.

Fixing the issue

Open up an SSH session to your vCenter server and run the command below to remove the software_update_state.conf file.

rm /etc/applmgmt/appliance/software_update_state.conf

You can check the contents of the file by opening it with vi; an example is shown below.
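As a sketch of what the file might contain (the field names here are illustrative and vary by version; the state value is the part that matters):

```json
{
    "state": "INSTALL_FAILED",
    "version": "7.0.3.00000"
}
```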


It turns out that the state “INSTALL_FAILED” is checked by the Python script that performs the installation. The script can be found in /usr/lib/applmgmt/update/py/vmware/appliance/update/

Removing the file will make this check pass and the installation will continue.