Deploying vIDM through vRealize Suite Lifecycle Manager

One of the reasons I got my homelab was to test out stuff that I don’t necessarily have access to at work. Recently, vIDM has piqued my interest, so I decided to deploy it. I had already deployed vRealize Suite Lifecycle Manager, so why not use that for the deployment!

Getting everything ready

Before we can actually start installing vIDM, we need to get the binaries onto vRealize Suite Lifecycle Manager (vRSLCM). This can be done in three ways: connect your MyVMware account and download everything onto the appliance, copy the binaries to the appliance yourself through WinSCP, or connect vRSLCM to an NFS share. We’ll be using the last option here.

When you first log in to vRSLCM, click on the Lifecycle Operations card, which will bring you to your environments, datacenters, etc.

Click on the Settings link on the left-hand side.

Then, near the bottom, click on the Binary Mapping card. On the next screen, just hit the Add Binaries button.

Hit the NFS radio button and enter the path to your NFS share. I did run into some issues here because I didn’t realize my Synology also includes the volume name in the NFS share path. As soon as you hit the Discover button, you’ll get a list of all the binaries that vRSLCM detects. Import the ones you need; in our case, it’s the OVA file for Identity Manager.
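For reference, a Synology NFS export usually includes the volume name in the path, so the share I had to enter looked something like this (the server and share names below are just placeholders, not the actual values from my lab):

nas01.lab.local:/volume1/vrslcm-binaries

The /volume1 prefix was the part I was missing at first.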

Creating a new environment

Now that we have the vIDM binary imported, it’s time to prepare for the actual deployment. On the left-hand side, click the Create Environment link. On the next screen, flip the slider button. This tells vRSLCM that we’re going to install vIDM. You’ll notice that the environment name changes to globalEnvironment.

Select a default password from your locker, or create a new one and select it. This password will be set on the default admin user in vIDM. Select the datacenter where you want to deploy vIDM, or add a vCenter connection if you haven’t already done so.

On the next page, you can select which product is going to be installed. Notice that you can only select vIDM here. Select the correct version and deployment type. The only two deployment types you can choose from are Standard and Cluster. For my lab, I’m selecting a standard install.

You’ll be presented with the EULA on the next page, which you’ll of course read through entirely before accepting… 🙂 Now it’s time to select a certificate; if you’ve already got one imported, you can just select that one. I don’t have one available yet, so I will generate a new one for the lab and replace it later, once my PKI has been set up.

The next page is all about the infrastructure itself: select your vCenter, datastore, network, and all the usual settings for your environment and hit next.

Enter the network settings for your environment and hit the Edit Server Selection button.

By default, the DNS servers that are configured on the appliance will be shown. If you want to add additional ones, you can do so by clicking Add New Server on the previous screen. Select the DNS servers you want and configure them in the correct order.

The final page ties everything together and lets you set the final bits of configuration. Review that the certificate and password are the correct ones. You can also select the node size here. I selected the medium size, which deploys the VM with 8 CPUs and 16 GB of RAM. I also entered an admin email address and default username.

At the bottom of the page, I entered the VM name, FQDN and IP address.

Review and deploy vIDM

Now that all the details have been entered, you can review everything and run through the prechecks. Several checks are done here: whether the DNS records exist, whether the IP address is actually free, etc. Correct any errors you get and read through the warnings to make sure you can continue.
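If a precheck complains about DNS or the IP address, you can quickly verify both yourself before retrying. The hostname and IP below are placeholders for whatever you plan to use:

nslookup vidm.lab.local
nslookup 192.168.1.50
ping 192.168.1.50

The forward and reverse lookups should both point at the vIDM node, and the ping should get no reply if the IP is actually free.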

As soon as you hit Deploy, you will be taken to the request details, where you can follow along with every step of the process. I actually quite like watching this; the view looks great.

Once everything is done, you can go to the environments again. You’ll see you have a completed globalEnvironment with vIDM in it.

You can now open up a new browser window and go to the FQDN you entered during setup. You should see the vIDM login screen, where you can log in with the credentials you set earlier.
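If the page doesn’t load straight away, a quick check from the command line can confirm the appliance is answering on HTTPS (the FQDN below is a placeholder for whatever you configured):

curl -kI https://vidm.lab.local

The -k flag skips certificate validation, which is handy if you generated a new certificate like I did.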

That concludes this guide; I hope you found it useful! If you have any more questions or comments, just hit me up on Twitter or in the comments below!


Homelab 2021

Homelabs… It’s a topic that gets a lot of interest everywhere. We’re all geeks at heart who like to tinker with hardware, play around, and break things. I haven’t had a homelab since I sold my Supermicro server back in 2016. Back then, I had access to a lab at my employer and felt that I could no longer justify the cost of running that thing in my basement. Fast forward to 2020, and I found myself really missing that lab.

Why a homelab

Earlier this year, I decided I would start to learn some more about NSX. My entire VMworld schedule was built around getting as much NSX content as I could. With the VMworld VCP Exam voucher, I tried to get my VCP-NV but ultimately failed.

I ended up scoring 275, which caused a great deal of frustration because I was so close. With the questions that I got, I did feel that it would have been a pass if I had some hands-on experience with the product. So, I decided it was time to take the plunge and invest in a new homelab.

Whenever I talk to other people, in or outside of the industry, about homelabs, cost is always a big issue that comes up. “Why would you run something in your basement that just sits there eating power?”, “You’re going to spend how much on some computers?”, … These are statements we’ve probably all heard before. But you need to look at it as an investment in yourself; remember that you always have to keep learning. If you stop learning, you’ll eventually miss out on opportunities that could mean the next step in your career.

Design

Before just going out and buying random gear, I figured this exercise is no different from making a new design for a customer. Eventually, I would also like to give VCDX a shot, so I need the practice. I decided to approach this like I would a normal customer engagement.

Requirements

The first step in the design process was to ask myself what the requirements were. As this is a homelab, some requirements are not what you would see during a typical engagement. The lab will be running in my basement, so noise and power consumption become important. I really didn’t want jet engines running down there.

I also wanted something that I could expand later on when other use cases or requirements present themselves. This is the list of requirements that I came up with:

  • Support nested virtualization
  • System must be silent
  • System must be relatively low power
  • System should support 10 GbE for future-proofing
  • System should support NVMe drives
  • Solution should be expandable
  • Solution should be able to support GPUs in the future

Constraints

As with any design, there were some constraints that limited my choices:

  • Budget isn’t unlimited
  • Limited 1 GbE ports available on existing switch
  • Limited storage capacity on existing Synology
  • No 10 GbE capabilities present at the moment
  • Since this lab is being housed in my basement, WAF (wife acceptance factor) is definitely a thing

Assumptions

As I could verify a lot of things, I only had one real assumption left in my design:

  • Existing NAS capacity is sufficient for backups

This is both an assumption and a risk, the risk being that the capacity is not enough. In a normal customer engagement, I would try to mitigate this risk, but for this scenario I’m willing to accept it; the mitigation is simply to purchase additional hardware if more capacity turns out to be needed.

The design I wrote contains a lot more information and decisions, but I’m going to keep those for some future blog posts about AMPRS 🙂

So what did I end up with?

In the end, I decided to build a 4-node all-flash vSAN cluster. While I’m more familiar with Intel CPUs, power consumption was also a factor, so I went with an AMD EPYC (Rome) CPU. This also helped with the future-proofing requirement, as this platform already supports PCIe Gen 4.

Each host has 8 cores, 128 GB of RAM and 2 NVMe SSDs (500 GB and 2 TB). This should be plenty to test some DCV stuff and to take my first steps with NSX.

Bill of Materials

This is the part most of you have probably been waiting for; the full BoM can be found below. All prices are in euros and the links point to the Dutch website tweakers.net.

Quantity | Item                           | Price / item | Total price
4        | AMD Epyc 7252 Boxed            | € 375,66     | € 1.502,64
4        | Asrock Rack ROMED8-2T          | € 574,45     | € 2.297,80
4        | Fractal Design Define R5 Black | € 106,00     | € 424,00
4        | Samsung FIT Plus 64 GB         | € 16,95      | € 67,80
2        | Netgear Prosafe XS708T         | € 519,00     | € 1.038,00
4        | Noctua NH-U9 TR4-SP3           | € 67,91      | € 271,64
12       | Noctua NF-A14 FLX 140 mm       | € 20,18      | € 242,16
16       | Micron MTA36ASF4G72PZ-3G2J3    | € 188,38     | € 3.014,08
4        | Seasonic Focus-PX750           | € 161,14     | € 644,56
4        | Gigabyte Aorus Gen 4 2 TB      | € 309,00     | € 1.236,00
4        | Samsung 980 Pro 500 GB         | € 117,90     | € 471,60
Total    |                                |              | € 11.210,28

I hope this blog can help you in your search for the perfect homelab for your needs. Be sure to check out William Lam’s collection of homelabs on GitHub for more ideas and resources.


Install VIB on VMware ESXi

Today I got my shiny new host for the homelab. After installing ESXi, I noticed the 10 Gbit NICs weren’t being detected. A bit of searching turned up the drivers in the form of a VIB.

Start by uploading the VIB to a datastore that is accessible to the host. Turn on the SSH service in the security section and fire up a session.
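If you prefer the command line over the datastore browser, you can also copy the VIB straight onto the datastore over SSH once the service is running. A minimal sketch, assuming a host named esxi01 and the same datastore used in the command below:

scp net-ixgbe_4.4.1-1OEM.600.0.0.2159203.vib root@esxi01:/vmfs/volumes/NAS1-iSCSI/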

Once connected, run:

esxcli software vib install -v /vmfs/volumes/NAS1-iSCSI/net-ixgbe_4.4.1-1OEM.600.0.0.2159203.vib

If everything goes well, you should get output like this:

Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: INT_bootbank_net-ixgbe_4.4.1-1OEM.600.0.0.2159203
   VIBs Removed: VMware_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.600.0.0.2494585
   VIBs Skipped:

Once all is done, reboot the host and you’ll see the installed device pop up; in my case, the 10 Gbit NICs.
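After the reboot, you can verify that the new driver is active and that the NICs are now visible with a couple of esxcli commands (the output will obviously differ per host):

esxcli software vib list | grep ixgbe
esxcli network nic list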