Archives: November 12, 2021

Deploying vIDM through vRealize Suite Lifecycle Manager

One of the reasons I got my homelab was to test out stuff that I don’t necessarily have access to at work. Recently, vIDM piqued my interest, so I decided to deploy it. I had already deployed vRealize Suite Lifecycle Manager, so why not use that for the deployment!

Getting everything ready

Before we can actually start installing vIDM, we need to get the binaries onto vRealize Suite Lifecycle Manager (vRSLCM). This can be done in three ways: connect your MyVMware account and download everything onto the appliance, copy the binaries to the appliance yourself through WinSCP, or connect vRSLCM to an NFS share. We’ll be using the last option here.

When you first log in to vRSLCM, click on the Lifecycle Operations card, which will bring you to your environments, datacenters, etc.

Click on the Settings link on the left-hand side.

Then, near the bottom, click on the Binary Mapping card. On the next screen, just hit the add binaries button.

Hit the NFS radio button and enter the path to your NFS share. I did run into some issues here because I didn’t realize my Synology also includes the volume name in the NFS share path. As soon as you hit the discover button, you’ll get a list of all the binaries that vRSLCM detects. Import the ones you need; in our case, that’s the OVA file for Identity Manager.

Creating a new environment

Now that we have the vIDM binary imported, it’s time to prepare for the actual deployment. On the left-hand side, click the Create Environment link. On the next screen, flip the toggle button. This will tell vRSLCM that we’re going to install vIDM. You’ll notice that the environment name changes to globalenvironment.

Select a default password from your locker, or create a new one and select it. This password will be set on the default admin user in vIDM. Select the datacenter where you want to deploy vIDM to or add a vCenter connection if you haven’t already done so.

On the next page, you can select the product that is going to be installed. Notice that you can only select vIDM here. Select the correct version and deployment type. The only two deployment types you can choose from are Standard and Cluster. For my lab, I’m selecting a standard install.

You’ll be presented with the EULA on the next page, which you’ll read through entirely before accepting, of course… 🙂 Now it’s time to select a certificate; if you’ve already got one imported, you can just select that one. I don’t have one available yet, so I will generate a new one for the lab and replace it later, once my PKI has been set up.

The next page is all about the infrastructure itself: select your vCenter, datastore, network, and all the usual stuff for your environment, then hit next.

Enter the network settings for your environment and hit the Edit server selection button.

By default, the DNS servers that are configured on the appliance will be shown. If you want to add additional ones, you can do so by clicking Add New Server in the previous screen. Select the DNS servers you want and configure them in the correct order.

The final page ties everything together and lets you set the final bits of configuration. Verify that the certificate and password are the correct ones. Additionally, you can select the node size here. I selected the medium size, which deploys the VM with 8 CPUs and 16 GB RAM. I also entered an admin email and default username.

At the bottom of the page, I entered the VM name, FQDN and IP address.

Review and deploy vIDM

Now that all the details have been entered, you can review everything and run through the prechecks. Several checks are done here: whether the DNS records exist, whether the IP address is actually free, etc. Correct any errors you get and read through the warnings to make sure you can continue.

As soon as you hit deploy, you will be taken to the request details, where you can follow along with every step of the deployment. I actually quite like watching this, as it looks great.

Once everything is done, you can go to the environments again. You’ll see you have a completed globalenvironment with vIDM in it.

Open up a new browser window and go to the FQDN you entered during setup. You should see the following screen and be able to log in with the credentials you set during setup.

That concludes this guide, I hope you found it useful! If you have any more questions or comments, just hit me up on Twitter or in the comments below!


How to change iLO IP from ESXi

I ran into an issue at work where the iLO lost all its settings after a hardware intervention. The host was running fine and the iLO NIC was connected, but the IP configuration was missing. Because this is a remote site, it would be easy if the configuration could be set from within ESXi. Luckily, this is possible!

Installing the tools

The first thing we need to do is download the VIB from the HPE website. You can find the tools if you search for “HPE Utilities Offline Bundle”. Download the latest zip file, for your specific version of ESXi, and upload it somewhere on your ESXi host.

Next, install the bundle using this command:

esxcli software vib install -d /tmp/HPE-Utility-Component_6.5.0.10.7.1-8.zip

The result will look something like this

Installation result

Reboot the host after the installation. Once the host is back, you can verify that it is installed by running the following command. The output should look like the picture below.

esxcli software vib list | grep hpon
Verify if software is installed

Listing iLO configuration

Now that we have the tool available, we can check the current configuration of the iLO. First, change directory to /opt/tools, which is where the tool is installed. Next, run the following command to export the active config to a txt file.

./hponcfg -w /tmp/ilocfg.txt

Looking at the exported configuration, you can immediately see what’s wrong: in our case, the iLO configuration is completely empty.

iLO configuration

Changing the iLO configuration

To change the configuration, we can simply edit the exported configuration file; just use vi to make the required changes.
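For reference, the part you end up editing is the network settings block inside RIB_INFO. A sketch of what that section can look like with a static IP filled in, following HPE’s RIBCL syntax; the addresses below are made-up lab values, so substitute your own:

```xml
<RIBCL VERSION="2.0">
    <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
        <RIB_INFO MODE="write">
            <MOD_NETWORK_SETTINGS>
                <!-- Disable DHCP and set a static address -->
                <DHCP_ENABLE VALUE="N"/>
                <IP_ADDRESS VALUE="192.168.10.50"/>
                <SUBNET_MASK VALUE="255.255.255.0"/>
                <GATEWAY_IP_ADDRESS VALUE="192.168.10.1"/>
            </MOD_NETWORK_SETTINGS>
        </RIB_INFO>
    </LOGIN>
</RIBCL>
```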

Once you’re satisfied with the new config, you can apply it by running this command:

./hponcfg -f /tmp/ilocfg.txt

You should be able to reach the iLO on the configured IP address. There’s just one more step to do, and that is to reset the administrator password.

To do this, we’ll create a new XML file with the config to reset the admin password. Just create a new file and give it any name you want; here we’re using admin.xml. The contents of the file look like this:

<RIBCL VERSION="2.0">
    <LOGIN USER_LOGIN="Administrator" PASSWORD="SuperS3curePass!">
        <USER_INFO MODE="write">
            <MOD_USER USER_LOGIN="Administrator">
                <PASSWORD value="SuperS3curePass!"/>
            </MOD_USER>
        </USER_INFO>
    </LOGIN>
</RIBCL>

Now we just apply the config file, like we did before:

./hponcfg -f /tmp/admin.xml

Once the configuration is successfully applied, verify that you can access iLO with the new password. If it works, make sure you delete the admin.xml file as this contains the password in clear text!


Change NSX-T password with API

Last night, I logged into the NSX-T manager in my lab and was greeted with the following message.

In the past, I would SSH into the edge nodes and change the password like that. But since NSX-T is suggesting to use the API, I figured I would try it. This would be a lot easier to do for the 3 users on both my edge nodes than having to type out all the commands.

Connecting to the API

Whenever I want to explore an API, I fire up Postman to see what I can learn. To connect to NSX-T, you can just enter the URL in the address bar. In this case, I’m going to explore https://s-bi-nsxmgr1/api/v1/node/users, where s-bi-nsxmgr1 is the hostname of one of my NSX-T managers and /api/v1/node/users is the API endpoint I’m querying.

Before we can hit the send button, we need to provide some credentials in the Authorization tab. Here in my lab, I’m just using basic auth and the admin user.

Getting the users

Once you’ve specified credentials to authorize against NSX-T, you can hit send and get a list of all users that exist in your NSX-T environment. Here in my lab, only 5 users exist, 2 of which are not activated. This will probably be the same in many labs; I only have the root, admin, and audit users configured.

For each user, you also get some additional attributes like the last time the password was changed and the required password change frequency.
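Everything Postman shows here is just JSON, so you can also pull the response apart in a script. A small sketch of that; the sample below is an illustrative response trimmed to the fields mentioned above, and a real /api/v1/node/users response carries more attributes per user:

```python
import json

# Illustrative response body, trimmed to the fields discussed here;
# the real /api/v1/node/users output has more attributes per user.
sample_response = """
{
  "results": [
    {"userid": 0,     "username": "root",  "last_password_change": 120},
    {"userid": 10000, "username": "admin", "last_password_change": 120},
    {"userid": 10002, "username": "audit", "last_password_change": 120}
  ]
}
"""

def users_by_name(response_body):
    """Map each username to its userid for easy lookups later on."""
    data = json.loads(response_body)
    return {user["username"]: user["userid"] for user in data["results"]}

print(users_by_name(sample_response)["audit"])  # → 10002
```

This makes it easy to find the userid we need for the password change without eyeballing the raw JSON.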

Changing the password

So let’s go ahead and change the password of the audit user. Note down the userid of the audit user from the previous API call; in my case, it’s 10002. Now that we’ve got the userid, we can verify that it’s actually the user we want. Type the userid in the URL so it looks like this: https://s-bi-nsxmgr1/api/v1/node/users/10002 and hit enter again. You’ll see all the details of the audit user.

Now, to actually change the password, we need to send a body along with our API call. In the body, we need to define two attributes: password and old_password. Where password is the new password and old_password is, well, the old password of course! 🙂

In Postman, go to the Body tab and make sure the radio button is set to raw and the type to JSON. Also change the call type from GET to PUT, since we’re putting a new password.

In the body you can set the following; change the values accordingly:

{
    "password": "UltraHighS3cur!ty123!",
    "old_password": "SuperSecurePassword2"
}

If all goes well, you should get the following output.

HTTP code 200 indicates success, and you can see the last_password_change attribute has been reset to 0, indicating it was just changed.
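If you ever need to do this for all the users on both edge nodes, the same PUT is easy to script instead of clicking through Postman. A sketch using only Python’s standard library; the hostname, userid, and passwords are the lab values from above, and the admin credentials are placeholders:

```python
import base64
import json
import urllib.request

def build_password_request(host, userid, new_password, old_password,
                           admin_user, admin_password):
    """Assemble the PUT /api/v1/node/users/<userid> call with basic auth."""
    url = f"https://{host}/api/v1/node/users/{userid}"
    body = json.dumps({"password": new_password,
                       "old_password": old_password}).encode()
    request = urllib.request.Request(url, data=body, method="PUT")
    request.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{admin_user}:{admin_password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    return request

request = build_password_request("s-bi-nsxmgr1", 10002,
                                 "UltraHighS3cur!ty123!",
                                 "SuperSecurePassword2",
                                 "admin", "your-admin-password")
print(request.get_method(), request.full_url)
# → PUT https://s-bi-nsxmgr1/api/v1/node/users/10002
# To actually send it: urllib.request.urlopen(request). In a lab with
# self-signed certificates, you'll need an SSL context with verification
# disabled, or the CA certificate imported on your machine.
```

Looping this over a list of managers and userids then covers all the local users in one go.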

As you can see, changing the password via the API is nothing to be scared of; in fact, it’s dead simple and can save you a lot of time. Just make sure to clean up the Postman tabs you used, so no one can get to the new password.


Homelab 2021

Homelabs… It’s a topic that gets a lot of interest everywhere. We’re all geeks at heart who like to tinker with hardware and play around and break things. I haven’t had a homelab since I sold my Supermicro server back in 2016. Back then, I had access to a lab at my employer and felt that I could no longer justify the cost of running that thing in my basement. Fast forward to 2020 and I found myself really missing that lab.

Why a homelab

Earlier this year, I decided I would start to learn some more about NSX. My entire VMworld schedule was built around getting as much NSX content as I could. With the VMworld VCP Exam voucher, I tried to get my VCP-NV but ultimately failed.

I ended up scoring 275, which caused a great deal of frustration because I was so close. With the questions that I got, I did feel that it would have been a pass if I had some hands-on experience with the product. So, I decided it was time to take the plunge and invest in a new homelab.

Whenever I talk to other people, in or outside of the industry, about homelabs, the cost is always a big issue that comes up. “Why would you run something in your basement that sits there eating power”, “You’re going to spend how much on some computers?”, … These are statements we’ve all probably heard before. But you need to look at it as an investment in yourself, remember that you have to always keep learning. If you stop learning, you’ll eventually miss out on opportunities that could mean the next step in your career.

Design

Before just going out and buying random gear, I figured this exercise is no different than making a new design for a customer. Eventually, I would also like to give the VCDX a shot, so I need the practice. I decided to approach this like I would a normal customer engagement.

Requirements

The first step in the design process was to ask myself what the requirements were. As this is a homelab, some requirements are not what you would see during a typical engagement. The lab will be running in my basement, so noise and power consumption become important. I didn’t really want jet engines running in my basement.

I also wanted something that I could expand later on when other use cases or requirements present themselves. This is the list of requirements that I came up with:

  • Support nested virtualization
  • System must be silent
  • System must be relatively low power
  • System should support 10 GbE for future-proofing
  • System should support NVMe drives
  • Solution should be expandable
  • Solution should be able to support GPUs in the future

Constraints

As with any design, there were some constraints that limited my choices:

  • Budget isn’t unlimited
  • Limited 1 GbE ports available on existing switch
  • Limited storage capacity on existing Synology
  • No 10 GbE capabilities present at the moment
  • Since this lab is being housed in my basement, WAF is definitely a thing

Assumptions

As I could verify a lot of things, I only had one real assumption left in my design:

  • Existing NAS capacity is sufficient for backups

This is both an assumption and a risk, the risk being that the capacity is not enough. In a normal customer engagement, I would try to mitigate this risk, but for this scenario I’m willing to accept it; the mitigation being that I can purchase additional hardware to provide more capacity if needed.

The design I wrote has a lot more information and decisions in there, but I’m going to keep those for some more blog posts about AMPRS 🙂

So what did I end up with?

In the end, I decided to build a 4-node all-flash vSAN cluster. While I’m more familiar with Intel CPUs, power consumption was also a factor, so I went with an AMD EPYC (Rome) CPU. This also helped with the future-proofing requirement, as this platform already supports PCIe Gen 4.

Each host has 8 cores, 128 GB of RAM and 2 NVMe SSDs (500 GB and 2 TB). This should be plenty to test some DCV stuff and to take my first steps with NSX.

Bill of Materials

This is the part most of you have probably been waiting for; the full BoM can be found below. All prices are in euros, and links point to the Dutch website tweakers.net.

Quantity | Item                           | Price / Item | Total price
4        | AMD Epyc 7252 Boxed            | € 375,66     | € 1.502,64
4        | Asrock Rack ROMED8-2T          | € 574,45     | € 2.297,80
4        | Fractal Design Define R5 Black | € 106,00     | € 424,00
4        | Samsung FIT Plus 64 GB         | € 16,95      | € 67,80
2        | Netgear Prosafe XS708T         | € 519,00     | € 1.038,00
4        | Noctua NH-U9 TR4-SP3           | € 67,91      | € 271,64
12       | Noctua NF-A14 FLX 140 mm       | € 20,18      | € 242,16
16       | Micron MTA36ASF4G72PZ-3G2J3    | € 188,38     | € 3.014,08
4        | Seasonic Focus-PX750           | € 161,14     | € 644,56
4        | Gigabyte Aorus Gen 4 2 TB      | € 309,00     | € 1.236,00
4        | Samsung 980 Pro 500 GB         | € 117,90     | € 471,60
Total: € 11.210,28

I hope this blog can help you in your search for the perfect homelab for your needs. Be sure to check out William Lam’s collection of homelabs on GitHub for more ideas and resources.