Posted by Maarten Van Driessen on

Quick post: Bug in Veeam Quick Migration

Last week I was moving some VMs for a client using Veeam Quick Migration. During the migration, I happened to stumble upon some strange behavior.

The source VMs make up an Oracle RAC cluster using shared VMDKs. One of the requirements for shared VMDKs with the multi-writer option is that the disks are thick provisioned eager zeroed. When the VMs got to the other side, they wouldn’t boot. I figured something had gone wrong during the migration, so I tried it again. Once the second run completed, the VMs still wouldn’t boot, so I started digging around some more.

The error message I was getting was rather vague: “Incompatible device backing specified for device ‘0’.” After verifying the config of both nodes, I eventually decided to look at the disk type on the destination side. That’s when I noticed the disks were thick provisioned lazy zeroed. Aha, that’s why they didn’t want to boot! After manually inflating the disks, they were up and running again. I started to suspect that this was a bug.

Running some tests

I started building some more test VMs just to prove that this was, in fact, a bug. One of the options you can set in the Quick Migration wizard is the disk type. You can explicitly select each of the types, or you can have Veeam use the same format as the source. Explicitly selecting thick provisioned eager zeroed, or using the same format as the source, still produced a VM with lazy zeroed disks. Time to submit a ticket!

As usual, Veeam support was very helpful and investigated the issue. A couple of days later they came back to me and confirmed this was indeed a bug that will be fixed in an upcoming version.


This bug is a minor inconvenience since there is an easy workaround: log in to an ESXi server over SSH and convert the VMDK from the command line.
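The original command didn’t survive here, but as a sketch (the datastore paths are illustrative), the conversion can be done with vmkfstools:

```shell
# For a thin disk, inflate it in place to eager zeroed thick
vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk

# For a lazy zeroed disk, clone it to an eager zeroed copy and swap it in afterwards
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
           -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm-eager.vmdk
```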

More info on how to convert a VMDK can be found in this VMware KB.


Publicly share Office 365 room calendar

A customer asked me if it was possible to have a room mailbox automatically accept meeting requests from external parties. They also wanted to publish the calendar of that specific room publicly.

Accept meetings from external parties

Let’s start with the first question. By default, resource mailboxes only accept requests from internal senders. As you might guess, you can’t change this behavior through the GUI; PowerShell to the rescue!

Since I didn’t know which cmdlet would let me change this behavior, the first thing I did was look for all “calendar cmdlets”. After connecting to Office 365 PowerShell, I ran this command:
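Presumably something along these lines (the exact filter is an assumption on my part, since the original command wasn’t preserved):

```powershell
# List every cmdlet whose noun mentions "Calendar"
Get-Command -Noun *Calendar*
```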

It seems there are a few cmdlets concerning calendars, which is good info for the second question! The Get-CalendarProcessing cmdlet looks promising, so let’s try it out!
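Something like this, with a hypothetical room mailbox name:

```powershell
# Inspect the calendar processing settings of the room mailbox
Get-CalendarProcessing -Identity "MeetingRoom1" | Format-List
```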

As you can see on the highlighted line, this is exactly the property we were looking for. Let’s change it to get the desired behavior. In the Get-Command output, I saw a Set-CalendarProcessing cmdlet; that seems like the right one.
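Assuming the property in question is ProcessExternalMeetingMessages (the setting that governs whether meeting requests from external senders are processed), the change would look like this, again with an illustrative mailbox name:

```powershell
# Let the room mailbox automatically process meeting requests from external senders
Set-CalendarProcessing -Identity "MeetingRoom1" -ProcessExternalMeetingMessages $true
```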

This change will only affect new meeting requests, requests that have already been refused won’t be automatically accepted.

Publish calendar publicly

In the cmdlets we got earlier, there wasn’t really one that stood out as a possible match, so let’s look at the attributes of the calendar itself. In essence, the calendar is just a folder inside a mailbox object. Let’s query that folder directly.
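The calendar folder can be queried with Get-MailboxCalendarFolder; the mailbox name is illustrative:

```powershell
# Look at the publishing attributes of the room's calendar folder
Get-MailboxCalendarFolder -Identity "MeetingRoom1:\Calendar"
```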

That’s everything we need and more! As you can see, we can set the PublishEnabled attribute to true, but we can do so much more: you can choose the detail level and even set how far back and forward the published calendar should go.

Let’s publish the calendar and run the Get-MailboxCalendarFolder cmdlet again to get the URL.
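A sketch of those two steps, with an illustrative mailbox name; the detail level and date ranges are optional:

```powershell
# Publish the calendar publicly with limited detail
Set-MailboxCalendarFolder -Identity "MeetingRoom1:\Calendar" -PublishEnabled $true `
    -DetailLevel LimitedDetails -PublishDateRangeFrom OneMonth -PublishDateRangeTo OneYear

# Retrieve the published URL
Get-MailboxCalendarFolder -Identity "MeetingRoom1:\Calendar" | Format-List *Url*
```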

All done! Now you can browse to the URL and verify everything is being displayed as you’d expect.


Cannot get extent connection. Failed to restore file from local backup

When I got into the office this morning, I noticed that one particular copy job hadn’t done its job over the weekend. This particular job copies the daily restore points to a separate scale-out repository and enforces the GFS scheme that’s been set.

The job report displayed only the error from the title: “Cannot get extent connection. Failed to restore file from local backup”.

Not that much to go on, if you ask me. First, I checked whether all extents in the repository still had enough free space; they did. While I was at it, I verified that all my proxies were still up and running. Before heading to my good friend Google, I decided to remove the copy job’s restore points from the configuration.

After this, I did a rescan of the repository and retried the job. It ran without a hitch, a nice and easy fix 🙂 I hope this won’t become a common thing, time will tell.


Looking forward to the coming year


It’s been fairly quiet on the blog front lately; with this post I’m trying to pick it back up again 🙂 I decided to give the blog a fresh new theme, and I really like how it turned out! It looks a lot cleaner now.

2016 in review

When I started this blog last year, I made a goals page that lists everything I wanted to get done that year. A quick review:

  • Earn the VCP6-DCV certificate –> FAIL. I only managed to get part 1/2 done, I haven’t gotten to part 2 yet, but it is one of the top priorities this year.
  • Become CCNA  –> FAIL. I will be taking this one off the list. When I have some more time, I might pick it up. But for the moment, I feel my knowledge is good enough to get by.
  • Learn PowerShell –> PASS. Last year I started performing routine tasks using PowerShell. Eventually, I was able to automate some of the things that I had to do frequently. Over the past year, I’ve gotten a pretty good feel for the language and I’m constantly discovering new things! I will be continuing with this until I feel that I have mastered it.
  • Publish code to GitHub –> PASS. I published 2 scripts that I created. I also made my first ever pull request and added some tests to the awesome Vester project (If you don’t know it, check it out here!). When I get some more free time, I will be looking for some more projects to contribute to.
  • Attend VMUG(s) –> PASS. I managed to go to both the Belgian and the Dutch VMUGs and I’m hoping to do the same this year.

2016 was also a big year on a personal level. In August, I changed jobs and started working for Realdolmen as a system engineer. This was one of the best decisions I have made recently; working here gives me the chance to interact with some of the smartest people I know. I get to work with complex and interesting environments and I’m learning new things every day!

But the most important thing I did was asking my girlfriend to marry me. She said yes and we’re getting married this coming May. I’m very much looking forward to it!

Looking forward to 2017

Obviously the biggest thing for me this year is my wedding. Shortly after that, 29 colleagues and I will be climbing the legendary Mont Ventoux by bike. I’m riding a lot and have rediscovered the joy of cycling.

On a professional level, I will also be setting a few goals for the coming year.

  • Earn the VCP6-DCV certificate –> This is the first thing I want to get done education wise this year. It’s possible I won’t have time to do this until the summer, though.
  • Continue learning PowerShell –> There’s a ton I don’t know yet, and a lot that I can do better. I’ve started to put most of my code in functions and I will be looking into building some modules where I can.
  • Upgrade my MCSA to 2016 –> With the release of Server 2016, it’s time to upgrade my MCSA. I don’t want to let it expire, which would mean I would have to take the first 3 exams again.
  • Keep the blog more active this year –> Changing jobs in the summer, starting cycling again and preparing for our wedding has eaten up most of my free time since August. I’m hoping to find more time to keep this blog going with some new content!
  • Wildcard –> I’m keeping this one open for something else to do this coming year. I’m not entirely sure what it will be; it will depend on the amount of free time I have and how the other goals have come along.

A short list this year, but with a lot going on in my personal life, this feels reasonable.


Deleting an Office 365 mailbox in hybrid deployment

When running Office 365 in a hybrid deployment, it is possible to end up with a mailbox both on-premises and in Office 365. This can happen when you assign the user an Exchange Online license before the mailbox has been migrated to Office 365.

If the user’s Outlook is still configured to use the on-premises mailbox, this can create some funky issues. For example, sent items will end up in the on-premises mailbox, but new items will arrive in Office 365.

Verifying the mailbox on Office 365

The first thing you have to do is verify whether the user has a mailbox in Office 365; PowerShell lets us do this.
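The original command wasn’t preserved, but a likely way to do this is to list all mailboxes in the tenant and look for the user in question:

```powershell
# After connecting to Exchange Online PowerShell, list all mailboxes in the tenant
Get-Mailbox -ResultSize Unlimited | Select-Object DisplayName, UserPrincipalName
```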

Unsyncing user from Office 365

Once you’ve identified the user with a mailbox in Office 365, you have to remove the object from Office 365. You can do this by moving the user to an OU that is not synced by AADConnect. After that, run another AD sync or wait until the next scheduled run. This sync will remove the user from Azure AD and flag the mailbox as “SoftDeleted”. That puts the mailbox in the recycle bin, where it will stay for 30 days before being automatically deleted.

Removing the mailbox from the recycle bin

In order to delete the user from the cloud recycle bin, you can use the following PowerShell commands.
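With the MSOnline module, something along these lines (the UPN is illustrative):

```powershell
# Find the user in the Azure AD recycle bin
Get-MsolUser -ReturnDeletedUsers

# Permanently remove the deleted user from the recycle bin
Remove-MsolUser -UserPrincipalName "user@contoso.com" -RemoveFromRecycleBin
```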

Because of the distributed nature of Office 365, it can take up to 15 minutes for the changes to replicate. Now we can hard delete the mailbox
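In Exchange Online PowerShell, the hard delete of the soft-deleted mailbox would look like this (UPN illustrative):

```powershell
# Locate the soft-deleted mailbox and permanently remove it
Get-Mailbox -SoftDeletedMailbox -Identity "user@contoso.com" |
    Remove-Mailbox -PermanentlyDelete
```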

Finishing up

As a final step, move the user back to an OU that is synced. The next time the scheduled task runs, the user will be recreated in Azure AD and will appear in the Office 365 admin center.


How to grant access to a deleted user’s OneDrive for Business library

A couple of weeks ago, we had an issue that resulted in some users getting deleted from Office 365. Because of a sync tool we use, we had to recreate the user accounts instead of restoring them. As a result (different SID), the new users also got a new OneDrive for Business library and could no longer access their existing one.

Listing all disabled user profiles

In the Office 365 admin center, go to the SharePoint admin center and click User profiles, then Manage User Profiles.


While trying various searches, I noticed that every user had i:0#.f|membership in front of their UPN, so I decided to use that as the search term; it turns out this lists all users. After changing the view to Profiles missing from import, I got a list of all the disabled users.


Changing OneDrive for Business library permissions

Now that we can see the profiles, we can change the permissions. Click the three dots behind the profile and select Manage Site collection owners. 


There you can set the new user as a site collection administrator. Once you click OK, the users can access the old library using the web version of OneDrive for Business.


There is one caveat to all this: the old profiles will be deleted after 30 days as part of the Office 365 automated cleanup.


Get exclusions for all Veeam jobs

This will be another short one, but I figured someone else might run into this as well.

While doing a rework of a Veeam implementation, I noticed that several jobs had exclusions set inside them. I wanted a list of all jobs with their respective exclusions. Time for PowerShell!

The script starts by getting a list of all Veeam jobs. Next, it goes through every job and looks for objects that have the type “Exclude” set. What follows is a bit of code to match the job name to the different exclusions and dump everything into a CSV. I struggled a bit with getting the contents of the array listed properly in the CSV; I kept getting it listed as “System.Object[]”. Turns out I just needed to put the $VMExclusions variable between quotes.
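The script itself isn’t reproduced here, but a minimal sketch of that approach, using the Veeam PowerShell snap-in (property names may differ slightly between Veeam versions):

```powershell
# Requires the Veeam snap-in; run this on the Veeam server itself
Add-PSSnapin VeeamPSSnapin

$Results = foreach ($Job in Get-VBRJob) {
    # Collect the names of objects of type "Exclude" for this job
    $VMExclusions = (Get-VBRJobObject -Job $Job |
        Where-Object { $_.Type -eq "Exclude" }).Name

    [PSCustomObject]@{
        JobName    = $Job.Name
        # Quoting the array flattens it to a string instead of "System.Object[]"
        Exclusions = "$VMExclusions"
    }
}

$Results | Export-Csv -Path C:\temp\VeeamExclusions.csv -NoTypeInformation
```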

The CSV will look something like this, the job name is listed on the left and the excluded objects on the right.


One last note, this script needs to be run from the Veeam server itself.

As always, you can find the most recent version of the script on Github. The initial version can be found below.




Remove Logs

A couple of our web servers were running into disk space issues; it turned out the logs weren’t being cleaned up properly.
To remediate this, I wrote a function that can be reused anywhere.

The function accepts 3 parameters:

  • FilePath: The directory you want to remove the logs from.
  • CutOff: The age (in days) a file must have before being deleted.
  • LogPath: The directory where you want the CSV log file. If this is left empty, no CSV will be saved.
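The original function isn’t reproduced here, but given the parameters above it might look like this (the function name and log file name are assumptions):

```powershell
function Remove-Logs {
    param (
        [Parameter(Mandatory)][string]$FilePath,
        [Parameter(Mandatory)][int]$CutOff,
        [string]$LogPath
    )

    $Limit = (Get-Date).AddDays(-$CutOff)

    # Find all files older than the cut-off date
    $OldFiles = Get-ChildItem -Path $FilePath -File -Recurse |
        Where-Object { $_.LastWriteTime -lt $Limit }

    foreach ($File in $OldFiles) {
        # Feedback in the window, whether or not a log path was given
        Write-Output "Removing $($File.FullName)"
        Remove-Item -Path $File.FullName -Force
    }

    # Optionally write a CSV log of what was removed
    if ($LogPath) {
        $OldFiles | Select-Object FullName, LastWriteTime |
            Export-Csv -Path (Join-Path $LogPath "RemovedLogs.csv") -NoTypeInformation
    }
}
```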


As an example, I’m going to remove all files older than 30 days from the folder “C:\temp\W3SVC2030036971\W3SVC2030036971\”. I want a CSV log to be written to the “C:\temp” folder.
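Assuming the function is called Remove-Logs, that call would look like:

```powershell
Remove-Logs -FilePath "C:\temp\W3SVC2030036971\W3SVC2030036971\" -CutOff 30 -LogPath "C:\temp"
```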


A view from the PowerShell window

As you can see, you always get feedback in your window, even if you don’t specify a log path.

The CSV looks something like this:



You can find the script code below. This is provided as-is. You can find the most recent version of the code on GitHub.


Install VIB on VMware ESXi

Today I got my shiny new host for the homelab. After installing ESXi, I noticed the 10 Gbit NICs weren’t being detected. After looking around a bit, I found the drivers in the form of a VIB.

Start by uploading the VIB to a datastore that is accessible to the host. Turn on the SSH service in the security section and fire up a session.

Once connected, run the install command.
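The datastore path here is illustrative; point it at wherever you uploaded the VIB:

```shell
# Install the VIB from a datastore path (full path to the .vib file)
esxcli software vib install -v /vmfs/volumes/datastore1/net-driver.vib
```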

If everything goes well, you should get output like this.

Once all is done, reboot the host and you’ll see the installed device pop up; in my case, the 10 Gbit NICs.


How to upgrade an IRF stack – incompatible firmware

When upgrading HP Comware switches that are part of an IRF stack, you want to use In Service Software Upgrade (ISSU). This allows you to do a rolling upgrade of the firmware while limiting the impact to just one switch in the stack.
ISSU will reboot one switch in the stack, wait for it to come back up, and then move on to the next switch.

Types of ISSU

There are 3 types of ISSU:

  • Compatible: Two different software versions can co-exist in the same IRF stack. This is usually the case for smaller releases within the same major release.
  • Incompatible: Typically seen when moving between major releases. Only one version can exist in the IRF stack.
  • Unknown

Upgrading incompatible firmware

Checking configuration

Before starting the upgrade, you have to check your configuration. An incompatible upgrade means you will create a “split-brain” situation, since only one version can exist in the same IRF stack.

To prevent this from causing problems, you have to make sure MAD (Multi-Active Detection) is configured. You can check this by running the command
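On Comware, that check would be:

```
display mad verbose
```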

This will give you output like the following; in this example, MAD BFD is used.

Also, make sure your IRF stack is configured correctly and as desired by running the command
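The IRF status can be checked with:

```
display irf
```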

Uploading the image

After you verified that everything is configured correctly, it’s time to upload the image. You can do this using any number of methods. Once the image is uploaded to the master node, you need to copy it to all slave nodes. Increment the slot number if you have more than 2 switches in the stack.

copy new5500.bin slot2#flash:

Once the image is uploaded, you can check the compatibility
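On Comware this is done with the compatibility matrix check; the file name matches the example above, and exact syntax may vary by release:

```
display version comp-matrix file flash:/new5500.bin
```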

Starting the upgrade

It’s time to actually start the upgrade. When the command is issued, the slave node will be loaded with the new firmware. The node will then reboot but will not join the IRF stack (remember, only one version can exist); this creates a split-brain situation that is resolved by MAD. MAD leaves the IRF ports active but shuts down all outgoing interfaces. You might wonder why the IRF interfaces aren’t shut down: this is done to allow you to perform the switchover and continue the upgrade.
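The command in question, per Comware 5 ISSU (file name and slot number taken from the example above; exact syntax may vary by release):

```
issu load file boot flash:/new5500.bin slot 2
```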

Once the slave node has rebooted, you can perform the switchover. This will reboot the current master node and upgrade it to the new firmware version. The command also enables all external interfaces on the node that has already been upgraded, the new master.
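For an incompatible upgrade, the switchover also points the remaining node at the same image (again, Comware 5 syntax; details may vary by release):

```
issu run switchover file boot flash:/new5500.bin slot 1
```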

After the node has rebooted, the IRF stack will be fully operational.

If you want the IRF master to be the same as before the upgrade, you can reboot the current master.