Event ID 1000 rundll32.exe_aepdu.dll

While doing my morning check of our monitoring system, I came across a strange issue: two of our servers had been unavailable during the night, and the same thing had happened the night before. Time to dig a bit deeper!

Both servers are domain controllers running Server 2012 R2; no updates had been installed in the last few weeks and nothing had been changed either. Pulling up the event logs of both servers, I found the exact same event logged on both machines at the time they were unavailable to our monitoring system.

Faulting application name: rundll32.exe_aepdu.dll, version: 6.3.9600.17415, time stamp: 0x54504eb8
Faulting module name: aeinv.dll, version: 6.3.9600.17415, time stamp: 0x54504e38
Exception code: 0xc0000005
Fault offset: 0x000000000003078b
Faulting process id: 0x26b8
Faulting application start time: 0x01d417ffa469e4cf
Faulting application path: C:\WINDOWS\system32\rundll32.exe
Faulting module path: C:\WINDOWS\system32\aeinv.dll
Report Id: 32e94662-83f4-11e8-815d-0050568a21ba
Faulting package full name: 
Faulting package-relative application ID:

After a quick search, I found out that aeinv.dll is part of the Microsoft Customer Experience Improvement Program (CEIP). If you're signed up for the CEIP, which you are by default when you install Server 2012 R2, three scheduled tasks will run each night to upload your anonymized data to Microsoft.

  • AitAgent
  • Microsoft Compatibility Appraiser
  • ProgramDataUpdater

In our case, the unavailability occurred right after the first and third tasks ran.
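If you want to check when those tasks last ran on your own servers, a quick sketch like this (using the built-in ScheduledTasks module) shows the last run time and result for each of them:

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Application Experience\" |
    Get-ScheduledTaskInfo |
    Select-Object TaskName, LastRunTime, LastTaskResult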

Luckily, you can easily opt out of the CEIP by opening Server Manager, going to Local Server and clicking the link behind Customer Experience Improvement Program.

There you will see that the radio button "Yes, I want to participate in the CEIP" is selected. Select "No, I don't want to participate" instead and click OK.

You’re now opted out and the scheduled tasks will no longer run.
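If you prefer to do the opt-out from PowerShell instead of Server Manager, my understanding (treat this as an assumption, I haven't verified exactly what the Server Manager link changes) is that the toggle corresponds to the CEIPEnable registry value:

# Assumption: the Server Manager CEIP toggle maps to this value; 0 means opted out.
# The value should already exist on a default Server 2012 R2 install.
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SQMClient\Windows" -Name CEIPEnable -Value 0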

UPDATE 11/07: Turns out that just opting out of the CEIP does not actually prevent the scheduled tasks from running. To fully disable them, you have to disable them in Task Scheduler or, even better, through PowerShell:

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Application Experience\" | Disable-ScheduledTask
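Afterwards, you can verify that all three tasks are actually disabled:

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Application Experience\" | Select-Object TaskName, State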

Remove Logs

A couple of our web servers were running into some issues with disk space. Turns out the logs weren’t being cleaned up properly.
In order to remediate this, I wrote a function that can be reused anywhere.

The function accepts four parameters:

  • FilePath: the directory you want to remove the logs from.
  • CutOff: the age (in days) a file must have before it is deleted.
  • LogPath: the directory where the CSV log file is written. If this is left empty, no CSV will be saved.
  • FileExtension: optionally limits the deletion to *.log, *.txt or *.etl files. If omitted, all files older than the cutoff are removed.

Example

As an example, I'm going to remove all files older than 30 days from the folder “C:\temp\W3SVC2030036971\W3SVC2030036971\”. I want a CSV log to be written to the “C:\temp” folder.
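The call itself looks something like this:

Remove-Logs -FilePath "C:\temp\W3SVC2030036971\W3SVC2030036971\" -CutOff 30 -LogPath "C:\temp\"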

[Screenshot remove-logs1: a view from the PowerShell window]

As you can see, you always get feedback in your window, even if you don’t specify a log path.

The CSV looks something like this:

[Screenshot remove-logs2: the generated CSV log]
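Because the log is written with Export-Csv and a semicolon delimiter, the raw file contents resemble the following (the file names and timestamps here are made up for illustration):

"Name";"CreationTime";"LastWriteTime"
"u_ex180601.log";"6/1/2018 1:00:03 AM";"6/1/2018 11:59:48 PM"
"u_ex180602.log";"6/2/2018 1:00:02 AM";"6/2/2018 11:59:51 PM"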

Code

You can find the script code below; it is provided as-is. The most recent version of the code is available on GitHub.

#Requires -Version 3.0

Function Remove-Logs{ 

<#
.SYNOPSIS
  Delete old log files
.DESCRIPTION
  This function deletes files older than a given number of days from a directory.
  You can optionally limit it to .log, .txt or .etl files and have it write a CSV log of the deleted files.
.NOTES
  Author:  Maarten Van Driessen
.PARAMETER FilePath
  Specify the path where you want to delete the logs from.
.PARAMETER CutOff
  Specify how many days you want to go back.
.PARAMETER LogPath
  Specify where the CSV log file should be stored. The path must end with a \
.PARAMETER FileExtension
  Specify whether you want to delete *.log, *.etl or *.txt files. If left blank, the function deletes all files.
  Must be written as *.log, *.etl or *.txt
.EXAMPLE
  Remove-Logs -FilePath C:\inetpub\ -Cutoff 30 -LogPath c:\temp\
  
  Delete all files older than 30 days from the C:\inetpub folder and write the log to C:\temp
.EXAMPLE
  Remove-Logs -FilePath C:\inetpub\ -Cutoff 20 -LogPath c:\temp\ -FileExtension *.txt
  
  Delete all txt files older than 20 days from the c:\inetpub folder and write the log to C:\temp
#>

[Cmdletbinding()]
#Parameters
Param
(
    #Check to see if there are any invalid characters in the path
    [Parameter(Mandatory=$true)][ValidateScript({
            If ((Split-Path $_ -Leaf).IndexOfAny([io.path]::GetInvalidFileNameChars()) -ge 0) {
                Throw "$(Split-Path $_ -Leaf) contains invalid characters!"
            } Else {$True}
        })][string]$FilePath,
    #You can only delete logs older than 1 day
    [Parameter(Mandatory=$true)][ValidateRange(1,365)][int]$Cutoff,
    #Check to see if there are any invalid characters in the path
    [ValidateScript({
            If ((Split-Path $_ -Leaf).IndexOfAny([io.path]::GetInvalidFileNameChars()) -ge 0) {
                Throw "$(Split-Path $_ -Leaf) contains invalid characters!"
            } Elseif(-Not ($_.EndsWith('\'))){
                Throw "Logpath must end with \ !"
            } Else {$True}
        })]$LogPath="",  
    
    #Explicit values are limited to log, txt and etl files; the default *.* removes every file type
    [ValidateSet("*.log","*.txt","*.etl","*.*")][string]$FileExtension ="*.*"
    
)

    #Check if the file & log path exist
    if(-not (Test-Path $FilePath))
    {
        throw "Invalid file path! Please make sure you enter a valid path."
    }
    #If statement to handle optional parameter
    elseif($LogPath -eq "") {}
    elseif(-not (Test-Path $LogPath))
    {
        throw "Log path does not exist! Please make sure you enter a valid path."
    }

    $CutOffDate = (Get-Date).AddDays(-$Cutoff)

    #Get all items inside the path
    $LogFiles = Get-ChildItem -Path $FilePath -Filter $FileExtension | Where-Object{$_.LastWriteTime -lt $CutOffDate}
    $DeletedItems = @()
    if($LogFiles)
    {
        foreach($LogFile in $LogFiles)
        {
            #Remove all items
            $DeletedItems += $LogFile
            $LogFile | Remove-Item
            Write-Host "Removing file $LogFile" -ForegroundColor Green
        }
        if($LogPath)
        {
            $LogName = Get-Date -Format "yyyy-MM-dd"
            $DeletedItems | Select-Object Name,CreationTime,LastWriteTime | Export-Csv -Path (Join-Path $LogPath "DeletedLogs - $LogName.csv") -Delimiter ";" -NoTypeInformation -Append
        }
    }
    else 
    {
        Write-Host "No files were found." -ForegroundColor Red
    }
}

When DFS Replication goes wrong…And how to fix it

A while back, a client of ours had some issues with DFS replication between two nodes. One of the members reported a backlog of over 1 million files!
Upon further investigation, I found that the DFS database on that node had become corrupted, which caused it to start the initial sync again.

We let the process run its course over a couple of days but noticed the progress was incredibly slow. We were averaging about 3,000 files per hour, or roughly 72,000 files per day. After some quick math, it'd take about 16 days before the sync completed. And that's without taking into account anyone working on the file servers.

That's when I stumbled upon this very detailed blog post by Ned Pyle, in which he explains how to speed up the process by exporting and importing a clone of the DFS database.

DFS state of affairs

One thing to pay attention to: you can't import a DFS database on a node that already has an existing database. To make this strategy work, we had to make sure all traces of the existing DFS database were gone.

This isn’t as simple as just removing the member and adding it again. When you delete a member from a replication group, that member isn’t actually deleted from the database. The member is marked with a 30-day tombstone flag. If you change any files on that member and then re-add it, the tombstone flag is removed and the files on that member are used and replicated to the other members. You can imagine this can get really ugly if you decide to delete files on that “removed member”. The key takeaway here is to make sure you have a backup before you make any changes!

We decided the easiest and safest way forward was to delete all replication groups and recreate them. Simply removing the replication groups isn't enough, though. You also have to make sure there are no remnants of the database in the "System Volume Information" folder. The easiest way to do this is by running these commands:

net stop dfsr
icacls "D:\System Volume Information" /grant "Domain Admins":F
cd "D:\System Volume Information"
move DFSR D:\DFSR_backup
icacls "D:\System Volume Information" /remove:g "Domain Admins"
net start dfsr

This will move the DFSR folder to a DFSR_backup folder on the same volume. Note: make sure you delete the replication groups before running these commands. If you don't, DFS will just recreate the database on that member.
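For reference, removing the replication groups themselves can also be done through PowerShell with the DFSR module. A rough sketch, where "FileShares" is just a placeholder group name, and with the usual caveat that you want a backup first:

# Sketch only: deletes the named replication group and its replicated folder definitions
Remove-DfsReplicationGroup -GroupName "FileShares" -RemoveReplicatedFolders -Force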

Checking that the nodes are in sync

First, we had to make sure that any files that had changed on the remote node were copied to the node in the main site.
Sounds like a job for robocopy! I found out that the database got corrupted on the 12th of December, so we only had to check the files modified after that date.
The command looks like this:

robocopy \\serverb\share \\servera\share /e /XO /MAXAGE:20151211 /COPYALL /XF thumbs.db

This command compares all files newer than the 11th of December and copies them over, skipping files that are already newer on the destination (/XO). The /XF parameter excludes the thumbs.db files from being copied in order to speed up the process; our client works with a lot of images, so there are a lot of thumbs.db files.
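If you want to see what would be transferred before actually copying anything, you can run the same command with robocopy's list-only switch added:

robocopy \\serverb\share \\servera\share /e /XO /MAXAGE:20151211 /COPYALL /XF thumbs.db /L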

Moving on

To finish setting up DFS-R, you can follow the steps outlined in the blog post I linked at the top of this page.
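The heart of that process is the database clone export and import. Very roughly, and assuming the DFSR PowerShell module is installed, it looks like this; the volume letter and clone path below are placeholders, and Ned Pyle's post covers the validation and preseeding steps around it:

# On the healthy upstream member: export a clone of the DFSR database
Export-DfsrClone -Volume D: -Path "D:\Dfsrclone"

# After copying the clone folder (and preseeding the data) to the other member, import it there
Import-DfsrClone -Volume D: -Path "D:\Dfsrclone"

# Check the progress of the import
Get-DfsrCloneState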