How do YOU debug your PowerShell code?

Is there a problem?

When you develop a complex module, a lot of variables come into play. Naturally, at some point, you will probably want to look at their state, to see how they have changed during execution. One way to do it is to load the module into an IDE and use a debugger during a test run. But what if the module requires an environment which is impossible to recreate on your machine? Or what if you want to keep an eye on it while it works in a production environment?
A log of executed commands/scriptblocks would be useful for this. How can we get such a log? What does PowerShell have to offer here?
There are two options available:
First is the Start-Transcript cmdlet,
Second – script tracing and logging.

Start-Transcript writes commands and host output into a plain, loosely structured text file.
Script tracing and logging writes all commands into a standard Windows Event Log file.
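
For reference, here is a minimal sketch of how both could be switched on (the transcript path is an example; script block logging is normally enabled via Group Policy, and the registry equivalent is shown here):

# Start a transcript of the current session (the target folder must exist)
Start-Transcript -Path C:\Transcripts\session.txt

# Enable script block logging via the registry (the same setting exists in Group Policy);
# events then appear as Event ID 4104 in the Microsoft-Windows-PowerShell/Operational log
$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $Key -Force | Out-Null
Set-ItemProperty -Path $Key -Name EnableScriptBlockLogging -Value 1 -Type DWord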

Both methods suffer from the following problems:

  1. They do not log the content of variables. If a variable was populated by a call to something external, like Get-ChildItem, for example, you have no idea what it contains.
  2. When you call a function/cmdlet with variables as parameter values, you cannot be sure what has been passed to the parameters, because the variables’ content was not logged anywhere.

Let’s see for ourselves…

…by creating a simple object:
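
For clarity, here is the snippet being executed (the same text that will show up in the event log and the transcript below):

$a = [pscustomobject]@{
    Name = 'Alice'
    Role = 'Role 1'
}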

Here’s what you will see in the event log:

Only one of these will contain information useful to us – Event ID 4104:

Creating Scriptblock text (1 of 1):
$a = [pscustomobject]@{
Name = 'Alice'
Role = 'Role 1'
}

ScriptBlock ID: bfc67bba-cff9-444d-a231-80f9f4ee5b55
Path:

At the same time, in the transcript:

PS C:\> $a = [pscustomobject]@{
Name = 'Alice'
Role = 'Role 1'
}

OK, so far, so good – we can clearly see what command was executed in both the event log and the transcript. But we also see the first problem with Start-Transcript – it does not log time.

Now, let’s retrieve the object back:

Here’s what a new 4104 event contains:

Creating Scriptblock text (1 of 1):
$a

ScriptBlock ID: 639f6d2b-a75a-4042-9da4-8692cdffdf9e
Path:

And no, there’s no event which contains the output of that command.
The transcript log, in the meantime, recorded this:

PS C:\> $a

Name Role
---- ----
Alice Role 1

Just as seen in the console!

So, here we see the first flaw of the Event Log – it does not store output.

Let’s continue with something more complex

First, let’s define a function:
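
The original code block is not reproduced here; judging by the output and by the -Process parameter mentioned below, it was something along these lines (the function name and parameter layout are my guesses):

function Get-SmallPid {
    param (
        # Process objects to inspect (e.g. from Get-Process)
        [System.Diagnostics.Process[]]$Process
    )

    foreach ($Item in $Process) {
        # Return only PIDs below 1000
        if ($Item.Id -lt 1000) {
            $Item.Id
        }
    }
}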

Then, we call this function as follows:
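
Again, only a sketch along the lines of the original call (the $Processes variable is assumed to be populated from Get-Process):

$Processes = Get-Process
Get-SmallPid -Process $Processes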

Here’s what we got in return:
828
944
976

What does it mean? Did our function work correctly? Were there any other processes with a PID less than 1000?
We don’t know, because we have no idea what was in the $Processes variable at the time. We do not know what was passed to the -Process parameter each time the function was called. 😥

So, here are some pros and cons for each logging type which we’ve seen so far:

Script tracing and logging:
❌ Cannot show content of variables.
✔ Shows at what time each scriptblock was executed.
❌ Does not show at what time each command was run (yup, scriptblocks only).
❌ Does not show what exactly was passed to function’s parameters.
❌ Logs only into a file.

Start-Transcript:
✔ Can show content of variables.
❌ Does not show at what time each scriptblock was executed.
❌ Does not show at what time each command was run.
❌ Does not show what exactly was passed to function’s parameters.
❌ Logs only into a file.

Write-Debug to the rescue!

In the end, I settled on a rather strange and obscure solution: Write-Debug. Why? Because this cmdlet solves all the problems:

  1. It sends everything into a separate stream, which leaves my pipeline untouched (it’s specifically designed to DEBUG code, duh).
  2. It is up to me which information to send to the debug output.
  3. I am not constrained by an output format chosen for me by the language developers – I can change it in accordance with my needs.
  4. It logs into a stream, not into a file: you can send it to the screen, into a file, or into another function! (see below)

But of course, Write-Debug has its downsides:

  1. It has no built-in knowledge of what line of code it should log – you basically have to write your code twice. This could be circumvented by executing everything through a special function which would log the line and then execute it, but that introduces additional requirements for your code to run; not everybody would have that additional module installed, and I want my code to be as portable as possible.
  2. There’s no built-in way to write the debug stream into a persistent file.

Here’s how the code we executed earlier would look if I debug-enabled it (see the in-line comments for clarification):
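
The original listing is not reproduced here; below is my sketch of how the hypothetical Get-SmallPid function from above could be debug-enabled in this fashion:

# Show debug messages instead of silently dropping them
$DebugPreference = 'Continue'

function Get-SmallPid {
    [CmdletBinding()]
    param (
        [System.Diagnostics.Process[]]$Process
    )

    # Log what exactly was passed to the parameter
    Write-Debug -Message "Get-SmallPid called with -Process = $($Process.Id -join ', ')"

    foreach ($Item in $Process) {
        # Log the command about to be executed, then execute it
        Write-Debug -Message "if ($($Item.Id) -lt 1000)"
        if ($Item.Id -lt 1000) {
            $Item.Id
        }
    }
}

# Log the content of the variable, then call the function
$Processes = Get-Process
Write-Debug -Message "`$Processes = $($Processes.Id -join ', ')"
Get-SmallPid -Process $Processes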

Looks a bit ugly, I admit 😅

How do I Write-Debug into a file?

Earlier I said that I would like to have my debug log as a file, but Write-Debug cannot write into a file – it sends messages into the debug stream. How can we create a log file containing these debug messages? That’s where the SplitOutput module (yes, probably not the best name, but whatever) comes into play. Its single function (Split-Output) accepts objects, filters them, and sends all filtered-out objects into a scriptblock, passing all other objects down the pipeline. You can use this function to filter out debug messages from the pipeline and send them into a function which writes a log file.

Since Split-Output picks up messages from the pipeline, our last challenge is to merge the debug stream into the output stream. Thankfully, PowerShell has built-in functionality for this – redirection operators. To redirect the debug stream into the output stream, use the following construction: 5>&1.
MyFunction 5>&1 | Split-Output -ScriptBlock {Write-Log -Path C:\Log.txt} -Mode Debug

Note

The command in Split-Output’s -ScriptBlock parameter must accept objects through the pipeline.
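
Write-Log in the example above is not a real cmdlet – it stands for whatever logging function you prefer. A minimal sketch of such a pipeline-aware function might look like this:

function Write-Log {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$Path,

        # Objects arriving from the pipeline (our redirected debug messages)
        [Parameter(ValueFromPipeline)]
        $InputObject
    )

    process {
        # Prepend a timestamp and append the message to the log file
        "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss.fff') $InputObject" |
            Out-File -FilePath $Path -Append
    }
}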

Here I stand (corrected?)

As you can see, utilizing Write-Debug for hardcore logging is not an easy task – you have to copy/paste your code twice, and you have to use external functions to log into a file – but it gives you a lot of flexibility and additional features which the other two logging methods cannot provide. So far, I find it the best currently existing solution for writing a detailed log of how exactly my functions work while they merrily interact with each other in a production environment.

Certainly, this task would be much easier if we could intercept commands inside PowerShell’s engine, but I am not aware of an interface which allows us to do just that — I tried to look at the source code, but did not understand much. Is there one? Maybe it would be a good idea to file an RFC for such functionality? 🤔

Oh, by the way, you can see this debug pattern in a real module here: SCVMReliableMigration

PSA: Meltdown Patches (CVE-2017-5715, CVE-2017-5754) Could Cause Problems With Hyper-V Live Migration

Suppose you have two Hyper-V servers: on the first server (Server A) you installed both the 2018-01 Rollup Update and an updated BIOS release. On the second server (Server B) you installed only the Rollup Update. You added the FeatureSettingsOverride, FeatureSettingsOverrideMask, and MinVmVersionForCpuBasedMitigations registry values to both hosts. Then you rebooted both machines.
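
For reference, those values are set roughly like this (the values below follow Microsoft's guidance for enabling the mitigations on Hyper-V hosts at the time; double-check the current KB article before applying them):

# Enable the speculative-execution mitigations (Memory Management key)
$MM = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Set-ItemProperty -Path $MM -Name FeatureSettingsOverride -Value 0 -Type DWord
Set-ItemProperty -Path $MM -Name FeatureSettingsOverrideMask -Value 3 -Type DWord

# Expose the updated CPU features to VMs regardless of their configuration version
$Virt = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
Set-ItemProperty -Path $Virt -Name MinVmVersionForCpuBasedMitigations -Value '1.0' -Type String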

On Server A you have a virtual machine. That VM was (re-)booted on that server after the BIOS update and the Rollup Update were installed. Get-SpeculationControlSettings shows that all mitigations are enabled for the machine.
You try to live-migrate the virtual machine from Server A to Server B.

In that case, live migration will complete successfully, but the VM will freeze and won’t be available either via the network or via the Hyper-V console.

Solution:

To resume normal VM operation, do one of the following:

  • Move the VM back from Server B to Server A. It should un-freeze automatically.
  • Forcefully restart the VM on Server B using the Hyper-V Management snap-in or PowerShell cmdlets (see the example below).
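
A forced restart from PowerShell could look like this (the VM name is a placeholder):

# Power the VM off without waiting for a guest shutdown, then start it again
Stop-VM -Name 'MyVM' -TurnOff
Start-VM -Name 'MyVM'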

I tested it with Windows Server 2012 R2 only; the VM was running Windows Server 2016. Not sure if it applies to Server 2016 hypervisors.

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 4 — AGPM

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 3 — Management Infrastructure

Introduction

When we talk about version control systems (VCS), the first thing that comes to mind is, of course, program code. In the modern world, one cannot be a decent software developer if they do not use Git, TFS, Mercurial, Subversion, etc. But this does not mean that only developers benefit from the concept of a VCS: Adobe provides designers with its own solutions to manage file versions.
What about us, IT administrators? Given the growing popularity of the infrastructure-as-code concept, many IT administrators have already adopted some kind of VCS to store scripts and configuration files.

Today I want to talk about version control for group policies. You probably know that group policies are not exactly text files; therefore, traditional VCSes are not the best choice here. That’s why Microsoft came up with its own solution, which allows us to track changes, compare GPO versions and quickly restore previous ones: Advanced Group Policy Management (AGPM).

Interestingly, it is not just a VCS, but also a tool to delegate group policy administration, with a built-in approval management mechanism.
But even if you work in a small team and do not need GPO management delegation, I still encourage you to use AGPM as a version control system.

AGPM is a part of Microsoft Desktop Optimization Pack, which is a free software set available to Microsoft Software Assurance subscribers. Here’s the official documentation where you can learn more about the product.

Warning

AGPM is NOT a substitute for a proper Active Directory backup.


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 4 — AGPM

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 3 — Management Infrastructure

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Introduction

Sorry it took me so long — a lot has happened in the last six months, including moving to another country and changing jobs. Also, as you can see from several previous posts, I got a little bit carried away with PowerShell.
Another thing I got carried away with is this post: I even had to split it, eventually. That’s why today I present to you not one, but two posts at once! Find the next one here: Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 4 — AGPM

Up to this point, we were working on our servers interactively and locally. This is not considered best practice, because you consume server resources to support an interactive logon session. It might also be inconvenient when you manage a fleet of servers.
In this article we will set up remote management administrative stations, which we will use to manage servers in the lab from then on.


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 3 — Management Infrastructure

Function to download updates from Microsoft Catalog

Last week, I accidentally built a function which downloads update files from the Microsoft Update Catalog. What is its real-life application?
Say you are building an OS installation image from scratch, which will then be deployed en masse. A common approach is to integrate all available updates into such an image, to increase security and reduce the duration of post-installation tasks.
For Windows Server 2012 R2 and earlier OSes you have to download and integrate dozens of updates, but you are not sure which ones are required for your installation image (the Windows Server 2012 R2 ISO image has been rebuilt at least three times). The only way to determine which updates to integrate is to install a fresh computer from an ISO image, connect it to the Internet and check for updates from Microsoft Update.
We can script it as follows:
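
A minimal sketch of such a check, using the Windows Update Agent COM API (the search criteria string is an example):

# Ask the Windows Update Agent which software updates are missing
$Session = New-Object -ComObject 'Microsoft.Update.Session'
$Searcher = $Session.CreateUpdateSearcher()
$SearchResult = $Searcher.Search("IsInstalled=0 and Type='Software'")

# List the titles of the updates found
$SearchResult.Updates | ForEach-Object { $_.Title }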

The $SearchResult object will contain a list of updates which your machine requires.

To integrate those updates, you need to obtain their .msu or .cab files. Unfortunately, there is no way (at least known to me) to extract download links from the $SearchResult object. Here’s where the new Get-WUFileByID function comes to help:
First, you have to get a list of KB IDs from the $SearchResult object:
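
For example (KBArticleIDs is a standard property of the update objects returned by the Windows Update Agent):

# Collect the KB numbers of all updates the machine asked for
$KBIds = $SearchResult.Updates | ForEach-Object { $_.KBArticleIDs }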

Then you just pass each one of these to the function and it will download the files:
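
Something along these lines (the exact name of the KB parameter may differ — check the function's help on GitHub; the product in -SearchCriteria is an example):

# Download the .msu/.cab file for each KB ID
$KBIds | ForEach-Object {
    Get-WUFileByID -KB $_ -SearchCriteria 'Windows Server 2012 R2'
}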

In the -SearchCriteria parameter, specify a product for which you need to download updates.

As a result, updates will be downloaded to the current directory (you can redirect them with the -DestinationDirectory parameter).
Finally, integrate the updates with the help of the Add-WindowsPackage cmdlet.
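
For instance, with an offline image mounted to a folder (the paths are examples):

# Mount the installation image, inject the downloaded updates, then save the changes
Mount-WindowsImage -ImagePath 'C:\Images\install.wim' -Index 1 -Path 'C:\Mount'
Add-WindowsPackage -Path 'C:\Mount' -PackagePath 'C:\Updates'
Dismount-WindowsImage -Path 'C:\Mount' -Save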

If you do not want to download any files, but wish to receive links only, the -LinksOnly switch is here just for that!

The function is covered by tests and available on GitHub. Should you have any questions or suggestions, feel free to leave them in the comments section or raise an issue.

New features for the DNS Synchronization script

Hi guys,
Sorry for the delay: a huge change has happened in my life recently — I moved from Moscow, Russia to Limassol, Cyprus, where I now live and work.
I am still polishing the next part of the “Command-line Infrastructure” series, and today I am here to present to you another significant update to the DNS Zones Synchronization script:

  1. Name query modes introduced. Right now, there are two modes (see the sketch after this list):
    • “1” – The script will try to retrieve the authoritative nameservers for a zone and then will use them to resolve records in that zone.
    • “0” – The script will use the nameservers specified in the config file.

    Specify the required mode for each zone in the new column conveniently named “Mode”. Also, instead of specifying “0”, you can leave the column empty.

  2. This has led to another new feature: from now on, the script supports not only IP addresses as NS servers, but DNS names too. Therefore, the “ExtIP” and “IntIP” columns in the configuration file have been renamed to “ExtNS” and “IntNS”.
  3. Even more: you can now leave the external nameserver field (ExtNS) empty. In that case, the script will use the default operating system DNS servers.
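
To illustrate what mode “1” does, here is a sketch of the idea only (not the script's actual code; the zone and record names are placeholders):

# Find the authoritative nameservers for the zone...
$NS = Resolve-DnsName -Name 'example.com' -Type NS | Where-Object { $_.Type -eq 'NS' }

# ...then resolve a record in the zone against the first of them
Resolve-DnsName -Name 'www.example.com' -Server $NS[0].NameHost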

Here is a table for you to better understand how these new features work together:

Mode | ExtNS Type | Result
0    | IP Address | Names in the zone are resolved using the ExtNS IP address.
1    | IP Address | The ExtNS IP address is used to find authoritative NSes for the zone. The first authoritative NS is then used to resolve names in the zone.
0    | DNS Name   | Default OS DNS servers are used to resolve the ExtNS DNS name to an IP address. This IP address is then used to resolve names in the zone.
1    | DNS Name   | Default OS DNS servers are used to resolve the ExtNS DNS name to an IP address. This IP address is then used to find authoritative NSes for the zone. The first authoritative NS is then used to resolve names in the zone.
0    | Empty      | Names in the zone are resolved using default OS DNS servers.
1    | Empty      | Default OS DNS servers are used to find authoritative NSes for the zone. The first authoritative NS is then used to resolve names in the zone.

Note that the query mode does not affect the internal name server at all. Here’s a table for the “IntNS” column as well:

IntNS Type | Result
IP Address | Requests are sent to this IP address.
DNS Name   | Default OS DNS servers are used to resolve the IntNS DNS name to an IP address. Requests are then sent to this IP address.
Empty      | An error is raised: Event ID 52.

As usual, grab the latest release of the script from GitHub!

Small update to my DNS Synchronizer script

I’ve just released a small improvement to my DNS Synchronizer script. The update includes:

  1. Corrected the issue where sub-records of a record prevented that record from being synchronized.
  2. Corrected the way start and end times are formatted — now they are both formatted identically.
  3. The most significant change for the end user: the script has been renamed to Sync-DNSZones, in accordance with PowerShell best practices.

If you execute the Get-Verb cmdlet without parameters, you’ll see that there is no “Synchronize” verb in the output — that’s why I renamed the script. Do not forget to rename the “-NS” and “-REC” files accordingly.
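
A quick way to check which approved verbs are relevant here:

# List the approved verbs that start with "Sync" – there is no "Synchronize"
Get-Verb | Where-Object { $_.Verb -like 'Sync*' }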

Function to test a date against different conditions

Several weeks ago, my friend Rich Mawdsley asked in our Windows Admins Slack team how to tell if today is the second Tuesday of the month. As we found out, there is no built-in way in PowerShell to determine that. That’s why today I present a function built specifically to test dates against different conditions. The function can tell you:

  • If the date is a certain weekday in a month: 4th Monday, second Thursday, last Sunday, etc.
  • If the date belongs to a certain quarter of a year.
  • If the date is the start or the end of a quarter.
  • If the date is the last day of a month, etc.

Mind that the output is Boolean: the function will not tell you much about the date object itself, only whether it meets the conditions. It returns $true if the date meets the conditions and $false in all other cases.

Here’s the code of the function, and, of course, you can always find the latest version at my GitHub:
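
The full function lives on GitHub and is not reproduced here; just to give a flavour of the underlying idea, here is a minimal, hypothetical sketch (the function name and parameters below are mine, not the actual ones) of the “Nth weekday of the month” check:

function Test-IsNthWeekday {
    param (
        [datetime]$Date = (Get-Date),
        [System.DayOfWeek]$DayOfWeek,
        [int]$Nth
    )

    # The date is the Nth given weekday of its month when it falls on that weekday
    # and its day-of-month lies within the Nth seven-day window (1-7, 8-14, ...)
    ($Date.DayOfWeek -eq $DayOfWeek) -and ([math]::Ceiling($Date.Day / 7) -eq $Nth)
}

# Is today the second Tuesday of the month?
Test-IsNthWeekday -DayOfWeek Tuesday -Nth 2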

The function is covered with tests (you can see the results here), but not completely — I shall certainly improve this in the future. And yes, those tests have already helped me fix several bugs before the official release 😉

BTW, if you haven’t written tests for your PowerShell code yet, I found this Introduction to testing with Pester by Jakub Jares very useful — you will start writing tests in Pester before the end of the lecture.

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Introduction

In this post we will perform two configurations on our Active Directory Domain Services instance: we’ll define security tiers, which will later become the cornerstones of our privilege delegation principles, and we’ll tune domain-joining parameters. Also, a quick tweak for the DNS service.


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Introduction

Up to this day, Active Directory Domain Services (AD DS) has been the core of Windows infrastructure. With each release of Windows Server, AD DS receives new features while keeping great backward compatibility. Windows Server 2016 brings the following enhancements to AD DS:

In this blog we shall install the cornerstone of our future infrastructure: a highly-available AD DS instance consisting of two domain controllers. Our AD DS layout is going to be quite simple: two writable domain controllers in a single site.


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation