Publish-PSArtifactUtility : Failed to generate the compressed file for module ‘C:\Program Files\dotnet\dotnet.exe failed to pack: error’

Recently I tried to publish a PowerShell module to the gallery from a relatively fresh Windows 10 installation, and the Publish-Module cmdlet failed with an error:

Running Process Monitor, I found that the command line it tries to execute looks like this:
"C:\Program Files\dotnet\dotnet.exe" pack "C:\Users\ExampleUser\AppData\Local\Temp\2f2dd498-4ce3-4ca2-a081-c65b0af71090\Temp.csproj" /p:NuspecFile="C:\Users\ExampleUser\AppData\Local\Temp\1691004664\MyModule\MyModule.nuspec" --output "C:\Users\ExampleUser\AppData\Local\Temp\1691004664\MyModule"

My PowerShellGet installation was at the latest stable version, and the NuGet version clearly has nothing to do with it: there were no calls to “nuget.exe” in the Process Monitor log.

So what happens if I run this command manually?

Aha! A descriptive error finally!
Turns out, the pack command is a part of the .NET SDK. And of course, since that was a fresh Windows installation, the SDK was not there yet.

Thankfully, with winget, .NET SDK installation is a breeze: winget install Microsoft.DotNet.SDK.7

Now let’s check what we’ve got:
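For instance (the output is omitted here, and the exact error text may vary between SDK versions):

dotnet --list-sdks
dotnet pack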

The error is different now and perfectly logical – I didn’t pass any project file to it.
And it helped – Publish-Module now works! Hooray!

PowerShell module to automate Slack w/o adding a new app to your workspace

Hi guys! Those of you who work with Slack surely know about PSSlack – a module to work with our beloved messaging system via the official API. That API (and the module) has a tiiiiiny downside: it requires you to add a new app to your workspace in order to get an API token.

And sometimes this new app must be approved by your workspace administrators. But what if you cannot get their approval? Well, I have something for you: SlackWeb, a new module to communicate with Slack from PowerShell which does NOT need any new apps added to the workspace.

This module uses the web-API: this API is primarily used by the official web-client, and you know what’s neat? You already have a token for it!
Look at the requests the web-client sends and you should see that it makes calls to different API endpoints. In those calls you’ll see a token starting with “xoxc-” – that’s your web-token.

But the web-token won’t work if you try to use it with regular API calls – it needs a counterpart: specifically, one of your cookies – a “d-cookie”.
If you switch to the “Headers” tab of an API request and look closely at the “cookie” section on this tab, you should find a cookie called just “d”. It has some super-long value, and you need to save this value to use later along with your web-token.
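To illustrate the mechanism (this is not the module’s code – just a rough sketch with standard cmdlets, placeholder values and the auth.test endpoint):

# A sketch only: substitute your real token and cookie values.
$Token   = 'xoxc-...'   # your web-token
$DCookie = '...'        # the value of the "d" cookie

$Session = New-Object -TypeName Microsoft.PowerShell.Commands.WebRequestSession
$Session.Cookies.Add([System.Net.Cookie]::new('d', $DCookie, '/', '.slack.com'))

# The web-token goes into the request body, the "d" cookie rides along in the session.
Invoke-RestMethod -Method Post -Uri 'https://slack.com/api/auth.test' -WebSession $Session -Body @{token = $Token}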

When you load the module, the first thing it does is ask you for the values of the token and the cookie. Feed them to it and that’s pretty much all you need to start using the module – your team ID/URL are detected automatically.

Not that many features are available right now – I have just started with this module, and so far I have cared mostly about message export.
If you find that the module lacks a feature you need, please let me know here or in the issues on GitHub and I’ll try to integrate it.

Grab the module from the PowerShell Gallery right now! Just be aware that it uses reverse-engineered API calls and might stop working at any moment – please use it at your own risk.

Free module to manage Group Policy Preferences with PowerShell

I’ve used Group Policy Preferences since back when it was still Policy Maker. I’ve always used it through the GUI, but a couple of months ago I thought: why don’t we use PowerShell for this?
Apparently, the answer is “because there are no cmdlets available”. Yes, there is the Group Policy Automation Engine, but it is a paid closed-source product. The built-in GroupPolicy module just does not have cmdlets for GP Preferences (only one section is covered by it – Registry).

All that led me to start an open-source project to PowerShellize GPP: https://github.com/exchange12rocks/PSGPPreferences
The module is at an early stage, but you can already install it from the Gallery: https://www.powershellgallery.com/packages/PSGPPreferences/
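Installing it is a one-liner:

Install-Module -Name PSGPPreferences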

Currently the module only allows you to create / delete / manage groups (not even users yet), but I hope to add other sections relatively fast, since the foundation has been implemented.
Right now the most difficult, but crucial, task is writing tests – without tests, regressions are quite likely. That’s why they are next on the roadmap.

How to decrypt Plesk passwords on Windows

Plesk uses symmetric encryption for many passwords in its internal MySQL database “psa”. Several decryption scripts exist, but so far none for Plesk on Windows. This blog post is here to finally change that.

You can find symmetrically encrypted passwords in these tables in Plesk’s “psa” database:

  • accounts (column password)
  • databaseservers (column admin_password)
  • dsn (column cstring)
  • longtaskparams (a record called oldBackupkey – a parameter for the backup-encrypt-task (see the longtasks table))
  • misc (column aps_password)
  • servicenodeconfiguration (column value for the section MailGate / password)
  • smb_users (column password)

Symmetrically encrypted passwords look like this: “$AES-128-CBC$ABNK35ZcqnbTYT4Q3mbaEA$HmGDWmtym6K3+kJ8uBoJOg”.
They start with “$AES-128-CBC$”. Then between the second and the third dollar signs there is an AES initialization vector. After that, until the end of the string, we have the encrypted data itself.

On Linux, the symmetric key which Plesk uses to encrypt all these passwords is located in /etc/psa/private/secret_key. On Windows, it is stored in the registry: HKLM:\SOFTWARE\WOW6432Node\PLESK\PSA Config\Config\sym_key\sym_key

To retrieve an encrypted password, use your favorite MySQL tool to connect to the database and copy it from there.

Note

To learn how to connect to the “psa” database, see this and also here.

Copy a password you want to decrypt and pass it to the -EncryptedString parameter of the script below. Mind that you must run the script on the same server where that instance of Plesk is installed, otherwise it won’t be able to extract the symmetric key. If you want to decrypt passwords on a different machine, you need to pass the symmetric key manually to the script’s -SymmetricKey parameter.
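The core logic of such a script boils down to the sketch below. Two assumptions to verify against your own installation: that the IV and the ciphertext are base64-encoded with the “=” padding stripped, and that the sym_key registry value can be turned into the 16-byte AES key as-is.

param (
    [Parameter(Mandatory)]
    [string]$EncryptedString,

    [string]$SymmetricKey
)

function ConvertFrom-UnpaddedBase64 {
    param ([string]$Text)
    # Restore the stripped '=' padding before decoding.
    [System.Convert]::FromBase64String($Text + ('=' * ((4 - $Text.Length % 4) % 4)))
}

if (-not $SymmetricKey) {
    # Extract the key from the local registry (works only on the Plesk server itself).
    $SymmetricKey = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\PLESK\PSA Config\Config\sym_key').sym_key
}

# '$AES-128-CBC$<IV>$<ciphertext>' -> algorithm, IV, encrypted data
$null, $Algorithm, $IVText, $DataText = $EncryptedString -split '\$'
if ($Algorithm -ne 'AES-128-CBC') {
    throw "Unexpected algorithm: $Algorithm"
}

$Aes      = [System.Security.Cryptography.Aes]::Create()
$Aes.Mode = [System.Security.Cryptography.CipherMode]::CBC
$Aes.Key  = [System.Text.Encoding]::ASCII.GetBytes($SymmetricKey) # assumption about how the key is stored
$Aes.IV   = ConvertFrom-UnpaddedBase64 -Text $IVText

$CipherBytes = ConvertFrom-UnpaddedBase64 -Text $DataText
$Decryptor   = $Aes.CreateDecryptor()
$PlainBytes  = $Decryptor.TransformFinalBlock($CipherBytes, 0, $CipherBytes.Length)
[System.Text.Encoding]::UTF8.GetString($PlainBytes)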

See also:

https://gist.github.com/gnanet/99f5e284c0f71032498625368ba67659
https://www.besuchet.net/2016/06/plesk-11-encrypted-hashed-password-authentication-php-on-psa-database/
https://mor-pah.net/2014/03/05/decrypt-plesk-11-passwords/
https://codeforcontent.com/blog/using-aes-in-powershell/

How do YOU debug your PowerShell code?

Is there even a problem?

When you develop a complex module, a lot of variables come into play. Naturally, at some point you would probably like to look at their state, to see how they have changed during execution. One way to do it is to load the module into an IDE and use a debugger during a test run. But what if the module requires an environment which is impossible to recreate on your machine? Or what if you want to keep an eye on it while it works in a production environment?
A log of executed commands/scriptblocks would be useful for this. How can we get such a log? What does PowerShell have to offer here?
Two options are available:
the first is the Start-Transcript cmdlet,
the second – script tracing and logging.

Start-Transcript writes commands and host output into a plain, loosely structured text file.
Script tracing and logging writes all executed scriptblocks into a standard Windows event log.
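If script block logging is not enabled in your environment yet, one way to turn it on outside of group policy (run elevated) looks roughly like this:

$Path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
if (-not (Test-Path -Path $Path)) {
    $null = New-Item -Path $Path -Force
}
# The same setting is exposed via the "Turn on PowerShell Script Block Logging" group policy.
Set-ItemProperty -Path $Path -Name 'EnableScriptBlockLogging' -Value 1 -Type DWord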

Both methods suffer from the following problems:

  1. They do not log the content of variables. If a variable was populated by a call to something external – Get-ChildItem, for example – you have no idea what it contains.
  2. When you call a function/cmdlet with variables as parameter values, you cannot be sure what was passed to the parameters, because the variables’ content was not logged anywhere.

Let’s see for ourselves…

…by creating a simple object:
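$a = [pscustomobject]@{
    Name = 'Alice'
    Role = 'Role 1'
}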

Here’s what you will see in the event log:

Only one of these will contain useful (for us) information – Event ID 4104:

Creating Scriptblock text (1 of 1):
$a = [pscustomobject]@{
Name = 'Alice'
Role = 'Role 1'
}

ScriptBlock ID: bfc67bba-cff9-444d-a231-80f9f4ee5b55
Path:

At the same time, in the transcript:

PS C:\> $a = [pscustomobject]@{
Name = 'Alice'
Role = 'Role 1'
}

OK, so far, so good – we can clearly see what command was executed in both the event log and the transcript. But we also see the first problem with Start-Transcript – it does not log time.

Now, let’s retrieve the object back:
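$a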

Here’s what a new 4104 event contains:

Creating Scriptblock text (1 of 1):
$a

ScriptBlock ID: 639f6d2b-a75a-4042-9da4-8692cdffdf9e
Path:

And no, there’s no event which contains the output of that command.
The transcript log, in the meantime, recorded this:

PS C:\> $a

Name  Role
----  ----
Alice Role 1

Just as seen in the console!

So, here we see the first flaw of the Event Log – it does not store output.

Let’s continue with something more complex

First, let’s define a function:
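Something along these lines (the function and parameter names here are illustrative – the point is a function that returns the IDs of processes with a PID below 1000):

function Get-SmallPidProcess {
    param (
        [Parameter(Mandatory)]
        [System.Diagnostics.Process[]]$Process
    )

    foreach ($Item in $Process) {
        # Return only the IDs of processes with a PID below 1000.
        if ($Item.Id -lt 1000) {
            $Item.Id
        }
    }
}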

Then we call this function as follows:
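# Using the illustrative function defined above.
$Processes = Get-Process
Get-SmallPidProcess -Process $Processes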

Here’s what we got in return:
828
944
976

What does it mean? Did our function work correctly? Were there any other processes with a PID less than 1000?
We don’t know, because we have no idea what was in the $Processes variable at the time, and we do not know what was passed to the -Process parameter each time the function was called.

So, here are some pros and cons for each logging type which we’ve seen so far:

Script tracing and logging:
❌ Cannot show the content of variables.
✔ Shows at what time each scriptblock was executed.
❌ Does not show at what time each command was run (yup, scriptblocks only).
❌ Does not show what exactly was passed to a function’s parameters.
❌ Logs only into a file.

Start-Transcript:
✔ Can show the content of variables.
❌ Does not show at what time each scriptblock was executed.
❌ Does not show at what time each command was run.
❌ Does not show what exactly was passed to a function’s parameters.
❌ Logs only into a file.

Write-Debug to the rescue!

In the end I settled on a rather strange and obscure solution: Write-Debug. Why? Because this cmdlet solves all the problems:

  1. It sends everything into another stream, which leaves my pipeline untouched (it’s specifically designed to DEBUG code, duh).
  2. It is up to me which information gets sent to the debug output.
  3. I am not constrained by an output format chosen for me by the language developers – I can change it in accordance with my needs.
  4. It logs into a stream, not into a file: you can send it to the screen, into a file, or into another function! (see below)

But of course, Write-Debug has its downsides:

  1. It has no built-in knowledge of what line of code it should log – you have to, basically, write your code twice. This could be circumvented by executing everything through a special function which would log the line and then execute it, but that introduces additional requirements for your code to run – not everybody would have that additional module installed – and I want my code to be as portable as possible.
  2. There’s no built-in way to write the debug stream into a persistent file.

Here’s how the code we executed earlier would look if I debug-enabled it (see the in-line comments for clarification):
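Sticking with the same illustrative function from above, a debug-enabled version might look like this:

function Get-SmallPidProcess {
    param (
        [Parameter(Mandatory)]
        [System.Diagnostics.Process[]]$Process
    )

    # Log the exact content of the parameter the moment we enter the function.
    Write-Debug -Message ('{0} Get-SmallPidProcess -Process {1}' -f (Get-Date -Format 'o'), ($Process.Id -join ', '))

    foreach ($Item in $Process) {
        # Log the comparison about to be executed, with the variable already expanded.
        Write-Debug -Message ('{0} if ({1} -lt 1000)' -f (Get-Date -Format 'o'), $Item.Id)
        if ($Item.Id -lt 1000) {
            $Item.Id
        }
    }
}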

Looks a bit ugly, I admit.

How do I Write-Debug into a file?

Earlier I said that I would like to have my debug log as a file, but Write-Debug cannot write into a file – it sends messages into the debug stream. How can we create a log file containing these debug messages? That’s where the SplitOutput module (yes, probably not the best name, but whatever) comes into play. Its single function (Split-Output) accepts objects, filters them, and sends all filtered-out objects into a scriptblock, passing all other objects down the pipeline. You can use this function to filter out debug messages from the pipeline and send them into a function which writes a log file.

Since Split-Output picks up messages from the pipeline, our last challenge is to merge the debug stream into the output stream. Thankfully, PowerShell has built-in functionality for this – redirection operators. To redirect the debug stream into the output stream, use the following construction: 5>&1.
MyFunction 5>&1 | Split-Output -ScriptBlock {Write-Log -Path C:\Log.txt} -Mode Debug

Note

The command in Split-Output’s -ScriptBlock parameter must accept objects through the pipeline.
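Write-Log above is not a built-in cmdlet – it stands for whatever logging function you prefer. A minimal sketch that satisfies the requirement could look like this:

function Write-Log {
    param (
        [Parameter(Mandatory)]
        [string]$Path,

        [Parameter(ValueFromPipeline)]
        $InputObject
    )

    process {
        # Timestamp each record and append it to the log file.
        '{0} {1}' -f (Get-Date -Format 'o'), $InputObject | Out-File -FilePath $Path -Append
    }
}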

Here I stand (corrected?)

As you can see, utilizing Write-Debug for hardcore logging is not an easy task – you have to copy/paste your code twice and you have to use external functions to log into a file – but it gives you a lot of flexibility and additional features which the other two existing logging methods cannot provide. So far, I find it the best solution currently available for writing a detailed log of how exactly my functions work, while they merrily interact with each other in a production environment.

Certainly, this task would be much easier if we could intercept commands inside PowerShell’s engine, but I am not aware of an interface which allows us to do just that – I tried to look at the source code, but did not understand much. Is there any? Maybe it would be a good idea to file an RFC for such functionality?

Oh, by the way, you can see this debug pattern in a real module here: SCVMReliableMigration

PSA: Meltdown Patches (CVE-2017-5715, CVE-2017-5754) Could Cause Problems With Hyper-V Live Migration

Suppose you have two Hyper-V servers: on the first server (Server A) you installed both the 2018-01 Rollup Update and an updated BIOS release. On the second server (Server B) you installed only the Rollup Update. You added the FeatureSettingsOverride, FeatureSettingsOverrideMask, and MinVmVersionForCpuBasedMitigations registry values to both hosts. Then you rebooted both machines.
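For reference, here is roughly how those values can be set with PowerShell (the paths and values below follow Microsoft’s guidance from that time – double-check them against the current KB articles before applying anything):

# Enable the speculation-control mitigations on the host (run elevated, then reboot).
$MM = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Set-ItemProperty -Path $MM -Name 'FeatureSettingsOverride' -Value 0 -Type DWord
Set-ItemProperty -Path $MM -Name 'FeatureSettingsOverrideMask' -Value 3 -Type DWord

# Allow Hyper-V to expose the new CPU capabilities to virtual machines.
$Virt = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
Set-ItemProperty -Path $Virt -Name 'MinVmVersionForCpuBasedMitigations' -Value '1.0' -Type String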

On Server A you have a virtual machine. That VM was (re-)booted on that server after the BIOS update and the Rollup Update were installed. Get-SpeculationControlSettings shows that all mitigations are enabled for the machine.
You try to live-migrate the virtual machine from Server A to Server B.

In that case, the live migration will complete successfully, but the VM will freeze and won’t be available either via the network or via the Hyper-V console.

Solution:

To resume normal VM operation, you should either:

  • Move the VM back from Server B to Server A. It should un-freeze automatically.
  • Forcefully restart the VM using the Hyper-V Management snap-in or PowerShell cmdlets on Server B.

I tested this with Windows Server 2012 R2 hosts only; the VM was running Windows Server 2016. Not sure whether it applies to Server 2016 hypervisors.

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 4 — AGPM

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 3 — Management Infrastructure

Introduction

When we talk about version control systems (VCS), the first thing that comes to mind is, of course, program code. In the modern world, one cannot be a decent software developer without using Git, TFS, Mercurial, Subversion, etc. But this does not mean that only developers benefit from the concept of a VCS: Adobe provides designers with its own solutions to manage file versions.
What about us, IT administrators? Given the growing popularity of the infrastructure-as-code concept, many IT administrators have already adopted some kind of VCS to store scripts and configuration files.

Today I want to talk about version control for group policies. You probably know that group policies are not exactly text files; therefore, traditional VCSes are not the best choice here. That’s why Microsoft came up with its own solution, which allows us to track changes, compare GPO versions and quickly restore previous ones: Advanced Group Policy Management (AGPM).

Interestingly, it is not just a VCS, but also a tool to delegate group policy administration, with a built-in approval management mechanism.
But even if you work in a small team and do not need GPO management delegation, I still encourage you to use AGPM as a version control system.

AGPM is a part of Microsoft Desktop Optimization Pack, which is a free software set available to Microsoft Software Assurance subscribers. Here’s the official documentation where you can learn more about the product.

Warning

AGPM is NOT a substitute for a proper Active Directory backup.



Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 3 — Management Infrastructure

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Introduction

Sorry it took me so long – a lot has happened in the last six months, including moving to another country and changing jobs. Also, as you can see from several previous posts, I got a little bit carried away with PowerShell.
Another thing I got carried away with is this post: I even had to split it, eventually. That’s why today I present you not one, but two posts at once! Find the next one here: Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 4 — AGPM

Up to this point, we have been working on our servers interactively and locally. This is not considered best practice, because you consume server resources to support an interactive logon session. It can also be inconvenient when you manage a fleet of servers.
In this article we will set up remote-management administrative stations, which we will use to manage the servers in the lab from now on.



Function to download updates from Microsoft Catalog

Last week, I accidentally built a function which downloads update files from the Microsoft Update Catalog. What is its real-life application?
Say you are building an OS installation image from scratch, which will then be deployed en masse. A common approach is to integrate all available updates into such an image, to increase security and decrease the duration of post-installation tasks.
For Windows Server 2012 R2 and earlier OSes you have to download and integrate dozens of updates, but you are not sure which ones are required for your installation image (the Windows Server 2012 R2 ISO image has been rebuilt at least three times). The only way to determine which updates to integrate is to install a fresh computer from an ISO image, connect it to the Internet and check for updates from Microsoft Update.
We can script it as follows:
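Something along these lines, using the Windows Update Agent COM API (adjust the search criteria to your scenario):

$Session      = New-Object -ComObject 'Microsoft.Update.Session'
$Searcher     = $Session.CreateUpdateSearcher()
# Look for updates which are applicable to this machine but not installed yet.
$SearchResult = $Searcher.Search('IsInstalled=0')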

The $SearchResult object will contain a list of updates which your machine requires.

To integrate those updates, you need their .msu or .cab files. Unfortunately, there is no way (at least known to me) to extract download links from the $SearchResult object. Here’s where the new Get-WUFileByID function comes to the rescue:
First, you have to get a list of KB IDs from the $SearchResult object:
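For example (each update object exposes its KB numbers, without the “KB” prefix, in the KBArticleIDs collection):

$KBIDs = $SearchResult.Updates | ForEach-Object -Process {$_.KBArticleIDs}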

Then you just pass each one of these to the function and it will download the files:
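For instance (the name of the KB parameter below is assumed – check the function’s help for the exact one):

# -KBArticleID is an assumption; -SearchCriteria is described right below.
$KBIDs | ForEach-Object -Process {
    Get-WUFileByID -KBArticleID $_ -SearchCriteria 'Windows Server 2012 R2'
}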

In the -SearchCriteria parameter, specify a product for which you need to download updates.

As a result, the updates will be downloaded to the current directory (you can redirect them with the -DestinationDirectory parameter).
Finally, integrate the updates with the help of the Add-WindowsPackage cmdlet.

If you do not want to download any files but wish to receive links only, the -LinksOnly switch is there just for that!

The function is covered by tests and available on GitHub. Should you have any questions or suggestions, feel free to leave them in the comments section or raise an issue.

New features for the DNS Synchronization script

Hi guys,
Sorry for the delay: a huge change has happened in my life recently — I moved from Moscow, Russia to Limassol, Cyprus, where I now live and work.
I am still polishing the next part of the “Command-line Infrastructure” series, and today I am here to present another significant update to the DNS Zones Synchronization script:

  1. Name query modes have been introduced. Right now, there are two modes:
    • “1” – The script will try to retrieve the authoritative nameservers for a zone and then use them to resolve records in that zone.
    • “0” – The script will use the nameservers specified in the configuration file.

    Specify the required mode for each zone in the new column conveniently named “Mode”. Also, instead of specifying “0”, you can leave the column empty.

  2. This has led to another new feature: from now on, the script supports not only IP addresses as NS servers, but DNS names too. Therefore, the “ExtIP” and “IntIP” columns in the configuration file have been renamed to “ExtNS” and “IntNS”.
  3. Even more: you can now leave the external nameserver field (ExtNS) empty. In that case, the script will use the operating system’s default DNS servers.

Here is a table for you to better understand how these new features work together:

Mode | ExtNS type | Result
0    | IP address | Names in the zone are resolved using the ExtNS IP address.
1    | IP address | The ExtNS IP address is used to find the authoritative NSes for the zone; the first authoritative NS is then used to resolve names in the zone.
0    | DNS name   | The default OS DNS servers are used to resolve the ExtNS DNS name to an IP address; that IP address is then used to resolve names in the zone.
1    | DNS name   | The default OS DNS servers are used to resolve the ExtNS DNS name to an IP address; that IP address is then used to find the authoritative NSes for the zone; the first authoritative NS is then used to resolve names in the zone.
0    | Empty      | Names in the zone are resolved using the default OS DNS servers.
1    | Empty      | The default OS DNS servers are used to find the authoritative NSes for the zone; the first authoritative NS is then used to resolve names in the zone.

Note that the query mode does not affect the internal name server at all. Here’s a table for the “IntNS” column as well:

IntNS type | Result
IP address | Requests are sent to this IP address.
DNS name   | The default OS DNS servers are used to resolve the IntNS DNS name to an IP address; requests are then sent to that IP address.
Empty      | A new error is raised (Event ID 52).

As usual, grab the latest release of the script from GitHub!