All posts by Kirill Nikolaev

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Previous part — Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Introduction

In this post we will perform two configuration tasks on our Active Directory Domain Services instance: we’ll define security tiers, which will later become the cornerstone of our privilege-delegation principles, and we’ll tune domain-joining parameters. There is also a quick tweak for the DNS service.


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 2 — Post-Config

Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Introduction

Up to this day, Active Directory Domain Services (AD DS) has been the core of Windows infrastructure. With each release of Windows Server, AD DS receives new features while keeping great backward compatibility, and Windows Server 2016 brings its own set of enhancements to AD DS.

In this post we shall install the cornerstone of our future infrastructure: a highly-available AD DS instance consisting of two domain controllers. Our AD DS layout is going to be quite simple: two writable domain controllers in a single site.
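As a quick preview, here is a minimal sketch of what such a deployment boils down to (the domain and credential values are placeholders, not the ones used in the series; the post itself covers prerequisites, DNS options and validation in detail):

# First domain controller: create a new forest (prompts for the DSRM password).
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName 'corp.example.com' -DomainNetbiosName 'CORP' -InstallDns

# Second domain controller: add it to the existing domain for high availability.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName 'corp.example.com' -InstallDns -Credential (Get-Credential 'CORP\Administrator')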


Continue reading Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Building Highly-Available Windows Infrastructure: Command-line Style

Several months ago Windows Server 2016 was released. With this release Microsoft made two significant changes to the Windows Server installation options:

  • Nano Server was introduced
  • Server with a GUI now includes desktop experience features (it is even called “Server with Desktop Experience”), and there is no supported way to remove them. This should push IT administrators to deploy Server Core more broadly.

It looks like now is a good time to stop thinking about Windows Server as a GUI-based system and pivot your management approach toward the command line.
This post is the first in a series on how to build your own highly-available Windows infrastructure using just PowerShell and some other command-line tools. I plan to discuss the following components:

  • Active Directory Domain Services,
  • Active Directory Certificate Services,
  • Desired State Configuration,
  • Key Management Services,
  • DHCP,
  • SCDPM,
  • SCCM,
  • Exchange Server,
  • And, possibly, S4B Server, as well.

I am not able to cover Hyper-V hosts yet: all my infrastructure is purely virtual, and the host machine, sadly, runs Windows Server 2012 R2, which I currently cannot upgrade.

The PowerShell code will not use any hardcoded values. Instead, at the beginning of each post, I shall include a set of variables which will allow you to easily recreate the infrastructure in your environment without any changes to the code.
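To illustrate the idea, such a variable block might look roughly like this (the names and values here are made up for this example and are not the ones used in the series):

# Hypothetical per-post variable block; adjust the values to your environment.
$DomainName    = 'corp.example.com'
$DomainNetbios = 'CORP'
$DCNames       = @('DC01', 'DC02')
$SiteName      = 'Default-First-Site-Name'
$DSRMPassword  = Read-Host -AsSecureString -Prompt 'DSRM password'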

The first part is already here! Building Highly-Available Windows Infrastructure: Command-line Style. AD DS. Part 1 — Installation

Huge refactoring of Synchronize-DNSZones.ps1

Today I finished a huge refactoring of my Synchronize-DNSZones script (see more about it here). The main reason to refactor was to introduce support for synchronizing zone-level records. To achieve this efficiently, I converted a huge pile of code into several smaller functions. I also improved code readability to PowerShell 3.0 standards (it is now StrictMode-compatible as well).
I improved error handling by introducing three new error events:

  • 55 – DNS-zone creation is not yet supported (and I don’t think I’ll ever support it).
  • 72 – Function New-DnsRecord failed.
  • 82 – Cannot import the DNSClient PowerShell module.

I finally replaced the unapproved verbs in function names with approved ones (see Get-Verb). I shall rename the script itself later.
I fixed an issue where Receive-DnsData sometimes returns an empty response.
I fixed typos and incorrect error handling and slightly enhanced the comments.
I also added the forgotten definition of the $SMTPCc variable.

In the future I plan to add ShouldProcess support (WhatIf).
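For reference, the ShouldProcess pattern looks roughly like this (an illustrative sketch, not the script’s actual code; the function and parameter names are placeholders):

# A cmdlet-style function that honors -WhatIf and -Confirm via SupportsShouldProcess.
function New-DnsRecordSafe {
    [CmdletBinding(SupportsShouldProcess)]
    param(
        [Parameter(Mandatory)] [string] $ZoneName,
        [Parameter(Mandatory)] [string] $RecordName
    )
    if ($PSCmdlet.ShouldProcess("$RecordName.$ZoneName", 'Create DNS record')) {
        # The actual record creation would go here; it is skipped when -WhatIf is specified.
        Write-Verbose "Creating $RecordName in $ZoneName"
    }
}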

Pull-requests / issues are welcome!

LAPS presentation

Last December I presented Microsoft LAPS at the Moscow IT Pro UG. The recording of the presentation has finally been processed and is available on YouTube.

Do not forget to enable English subtitles (they are NOT auto-translated :D)

You may find the PPTX file here.

After the presentation, I was asked a couple of questions:
Q: Does the GUI tool send the password in plain text over the network?
A: No, the LDAP connection is encrypted with SASL. But if you are going to access the password attribute in your own scripts or tools, you may accidentally expose the password if you set LDAP_OPT_ENCRYPT = 0 or use ldap_simple_bind without TLS/SSL.
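For comparison, a safer way to read the password from PowerShell is the official AdmPwd.PS module (a hedged example; the computer name is a placeholder and the module must be installed on the admin workstation):

# Reads the LAPS-managed local administrator password through the AdmPwd.PS cmdlet.
Import-Module AdmPwd.PS
Get-AdmPwdPassword -ComputerName 'PC-001' | Select-Object ComputerName, Password, ExpirationTimestamp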

Q: Why do we even need those local accounts? Why not disable them completely?
A: If a machine has lost its connection to AD (due to a network configuration change, for example), you may want to bring it back online without disruptive actions such as a reboot, an offline password reset, etc. And if you are counting on the machine having cached your password hash for offline logon, you may find yourself in a stressful situation when you discover that it, in fact, has not.

Google Glazier

WOW, Google has just released a tool to automate the installation of Microsoft Windows. The tool is written in Python and is called Glazier. OS images are built from text-based YAML config files which are easy to store in a source control system, where you can easily review change history and comments, roll back, compare, etc.

All data is transferred over HTTPS, which enables you to deliver it anywhere, utilize CDN caches, etc.

The tool is distributed under the Apache 2.0 license, and pull requests are welcome.

Blogs to follow

Today I want to share with you a list of blogs which I personally read and follow. Almost all of them are maintained by recognized individual IT professionals, many of whom hold Microsoft MVP and similar awards.
I highly recommend adding these blogs to your RSS/news reader.

Of course, I also read a significant part of the TechNet/MSDN fleet, but you surely already know all the important ones, so I’ll leave them out for now.

If you have a similar list, share it with us in the comments.

SCDPM in a tiered infrastructure

When a company has a secure infrastructure, there are usually several tiers of resources managed by different administrators (or, at least, by the same administrators using different user accounts). For example, one may separate sensitive servers, such as PKI Certification Authorities, Hyper-V hosts or file servers containing PII, and mark them as Tier 1 servers, while marking all other servers as Tier 2. Permissions are then set up so that each tier has its own local administrators, and you may even forbid cross-tier logon completely (except network logon: network logon does not leave reusable credentials on the remote host, so it is useful and far less of a security threat).

In an ideal world you would have a separate management solution for each tier. But we all live in the real world, and sometimes it is impossible to find additional resources to support your infrastructure. In that case, it is more appropriate to designate your management servers, including the backup ones, as Tier 1: this way the more secure servers are able to access resources residing on less secure servers, but not vice versa.

What does this mean for SCDPM? DPM was not designed to back up resources from another security tier, but we can bend it to our will.
After you install an SCDPM agent on a server in Tier 2, you must attach it to an SCDPM server in Tier 1. At this step, the user account used to attach the agent must be a local administrator on both the server and the client. In our tiered infrastructure this is impossible, as one account cannot be a member of the local Administrators group on machines from different tiers.
Fear not! We shall grant the required permissions granularly in two steps:

Step 1

Basically, we need to allow the following permissions for the Tier 1 admin on the Tier 2 server’s WMI root namespace and propagate them through the tree:

  • Enable
  • MethodExecute
  • RemoteAccess
  • ReadSecurity

You may choose to assign these permissions either via the GUI, using wmimgmt.msc, or with PowerShell.
For the PowerShell way, you may use this fixed version of the Set-WmiNamespaceSecurity.ps1 script. The original, written by Steve Lee, suffers from a bug which prevents setting the inheritance flag and throws an error: “Invoke-WmiMethod : Invalid parameter”.
Run the script on the Tier 2 client as follows:
Set-WmiNamespaceSecurity.ps1 -namespace 'root' -operation 'add' -account 'EXAMPLE\tier1-admin' -permissions 'Enable','MethodExecute','RemoteAccess','ReadSecurity' -allowInherit $true

If you are going to set the permissions with the GUI, here is how it should look:

Step 2

This one is counter-intuitive: as we know, the SCDPM server requests the time zone from an agent and saves it in its database. Sometimes, somehow, Step 1 is not enough for a remote non-admin user to query the computer’s time zone. As a workaround, execute the following WMI query on the SCDPM client: select * from Win32_TimeZone. After that, the remote non-admin user will be able to query Win32_TimeZone instances for some time.
To utilize PowerShell for the task, execute this: Get-WmiObject -Query 'select * from Win32_TimeZone'
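To verify that the workaround (and the Step 1 permissions) actually took effect, you can query the client remotely from the Tier 1 SCDPM server under the tier1-admin account (the computer name below is a placeholder):

# Should return the Win32_TimeZone instance of the Tier 2 client without local admin rights there.
Get-WmiObject -ComputerName 'tier2-client01.example.com' -Namespace 'root\cimv2' -Query 'select * from Win32_TimeZone'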

After these two steps, you should be able to put the Tier 2 agent under the protection of the Tier 1 SCDPM server. When you have finished, you may safely remove those permissions by running the following command:
Set-WmiNamespaceSecurity.ps1 -namespace root -operation delete -account 'EXAMPLE\tier1-admin'

Split-brain DNS Synchronizer

Many companies use the same domain name for hosting both internal and external servers. When the internal domain name is the name of an AD DS domain, and internal users must access some of the servers by their external IP addresses, a problem arises: all of these external names must somehow exist in the internal zone, and the data in those internal records must match the external ones. There are several possible solutions to this situation:

1. Blindly forward all external requests to the AD DS domain controllers. In this case, the domain controllers are the primary name servers for the zone.

Pros:

  • Single point of management.
  • No new software on domain controllers required.

Cons:

  • You cannot point internal and external users to different hosts for the same DNS-record.
  • Requires allowing external users to access internal servers, which might be impossible due to security policies, and possibly exposes the internal infrastructure to external malicious users.
2. Have two separate sets of DNS servers for the internal and external zones.

Pros:

  • You can point internal and external users to different hosts for the same DNS-record.
  • External users do not access internal servers.
  • No new software on domain controllers required.

Cons:

  • Two different points of management: DNS records may become outdated.
3. Use DNS policies, the new Windows Server 2016 functionality (a configuration sketch follows after this list).

https://blogs.technet.microsoft.com/teamdhcp/2015/08/31/split-brain-dns-in-active-directory-environment-using-dns-policies/

Pros:

  • Single point of management.
  • You can point internal and external users to different hosts for the same DNS-record.

Cons:

  • Requires migrating domain controllers to the latest OS version, which is not possible for some organizations.
  • Requires allowing external users to access internal servers, which might be impossible due to security policies, and possibly exposes the internal infrastructure to external malicious users.
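To give an idea of what option 3 involves, here is a hedged sketch of a DNS-policy configuration for a single zone on a Windows Server 2016 DNS server (the zone name, subnet and addresses are placeholders):

# Internal clients, identified by their subnet, are answered from the "internal" zone scope;
# everyone else gets the records from the default zone scope.
Add-DnsServerClientSubnet -Name 'InternalSubnet' -IPv4Subnet '10.0.0.0/8'
Add-DnsServerZoneScope -ZoneName 'example.com' -Name 'internal'
Add-DnsServerResourceRecord -ZoneName 'example.com' -A -Name 'www' -IPv4Address '10.0.0.80' -ZoneScope 'internal'
Add-DnsServerQueryResolutionPolicy -Name 'SplitBrainPolicy' -Action ALLOW -ClientSubnet 'eq,InternalSubnet' -ZoneScope 'internal,1' -ZoneName 'example.com'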

 

Currently, I prefer the second method, with two sets of DNS servers. But in that case we face another challenge: how do we ensure that all DNS records which must point to the same location for both external and internal users stay in sync?
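Conceptually, a check for a single record could look like this (this is only an illustration of the problem, not the Synchronize-DNSZones script itself; the resolver addresses are placeholders):

# Compare what the internal and external name servers return for the same name.
$name = 'www.example.com'
$internal = (Resolve-DnsName -Name $name -Server '10.0.0.10' -Type A).IPAddress
$external = (Resolve-DnsName -Name $name -Server '203.0.113.53' -Type A).IPAddress
if (Compare-Object $internal $external) { Write-Warning "$name differs between the internal and external zones." }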
Continue reading Split-brain DNS Synchronizer