Huge refactoring of Synchronize-DNSZones.ps1

Today I finished a huge refactoring of my Synchronize-DNSZones script (see more about it here). The main reason to refactor was to introduce support for synchronizing zone-level records. To achieve this efficiently, I converted a huge pile of code into several smaller functions. I also improved code readability to PS 3.0 standards (apparently, it is also StrictMode-compatible now).
I improved error handling by introducing three new error events:
55 – DNS zone creation is not yet supported (and I don't think I'll ever support it).
72 – Function New-DnsRecord failed.
82 – Cannot import the DnsClient PowerShell module.

I finally replaced unapproved verbs in function names with approved ones (see Get-Verb). I shall change the name of the script itself later.
Fixed an issue where Receive-DnsData sometimes returned an empty response.
Fixed typos and incorrect error handling, and slightly enhanced the comments.
I also added the forgotten definition of the $SMTPCc variable.

In the future I plan to add ShouldProcess support (WhatIf).
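
Here is a minimal sketch of what that support could look like. Only the New-DnsRecord name comes from the script itself; the parameters and the action string are assumptions for illustration:

# A minimal sketch of the planned ShouldProcess support, not the actual implementation.
# New-DnsRecord exists in the script; the parameters below are hypothetical.
function New-DnsRecord {
    [CmdletBinding(SupportsShouldProcess=$true)]
    param (
        [Parameter(Mandatory)][string]$Name,
        [Parameter(Mandatory)][string]$ZoneName
    )
    # With SupportsShouldProcess, -WhatIf and -Confirm work automatically:
    # under -WhatIf, ShouldProcess returns $false and the change is skipped.
    if ($PSCmdlet.ShouldProcess("$Name.$ZoneName", 'Create DNS record')) {
        # ...actual record creation goes here...
    }
}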

Pull requests / issues are welcome!

LAPS presentation

Last December I presented on Microsoft LAPS at the Moscow IT Pro UG. The recording of the presentation has finally been processed and is available on YouTube:

Do not forget to enable English subtitles (they are NOT auto-translated :D)

You may find the PPTX file here.

After the presentation, I was asked a couple of questions:
Q: Does the GUI tool send the password in plain text over the network?
A: No, the LDAP connection is encrypted with SASL. But if you are going to access the password attribute from your own scripts/tools, you may accidentally expose the password if you set LDAP_OPT_ENCRYPT = 0 or use ldap_simple_bind without TLS/SSL.
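
For example, reading the attribute via the ActiveDirectory module goes through ADWS, which encrypts the traffic, so a sketch like this should be safe (the computer name 'PC01' is a made-up example):

# ms-Mcs-AdmPwd is the LAPS password attribute, readable only by delegated users.
Import-Module ActiveDirectory
Get-ADComputer -Identity 'PC01' -Properties 'ms-Mcs-AdmPwd' |
    Select-Object -ExpandProperty 'ms-Mcs-AdmPwd'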

Q: Why do we even need those local accounts? Why not disable them completely?
A: In case a machine loses its connection to AD (due to a network configuration change, for example), you may want to bring it back online without disruptive actions such as a reboot, an offline password reset, etc. If you hope that the machine has kept your offline password hash, you may find yourself in a stressful situation when you discover that it, in fact, has not.

Google Glazier

Wow, Google has just released a tool to automate the installation of Microsoft Windows. The tool is written in Python and is called Glazier. OS images are built from text-based YAML config files, which are easy to store in a source control system, so you can review change history and comments, roll back, compare, etc.

All data is transferred via HTTPS, which enables you to deliver it anywhere, utilize CDN caches, etc.

The tool is distributed under the Apache 2.0 license; pull requests are welcome.

Blogs to follow

Today I want to share with you a list of blogs that I personally read and follow. Almost all of them are maintained by recognized IT professionals, many of whom hold Microsoft MVP and similar awards.
I highly recommend adding these blogs to your RSS/news reader.

Of course, I read a significant part of the TechNet/MSDN fleet too, but you surely already know all the important ones, so I'll leave them out for now.

If you have a similar list, share it with us in the comments.

SCDPM in a tiered infrastructure

When a company has a secure infrastructure, there are usually several tiers of resources managed by different administrators (or at least by the same administrators using different user accounts). For example, one may separate sensitive servers, such as PKI Certification Authorities, Hyper-V hosts, or file servers containing PII, and mark them as Tier 1 servers, while marking all other servers as Tier 2. Permissions are then set up so that each tier has its own local administrators, and you may even forbid cross-tier logon completely (except network logon, which is useful and doesn't pose a security threat).

In an ideal world, you would have a separate management solution for each tier. But we all live in the real world, and sometimes it is impossible to find additional resources to support your infrastructure. In that case, it is more appropriate to designate your management servers, including backup servers, as Tier 1: this way, the more secure servers will be able to access resources residing on less secure servers, but not vice versa.

What does this mean for SCDPM? DPM wasn't designed to back up resources from another security tier, but we can bend it to our will.
After you install an SCDPM agent on a server in Tier 2, you must attach it to an SCDPM server in Tier 1. At this step, the user account used to attach the agent must be a local administrator on both the server and the client. In our tiered infrastructure, this is impossible, as one user cannot be a member of the local administrators group on machines from different tiers.
Fear not! We shall grant the required permissions granularly in two steps:

Step 1

Basically, we need to grant the following permissions to the Tier 1 admin on the Tier 2 server's WMI root namespace and propagate them through the tree:

  • Enable
  • MethodExecute
  • RemoteAccess
  • ReadSecurity

You may choose to assign these permissions either via the GUI, using wmimgmt.msc, or via PowerShell.
For the PowerShell way, you may use this fixed version of the Set-WmiNamespaceSecurity.ps1 script. The original, written by Steve Lee, suffers from a bug which prevents setting the inheritance flag and throws an error: “Invoke-WmiMethod : Invalid parameter”.
Run the PowerShell script on the Tier 2 client as follows:
Set-WmiNamespaceSecurity.ps1 -namespace 'root' -operation 'add' -account 'EXAMPLE\tier1-admin' -permissions 'Enable','MethodExecute','RemoteAccess','ReadSecurity' -allowInherit $true

If you are going to set the permissions with the GUI, here's how it should look:

Step 2

This one is counter-intuitive: as we know, the SCDPM server requests the time zone from an agent and saves it in the database. Sometimes, somehow, step 1 is not enough for a remote non-admin user to request the computer's time zone. As a workaround, execute the following WMI query on the SCDPM client: select * from Win32_TimeZone. After that, the remote non-admin user will be able to request Win32_TimeZone instances for some time.
To utilize PowerShell for the task, execute this: Get-WmiObject -Query 'select * from Win32_TimeZone'
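
To verify that the workaround took effect, you can run the same query remotely from the Tier 1 side as the delegated user. A sketch, assuming a hypothetical client name:

# Run from the Tier 1 SCDPM server as EXAMPLE\tier1-admin; 'tier2-client' is a made-up name.
Get-WmiObject -Query 'select * from Win32_TimeZone' -ComputerName 'tier2-client'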

After these two steps, you should be able to place the Tier 2 agent under the protection of the Tier 1 SCDPM server. When you have finished, you may safely remove those permissions by running the following command:
Set-WmiNamespaceSecurity.ps1 -namespace root -operation delete -account 'EXAMPLE\tier1-admin'

Split-brain DNS Synchronizer

The latest version of the script is available on GitHub.

Many companies use the same domain name for both internal and external server hosting. When the internal domain name is the name of an AD DS domain, and internal users must access some of the servers by their external IP addresses, a problem arises: somehow, all these external names must exist in the internal zone, and the information in these internal records must correspond to the external ones. There are several possible solutions to this situation:

1. Blindly forward all external requests to AD DS controllers. In this case, domain controllers are the primary name servers for the zone.

Pros:

  • Single point of management.
  • No new software on domain controllers required.

Cons:

  • You cannot point internal and external users to different hosts for the same DNS record.
  • Requires allowing external users access to internal servers, which might be impossible due to security policies. Possible exposure of internal infrastructure to an external malicious user.
2. Have two separate sets of DNS servers for the internal and external zones.

Pros:

  • You can point internal and external users to different hosts for the same DNS record.
  • External users do not access internal servers.
  • No new software on domain controllers required.

Cons:

  • Two different points of management; DNS records may become outdated.
3. Use DNS policies – the new Windows Server 2016 functionality (a configuration sketch follows after this list).

https://blogs.technet.microsoft.com/teamdhcp/2015/08/31/split-brain-dns-in-active-directory-environment-using-dns-policies/

Pros:

  • Single point of management.
  • You can point internal and external users to different hosts for the same DNS record.

Cons:

  • Requires migrating domain controllers to the latest software, which is not possible for some organizations.
  • Requires allowing external users access to internal servers, which might be impossible due to security policies. Possible exposure of internal infrastructure to an external malicious user.
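
For reference, here is a minimal sketch of what such a policy could look like, based on the DnsServer module cmdlets; the zone name, subnet, and addresses are made-up examples, and the linked article remains the authoritative walkthrough:

# 1. Describe the internal clients and create a zone scope for them.
Add-DnsServerClientSubnet -Name 'InternalSubnet' -IPv4Subnet '10.0.0.0/8'
Add-DnsServerZoneScope -ZoneName 'example.com' -Name 'InternalScope'
# 2. Same record name, different answers: the internal scope gets the internal address.
Add-DnsServerResourceRecord -ZoneName 'example.com' -A -Name 'www' -IPv4Address '10.0.0.5' -ZoneScope 'InternalScope'
Add-DnsServerResourceRecord -ZoneName 'example.com' -A -Name 'www' -IPv4Address '203.0.113.5'
# 3. Route queries coming from the internal subnet to the internal scope.
Add-DnsServerQueryResolutionPolicy -Name 'SplitBrainPolicy' -Action ALLOW -ClientSubnet 'eq,InternalSubnet' -ZoneScope 'InternalScope,1' -ZoneName 'example.com'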


Currently, I prefer the second method, with two sets of DNS servers. But in that case we face another challenge: how do we ensure that all DNS records which must point to the same location for both external and internal users stay in sync?

Active Directory Replication Status Tool has been re-released

Microsoft has just re-released the on-premises tool for monitoring AD replication, the “Active Directory Replication Status Tool”. It works again!

In case you are not aware of the tool itself and the situation that has been unfolding around it for the last couple of months, here's a quick explanation:
This tool is a little GUI helper, useful for almost every AD admin: it quickly shows the replication status of an AD forest. It is close to repadmin, but easier to use because of the GUI. Microsoft decided to shut down the tool this February and force all its users to migrate to its new cloud monitoring solution, Microsoft Operations Management Suite. To achieve this, MS published an “updated” version of the tool whose only “enhancement” was a countdown timer which prevented the tool from running after February 1st.
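
For comparison, the command-line equivalent most admins reach for looks like this:

# Quick forest-wide replication health summary using the built-in repadmin tool.
repadmin /replsummary
# Detailed per-partner replication status, in CSV form for further processing.
repadmin /showrepl * /csv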

The mistake MS made was failing to see that the two tools serve completely different purposes: you use OMS to continuously monitor your IT infrastructure, but that is not how you use the AD Replication Status Tool. You spin it up when you need to check AD replication right now, instantly: when evaluating a new Active Directory forest, troubleshooting replication, etc. Also, Microsoft often doesn't think about internet-disconnected environments: it is simply technically impossible to use Operations Management Suite in such infrastructures. And, finally, some organizations just don't want to, or cannot by law, hand their sensitive data to MS, especially via the Internet.

All these misunderstandings led to the creation of this thread on UserVoice, where many IT specialists expressed their anger and explained the many ways in which OMS cannot replace the on-premises tool.

Finally, today, our voices have been heard: the on-premises tool is back, and we can all lean back and relax, but…

They just won’t leave us so easily:

Anyway, I want to thank everyone on the Operational Insights team for understanding the needs of the IT community, and Ryan Ries for raising this issue and creating the original thread on UserVoice. And, of course, none of this would have been possible without you: thanks to all the sysadmins who raised their voices, voted, argued, and explained their needs to the OMS team.