Split-brain DNS Synchronizer

The latest version of the script is available on GitHub.

Many companies use the same domain name for both internal and external server hosting. When the internal domain name is also the name of an AD DS domain, and internal users must access some of the servers by their external IP addresses, a problem arises: somehow, all these external names must exist in the internal zone, and the data in those internal records must stay consistent with the external ones. There are several possible ways to resolve this situation:

1. Blindly forward all external requests to AD DS controllers. In this case, domain controllers are the primary name servers for the zone.

Pros:

  • Single point of management.
  • No new software on domain controllers required.

Cons:

  • You cannot point internal and external users to different hosts for the same DNS record.
  • Requires allowing external users to access internal servers, which might be impossible due to security policies, and potentially exposes internal infrastructure to an external malicious user.
2. Have two separate sets of DNS servers for internal and external zones.

Pros:

  • You can point internal and external users to different hosts for the same DNS record.
  • External users do not access internal servers.
  • No new software on domain controllers required.

Cons:

  • Two different points of management: DNS records may become outdated.
3. Use DNS policies – the new Windows Server 2016 functionality (a configuration sketch follows this list).

https://blogs.technet.microsoft.com/teamdhcp/2015/08/31/split-brain-dns-in-active-directory-environment-using-dns-policies/

Pros:

  • Single point of management.
  • You can point internal and external users to different hosts for the same DNS record.

Cons:

  • Requires migrating domain controllers to the latest OS version, which is not possible for some organizations.
  • Requires allowing external users to access internal servers, which might be impossible due to security policies, and potentially exposes internal infrastructure to an external malicious user.
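
For illustration, a configuration for option 3 might look roughly like the following sketch, using the Windows Server 2016 DnsServer cmdlets along the lines of the linked Microsoft post (the zone, the record, the IP addresses and the internal interface address of the DNS server are all placeholders):

# A rough sketch of a split-brain configuration with DNS policies.
# Zone, record names, IP addresses and the server interface IP are placeholders.

# A separate zone scope for internal clients
Add-DnsServerZoneScope -ZoneName 'example.com' -Name 'internal'

# The external answer lives in the default scope, the internal one in the 'internal' scope
Add-DnsServerResourceRecord -ZoneName 'example.com' -A -Name 'www' -IPv4Address '203.0.113.10'
Add-DnsServerResourceRecord -ZoneName 'example.com' -A -Name 'www' -IPv4Address '10.0.0.10' -ZoneScope 'internal'

# Queries arriving on the internal interface are answered from the 'internal' scope
Add-DnsServerQueryResolutionPolicy -Name 'SplitBrainPolicy' -Action ALLOW `
    -ServerInterfaceIP 'EQ,10.0.0.2' -ZoneScope 'internal,1' -ZoneName 'example.com'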


Currently, I prefer the second method, with two sets of DNS servers. But in that case we face another challenge: how do we ensure that all DNS records which must point to the same location for both external and internal users stay in sync?
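
To give an idea of what such a synchronization boils down to, here is a minimal sketch for a single A record, using Resolve-DnsName and the DnsServer module (the resolver, the internal DNS server, the zone and the record name are placeholders, and only the simplest case of one record with one address is handled):

# Minimal sketch: keep one internal A record in sync with its external counterpart.
$externalResolver  = '8.8.8.8'                    # any resolver that sees the external zone
$internalDnsServer = 'dc01.corp.example.com'      # placeholder internal DNS server
$zoneName          = 'example.com'
$recordName        = 'www'

# The address the outside world currently gets
$externalIp = (Resolve-DnsName -Name "$recordName.$zoneName" -Type A -Server $externalResolver |
    Where-Object { $_.Type -eq 'A' } | Select-Object -First 1).IPAddress

# The record internal clients currently get
$internalRecord = Get-DnsServerResourceRecord -ComputerName $internalDnsServer `
    -ZoneName $zoneName -Name $recordName -RRType A -ErrorAction SilentlyContinue

if (-not $internalRecord) {
    # No internal record yet - create it
    Add-DnsServerResourceRecordA -ComputerName $internalDnsServer -ZoneName $zoneName `
        -Name $recordName -IPv4Address $externalIp
}
elseif ($internalRecord.RecordData.IPv4Address.IPAddressToString -ne $externalIp) {
    # The internal record is stale - replace its data with the external address
    $newRecord = $internalRecord.Clone()
    $newRecord.RecordData.IPv4Address = [System.Net.IPAddress]::Parse($externalIp)
    Set-DnsServerResourceRecord -ComputerName $internalDnsServer -ZoneName $zoneName `
        -OldInputObject $internalRecord -NewInputObject $newRecord
}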

Active Directory Replication Status Tool has been re-released

Microsoft has just re-released the on-premise tool to monitor AD replication: “Active Directory Replication Status Tool” — it works again!

In case you are not aware of the tool itself and the situation that has been unfolding around it for the last couple of months, here’s a quick explanation:
This tool is a little GUI helper, useful for almost every AD admin – it quickly shows the replication status of an AD forest. It is close to “repadmin”, but easier to use because of the GUI. Microsoft decided to shut down the tool this February and force all its users to migrate to its new cloud monitoring solution, Microsoft Operations Management Suite. To achieve this, MS published an “updated” version of the tool whose only “enhancement” was a countdown timer that prevented the tool from running after February 1st.

The mistake MS made was not realizing that the two tools serve completely different purposes: you use OMS to continuously monitor your IT infrastructure, but that is not how you use the AD Replication Status Tool — you spin it up when you need to check AD replication right now, instantly: when evaluating a new Active Directory forest, troubleshooting replication, etc. Also, Microsoft often doesn’t think about internet-disconnected environments — it is simply technically impossible to use Operations Management Suite in such infrastructures. And, lastly, some organizations just don’t want to, or cannot by law, hand their sensitive data over to MS, especially via the Internet.

All these misunderstandings led to the creation of this thread on UserVoice, where many IT specialists expressed their anger and explained why OMS cannot replace the on-premise tool in many scenarios.

Finally, today, our voices have been heard — the on-premise tool is back and we can all lean back and relax, but…

They just won’t leave us so easily:

Anyway, I want to thank everyone in the Operational Insights team for understanding the IT community’s needs, and Ryan Ries for raising this issue and creating the original thread at UserVoice. And, of course, none of this would have been possible without you — thanks to all the sysadmins who raised their voices and voted, argued and explained their needs to the OMS team.

How to allow users to join their computers into AD domain

Imagine that half of your company’s users are local administrators on their machines. They quite often reinstall their operating systems and ask the IT Service Desk to join the newly installed OSes to the corporate Active Directory domain. One usually has two options to help such users: either come to the user’s workstation and use one’s own credentials to join the PC to the domain, or recreate the workstation’s computer account while allowing the employee’s user account to join the computer to the domain by him-/herself. In the first case, a Service Desk employee needs to walk to the user’s desk, which may be quite exhausting, especially for remote locations; in the second case, the computer’s group membership will probably be lost and the new account must be added to all the appropriate groups manually.
Is it possible to decrease the time and effort put into resolving such requests? Yes, absolutely!

The main trick is to grant the user the following permissions on his or her computer’s account:

  • Validated write to DNS host name
  • Validated write to service principal name
  • List the children of an object
  • Read
  • Read security information
  • List the object access
  • Control access right
  • Delete an object and all of its children
  • Delete
  • Write to the following properties:
    • sAMAccountName
    • displayName
    • description
    • Logon Information
    • Account Restrictions

A user with the abovementioned permissions will be able to join their PC to the AD domain without any assistance from the Service Desk.
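
To illustrate the idea, here is a minimal sketch of how a couple of entries from this list could be granted with the ActiveDirectory module and Set-Acl (the computer WS001 and the user jdoe are placeholders; only the two validated writes and the Account Restrictions property set are shown, not the full list above):

# Minimal sketch: grant a user a few of the listed permissions on a computer account.
Import-Module ActiveDirectory

$computerDn = (Get-ADComputer -Identity 'WS001').DistinguishedName   # placeholder computer
$userSid    = (Get-ADUser -Identity 'jdoe').SID                      # placeholder user

$acl = Get-Acl -Path "AD:\$computerDn"

# Validated write to DNS host name and to service principal name
$validatedWriteGuids = @(
    '72e39547-7b18-11d1-adef-00c04fd8d5cd',   # Validated-DNS-Host-Name
    'f3a64788-5306-11d1-a9c5-0000f80367c1'    # Validated-SPN
)
foreach ($guid in $validatedWriteGuids) {
    $acl.AddAccessRule(
        [System.DirectoryServices.ActiveDirectoryAccessRule]::new(
            $userSid, 'Self', 'Allow', [guid]$guid))
}

# Write to the Account Restrictions property set
$acl.AddAccessRule(
    [System.DirectoryServices.ActiveDirectoryAccessRule]::new(
        $userSid, 'WriteProperty', 'Allow',
        [guid]'4c164200-20c0-11d0-a768-00aa006e0529'))               # User-Account-Restrictions

Set-Acl -Path "AD:\$computerDn" -AclObject $acl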

I made this little script to automate assigning these permissions. Please look into the help section (or use the Get-Help cmdlet) to find out about its syntax and see usage examples.

In case you use Windows 8.1/Server 2012 R2, you might need to install KB 3092002; otherwise, only members of the “Domain Admins” group will be able to execute the script. This is due to a bug in the Set-Acl cmdlet. The fix for Windows 10 is included in the latest RSAT package.

If you are unsure how to use the script or experience any errors, please leave a comment below or contact me directly.

How to simplify SCDPM servers maintenance

Every DPM administrator who has ever tried to perform regular maintenance on a set of SCDPM servers (monthly updates, for example) knows that the only way to do it correctly, i.e. without interrupting backup jobs, is to shut the server down gracefully. To achieve this, you need to disable the active SCDPM agents connected to that server and wait until every running job has completed.

The only problem is that there is no quick way to get a list of every computer with an active agent connected to an SCDPM server. One may say I’m wrong here and there IS a quick way — just use the Get-DPMProductionServer cmdlet with a “ServerProtectionState -eq 'HasDatasourcesProtected'” filter. But don’t forget about cluster nodes: if we protect a clustered resource, but not the cluster nodes themselves, we will not see those nodes in the output of Get-DPMProductionServer (with the abovementioned filter applied, of course). In addition, the output will contain clustered resources, which are useless for our task of stopping every active protection agent.
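
For reference, that “quick way” looks like this when run from the DPM Management Shell (dpm01 is a placeholder server name); as described above, it misses cluster nodes and includes clustered resources:

# The "obvious" filter - incomplete for clustered environments, as explained above.
Get-DPMProductionServer -DPMServerName 'dpm01' |
    Where-Object { $_.ServerProtectionState -eq 'HasDatasourcesProtected' }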

That’s why I want to present you with a solution to quickly get a list of only the real computers with an active SCDPM agent installed. Just pass the names of your SCDPM servers to it (or pass nothing for localhost) and you’ll receive a collection of ProtectedServers in response. You may then pass that collection directly to the Enable-/Disable-DPMProductionServer cmdlets.

“setspn -x” is case-insensitive now

As you probably know, duplicate SPNs cause Kerberos authentication errors in AD DS domains. You may notice them by looking for KRB_AP_ERR_MODIFIED errors and Event ID 11 in the system logs. With Windows Server 2008, Microsoft released a greatly improved version of setspn, which includes the “-x” switch to help you proactively monitor your infrastructure for duplicate SPNs. Combined with the “-f” switch, the setspn output contains duplicate SPNs not only from a single domain, but from the whole AD DS forest. Many companies rely on the result of the “setspn -x -f” command as a data source for their monitoring systems.

Today I found that, all these years, almost nobody noticed that the “setspn -x” command compares SPNs case-sensitively, i.e. the following SPNs are considered different and will not be shown in the output:

  • HOST/SERVERNAME
  • HOST/ServerName

Starting with Windows 10, Microsoft changed the behavior of setspn to be case-insensitive, so from now on every duplicate SPN is displayed in the setspn output, regardless of its case.

While Microsoft asserts that Windows is case-insensitive to SPNs, not every Microsoft product agrees: for example, Shane Young found that you must pay attention to SPNs used by SharePoint accounts.

In conclusion, I suggest that every AD DS administrator check their infrastructure with the setspn tool shipped with Windows 10 at least once. It allows you to find TRULY EVERY duplicate SPN (I did find a couple myself ;).
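
If you cannot run the Windows 10 version of setspn against an older environment, a similar case-insensitive duplicate check can be sketched with the ActiveDirectory module (a rough sketch which queries a single domain, not the whole forest):

# Rough sketch: list SPNs registered on more than one account, ignoring case.
# (Group-Object groups strings case-insensitively by default; lowercasing just makes it explicit.)
Import-Module ActiveDirectory

Get-ADObject -LDAPFilter '(servicePrincipalName=*)' -Properties servicePrincipalName |
    ForEach-Object {
        $owner = $_.DistinguishedName
        foreach ($spn in $_.servicePrincipalName) {
            [pscustomobject]@{ SPN = $spn.ToLowerInvariant(); Owner = $owner }
        }
    } |
    Group-Object -Property SPN |
    Where-Object { ($_.Group.Owner | Sort-Object -Unique).Count -gt 1 } |
    Select-Object -Property Name, @{ Name = 'Owners'; Expression = { $_.Group.Owner | Sort-Object -Unique } }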

How to limit a number of PowerShell Jobs running simultaneously

Recently, I received a task which required me to run a particular command on several thousand servers. Since execution of this command takes some time, it is only logical to run it in parallel, and PowerShell background jobs are the first thing that comes to mind.
But the resources of my PC are limited — it cannot run more than about 100 jobs simultaneously. Unfortunately, PowerShell doesn’t have built-in functionality for limiting the number of concurrent background jobs yet.

Then again, I’m not the first one to get stuck on this problem: the official “Hey, Scripting Guy!” blog has introduced a queue based on .NET Framework objects. But I couldn’t get this solution to work and needed something simpler. After all, all we need is:

  • a loop
  • a counter for how many jobs are active and running
  • a variable allowing the next job in the queue to start

Eventually, I came up with a piece of code like this:
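
In essence, it boils down to something like this minimal sketch (the input file and the payload command are placeholders):

# Keep at most $MaxJobs background jobs running; start the next one as a slot frees up.
$MaxJobs = 100
$servers = Get-Content -Path '.\servers.txt'        # placeholder input list

foreach ($server in $servers) {
    # The counter: wait while the number of running jobs is at the limit
    while ((Get-Job -State Running).Count -ge $MaxJobs) {
        Start-Sleep -Seconds 1
    }

    # A slot is free - start the next job in the queue
    Start-Job -Name $server -ScriptBlock {
        param($ComputerName)
        Test-Connection -ComputerName $ComputerName -Count 1 -Quiet   # placeholder payload
    } -ArgumentList $server | Out-Null
}

# Wait for the tail of the queue and collect the results
Get-Job | Wait-Job | Receive-Job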

While my version is similar to a solution proposed on StackOverflow (which I only found AFTER completing my own version, of course), the SO version suffers from a bug where some items in the queue may be skipped.

While PS jobs are easy to play with, Boe Prox claims that runspaces are much, much faster. Go check out his blog to see how you can use them to accelerate your job queue.

Some other queueing techniques in PowerShell:

AuthenticationSilo claim is not issued

You set up an Active Directory Authentication Policy and use membership in an Authentication Policy Silo as an access control condition. Next, you set up the Authentication Policy Silo to use the abovementioned Authentication Policy for the appropriate principal types. You set the silo into “audit-only” mode.

In that case, the AuthenticationSilo claim is not issued for your security principals.

Why does this happen?

As described in section 3.1.1.11.2.18 (GetAuthSiloClaim) of the Active Directory Technical Specification (MS-ADTS), the AuthenticationSilo claim is issued only when the assigned Authentication Policy Silo is enforced:
/*
  Check if user is assigned to an enforced silo.
*/
assignedSilo := pADPrincipal!msDS-AssignedAuthNPolicySilo
if (assignedSilo = NULL ||
    assignedSilo!msDS-AuthNPolicySiloEnforced = FALSE)
    return NULL
endif

Resolution

I’ve found no option to modify this behavior yet. Just keep it in mind while you are testing your Authentication Policy configuration.
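
As a quick sanity check, you can verify whether a silo is actually enforced, and whether the claim shows up in a token, with something like this rough sketch (MySilo is a placeholder name):

# Is the silo enforced? If Enforce is False (audit-only), the claim will not be issued.
Import-Module ActiveDirectory
Get-ADAuthenticationPolicySilo -Identity 'MySilo' -Properties * |
    Select-Object Name, Enforce

# On a client, list the claims actually present in the current user's token:
whoami /claims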