Things to check when taking over an Active Directory Domain

This blog is a work in progress, but I will be keeping track of things to check when taking over an Active Directory domain. This is not an all-inclusive list, just a collection of common things to check.

ms-DS-MachineAccountQuota

By default in Active Directory this value is set to "10", which allows ANY authenticated user to join up to ten machines to the domain. In the early days of Active Directory maybe there was a need for this, but now it's just a big security risk.

Recommendation: Set this to zero
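A quick sketch with the AD PowerShell module to check the current value and zero it out (run as a Domain Admin in your own environment):

```powershell
# Read the current machine account quota from the domain head
Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
Get-ADObject -Identity $domainDN -Properties ms-DS-MachineAccountQuota |
    Select-Object ms-DS-MachineAccountQuota

# Set it to zero so ordinary users can no longer join machines
Set-ADObject -Identity $domainDN -Replace @{"ms-DS-MachineAccountQuota" = 0}
```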

Protected Users Group

Look at highly privileged accounts and add them to the Protected Users Group if they are compatible with the protections that this group provides.

Recommendation: Add any real user accounts that are at Domain Admin or higher. (Enterprise Admins, Schema Admins, etc.)
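A quick way to review current membership and add an account ("jdoe.admin" is a placeholder; substitute your own privileged accounts):

```powershell
# Review who is currently in the Protected Users group
Import-Module ActiveDirectory
Get-ADGroupMember -Identity "Protected Users" | Select-Object Name, SamAccountName

# Add a privileged account (placeholder name - test compatibility first)
Add-ADGroupMember -Identity "Protected Users" -Members "jdoe.admin"
```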

krbtgt Account Password

Check the last time this password was changed and if it wasn’t changed in the last 180 days, change it.

Recommendation: Make sure you set up a schedule to recycle this password twice a year. This account retains two passwords (the current and the previous one), so each time you rotate it you should change it twice, ideally 24 hours apart. That is a total of four password changes a year.
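Checking the age of the krbtgt password is a one-liner with the AD module; a simple age check might look like this:

```powershell
# When was the krbtgt password last set?
Import-Module ActiveDirectory
$krbtgt = Get-ADUser -Identity krbtgt -Properties PasswordLastSet
$krbtgt | Select-Object Name, PasswordLastSet

# Flag it if it is older than 180 days
$age = (Get-Date) - $krbtgt.PasswordLastSet
if ($age.Days -gt 180) {
    Write-Output "krbtgt password is $($age.Days) days old - rotate it"
}
```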

Verify SSL Certificates Exist on the Domain Controllers

Verify that valid certificates are in place for LDAPS calls over port 636. As part of this process, investigate and try to eliminate any traffic still using unencrypted LDAP on port 389.

Recommendation: Use either an internal PKI or publicly trusted certificates to make sure all LDAP traffic to Active Directory travels over a secure connection.
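A quick sketch for testing LDAPS from a client ("dc01.domain.com" is a placeholder for one of your DCs); the SSL bind will throw if the certificate or handshake is bad:

```powershell
# Is port 636 reachable at all?
Test-NetConnection -ComputerName "dc01.domain.com" -Port 636

# Attempt an actual SSL bind to validate the certificate handshake
Add-Type -AssemblyName System.DirectoryServices.Protocols
$conn = New-Object System.DirectoryServices.Protocols.LdapConnection "dc01.domain.com:636"
$conn.SessionOptions.SecureSocketLayer = $true
$conn.Bind()   # throws an exception if the LDAPS handshake fails
```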

Check if Exchange ExtensionAttributes were installed

If "extensionAttribute1" through "extensionAttribute15" are in the schema of User/Computer/Group objects, check to see if any of them are in use.

Recommendation: If they are in use, document what they are being used for and which ones are free for future use.

Check the Default Computer/User Bind OU

The default container for computer objects is (CN=Computers,DC=DOMAIN,DC=COM). Group Policy cannot be applied to this container, so new objects should be redirected to an OU that can be better managed.

Excerpt from Microsoft here: Redirect users and computers containers – Windows Server | Microsoft Learn

“In a default installation of an Active Directory domain, user, computer, and group accounts are put in CN=objectclass containers instead of a more desirable OU class container. Similarly, the accounts that were created by using earlier-version APIs are put in the CN=Users and CN=computers containers.”

“Some applications require specific security principals to be located in default containers like CN=Users or CN=Computers. Verify that your applications have such dependencies before you move them out of the CN=users and CN=computers containers.”

Recommendation: If these items can be re-directed, redirect them to a different OU and make sure proper OU security is set.
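Redirection is done with the built-in redircmp and redirusr utilities, run from an elevated prompt on a DC (the OU paths below are placeholders for your own OUs):

```powershell
# Redirect newly joined computer objects to a managed OU (placeholder DN)
redircmp "OU=Staging Computers,DC=DOMAIN,DC=COM"

# Redirect newly created user objects to a managed OU (placeholder DN)
redirusr "OU=Staging Users,DC=DOMAIN,DC=COM"
```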

Check Tombstone Lifetime / AD Recycle Bin

If the Active Directory Recycle Bin is not enabled, enable it!

The following PowerShell code can be used to see what the current Tombstone Lifetime is:

Write-Output "Get Tombstone Setting `r"
Import-Module ActiveDirectory

$ADForestconfigurationNamingContext = (Get-ADRootDSE).configurationNamingContext
$DirectoryServicesConfigPartition = Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$ADForestconfigurationNamingContext" -Partition $ADForestconfigurationNamingContext -Properties *
$TombstoneLifetime = $DirectoryServicesConfigPartition.tombstoneLifetime

Write-Output "Active Directory's Tombstone Lifetime is set to $TombstoneLifetime days `r "

Note that if no value is returned, the tombstone lifetime is set to 60 days (the default for AD forests installed with Windows 2003 or older).

Recommendation: If it is not set to 180 days, set it to 180 days. If the AD recycle bin is not enabled, enable it!
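Both changes can be made with the AD module; a sketch (enabling the Recycle Bin is a one-way operation, so confirm you want it first):

```powershell
# Enable the AD Recycle Bin for the forest (this cannot be undone)
Import-Module ActiveDirectory
Enable-ADOptionalFeature -Identity "Recycle Bin Feature" `
    -Scope ForestOrConfigurationSet -Target (Get-ADForest).Name

# Raise the tombstone lifetime to 180 days
$configNC = (Get-ADRootDSE).configurationNamingContext
Set-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,$configNC" `
    -Replace @{tombstoneLifetime = 180}
```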

Check to see if Sysmon is installed on the Domain Controllers

Recommendation: If Sysmon is not installed, work on getting it installed and configured on the Domain Controllers at a bare minimum.

Check Domain Controller Firewall Settings

This may require a conversation with your Information Security team to understand how the firewalls that sit in front of the Domain Controllers are configured. You want to make sure only the bare minimum number of ports are open for client traffic and that admin ports can only be reached by admins.

Recommendation: Review security with the Information Security team, and enable the Windows Firewall on all Domain Controllers, managed with Group Policy. This also acts as an East/West traffic block: if someone gets into one server on the prod network, they don't automatically have RDP access, per se, to a DC on the same network segment. Set up monitoring for all RDP sessions, both successful and failed (including firewall logs). This verifies that anyone RDP'ing to the DCs is legitimate and also helps track down threat actors on the network. One of the first things threat actors will try is to see if they have RDP access to the Domain Controllers, so this is good information to send to the SOC or InfoSec.

Check to see if RPC Ports are restricted

Recommendation: If RPC ports have not been limited on the Domain Controllers, restrict the dynamic range to a smaller block (say 100 or 1,000 ports), and then make the associated changes to the firewall rules.
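One common approach is to shrink the Windows dynamic port range that RPC services allocate from; a sketch (the starting port and range size are example values, and the same range must be mirrored in your firewall rules):

```powershell
# Show the current dynamic port range RPC services allocate from
netsh int ipv4 show dynamicport tcp

# Shrink it to a 1,000-port block starting at 50000 (example values)
netsh int ipv4 set dynamicport tcp start=50000 num=1000
```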

Check Time Settings on the Domain Controller running PDC Emulator

Many people know that the clients that talk to the Domain Controllers have to have the correct time, but it is just as important that your Domain Controllers themselves are pulling from a correct time source.

Recommendation: Make sure the Domain Controllers, specifically the one holding the PDC Emulator FSMO role, are pulling time from a trusted source. It might also be worth writing a script to monitor this.
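The built-in w32tm utility covers both the check and the fix; a sketch (the pool.ntp.org servers are example values, substitute your trusted source):

```powershell
# Check where this DC currently gets its time
w32tm /query /source
w32tm /query /status

# On the PDC Emulator, point at an external NTP source (example servers)
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
```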

Verify FSMO Role Holder(s), Global Catalog Servers, & Backups

Verify who is running the FSMO roles for your Domain(s). Make all DCs a Global Catalog if you have a single-domain forest. Verify how AD is being backed up.

Recommendation: Depending on your specific situation you may not be able to run all FSMO roles on one DC. In my jobs I have been able to. This allows you to target this DC as the DC to be backed up, snapshotted, etc. If you are running a single-domain forest, make sure all DCs are a global catalog.
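The FSMO role holders and Global Catalog status can be pulled in a few lines:

```powershell
# FSMO role holders for the forest and the domain
Import-Module ActiveDirectory
Get-ADForest | Select-Object SchemaMaster, DomainNamingMaster
Get-ADDomain | Select-Object PDCEmulator, RIDMaster, InfrastructureMaster

# Which DCs are Global Catalogs?
Get-ADDomainController -Filter * | Select-Object Name, IsGlobalCatalog
```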

Check Trusts

Check to see if there are any trusts configured for the domain.

Recommendation: If there are any trusts, figure out if they are still needed, and make sure there is documentation on why each trust was set up and when it can be removed.
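Enumerating trusts is a single cmdlet:

```powershell
# List all trusts configured for the domain
Import-Module ActiveDirectory
Get-ADTrust -Filter * |
    Select-Object Name, Direction, TrustType, ForestTransitive
```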

Check Sites & Services for IP Configuration

Check AD Sites & Services for the configuration of IP ranges. Make yourself familiar with how this is set up and why it is set up the way it is.

Recommendation: Take note of whether Sites & Services is being used. If it is, understand the network ranges and why it is configured the way it is. If site link priority is being given, understand why.
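The site, subnet, and site link configuration can be dumped for your notes like this:

```powershell
# Sites and the subnets mapped to them
Import-Module ActiveDirectory
Get-ADReplicationSite -Filter * | Select-Object Name
Get-ADReplicationSubnet -Filter * | Select-Object Name, Site

# Site links with their costs and replication intervals
Get-ADReplicationSiteLink -Filter * |
    Select-Object Name, Cost, ReplicationFrequencyInMinutes
```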

Site Statistics for 2023

I had a lot of blog posts I wanted to publish this year that didn't make it out. This was a year of relaxation; I took a lot of time for myself. Here's to hoping I can spend more time getting more quirky issues posted in 2024.

I had 29,000 visitors to my blog this year! That's a lot of people from all over the world swinging by to say "Hi". Hopefully you all found something useful here.

As far as days of the week go, Tuesdays are the day this site got the most hits this year.

Top Visits by Country this year:

  1. United States
  2. Germany
  3. United Kingdom
  4. Canada
  5. India
  6. France
  7. Australia
  8. Netherlands
  9. Sweden
  10. Spain

Automating Windows Server Patching with SCCM and Custom PowerShell Scripts

One of my major tasks when I started my new job was to automate our Windows Server patching so I wouldn’t have to be up two nights every month to deal with patching.

All of our Windows Servers have SCCM clients on them, and we manage the patching software push and the maintenance windows for the servers to reboot with MECM. These servers are members of AD groups that correlate to Device Collections in MECM. This is important because the same AD groups are used by the custom scripts below.

This blog is not going to go over approving and downloading patches, setting maintenance windows, or deployment settings.

Things Done Before Automated Patch Run:

  1. We set the Installation Deadline of patches to 2 hours before the Maintenance window (reboot).
  2. We run a custom script on all boxes through SCCM before patching to clear the CCM Cache folder. This helps prevent running out of storage space in Windows. Our OS disks are not very big.
  3. Set the Maintenance Schedule in Operations Manager (SCOM), so the boxes will not report issues during the patch window.
  4. Set the custom scripts on our scripting server to force patch installs and check services.
  5. Set time aside to manually patch the boxes that cannot be automated (e.g., Domain Controllers).
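For step 2, the cache clear can be sketched with the SCCM client's COM interface (run locally on each box; this assumes the standard client COM object is registered):

```powershell
# Clear the CCM cache via the SCCM client's COM interface
$ccm = New-Object -ComObject UIResource.UIResourceMgr
$cache = $ccm.GetCacheInfo()
foreach ($element in $cache.GetCacheElements()) {
    # Delete each cached package/update to free OS disk space
    $cache.DeleteCacheElement($element.CacheElementId)
}
```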

Custom Scripts For Patching:

These two scripts are scheduled in Task Scheduler and use an AD service account that has local admin on the servers. (This is not meant for Domain Controllers; DA rights would be needed there.)

A template for the script we use to force servers to install patches is located here:

Windows-Servers/Force_MECM_Patch_Install.ps1 at main · paularquette/Windows-Servers (github.com)

This script basically makes sure that the servers start installing patches at the time we schedule it in Task Scheduler on our scripting server (typically a few minutes after the installation deadline mentioned above). This requires WinRM ports to be open from the scripting server to the server endpoints being updated.

A template for the script we use to monitor the patch deployment, force MECM check-in and auto start services is here:

Windows-Servers/MECM_Patch_Monitoring.ps1 at main · paularquette/Windows-Servers (github.com)

We schedule this script to run about 20 minutes after the maintenance window starts and have it continue to run every 20 minutes until the maintenance window completes. This generates a LOT of e-mails; I have not yet edited the script to just write to a log file instead, but that would be easy enough to do. The script reports back the last boot time, plus any attempts to force an SCCM check-in or restart services that failed to start.

By default, if the box restarted more than 25 minutes ago but less than 4 hours ago, the script forces an MECM check-in so the patch compliance report the next morning is accurate.

By default, if the box restarted more than 25 minutes ago but less than 4 hours ago, the script also attempts to start any Automatic services that are not running. Services can be whitelisted so the script does not act on them.

Screenshot of Last Boot Report:

Screenshot of Services Not Started Report:

Mac Sonoma 14.2.1 SMB issues with Synology NAS

Over the holiday break I wiped my personal MacBook (still an Intel Mac) and upgraded it to the latest OS, currently Sonoma 14.2.1.

Upon trying to set up a Time Machine backup location on my Synology NAS, I noticed I could no longer authenticate to it.

In Finder, I was using Connect to Server, smb://ipaddress

It would recognize something was there and prompt for credentials, but it would not accept the correct username and password I was entering; it just kept prompting.

Edit:

This was a freshly wiped Mac with no settings on it at all, setup as a new Mac. The DSM version I was running was DSM 7.1-42661 Update 4. I have a DS920+.

I have since updated my DSM version to the latest (DSM 7.2.1-69057 Update 1). I can no longer replicate the issue, though I'm not sure whether, once you authenticate successfully the first time, the issue simply can't be replicated anymore. Very weird.

Resolution:

I found in another article somewhere on the web a suggestion to capitalize a letter in the username. I thought this sounded funny, but it actually worked. I capitalized the first letter of my username, and then it allowed me to connect.

Creating Quick E-mail Reports From PowerShell

One of the tasks you may need or want to do as a sysadmin is kick out an e-mail with data that you have gathered in PowerShell.

I find myself doing this a lot in my day job as more tasks get automated but you still need reporting on what is happening.

I have created an E-mail Report Template that is the basis for what I use when I need to kick out e-mail reports.

You can find this template on my github here: https://github.com/paularquette/Active-Directory/blob/main/Email_Report_Template

How To Use The Script:

If you want to iterate through data, I'm using $Var3 in this template for that purpose. You can see under Global Variables & Input Files that I'm creating an empty array for $Var3 ($Var3 = @()).

To see this script work, you just need data to iterate through, added to a PSObject as shown in the commented-out line under "Start Script Programming".

For Example:

ForEach ($i in $computers)
{
     $computername = $i.name
     $OULocation = $i.ou
     $Var3 += New-Object PSObject -Property @{ComputerName=$computername;Location=$OULocation}
}

You can then set up your sorting of the variable for the e-mail script, which you will see commented out in the Email Section.

For Example:

$emailResponse = $Var3 | Select-Object ComputerName,Location | Sort-Object ComputerName | ConvertTo-Html -Head $style
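From there, sending the HTML body is one cmdlet call; a minimal sketch (the SMTP server and addresses below are placeholders):

```powershell
# Send the HTML report (server and addresses are placeholder values)
Send-MailMessage -SmtpServer "smtp.domain.com" `
    -From "reports@domain.com" -To "admins@domain.com" `
    -Subject "Patch Report" `
    -Body ($emailResponse | Out-String) -BodyAsHtml
```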

Screenshot of a Patch Report E-mail I Use:

WordPress stops working after Ubuntu 20.04LTS upgrade to 22.04LTS

I just recently upgraded my blog from Ubuntu 20.04 LTS to 22.04 LTS and my main blog WordPress site would not load.

The PHP code was showing in the browser rather than being processed by PHP.

<?php
/**
 * Front to the WordPress application. This file doesn't do anything, but loads
 * wp-blog-header.php which does and tells WordPress to load the theme.
...

First, make sure that the libapache2-mod-php8.1 module is installed:

sudo apt install libapache2-mod-php8.1

Next browse to /etc/apache2/mods-enabled, and do an ls:

You will probably see two old PHP symlinks that no longer point anywhere. If you look in the mods-available directory, you will see their targets don't exist.

We need to create two new symlinks for PHP 8.1. Do an ls in mods-available to make sure you see the two php8.1 files (php8.1.load and php8.1.conf).

Create the symlinks in mods-enabled pointing to the two files in mods-available, then restart apache2. (I did this from within the mods-enabled directory.)
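The symlink step looks like this (assuming the module files are named php8.1.load and php8.1.conf, which is what libapache2-mod-php8.1 ships; a2enmod is the idiomatic one-liner that does the same thing):

```shell
cd /etc/apache2/mods-enabled
sudo ln -s ../mods-available/php8.1.load .
sudo ln -s ../mods-available/php8.1.conf .

# Or simply let Apache's helper create the symlinks for you:
sudo a2enmod php8.1
```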

sudo service apache2 restart

If this works, feel free to delete the old symlinks with the "rm" command.

Happy Upgrading!

vcenter server ext4-fs error aborted journal

I ran into an issue with the vCSA appliance running version 8 in my home lab. I was able to fix it by using Option 1 of this article from VMware.

https://kb.vmware.com/s/article/2149838

  1. Reboot the virtual appliance, and immediately after the OS starts, press e to open the GNU GRUB edit menu.
  2. Locate the line that begins with the word linux.
  3. Option 1: At the end of the line, add fsck.repair=yes, then reboot the appliance. This forces the default filesystem check to auto-resolve any issues and does not require emergency mode.

SQL Server Failover Cluster Instance Install Failed Permissions

I’ve seen a lot of posts out there for the error message we had but no actual solutions for our particular issue.

If you are attempting to install a new instance of SQL Server on your failover cluster, make sure you are not installing into the root folder of C:\ClusterStorage\<symlink>. You must create another directory underneath it (we disabled inheritance too).

You probably landed here due to Googling this:

The following error has occurred:

Updating permission setting for folder 'C:\ClusterStorage\<symlink>\Data\MSSQL13.<DBNAME>\MSSQL\DATA' failed. The folder permission setting were supposed to be set to 'D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-xxxxxxx)'.

Click ‘Retry’ to retry the failed action or click ‘Cancel’ to cancel this action and continue setup.

Resolution:

The resolution to this problem for us was super simple. We were not running the installer "As Administrator". If you are running into this issue, try running the installer As Administrator.

Check to see if ExtensionAttributes are in use for Active Directory objects

If you are taking over an Active Directory or just trying to run cleanup on one that you currently manage, one of the tasks you will probably want to perform is to check to see which of the built-in schema ExtensionAttributes are in use.

If you don't have extensionAttribute1-15 in your on-premises Active Directory, you will need to extend your schema with the Exchange Server schema extensions to get them.

The script below has also been added to my github.

https://github.com/paularquette/Active-Directory

#Check Computers
$i = 1
while ($i -lt 16)
{
    $exAtrib = "extensionAttribute$i"
    Write-Host "Checking Computers for $exAtrib"
    $inUse = Get-ADComputer -Properties $exAtrib -Filter "$exAtrib -like '*'" | Select-Object Name,$exAtrib

    if ($inUse)
    {
        Write-Host "Computer Check - $exAtrib is in use"
    } else {
        Write-Host "Computer Check - $exAtrib is NOT in use"
    }

    $i = $i + 1
}
############################################
#Check Groups
$i = 1
while ($i -lt 16)
{
    $exAtrib = "extensionAttribute$i"
    Write-Host "Checking Groups for $exAtrib"
    $inUse = Get-ADGroup -Properties $exAtrib -Filter "$exAtrib -like '*'" | Select-Object Name,$exAtrib

    if ($inUse)
    {
        Write-Host "Group Check - $exAtrib is in use"
    } else {
        Write-Host "Group Check - $exAtrib is NOT in use"
    }

    $i = $i + 1
}
############################################
#Check Users
$i = 1
while ($i -lt 16)
{
    $exAtrib = "extensionAttribute$i"
    Write-Host "Checking Users for $exAtrib"
    $inUse = Get-ADUser -Properties $exAtrib -Filter "$exAtrib -like '*'" | Select-Object Name,$exAtrib

    if ($inUse)
    {
        Write-Host "User Check - $exAtrib is in use"
    } else {
        Write-Host "User Check - $exAtrib is NOT in use"
    }

    $i = $i + 1
}

Mac SMB can’t connect to Server 2016 (File Server) Microsoft Failover Clustering Services

We ran into an issue with Macs connecting to our file services while attempting to upgrade a Microsoft Failover Clustering file services deployment from Server 2012R2.

Current Environment:

Two 2012R2 Servers/Two 2016 Servers, with the following Roles/Features Installed:

ROLES – File and Storage Services:

  • File Server
  • DFS Namespaces
  • DFS Replication
  • File Server Resource Manager

FEATURES

  • Failover Clustering

Testing:

Two Virtual Machines running 2012R2, with Microsoft Clustering Services, with multiple File Server Roles. Everything works with the Macs connecting to these Clustered File Services while running 2012R2. The cluster level is also 2012R2.

However, after adding a 2016 Server into this Microsoft Cluster, and failing over one of the file server roles to it, the Macs can no longer connect to that file server. They receive a message stating:

There was a problem connecting to the server “”. Check the server name or IP address, and then try again. If you continue to have problems, contact your system administrator.

If you migrate the file server role back to a server running 2012R2 the Mac can once again connect.

Resolution:

I plan to come back to this blog to post a more detailed writeup. I was passed a lot of information that I haven't seen firsthand, but I will try my best to explain what I believe is happening.

When a 2016 server is added to a 2012R2-only cluster, the cluster moves into "Mixed Mode" to allow both operating systems to function. Microsoft states you should not stay in this mode very long; from what I've seen thrown around, no more than 4 weeks.

This is secondhand from packet captures, but when a Mac tries to connect to the file services running on a 2016 server while in mixed mode, it supposedly connects on SMB 3.1.1, but then something in the network stack wants to downgrade the connection to SMB 2.0; the Macs cannot follow this and therefore cannot connect to the server.

However, after removing the 2012R2 servers, and then upgrading the Cluster Level to 2016, the Macs can then connect again.

I’m still doing some troubleshooting and this post will be updated.