I recently came across a need to review the Storage Policies in use within a vCenter environment and how many objects or virtual machines were using each policy.
I saw this as an excuse to refresh my PowerShell skills and wrote a quick function.
Source code can be found on my GitHub, here. Check there for any updates but below is the code at the time of writing.
function Get-vSANSPSummary {
<#
.SYNOPSIS
Export vSAN Storage Policy Information.
.DESCRIPTION
Export vSAN Storage Policies from vCenter, showing FTT & Stripe information and the number of VMs using each.
.PARAMETER ExportFilePath
Path to export the output HTML file to.
.NOTES
Tags: VMware, vCenter, SPBM, PowerCLI, API
Author: Stephan McTighe
Website: stephanmctighe.com
.EXAMPLE
PS C:\> Get-vSANSPSummary -ExportFilePath "C:\report\vSAN-Storage-Policy-Summary.html"
Outputs an HTML file containing the Storage Policy information for vSAN Storage Policies to a specified location.
#>
#Requires -Modules VMware.VimAutomation.Storage
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[string] $ExportFilePath)
Begin {}
Process {
try {
$Output = @()
$vSANstoragepolicies = Get-SpbmStoragePolicy -Namespace "VSAN"
$SPBM = $vSANstoragepolicies | Select-Object Name, AnyOfRuleSets
ForEach ($SP in $SPBM) {
$Attributes = @( $SP | ForEach-Object { $_.AnyOfRuleSets } | Select-Object -ExpandProperty AllofRules)
$EntityConfig = Get-SpbmEntityConfiguration -StoragePolicy $SP.Name
$object = [PSCustomObject]@{
SPName = $SP.Name
ObjectCount = $EntityConfig.Count
VMCount = ($EntityConfig | Where-Object { $_.Entity -notlike "hard*" }).Count
RAID = $attributes | Where-Object { $_.Capability -like "*VSAN.replicaPreference*" } | Select-Object -ExpandProperty Value
FTT = $attributes | Where-Object { $_.Capability -like "*VSAN.hostFailuresToTolerate*" } | Select-Object -ExpandProperty Value
SubFTT = $attributes | Where-Object { $_.Capability -like "*VSAN.subFailuresToTolerate*" } | Select-Object -ExpandProperty Value
Stripes = $attributes | Where-Object { $_.Capability -like "*VSAN.stripeWidth*" } | Select-Object -ExpandProperty Value
ForceProvision = $attributes | Where-Object { $_.Capability -like "*VSAN.forceProvisioning*" } | Select-Object -ExpandProperty Value
StorageType = $attributes | Where-Object { $_.Capability -like "*VSAN.storageType*" } | Select-Object -ExpandProperty Value
IOPSLimit = $attributes | Where-Object { $_.Capability -like "*VSAN.iopsLimit*" } | Select-Object -ExpandProperty Value
}
$Output += $object
}
$Output | ConvertTo-Html -Property SPName, VMCount, ObjectCount, RAID, FTT, SubFTT, Stripes, ForceProvision, StorageType, IOPSLimit | Out-File $ExportFilePath
}
catch {
Write-Host "An error occurred!" -ForegroundColor Red
Write-Host $_ -ForegroundColor Red
}
}
}
The output is currently a basic HTML table, but you could change this to add some ‘HTMLness’ or output to CSV instead.
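If CSV is more useful to you, a minimal tweak (a sketch, assuming the rest of the function stays as-is) is to swap the ConvertTo-Html line in the Process block for an Export-Csv:
$Output | Export-Csv -Path $ExportFilePath -NoTypeInformation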
As always, thanks for reading and I hope this has been useful to someone.
If you like my content, consider following me on Twitter so you don’t miss out!
Ever since starting out on my learning journey with Packer and writing my ‘Getting Started’ blog series, I have not stopped learning and developing my templates. I have also learnt a lot from other members of the tech community, such as @mpoore, as well as discovering this repository – vmware-samples. I really wish I had found this sooner than I did, as it’s a great resource! It was especially useful to me for Linux examples. That said, it’s been great taking my own learning journey.
Since writing the series, I have made numerous changes to my template code and structure, and added extra functionality and operating systems. I have also spent some time working with Azure DevOps Pipelines for another piece of work. This got me thinking…
In this blog post I want to show you something that I have put together using Azure DevOps Pipelines and Packer.
Overview
This solution makes use of Azure DevOps Pipelines, Azure Key Vault and HashiCorp Packer to schedule and orchestrate the building of new virtual machine templates in VMware vSphere.
Azure Pipelines will be used to orchestrate the secure retrieval of secrets from Azure Key Vault using the native integration, and to execute the Packer commands to build the required template. By using these together, we can ensure all secrets are securely handled within the build.
I will be using a self-hosted DevOps agent as part of this to allow communication between Azure DevOps and the private networks in my on-premises lab, rather than a Microsoft-hosted DevOps agent, which sits in a public shared address space.
As mentioned, Azure Key Vault is going to be used to store secret values such as the service accounts for vSphere access and the administrator passwords for the guest OS. These can be retrieved within a pipeline that has been granted access to them and made available as variables to be consumed.
Each template will have its own pipeline. This means individual templates can be called via API allowing for some other interesting use cases and automation.
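As a rough illustration (the organisation, project, pipeline ID and PAT below are placeholders, and this assumes a PAT with permission to queue builds), a template pipeline could be queued from PowerShell like this:
# Hypothetical values - substitute your own organisation, project, pipeline ID and PAT.
$organisation = "my-org"
$project      = "Packer-Templates"
$pipelineId   = 1
$pat          = "<personal-access-token>"

$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$uri = "https://dev.azure.com/$organisation/$project/_apis/pipelines/$pipelineId/runs?api-version=7.1-preview.1"

# Queue a run of the template pipeline.
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -ContentType "application/json" -Body "{}"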
As is the case in the blog series, all templates are uploaded to the vSphere Content Library, which can then be subscribed to from other vCenter Servers.
Components
GitHub Repository (Packer Code)
DevOps Project
DevOps Pipeline
On-Prem DevOps Agent (Virtual Machine)
Prerequisites
GitHub Repository with your Packer code (Example here)
A functioning vSphere environment
An Azure & DevOps Subscription / Account
An Azure Key Vault (With appropriate networking configured)
A Virtual Machine (Windows 2022 Core in this example)
AD User Account (To run the DevOps Agent as a service) *can use the system local account if you wish.
Packer Code
If you aren’t familiar with Packer, I would suggest taking a look at my blog series on Packer here. Below, I will briefly go through some key differences in the newer code, which you can find here and which this post is based on. At the time of writing I have only added Server 2019 & 2022, but I will be adding to this over time.
Firstly, the file structure is now a little different. This was inspired by the vmware-samples repository linked earlier, and by some of my own preferences from actively using Packer.
Shared answer file templates with parameters for all Windows Operating Systems to reduce repeating files.
Single .pkrvars.hcl for each Operating System which includes both Standard & Datacenter Editions as well as Core and Desktop options.
The Build file includes a dynamic creation of the answer file based on variables from a template file. (this is great!)
Cleaner variable naming.
The Windows Update Provisioner is now controlled using the required plugin parameters.
Another key difference is how sensitive values such as usernames, passwords and keys are passed into the configuration. These are now retrieved from Azure Key Vault by a pipeline task and passed into environment variables (via PowerShell), which are then consumed like any other variable. The key benefit is that the secrets are securely stored and accessed only by the pipeline!
Check out the Azure Key Vault section later in the post for more information on secrets and their consumption.
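As an illustration only (the secret names, variable names and var-file below are placeholders rather than the exact ones in my repository), a pipeline PowerShell step can map Key Vault secrets into Packer’s PKR_VAR_ environment variables, which Packer picks up automatically as input variables:
# Hypothetical mapping of pipeline secrets (from the Azure Key Vault task) into
# environment variables that Packer reads as input variables (PKR_VAR_<name>).
$env:PKR_VAR_vsphere_username = "$(vsphere-username)"   # placeholder secret name
$env:PKR_VAR_vsphere_password = "$(vsphere-password)"   # placeholder secret name

# Packer then consumes them like any other variable during the build.
packer build -force -var-file="windows-server-2022.pkrvars.hcl" .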
DevOps Project
First lets create a DevOps Project by heading over to dev.azure.com and clicking on New Project.
Provide a name for the project and select the Private option.
Now time to create the first pipeline. As mentioned earlier, we will be using a pipeline per operating system.
Select Create Pipeline.
You will then be asked to select the location of your code. I will be selecting GitHub as that is where I keep my code.
Followed by the repository that contains your Packer Code.
Next you need to provide and approve access for Azure Pipelines to the repository you selected.
Now to create the first pipeline YAML file. Select Starter Pipeline.
First of all, rename the file to the name of the template you are going to build. In this example let’s call it ‘windows-server-2022-standard-core.yml’. You can do this by clicking the existing name.
Now you want to add the code for this template build. You can use the examples from here.
You could of course take the examples from my GitHub and select ‘Existing Azure Pipelines YAML file’ rather than ‘Starter pipeline’ if you wish.
Here we start by referencing a different central repository which contains reusable code. A good resource to understand this bit is linked here.
- checkout: self
- checkout: ps-code-snippets
We also have a couple of ‘checkout’ references. These instruct the pipeline to check out not only the source repository, but also the additional one that contains the reusable code.
This section sets a schedule to run at midnight on the 15th of every month. This can be adjusted to suit your needs. More information about setting cron schedules is here.
pr: none
trigger: none
As we want to run the Pipelines either on a schedule or manually, we want to disable the CI/CD integration. We do this by setting the pull request (pr) and trigger options to ‘none’.
This section defines a couple of parameters for the job: the name of the job, the name of the on-premises DevOps agent pool we will be using (see the next section) and a timeout value. By default the timeout is 60 minutes for self-hosted agent jobs, which isn’t quite long enough for the Desktop Edition of the OS in my lab. There is also a reference to a variable group. These are groups of variables that can be consumed by any pipeline within the DevOps project.
Next we are using a built-in pipeline task to retrieve secrets from an Azure Key Vault. I am then filtering it to the specific secrets required. You could replace this with ‘*’ if you don’t wish to filter them. Access to these is secured using RBAC later.
Now we move onto the more familiar Packer and PowerShell code (if you are already a Packer user). This sets a couple of variables to use in the log files, enables logging and initiates the build. It then populates a variable with the log file content, cleaned up so it can be consumed in the email notification in the final steps.
Something you may need to adjust is the Set-Location path. It uses a built-in variable, $(System.DefaultWorkingDirectory), which is the root of the GitHub repository. Make sure you adjust the remaining path to match the location of your Packer configuration.
$EmailBody = ('<HTML><H1><font color = "#286334"> Notification from The Small Human Cloud - Packer Virtual Machine Templates</font></H1><BODY><p><H3><font color = "#286334">Build Name:</H3></font></p><p><b>$(BuildVersion)</b></p><p><H3><font color = "#286334">Pipeline Status:</H3></font></p><p><b>Build Reason:<b> $(Build.Reason)</p><p><b>Build Status:<b> $(Agent.JobStatus)</p><p><H3><font color = "#286334">Packer Log:</H3></font></p><p>Please Review the logfile below for the build and take appropriate action if required.</p>')+("<p>$EmailContent</p>")
Set-Location $(System.DefaultWorkingDirectory)
. '.\ps-code-snippets\Send-Email.ps1'
Send-Email -TenantId "$(PipelineNotificationsTenantID)" -AppId "$(PipelineNotificationsAppID)" -AppSecret "$(PipelineNotificationsAppSecret)" -From "$(From)" -Recipient "$(Recipient)" -Subject "$(Subject)" -EmailBody $EmailBody
This final section makes use of a PowerShell function based on the Microsoft Graph API, which you can find details on here, to send an email notification via O365. It takes the content of the function from a separate repository and loads it into the session before running it.
Now select the drop down next to ‘Save and run’ and click Save.
We want to rename the actual Pipeline to the template name. Head back to the Pipelines menu, click the 3 dots and select ‘Rename/Move’. Give it the same name as your YAML file for consistency.
Variable Groups and Pipeline Variables
We mentioned earlier the reference to a variable group. These are configured per DevOps Project and can be used by multiple Pipelines. I am using one specifically for the values used for email notifications. They are a great way to reduce duplicating variable declarations.
You can set these by heading to Pipelines > Library and then clicking ‘+ Variable group’. You can see my group called ‘Notifications’ already created.
variables:
- group: Notifications
We then need to grant the Pipeline permissions to this variable group. You will need to add any Pipeline you want to have access to these variables.
There is another way of providing variables to a Pipeline and that is a Pipeline Variable. These are configured per Pipeline and are not available to other Pipelines. I am using this to create a ‘Build Version’ variable that is used for the log file name.
Azure DevOps Agent
We need to build our self hosted DevOps Agent that we referenced in the ‘pool’ parameter in our configuration earlier. This is going to be a virtual machine on my on-premises vSphere environment. I will be using a Windows Server 2022 Standard Core VM called ‘vm-devops-02’ that I have already built on a dedicated VLAN.
To start the config, we need to create an Agent Pool. From the Project page, select ‘Project Settings’ in the bottom left.
From the tree on the left under Pipelines, select Agent Pools.
Now select Add Pool and complete the required fields as below, editing the name as desired; you will need to match it when you reference the pool in your YAML.
Now to add the agent to our on-premises VM. Select ‘New Agent’
Download the agent using the Download button and then copy the ZIP file to the VM to a directory of choice. You can use PowerShell for this:
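(A sketch: the URL, agent version and directory below are placeholders; use the download URL shown on the ‘New Agent’ page, or copy the ZIP across manually as described above.)
# Hypothetical URL and path - replace with the values from the 'New Agent' dialog.
$agentUrl = "https://vstsagentpackage.azureedge.net/agent/<version>/vsts-agent-win-x64-<version>.zip"
$agentDir = "C:\agent"

New-Item -ItemType Directory -Path $agentDir -Force | Out-Null
Invoke-WebRequest -Uri $agentUrl -OutFile "$agentDir\agent.zip"
Expand-Archive -Path "$agentDir\agent.zip" -DestinationPath $agentDir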
Now, before we run the configuration file, we need to create a PAT (Personal Access Token). This is only used during the install; it doesn’t need to persist past it.
You will then need to make a note of this token for use later!
Now run the configuration script:
.\config.cmd
You will then be presented with a set of configuration questions (Detailed instructions here):
You will need your DevOps Organisation URL, PAT Token and an AD (Active Directory) account to run the Agent service under. As mentioned, you can use the NETWORK SERVICE if you wish.
Now, if we head back over to the project’s Agent Pool, you will see it’s active!
I am using service accounts within the Pipeline to access the vSphere environment etc, so I don’t need to give the agent service account any specific permissions. More information can be found here.
Depending on your environment you may need to configure a web proxy or firewall access for the agent to communicate with Azure DevOps.
Finally, you will need to ensure the Packer executable is available on the DevOps agent server. See my past blog for more information.
That’s the Agent setup completed.
Authorizations
Now we need to authorise the DevOps project to access the Key Vault we plan on using. The quickest and easiest way to do this is to edit the pipeline and use the Azure Key Vault task wizard to authorise it, but this isn’t the cleanest way.
You can create the Service Connection manually. This allows for further granularity when you have multiple pipelines within the same project that require different secret access.
You can do this by heading into the Project Settings and then Service Connections.
When selecting new, choose the Azure Resource Manager type, followed by Service principal (automatic).
You then need to select your Subscription and provide it a name.
Now head over to Azure to match the name of the Service Principal in Azure with the Service Connection from DevOps. To do this select the Service Connection, and then Manage:
You are going to need the Application ID of the service connection to be able to assign permissions to secrets using PowerShell. Grab the Application ID from the Overview tab, as well as your subscription ID, for use with the New-AzRoleAssignment cmdlet.
Now, back over in the DevOps portal, we can give each template pipeline permission to use this service connection. First, click on Security.
We can then add the pipelines required.
Azure Key Vault Secrets
Adding Secrets
This Packer configuration consumes a number of secrets within the pipeline. We will be storing the usernames and passwords for the vSphere service account and the Guest OS admin accounts, which are used for accessing vSphere as well as for building and configuring the VM and the autounattend.xml file. I will go into more detail further down, but here is a link describing how to add a secret to a Vault.
RBAC
To ensure a Pipeline only has access to the secrets it needs, we will be using RBAC permissions per secret using the IAM interface rather than Access Policies.
To configure this, select a secret and then open the IAM interface. Select the ‘Key Vault Secrets User’ role and then click Members.
Click ‘Select Members’, search for the required service principal and select it, followed by the Select option at the bottom.
Now click ‘Review + Assign’.
Repeat for all secrets required.
You can also use the PowerShell command ‘New-AzRoleAssignment’ rather than using the portal to assign the permissions.
We are granting the ‘Key Vault Secrets User’ role to the Application ID, for each of the required secrets:
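A sketch of what that could look like (the subscription, resource group, vault and secret names are placeholders, and the service principal is referenced by its Application ID via -ServicePrincipalName):
# Hypothetical values - substitute your own subscription, resource group, vault and secret names.
$appId = "<service-connection-application-id>"
$scope = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>/secrets/<secret-name>"

New-AzRoleAssignment -ServicePrincipalName $appId -RoleDefinitionName "Key Vault Secrets User" -Scope $scope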
The pipeline makes use of a custom email notification PowerShell function which uses the Graph API. See my recent blog post on how to set this up.
Running the Pipeline
We are now ready to run the Pipeline! To kick it off manually, hit the ‘Run Pipeline’ button when in the Pipeline.
Increase the playback quality if the auto settings aren’t allowing you to see the detail!
Now you can head over to your content library and you will see your template. Below are both my Windows Server 2022 builds.
You can tell I use Server 2022 Core to test… Version 41!
Notification Email
Here is a snippet of the notification email that was sent on completion.
And there you have it. I personally enjoyed seeing how I could make use of both Packer and Azure DevOps to deliver vSphere templates. I hope it helps you with your templating journey!
As always, thanks for reading!
If you like my content, consider following me on Twitter so you don’t miss out!
I recently began playing with Azure DevOps Pipelines as a way to automate various aspects of my lab. As part of this I wanted to send email notifications that I could customize the content of, which I couldn’t get from the built-in notification capability.
Since the PowerShell cmdlet Send-MailMessage is obsolete, I began investigating alternatives, which is when I came across this article and decided to give it a go and share!
Overview
This solution is based on Microsoft’s Graph API (sendMail) and an App Registration being leveraged with PowerShell. Details of the API can be found here.
An application or service can call the email-sending functionality by passing the required data as parameters into the PowerShell script (or function!), providing a reusable approach to sending email.
You will need a couple of things for this to work, so let’s get started.
Configuration
Create a Shared mailbox
You will need a ‘From’ mailbox to use as the sending address. I will be using a shared mailbox in O365 via my Business Basic subscription (around £4.50 per user per month). Shared mailboxes don’t require a license (under 50GB), hence not costing you anything additional!
Head over to your Exchange Admin Center and select Recipients, Mailboxes, and select Add a shared mailbox.
Provide a Display Name and Email Address, as well as selecting a domain.
That’s the mailbox ready to go!
Create App Registration
Next we need to set up an App Registration in your tenancy.
In the Azure Portal, search for and select Azure Active Directory followed by App registrations.
Click New Registration…
…and provide your application a name. I have also stuck with the default single tenant option.
Once created, you will be able to see information that you will need later, specifically the Application ID and the Tenant ID. You will also need a third piece of information, a secret key. You can generate one by clicking Client credentials.
Click Client secrets and select New client secret
Provide a meaningful name and select the duration you want the secret to be valid for.
You will then see your secret key.
You will need to take a copy of this key now and store it securely, as you won’t be able to retrieve it again without creating a new one.
We now need to provide some permissions. In this case we are wanting to be able to send an email.
Firstly, click API permissions and then Add a permission.
Select Graph API.
Select Application permissions and scroll down until you see Mail and select the Mail.Send option, and finally click Add permission at the bottom.
You will then notice that it requires admin consent. Click the Grant admin consent for ‘org’ option and confirm the prompt.
Things to consider!
An application that uses this will have access to send an email from any mailbox. You need to carefully consider the risks and mitigations.
You can limit which recipients can be sent to, by applying an Application Access Policy. More information here. Note for shared mailboxes, you need to add them to a mail enabled security group and reference that with the PolicyScopeGroupID parameter.
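A rough sketch of what that could look like using the ExchangeOnlineManagement module (the app ID, group address and mailbox below are placeholders):
# Hypothetical values - substitute your App Registration ID and a mail-enabled security group.
Connect-ExchangeOnline

$policyParams = @{
    AppId              = "<app-registration-id>"
    PolicyScopeGroupId = "approved-recipients@yourdomain.com"
    AccessRight        = "RestrictAccess"
    Description        = "Restrict the mail app to an approved group of recipients"
}
New-ApplicationAccessPolicy @policyParams

# Optionally check whether the policy allows a given mailbox:
Test-ApplicationAccessPolicy -AppId "<app-registration-id>" -Identity "shared-mailbox@yourdomain.com"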
There are two main sections, the first being the acquisition of an authorization token. Using the three values called out in the App Registration section earlier, we need to populate the TenantID, AppID and AppSecret variables. Ideally you would retrieve these from a secure location such as an Azure Key Vault (look out for a future post on this!).
The second section is the collation of values required to send the email. Again, you need to populate the variable values for the From, Recipient, Subject and Message Body, which are then passed into an Invoke-RestMethod cmdlet with the URI of the API.
If you are using this as part of an automated solution, you aren’t going to be manually entering values, you are likely to be passing the values in from the rest of your code or pipeline.
I have also put together a PowerShell Function that can be used as part of a larger piece of code. This way you are able to utilize it in a more efficient and reusable way.
Function Send-Email {
<#
.SYNOPSIS
Send emails via O365.
.DESCRIPTION
Send emails via O365 using the Graph sendMail API. Parameter values are expected to be variables.
.PARAMETER TenantId
Tenant ID found in Azure.
.PARAMETER AppId
ID of the App Registration.
.PARAMETER AppSecret
App Registration client key.
.PARAMETER From
Email sender address.
.PARAMETER Recipient
Recipient address, user, group or shared mailbox etc.
.PARAMETER Subject
Email Subject value.
.PARAMETER EmailBody
Email body value.
.LINK
https://github.com/smctighevcp
.EXAMPLE
PS C:\> Send-Email -TenantId $value -AppId $value -AppSecret $value -From $value -Recipient $value -Subject $value -EmailBody $value
Takes the variable input and sends an email to the specified recipient.
.NOTES
Author: Stephan McTighe
Website: stephanmctighe.com
Created: 10/03/2022
Change history:
Date Author V Notes
10/03/2022 SM 1.0 First release
#>
#Requires -Version 5.1
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[string] $TenantId,
[Parameter(Mandatory)]
[string] $AppId,
[Parameter(Mandatory)]
[string] $AppSecret,
[Parameter(Mandatory)]
[string] $From,
[Parameter(Mandatory)]
[string] $Recipient,
[Parameter(Mandatory)]
[string] $Subject,
[Parameter(Mandatory)]
[string] $EmailBody
)
Begin {
# Request an OAuth 2.0 token for the app registration using the client credentials flow
$uri = "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
$body = @{
client_id = $AppId
scope = "https://graph.microsoft.com/.default"
client_secret = $AppSecret
grant_type = "client_credentials"
}
$tokenRequest = Invoke-WebRequest -Method Post -Uri $uri -ContentType "application/x-www-form-urlencoded" -Body $body -UseBasicParsing
# Extract the access token and build the request headers
$token = ($tokenRequest.Content | ConvertFrom-Json).access_token
$Headers = @{
'Content-Type' = "application/json"
'Authorization' = "Bearer $Token"
}
}
Process {
# Create & Send Message
$MessageSplat = @{
"URI" = "https://graph.microsoft.com/v1.0/users/$From/sendMail"
"Headers" = $Headers
"Method" = "POST"
"ContentType" = 'application/json'
"Body" = (@{
"message" = @{
"subject" = $Subject
"body" = @{
"contentType" = 'HTML'
"content" = $EmailBody
}
"toRecipients" = @(
@{
"emailAddress" = @{"address" = $Recipient }
} )
}
}) | ConvertTo-JSON -Depth 6
}
Invoke-RestMethod @MessageSplat
}
end {
}
}
Keep an eye out for a future blog post on how I am using this as part of an Azure DevOps Pipeline! This will include passing in variables within the pipeline as well as retrieving secrets from an Azure Key Vault.
If you like my content, consider following me on Twitter so you don’t miss out!
Having recently had to do some work with RDM perennial reservations, I looked into ways to make this less of a manual headache. There are plenty of examples out there for doing this, which I took as a basis to make a PowerShell function. If anything, it was a great way to refresh my PowerShell skills and an opportunity to learn some new ones.
Note: Although this has been tested in my environment, please make sure you test it appropriately before running against a production environment!
Let’s take a look…
Get-PerennialReservation
This function targets a vSphere cluster, gets all RDM disks that are connected to VMs and then queries each host in the cluster to check whether the disk/storage device is perennially reserved or not.
There are multiple ways to use it, whether that is by specifying the target cluster using the -Cluster parameter or by piping it from Get-Cluster. You can also specify a specific canonical name, or a comma-separated string of them, if you just want the status of a single disk or a select few, using the -CanonicalName parameter. There is also an Export flag to export the results to CSV if you wish to make use of the data outside of PowerShell. You can get the full usage information by running the following command once you have loaded the function into your PowerShell session:
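Get-Help Get-PerennialReservation -Full
(This assumes the function ships with comment-based help; the Set and Remove functions covered below can be queried in the same way.)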
The Set function again targets a vSphere cluster, gets all RDM disks that are connected to VMs and sets the IsPerenniallyReserved flag to ‘True’ on all hosts.
There are multiple ways to use it, like the Get function: specifying the target cluster using the -Cluster parameter or piping it from Get-Cluster. You can still specify a specific canonical name, or a comma-separated string of them, if you just want to set the flag for a single disk or a select few, using the -CanonicalName parameter. There is still an Export option that will provide an output to CSV. You can get the full usage information by running Get-Help against the function once you have loaded it into your PowerShell session, as shown above.
To complete the set there is a Remove function. This one again targets a vSphere cluster, but this time you need to pass in the canonical name(s) you wish to set the IsPerenniallyReserved flag to ‘False’ for.
To use this one, you need to specify the target cluster using the -Cluster parameter and a specific canonical name, or a comma-separated string of them, using the -CanonicalName parameter. There is still an Export option that will provide an output to CSV. You can get the full usage information by running Get-Help against the function once you have loaded it into your PowerShell session.
I recently assisted a friend who had an issue with DFS Namespaces following an Active Directory Upgrade from 2008R2 to 2012R2. They were faced with not being able to access the NameSpace following the demotion of the last 2008R2 controller and promotion of the final 2012R2 controller.
Upon opening the DFS NameSpace management console, the following error was displayed when selecting the required NameSpace – “The namespace cannot be queried. Element not found.”
After looking in the FRS (File Replication Service) and DFSR (Distributed File System Replication) event logs, I came to realise that the forest was using FRS for replication! This isn’t supported after 2008R2. Ideally, you would have completed the migration from FRS to DFSR before upgrading the domain controllers.
Note: Always make sure you have a backup, snapshot or other reliable rollback method in place before doing anything in your live environment. This worked for me, it doesn’t guarantee it will work for you!
With FRS being the likely cause, I needed to confirm this. I ran the following command to confirm the status –
Dfsrmig /getglobalstate
It returned the following result confirming that FRS was still in fact being used.
Current DFSR global state: 'Start'
Succeeded.
Before being able to look at the DFS NameSpace issue, this needed addressing. Luckily you can still remediate this after upgrading the domain controllers. I would still advise confirming all the prerequisites are in place BEFORE upgrading!
Now onto the migration from FRS to DFSR.
Firstly, run the following command to move the state to the second of the four states, those being: Start, Prepared, Redirected and Eliminated.
Dfsrmig /setglobalstate 1
You will then want to run a directory sync to speed things up, especially if you have a large replication interval!
Run the following RepAdmin command to get things moving.
Repadmin /syncall /AdeP
You can then monitor the progress by running –
Dfsrmig /getmigrationstate
You will then see any remaining domain controllers that are yet to have synchronized the new state. Eventually you will see that all domain controllers have migrated to the second state; Prepared.
Now time to move to the Redirected state. Same process as the previous step, but this time specifying ‘setglobalstate 2’:
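Dfsrmig /setglobalstate 2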
Again, run the RepAdmin command to get replication moving and monitor using the ‘getmigrationstate’ command. As in the previous step, you will eventually see that all domain controllers have migrated to the third state; Redirected.
Last one! Same as before, but this time you want to use ‘setglobalstate 3’:
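Dfsrmig /setglobalstate 3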
Once complete, you will get confirmation that you have reached the final state; Eliminated.
You will now be able to run the ‘net share’ command to see that the SYSVOL share has been moved to ‘C:\Windows\SYSVOL_DFSR\sysvol’ and that the FRS Windows service is stopped and set to disabled.
Output of the ‘net share’ command
File Replication Service (FRS) service
This should now give you a correctly functioning directory again! You will want to now check the Directory Services, File Replication and DFSR Logs in Windows Event Viewer to ensure you have no further errors.
Now onto repairing the NameSpace. I read a few different blogs and guides for this; some included deleting the NameSpace via ADSI Edit, others didn’t.
I found I didn’t need to delete anything, bonus.
To get the NameSpace accessible again, I found that right-clicking the NameSpace and removing it, followed by recreating it using the ‘New NameSpace Wizard’, did the trick.
Upon recreating it, all of the folders reappeared and were accessible again with no additional configuration required. (These screenshots are of my lab, not the live environment, as that was not appropriate.)
Recently I needed to build out some test Active Directory forests that resemble production in order to complete some testing. One of the forests contained a significant number of OUs that I had no intention of manually recreating.
To run the New-ADOrganizationalUnit cmdlet, you need to provide the OU name and the Path where you want to create it. However, Get-ADOrganizationalUnit doesn’t provide the path, so you need to determine it from the DistinguishedName.
After a number of Google searches, I couldn’t find exactly what I needed, so I began piecing together various bits of PowerShell that I found. I ended up learning a bit of regex in the process! It’s a powerful tool if you know how to use it.
I came up with two versions in the end, you can see both below with the differences highlighted.
The first one takes everything up to and including the first ‘,’ and replaces it with nothing, effectively removing the OU name. The second one captures everything after the first ‘,’ and replaces the whole string with what was captured. Both produce the same result in my scenario, but it was useful to understand both methods for future use of regex.
Both also have a property called ‘OUNum’. This property counts how many times ‘OU=’ appears in the original DistinguishedName string. OUs need to be created in order, so that the parent OU exists before the child, and this orders the OUs in ‘tiers’ before exporting them to CSV. All OUs in the root of the directory get a value of 1, OUs within these get a value of 2, and so on. Credit to ‘David Z’ for this bit!
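A sketch of the approach (the export path is a placeholder, and this assumes the ActiveDirectory module is available):
# Version 1: strip everything up to (and including) the first ',' to leave the parent path.
Get-ADOrganizationalUnit -Filter * |
    Select-Object Name,
        @{ Name = 'Path'; Expression = { $_.DistinguishedName -replace '^.+?,', '' } },
        @{ Name = 'OUNum'; Expression = { ([regex]::Matches($_.DistinguishedName, 'OU=')).Count } } |
    Sort-Object OUNum |
    Export-Csv -Path 'C:\Temp\OUs.csv' -NoTypeInformation

# Version 2: capture everything after the first ',' and replace the whole string with the capture.
# Swap the 'Path' expression above for this one - the result is the same:
# @{ Name = 'Path'; Expression = { $_.DistinguishedName -replace '^.+?,(.+)$', '$1' } }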
Once you have your data, you may or may not need to modify the domain. If you are importing it into a different domain, you’ll need to. In my case it was simple enough to do a find and replace in a text editor (e.g. DC=lab,DC=local to DC=lab2,DC=local). You could use the concepts from above to achieve this before exporting the data if you wish.
Now that you have your data, you need to import it. You can run the following in the target domain:
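A minimal sketch of the import, assuming a CSV like the one produced above with Name, Path and OUNum columns:
# Requires the ActiveDirectory module. Sorting on OUNum creates parent OUs before their children.
Import-Csv -Path 'C:\Temp\OUs.csv' |
    Sort-Object { [int]$_.OUNum } |
    ForEach-Object {
        New-ADOrganizationalUnit -Name $_.Name -Path $_.Path
    }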