I recently came across a need to review the Storage Policies in use within a vCenter environment, and how many objects or virtual machines were using each policy.
I saw this as an excuse to refresh my PowerShell skills and wrote a quick function.
Source code can be found on my GitHub, here. Check there for any updates but below is the code at the time of writing.
function Get-vSANSPSummary {
<#
.SYNOPSIS
Export vSAN Storage Policy Information.
.DESCRIPTION
Export vSAN Storage Policies from vCenter, showing FTT & stripe information and the number of VMs using each.
.PARAMETER ExportFilePath
Path to export the output HTML file to.
.NOTES
Tags: VMware, vCenter, SPBM, PowerCLI, API
Author: Stephan McTighe
Website: stephanmctighe.com
.EXAMPLE
PS C:\> Get-vSANSPSummary -ExportFilePath "C:\report\vSAN-Storage-Policy-Summary.html"
Outputs an HTML file containing the Storage Policy information for vSAN Storage Policies to a specified location.
#>
#Requires -Modules VMware.VimAutomation.Storage
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[string] $ExportFilePath)
Begin {}
Process {
try {
$Output = @()
$vSANstoragepolicies = Get-SpbmStoragePolicy -Namespace "VSAN"
$SPBM = $vSANstoragepolicies | Select-Object Name, AnyOfRuleSets
ForEach ($SP in $SPBM) {
$Attributes = @($SP.AnyOfRuleSets | Select-Object -ExpandProperty AllOfRules)
# Retrieve the entity configuration once per policy and reuse it for both counts
$EntityConfig = Get-SpbmEntityConfiguration -StoragePolicy $SP.Name
$object = [PSCustomObject]@{
SPName = $SP.Name
ObjectCount = @($EntityConfig).Count
VMCount = @($EntityConfig | Where-Object { $_.Entity -notlike "hard*" }).Count
RAID = $Attributes | Where-Object { $_.Capability -like "*VSAN.replicaPreference*" } | Select-Object -ExpandProperty Value
FTT = $Attributes | Where-Object { $_.Capability -like "*VSAN.hostFailuresToTolerate*" } | Select-Object -ExpandProperty Value
SubFTT = $Attributes | Where-Object { $_.Capability -like "*VSAN.subFailuresToTolerate*" } | Select-Object -ExpandProperty Value
Stripes = $Attributes | Where-Object { $_.Capability -like "*VSAN.stripeWidth*" } | Select-Object -ExpandProperty Value
ForceProvision = $Attributes | Where-Object { $_.Capability -like "*VSAN.forceProvisioning*" } | Select-Object -ExpandProperty Value
StorageType = $Attributes | Where-Object { $_.Capability -like "*VSAN.storageType*" } | Select-Object -ExpandProperty Value
IOPSLimit = $Attributes | Where-Object { $_.Capability -like "*VSAN.iopsLimit*" } | Select-Object -ExpandProperty Value
}
$Output += $object
}
$Output | ConvertTo-Html -Property SPName, VMCount, ObjectCount, RAID, FTT, SubFTT, Stripes, ForceProvision, StorageType, IOPSLimit | Out-File $ExportFilePath
}
catch {
Write-Host "An error occurred!" -ForegroundColor Red
Write-Host $_ -ForegroundColor Red
}
}
}
The output is currently a basic HTML table, but you could change this to add some ‘HTMLness’, or output to CSV instead.
As always, thanks for reading and I hope this has been useful to someone.
If you like my content, consider following me on Twitter so you don’t miss out!
Ever since starting out on my learning journey with Packer and writing my ‘Getting Started‘ blog series, I have not stopped learning and developing my templates. I have also learnt a lot from other members of the tech community, such as @mpoore, as well as discovering this repository – vmware-samples. I really wish I had found this sooner than I did, as it’s a great resource! It was especially useful to me for Linux examples. That said, it’s been great taking my own learning journey.
Since writing the series, I have made numerous changes to my template code and structure, and added additional functionality and operating systems. I have also spent some time working with Azure DevOps Pipelines for another piece of work. This got me thinking…
In this blog post I want to show you something that I have put together using Azure DevOps Pipelines and Packer.
Overview
This solution makes use of Azure DevOps Pipelines, Azure Key Vault and HashiCorp Packer to schedule and orchestrate the building of new virtual machine templates in VMware vSphere.
Azure Pipelines will be used to orchestrate the secure retrieval of secrets from Azure Key Vault using the native integration, and to execute the Packer commands that build the required template. By using these together, we can ensure all secrets are securely handled within the build.
I will be using a self-hosted DevOps agent as part of this to allow communication between Azure DevOps and the private networks in my on-premises lab, instead of a Microsoft-hosted DevOps agent, which sits in a public shared address space.
As mentioned, Azure Key Vault is going to be used to store the secret values for things like the service accounts for vSphere access and the administrator passwords for the Guest OS. These can be retrieved within a pipeline that has been granted access to them, and made available as variables to be consumed.
Each template will have its own pipeline. This means individual templates can be called via API allowing for some other interesting use cases and automation.
As is the case in the blog series, all templates are uploaded to the vSphere Content Library, which can then be subscribed to from other vCenter Servers.
Components
GitHub Repository (Packer Code)
DevOps Project
DevOps Pipeline
On-Prem DevOps Agent (Virtual Machine)
Prerequisites
GitHub Repository with your Packer code (Example here)
A functioning vSphere environment
An Azure & DevOps Subscription / Account
An Azure Key Vault (With appropriate networking configured)
A Virtual Machine (Windows 2022 Core in this example)
AD User Account (To run the DevOps Agent as a service) *can use the system local account if you wish.
Packer Code
If you aren’t familiar with Packer, I would suggest taking a look at my blog series on Packer here. I will briefly go through some key differences in the newer code, which you can find here and which this blog is based on. At the time of writing I have only added Server 2019 & 2022, but I will be adding more over time.
Firstly the file structure is now a little different. This was inspired by the vmware-samples repository linked earlier, and some of my own preferences from actively using Packer.
Shared answer file templates with parameters for all Windows Operating Systems to reduce repeating files.
Single .pkrvars.hcl for each Operating System which includes both Standard & Datacenter Editions as well as Core and Desktop options.
The Build file includes a dynamic creation of the answer file based on variables from a template file. (this is great!)
Cleaner variable naming.
The Windows Update Provisioner is now controlled using the required plugin parameters.
Another key difference is how sensitive values such as usernames, passwords and keys are now passed into the configuration. These are now retrieved from Azure Key Vault by a Pipeline task and passed into environment variables (PowerShell) which are then consumed as any other variable would be. The key benefit is that the secrets are securely stored and accessed by the pipeline!
Check out the Azure Key Vault section later in the post for more information on secrets and their consumption.
DevOps Project
First, let’s create a DevOps Project by heading over to dev.azure.com and clicking New Project.
Provide a name for the project and select the Private option.
Now time to create the first pipeline. As mentioned earlier, we will be using a pipeline per operating system.
Select Create Pipeline.
You will then be asked to select the location of your code. I will be selecting GitHub as that is where I keep my code.
Followed by the repository that contains your Packer Code.
Next you need to provide and approve access for Azure Pipelines to the repository you selected.
Now to create the first pipeline YAML file. Select Starter Pipeline.
First of all, rename the file to the name of the template you are going to build. In this example, let’s call it ‘windows-server-2022-standard-core.yml’. You can do this by clicking the existing name.
Now you want to add the code for this template build. You can use the examples from here.
You could of course take the examples from my GitHub and select ‘Existing Azure Pipelines YAML file’ rather than ‘Starter pipeline’ if you wish.
Here we start by referencing a different central repository which contains reusable code. A good resource to understand this bit is linked here.
- checkout: self
- checkout: ps-code-snippets
These ‘checkout’ steps instruct the pipeline to check out not only the source repository, but also the additional one that contains the reusable code.
This section sets a schedule to run at midnight on the 15th of every month. This can be adjusted to suit your needs. More information about setting cron schedules is here.
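As a sketch, the schedule block for that cadence might look like the following (the branch name is an assumption; adjust it to your default branch):

```yaml
# Runs at midnight on the 15th of every month; 'main' is assumed as the default branch
schedules:
  - cron: "0 0 15 * *"
    displayName: Monthly template build
    branches:
      include:
        - main
    always: true   # run even if there have been no code changes since the last build
```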
pr: none
trigger: none
As we want to run the Pipelines either on a schedule or manually, we want to disable the CI/CD integration. We do this by setting the pull request (pr) and trigger options to ‘none’.
This section defines a couple of parameters for the job. Firstly the name of the job as well as the name of the On-Prem DevOps agent pool we will be using (See the next section). Finally a timeout value. By default this is 60 minutes for self hosted agent jobs which isn’t quite long enough for the Desktop Edition of the OS in my lab. There is also a reference to a variable group. These are groups of variables that can be consumed by any Pipeline within the DevOps Project.
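A sketch of that job definition is below; the job and pool names are placeholders standing in for my lab values, not the actual repository contents:

```yaml
jobs:
  - job: Packer_Build
    pool: OnPrem-Agent-Pool    # hypothetical self-hosted agent pool name
    timeoutInMinutes: 120      # raised above the 60-minute self-hosted default for Desktop Edition builds
    variables:
      - group: Notifications   # variable group shared across the DevOps Project
```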
Next we are using a built in Pipeline task to retrieve secrets from an Azure Key Vault. I am then filtering it to the specific secrets required. You could replace this with ‘*’ if you don’t wish to filter them. Access to these are secured using RBAC later.
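The built-in task in question is AzureKeyVault; a sketch is below, where the service connection, vault and secret names are placeholders:

```yaml
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'svc-packer-keyvault'   # hypothetical service connection name
    KeyVaultName: 'kv-lab-packer'              # hypothetical Key Vault name
    SecretsFilter: 'VSphereUser,VSpherePassword,GuestAdminPassword'  # or '*' to skip filtering
    RunAsPreJob: false
```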
Now we move onto the more familiar Packer and PowerShell code (if you are already a user of Packer). This sets a couple of variables to use in the log files, enables logging and initiates the build. It then begins to populate a variable that has taken the information from the log file and cleaned it up to consume in an email notification in the final steps.
Something you may need to adjust is the Set-Location path. It’s using a built-in variable, $(System.DefaultWorkingDirectory), which is the root of the GitHub repository. Make sure you adjust the remaining path to match the location of your Packer configuration.
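As a rough sketch of that step (the paths, variable file name and log handling are illustrative, not the exact repository code):

```powershell
# Illustrative only; adjust paths and variable names to your repository layout
$Date = Get-Date -Format "yyyyMMdd-HHmmss"
$env:PACKER_LOG = 1
$env:PACKER_LOG_PATH = "$(System.DefaultWorkingDirectory)\packer-$(BuildVersion)-$Date.log"

# Adjust the remaining path to match the location of your Packer configuration
Set-Location "$(System.DefaultWorkingDirectory)\builds\windows\server2022"
packer init .
packer build -force -var-file=".\windows-server-2022.pkrvars.hcl" .

# Tidy the log content for use in the notification email later
$EmailContent = (Get-Content $env:PACKER_LOG_PATH) -join "<br>"
```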
$EmailBody = ('<HTML><H1><font color = "#286334"> Notification from The Small Human Cloud - Packer Virtual Machine Templates</font></H1><BODY><p><H3><font color = "#286334">Build Name:</font></H3></p><p><b>$(BuildVersion)</b></p><p><H3><font color = "#286334">Pipeline Status:</font></H3></p><p><b>Build Reason:</b> $(Build.Reason)</p><p><b>Build Status:</b> $(Agent.JobStatus)</p><p><H3><font color = "#286334">Packer Log:</font></H3></p><p>Please review the logfile below for the build and take appropriate action if required.</p>') + ("<p>$EmailContent</p>")
Set-Location $(System.DefaultWorkingDirectory)
. '.\ps-code-snippets\Send-Email.ps1'
Send-Email -TenantId "$(PipelineNotificationsTenantID)" -AppId "$(PipelineNotificationsAppID)" -AppSecret "$(PipelineNotificationsAppSecret)" -From "$(From)" -Recipient "$(Recipient)" -Subject "$(Subject)" -EmailBody $EmailBody
This final section makes use of a PowerShell function based on the Microsoft Graph API, which you can find details on here, to send an email notification via O365. It takes the content of the function from a separate repository and loads it into the session before running it.
Now select the drop down next to ‘Save and run’ and click Save.
We want to rename the actual Pipeline to the template name. Head back to the Pipelines menu, click the 3 dots and select ‘Rename/Move’. Give it the same name as your YAML file for consistency.
Variable Groups and Pipeline Variables
We mentioned earlier the reference to a variable group. These are configured per DevOps Project and can be used by multiple Pipelines. I am using one specifically for the values used for email notifications. They are a great way to reduce duplicating variable declarations.
You can set these by heading to Pipelines > Library and then clicking ‘+ Variable group’. You can see my group called ‘Notifications’ already created.
variables:
- group: Notifications
We then need to grant the Pipeline permissions to this variable group. You will need to add any Pipeline you want to have access to these variables.
There is another way of providing variables to a Pipeline and that is a Pipeline Variable. These are configured per Pipeline and are not available to other Pipelines. I am using this to create a ‘Build Version’ variable that is used for the log file name.
Azure DevOps Agent
We need to build our self hosted DevOps Agent that we referenced in the ‘pool’ parameter in our configuration earlier. This is going to be a virtual machine on my on-premises vSphere environment. I will be using a Windows Server 2022 Standard Core VM called ‘vm-devops-02’ that I have already built on a dedicated VLAN.
To start the config, we need to create an Agent Pool. From the Project page, select ‘Project Settings’ in the bottom left.
From the tree on the left under Pipelines, select Agent Pools.
Now, select Add Pool, and complete the required field as below, editing the name as desired, but you will need to match it when you reference the pool in your YAML.
Now to add the agent to our on-premises VM. Select ‘New Agent’
Download the agent using the Download button and then copy the ZIP file to the VM to a directory of choice. You can use PowerShell for this:
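For example (the agent version in the file name and the target paths are hypothetical; use the file you actually downloaded):

```powershell
# Hypothetical file name and paths; adjust to your download and agent VM
$AgentZip = "vsts-agent-win-x64-3.220.5.zip"
Copy-Item -Path "$env:USERPROFILE\Downloads\$AgentZip" -Destination "\\vm-devops-02\c$\agent\"

# Then, on the agent VM itself, extract the archive:
Expand-Archive -Path "C:\agent\$AgentZip" -DestinationPath "C:\agent"
```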
Now, before we run the configuration file, we need to create a PAT (Personal Access Token). This is used during the install only; it doesn’t need to persist past the install.
You will then need to make a note of this token for use later!
Now run the configuration script:
.\config.cmd
You will then be presented with a set of configuration questions (Detailed instructions here):
You will need your DevOps Organisation URL, PAT Token and an AD (Active Directory) account to run the Agent service under. As mentioned, you can use the NETWORK SERVICE if you wish.
Now, if we head back over to the Project’s Agent Pool, you will see it’s active!
I am using service accounts within the Pipeline to access the vSphere environment etc, so I don’t need to give the agent service account any specific permissions. More information can be found here.
Depending on your environment you may need to configure a web proxy or firewall access for the agent to communicate with Azure DevOps.
Finally, you will need to ensure the Packer executable is available on the DevOps agent server. See my past blog for more information.
That’s the Agent setup completed.
Authorizations
Now we need to authorise the DevOps project to access the Key Vault we plan on using. The quickest and easiest way to do this is to edit the Pipeline and use the Azure Key Vault Task Wizard to authorise, but this isn’t the cleanest way.
You can create the Service Connection manually. This allows for further granularity when you have multiple pipelines within the same project that require different secret access.
You can do this by heading to into the Project Settings and then Service Connections.
When selecting new, choose the Azure Resource Manager type, followed by Service principal (automatic).
You then need to select your Subscription and provide it a name.
Now head over to Azure to match the name of the Service Principal in Azure with the Service Connection from DevOps. To do this select the Service Connection, and then Manage:
You are going to need the Application ID of the service connection to be able to assign permissions to secrets using PowerShell. Grab the Application ID from the Overview tab as well as your subscription ID for use with the New-AzRoleAssignment cmdlet.
Now back over to the DevOps portal, we can give permissions to each template pipeline to use this service connection. First, click on security.
We can then add the pipelines required.
Azure Key Vault Secrets
Adding Secrets
This Packer configuration consumes a number of secrets within the Pipeline. We will be storing the username and password for the vSphere Service account and Guest OS admin accounts for accessing vSphere as well as building and configuring the VM and the autounattend.xml file. I will go into more detail further down, but here is a link describing how to add a secret to a Vault.
RBAC
To ensure a Pipeline only has access to the secrets it needs, we will be using RBAC permissions per secret using the IAM interface rather than Access Policies.
To configure this, select a secret and then open the IAM interface. Select the ‘Key Vault Secrets User’ role and then click Members.
Click ‘Select Members’, search for the required service principal and select it, followed by the Select option at the bottom.
Now click ‘Review + Assign’
Repeat for all secrets required.
You can also use the PowerShell command ‘New-AzRoleAssignment’ rather than using the portal to assign the permissions.
We are granting the ‘Key Vault Secrets User’ role to the Application ID, for each of the required secrets:
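A sketch of that assignment is below; the IDs, resource group and vault names are placeholders:

```powershell
# Placeholders; substitute your own application ID, subscription ID, resource group and vault
$AppId = "00000000-0000-0000-0000-000000000000"
$SubId = "11111111-1111-1111-1111-111111111111"
$Secrets = "VSphereUser", "VSpherePassword", "GuestAdminPassword"

# Grant the role at the scope of each individual secret, not the whole vault
foreach ($Secret in $Secrets) {
    New-AzRoleAssignment -ApplicationId $AppId `
        -RoleDefinitionName "Key Vault Secrets User" `
        -Scope "/subscriptions/$SubId/resourceGroups/rg-lab/providers/Microsoft.KeyVault/vaults/kv-lab-packer/secrets/$Secret"
}
```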
The Pipeline makes use of a custom email notification PowerShell function which uses the Microsoft Graph API. See my recent blog post on how to set this up.
Running the Pipeline
We are now ready to run the Pipeline! To kick it off manually, hit the ‘Run Pipeline’ button when in the Pipeline.
Now you can head over to your content library and you will see your template. Below are both my Windows Server 2022 builds.
You can tell I use Server 2022 Core to test… Version 41!
Notification Email
Here is a snippet of the notification email that was sent on completion.
And there you have it. I personally enjoyed seeing how I could make use of both Packer and Azure DevOps to deliver vSphere templates. I hope it helps you with your templating journey!
As always, thanks for reading!
If you like my content, consider following me on Twitter so you don’t miss out!
Whilst continuing to prepare for my VMware Certified Advanced Professional Deploy exam, I have been configuring vSphere Auto Deploy. As with my blog post on vCenter Profiles, I am covering Auto Deploy as it’s not something I have ever used in any great depth. As always, I used the official VMware documentation to guide me.
Let’s get started!
In my lab I am using vCenter 7.0 Update 3c and using nested hosts for the Auto Deploy hosts. I am also using a RHEL server for the TFTP requirement along with DHCP provided by my layer 3 switch.
Depending on whether you are using BIOS or UEFI you will need to set the appropriate DHCP options; 66 & 67.
I am using EFI, therefore using UEFI DHCP Boot File Name : snponly64.efi.vmw-hardwired as the value for DHCP option 067.
In the vSphere Client Select the Auto Deploy menu:
If you haven’t setup Auto Deploy previously, click Enable Auto Deploy and Image Builder.
You now need to download the required files, including the boot files mentioned earlier, that you will need to host on your TFTP server by selecting the Download TFTP Zip File link:
Copy the downloaded file to your TFTP server using something like WinSCP and extract the ZIP file to the TFTPRoot directory you configured as part of the TFTP server installation/setup.
Your directory should then look something like this:
Now using PowerShell 5.1 (PowerShell 7 is not supported by the VMware.ImageBuilder module), connect to the vCenter Server and run the following commands to set up the software depots:
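The commands look something like this (the vCenter name is from my lab; the depot URL is VMware’s public online depot):

```powershell
# VMware.ImageBuilder requires Windows PowerShell 5.1
Connect-VIServer -Server vm-vcsa-01.smt-lab.local
Add-EsxSoftwareDepot -DepotUrl "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
```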
You can check you have added the depot successfully by running the following:
Get-EsxImageProfile
Now to create the Deploy Rules. I will be using the latest image, ESXi-7.0U3c-19193900-standard, deploying to my ‘virtual-cluster’, and providing it with a host profile I already had. I have also provided an IP address range for the hosts I want to include. You can instead use the ‘-AllHosts’ parameter if you don’t want to restrict it.
My host profile contains a few basic settings such as the root password, NTP settings & NIC configurations. There are plenty of host configuration options that can be set in this profile, configure the settings you need for your environment or lab.
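A sketch of the rule creation is below; the host profile name and IP range are placeholders standing in for my lab values:

```powershell
# Placeholder host profile name and IP range; adjust for your environment
New-DeployRule -Name "Lab Auto Deploy Rule" `
    -Item "ESXi-7.0U3c-19193900-standard", "virtual-cluster", "Lab-Host-Profile" `
    -Pattern "ipv4=192.168.10.50-192.168.10.59"
```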
You will be able to see the rules in the vSphere Client once complete.
<Side Note>
If you want to be able to manually add rules in the vSphere Client, you will need to manually add the software depot using the same URL used earlier.
You will then be able to manually create Deploy Rules.
</Side Note>
Now back to it…
You will see the Deploy Rule is currently inactive:
Running the following activates the rule:
Add-DeployRule -DeployRule "Lab Auto Deploy Rule"
You can now see the status is Active in the UI.
If using the vSphere Client, you can use the ‘Activate/Deactivate Rules’ button instead if you didn’t want to use PowerShell.
Now before we start deploying hosts, we need to create some! In this case they will be nested hosts with minimum configurations. We will also need some DHCP reservations and appropriate DNS records.
Once in place, we can go ahead and boot the hosts.
Now heading back to the vSphere UI, you will find your newly deployed host(s)!
From a troubleshooting perspective, you will be wanting to take a look in syslog.log on the host. This helped me identify my issues when I hadn’t applied a firewall rule correctly!
As always, thanks for reading!
If you like my content, consider following me on Twitter so you don’t miss out!
Following the last blog post on creating vSphere Port Groups, let’s take a look at creating Tags and Tag Categories.
Let’s first look at the process via the GUI, in this case, the vSphere Client. (Based on vSphere 7.0.3c)
vSphere Client
I won’t go into too much detail here as this information is readily available, but here is a brief run-through.
After logging into the vSphere Client, select the menu followed by Tags & Custom Attributes.
You then have the option to select either Tags or Categories, followed by the ‘New’ option.
For Categories you need to provide the Category name, optional description, the cardinality (single or multiple) and select the objects that can have this tag associated with it.
Then with Tags, you need to provide the name, optional description and the category the tag will be part of.
Now this may be OK for one or two, but if you need to create them in bulk, this will take a while! Let’s look at some alternatives.
PowerShell
Firstly, PowerShell, specifically the VMware PowerCLI PowerShell module. Here are examples of using the cmdlets New-TagCategory and New-Tag to create the same thing we did in the vSphere Client.
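Something like the following, mirroring the values shown in the output below:

```powershell
# Recreate the vSphere Client example with PowerCLI
New-TagCategory -Name "costcentre" -Cardinality Multiple `
    -EntityType "Datastore", "VirtualMachine" -Description "Created with PowerCLI"
New-Tag -Name "0001" -Category "costcentre" -Description "Created with PowerCLI"
```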
Below is the output from PowerShell after running the script above:
Name Cardinality Description
---- ----------- -----------
costcentre Multiple Created with PowerCLI
Name Category Description
---- -------- -----------
0001 costcentre Created with PowerCLI
Now this isn’t much quicker than doing it in the vSphere Client, so here is one way to create them in bulk.
Here is a custom array with multiple categories and the additional values needed to create a Category.
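A sketch of that array and the loops that consume it, with values mirroring the output below:

```powershell
# Categories with the additional values needed to create them
$Categories = @(
    @{ Name = "costcentre";  Cardinality = "Multiple" },
    @{ Name = "environment"; Cardinality = "Single" },
    @{ Name = "nsx-tier";    Cardinality = "Multiple" }
)
foreach ($Category in $Categories) {
    New-TagCategory -Name $Category.Name -Cardinality $Category.Cardinality `
        -EntityType "Datastore", "VirtualMachine" -Description "Created with PowerCLI"
}

# Tags grouped by the category they belong to
$TagGroups = @(
    @{ Category = "costcentre";  Names = "0001", "0002", "0003", "0004" },
    @{ Category = "environment"; Names = "environment", "production", "pre-production", "test", "development" },
    @{ Category = "nsx-tier";    Names = "web", "app", "data" }
)
foreach ($Group in $TagGroups) {
    foreach ($Name in $Group.Names) {
        New-Tag -Name $Name -Category $Group.Category -Description "Created with PowerCLI"
    }
}
```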
Name Cardinality Description
---- ----------- -----------
costcentre Multiple Created with PowerCLI
environment Single Created with PowerCLI
nsx-tier Multiple Created with PowerCLI
Name Category Description
---- -------- -----------
0001 costcentre Created with PowerCLI
0002 costcentre Created with PowerCLI
0003 costcentre Created with PowerCLI
0004 costcentre Created with PowerCLI
environment environment Created with PowerCLI
production environment Created with PowerCLI
pre-production environment Created with PowerCLI
test environment Created with PowerCLI
development environment Created with PowerCLI
web nsx-tier Created with PowerCLI
app nsx-tier Created with PowerCLI
data nsx-tier Created with PowerCLI
That is just one way to create multiple Categories and Tags. As an alternative to creating the array manually, you could take this information from a CSV file using the ‘Import-Csv’ cmdlet.
Terraform
Now let’s take a look at using Terraform to achieve the same result. Terraform is an infrastructure as code tool used to manage infrastructure in the form of configuration files and state:
First we are specifying which Terraform provider we want to use, which will be the vSphere provider in this case. We are then providing some parameters for the provider to connect to your vCenter instance: the VCSA FQDN and credentials. You would want to make use of variables for this data, but for this blog I am keeping it simple.
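A minimal version of that provider configuration looks like this; the credentials are example values only, hard-coded just for the blog:

```hcl
terraform {
  required_providers {
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}

# Example values only; use variables or environment variables in practice
provider "vsphere" {
  vsphere_server       = "vm-vcsa-01.smt-lab.local"
  user                 = "administrator@vsphere.local"
  password             = "ExamplePassword!"
  allow_unverified_ssl = true
}
```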
We then have three vsphere_tag_category resource blocks, one for each of the categories we want to create. This again provides values for cardinality and associable types like we did in PowerShell.
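Here is one of the three category blocks as a sketch; the other two follow the same shape, with ‘environment’ using SINGLE cardinality:

```hcl
resource "vsphere_tag_category" "costcentre" {
  name             = "costcentre"
  cardinality      = "MULTIPLE"
  description      = "Managed by Terraform"
  associable_types = ["Datastore", "VirtualMachine"]
}
```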
Next we are going to create the tags, but I am going to use a set of local variables to then pass into the three vsphere_tag resource blocks to reduce the amount of repeating code.
Here are the local variables. This is similar to creating the array we did in PowerShell.
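The locals block holds one list per category, matching the values created earlier in PowerShell:

```hcl
locals {
  costcentre_tags  = ["0001", "0002", "0003", "0004"]
  environment_tags = ["production", "pre-production", "test", "development"]
  nsx_tier_tags    = ["web", "app", "data"]
}
```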
And then the resource blocks; notice the for_each parameter. For each Tag Category, it will cycle through each value in the locals list for that category, just like the foreach loop we used in PowerShell earlier.
resource "vsphere_tag" "costcentre-tags" {
for_each = toset(local.costcentre_tags)
name = each.key
category_id = vsphere_tag_category.costcentre.id
description = "Managed by Terraform"
}
resource "vsphere_tag" "environment-tags" {
for_each = toset(local.environment_tags)
name = each.key
category_id = vsphere_tag_category.environment.id
description = "Managed by Terraform"
}
resource "vsphere_tag" "nsx-tier-tags" {
for_each = toset(local.nsx_tier_tags)
name = each.key
category_id = vsphere_tag_category.nsx-tier.id
description = "Managed by Terraform"
}
Now, when we run ‘terraform apply’ from the command line to apply our code, this is the output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# vsphere_tag.costcentre-tags["0001"] will be created
+ resource "vsphere_tag" "costcentre-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "0001"
}
# vsphere_tag.costcentre-tags["0002"] will be created
+ resource "vsphere_tag" "costcentre-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "0002"
}
# vsphere_tag.costcentre-tags["0003"] will be created
+ resource "vsphere_tag" "costcentre-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "0003"
}
# vsphere_tag.costcentre-tags["0004"] will be created
+ resource "vsphere_tag" "costcentre-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "0004"
}
# vsphere_tag.environment-tags["development"] will be created
+ resource "vsphere_tag" "environment-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "development"
}
# vsphere_tag.environment-tags["pre-production"] will be created
+ resource "vsphere_tag" "environment-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "pre-production"
}
# vsphere_tag.environment-tags["production"] will be created
+ resource "vsphere_tag" "environment-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "production"
}
# vsphere_tag.environment-tags["test"] will be created
+ resource "vsphere_tag" "environment-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "test"
}
# vsphere_tag.nsx-tier-tags["app"] will be created
+ resource "vsphere_tag" "nsx-tier-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "app"
}
# vsphere_tag.nsx-tier-tags["data"] will be created
+ resource "vsphere_tag" "nsx-tier-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "data"
}
# vsphere_tag.nsx-tier-tags["web"] will be created
+ resource "vsphere_tag" "nsx-tier-tags" {
+ category_id = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "web"
}
# vsphere_tag_category.costcentre will be created
+ resource "vsphere_tag_category" "costcentre" {
+ associable_types = [
+ "Datastore",
+ "VirtualMachine",
]
+ cardinality = "MULTIPLE"
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "costcentre"
}
# vsphere_tag_category.environment will be created
+ resource "vsphere_tag_category" "environment" {
+ associable_types = [
+ "Datastore",
+ "VirtualMachine",
]
vsphere_tag.environment-tags["production"]: Creating...
vsphere_tag.environment-tags["pre-production"]: Creating...
vsphere_tag_category.nsx-tier: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:20a2167a-b0f8-4a60-9d29-6c7ca57711ef:GLOBAL]
vsphere_tag.nsx-tier-tags["data"]: Creating...
vsphere_tag.nsx-tier-tags["app"]: Creating...
vsphere_tag.nsx-tier-tags["web"]: Creating...
vsphere_tag_category.costcentre: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:28a909f5-ee41-4d94-b228-b5e96e09284e:GLOBAL]
vsphere_tag.costcentre-tags["0004"]: Creating...
vsphere_tag.costcentre-tags["0002"]: Creating...
vsphere_tag.costcentre-tags["0003"]: Creating...
vsphere_tag.environment-tags["development"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:5b63e350-ef6e-4bbc-a633-09c9047b327b:GLOBAL]
vsphere_tag.costcentre-tags["0001"]: Creating...
vsphere_tag.environment-tags["pre-production"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:e2a8737c-e42a-4c6f-b9a8-716a1681d0c0:GLOBAL]
vsphere_tag.nsx-tier-tags["data"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:b9d3394d-388c-4018-b7b2-9e4d3da8287b:GLOBAL]
vsphere_tag.costcentre-tags["0002"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:8a482528-5d67-40e9-86cb-4dbf566f85ac:GLOBAL]
vsphere_tag.nsx-tier-tags["web"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:5a325904-4dfd-46ac-b0db-37fd6fda1533:GLOBAL]
vsphere_tag.environment-tags["production"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:89c609b9-7f90-457d-9f71-0bd0b7cc667d:GLOBAL]
vsphere_tag.nsx-tier-tags["app"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:45c2dd0e-533a-4917-82be-987d3245137a:GLOBAL]
vsphere_tag.costcentre-tags["0004"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:230db56e-7352-4e14-ba63-0ad4b4c0ba18:GLOBAL]
vsphere_tag.environment-tags["test"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:ebcf1809-8cae-4cb2-a5fa-82a492e54227:GLOBAL]
vsphere_tag.costcentre-tags["0001"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:e4649ad2-08d2-4dcd-aabf-4e2d74f93a36:GLOBAL]
vsphere_tag.costcentre-tags["0003"]: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:18de9eca-456c-4539-ad6c-19d625ac5be7:GLOBAL]
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
For more information on the vSphere provider from Terraform, check out this link.
I hope this has given you some ideas on how you can perhaps leverage other options besides the GUI, especially when looking to build or configure in bulk. All the code in this blog can be found on my GitHub here.
I recently came across an issue with creating subscribed VMware Content Libraries, and deploying templates from a Content Library.
An error similar to the one below would be received when attempting to deploy a VM template or OVF from a Content Library, or an error related to connection issues when setting up a subscribed Content Library.
Failed to deploy OVF Package. ThrowablePrxy.cause A general system error occurred: Transfer failed.
After some investigation, I came to see that vCenter was attempting to communicate with linked vCenters and hosts via the web proxy configured in the VAMI when deploying an OVF from a Content Library or when trying to synchronise a library.
As this is internal traffic that I didn't want going via the proxy, a support ticket was logged. The advice was to add proxy exceptions, or bypasses, to the proxy file located here on a vCenter Server Appliance:
/etc/sysconfig/proxy
As this information isn't something I managed to find documented publicly, and support were working from internal documentation, I thought I would write a quick post on it to help anyone facing the same issue!
Note: Always test in a non-production environment and contact official support channels!
To begin reviewing and editing this file, you will need to SSH to the VCSA, using your SSH tooling of choice, with the below command:
ssh root@vm-vcsa-01.smt-lab.local
You can then view the file using the following cat command:
cat /etc/sysconfig/proxy
Here is what the default file looks like with the HTTP and HTTPS options set:
# Enable a generation of the proxy settings to the profile.
# This setting allows to turn the proxy on and off while
# preserving the particular proxy setup.
#
PROXY_ENABLED="no"
# Some programs (e.g. wget) support proxies, if set in
# the environment.
# Example: HTTP_PROXY="http://proxy.provider.de:3128/"
HTTP_PROXY="proxy.smt-lab.local"
# Example: HTTPS_PROXY="https://proxy.provider.de:3128/"
HTTPS_PROXY="proxy.smt-lab.local"
# Example: FTP_PROXY="http://proxy.provider.de:3128/"
FTP_PROXY=""
# Example: GOPHER_PROXY="http://proxy.provider.de:3128/"
GOPHER_PROXY=""
# Example: SOCKS_PROXY="socks://proxy.example.com:8080"
SOCKS_PROXY=""
# Example: SOCKS5_SERVER="office-proxy.example.com:8881"
SOCKS5_SERVER=""
# Example: NO_PROXY="www.me.de, do.main, localhost"
NO_PROXY="localhost, 127.0.0.1"
Take note of the section at the bottom, “NO_PROXY”. This is where we need to add the FQDNs of any hosts and vCenter appliances you wish to deploy to or subscribe with. If, however, you don’t want to maintain this for each and every host, you can add a wildcard:
.*.domain.name
Note the ‘.’ at the beginning!
For instance, in my lab I would add the following entry to the NO_PROXY list:
.*.smt-lab.local
To edit this we can use the VI editor (More info on using VI here.):
vi /etc/sysconfig/proxy
Edit the file to include the FQDN’s or a wildcard, based on your requirements.
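If you prefer a non-interactive edit, the change can also be sketched with sed. This is just an illustration, assuming the default NO_PROXY line shown above and my example smt-lab.local domain; it works on a copy in /tmp so you can review the result before moving it into place (the printf fallback simply creates a sample line if the file is absent):

```shell
# Work on a copy for review; fall back to a sample NO_PROXY line if run outside a VCSA
cp /etc/sysconfig/proxy /tmp/proxy.new 2>/dev/null ||
  printf 'NO_PROXY="localhost, 127.0.0.1"\n' > /tmp/proxy.new
# Append the wildcard entry to the end of the existing NO_PROXY list
sed -i 's/^NO_PROXY="\(.*\)"/NO_PROXY="\1, .*.smt-lab.local"/' /tmp/proxy.new
# Show the resulting line
grep '^NO_PROXY' /tmp/proxy.new
```

Once you are happy with the result, move the file back over /etc/sysconfig/proxy.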
As technology moves forward, more and more ways to achieve your goals become available. Many people still rely on the good old trusty GUI, myself included at times. Is this because it’s quicker, more comfortable or familiar? Or perhaps because they don’t realise there are other options out there!?
This blog post will be one of many, where I highlight some of the options available for completing various technical tasks or configurations, in the hope it can provide additional options or tools for consideration.
To kick off, let’s take a look at a common example for a vSphere Administrator, creating Port Groups on a Distributed Switch.
vSphere Client
So let’s first look at the process via the GUI, in this case the vSphere Client. I won’t go into too much detail on the steps involved, as it is a well documented process, but the screenshots are below:
Repeat for the remaining Port Groups and you will be left with the finished article.
And there we have it, three Port Groups on a Distributed Switch. Now, imagine doing this for tens or hundreds of Port Groups? It’s going to be slow and painful, so let’s look at some other options.
PowerShell
Firstly, PowerShell, specifically the VMware PowerCLI PowerShell module. Here is an example script that will create the same three Port Groups that we did using the GUI:
So let’s break down this code. Firstly we are defining some variables:
$vDSName – This is the name of an existing virtual distributed switch in which you will be creating your Port Groups.
$Ports – This defines the number of ports the Port Group will be initially configured with. (By default 128 ports are created, there is nothing wrong with using the default, see the note further down as to why I have specified 8.)
$LoadBalancing – This is the load balancing policy I wish to set for the Port Group. Available options are: LoadBalanceLoadBased, LoadBalanceIP, LoadBalanceSrcMac, LoadBalanceSrcId and ExplicitFailover. This can be adjusted as required.
$ActiveUP – This variable defines the uplinks you wish to set as active for the Port Group. (If you want to add standby uplinks, you could add this parameter in too)
$VDPGS – Finally, this is an array containing both the name and VLAN ID for each Port Group.
Now we have our input information in variables, we move on to the next two lines of code. These are within a ‘ForEach’ loop, which takes each entry within an array and runs a block of code against it. In this case, each Port Group we wish to create.
So for each entry in the array, ‘Get-VDswitch -Name $vDSName‘ gets the existing Virtual Distributed Switch based on the variable and then pipes (‘|’) this into the command (New-VDPortGroup -Name $VDPG.PG -VLanId $VDPG.VLANID -NumPorts $Ports) to create the Port Group on the Distributed Switch, using the properties set for each line of the array.
Secondly, we get the Port Group we just created (Get-VDswitch -Name $vDSName | Get-VDPortgroup $VDPG.PG) and then ‘Get & Set’ the Teaming and Loadbalancing options (Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy $LoadBalancing -ActiveUplinkPort $ActiveUP), again ‘piping’ the results into the next command.
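Putting the breakdown above back together, the whole thing might look something like the sketch below. This is assembled from the commands quoted in the text rather than being the exact script from my GitHub; the switch name, uplink names and Port Group details are examples (the Port Group names and VLANs match the three created in the vSphere Client earlier), so adjust them to suit your environment:

```powershell
# Sketch only: input variables as described above
$vDSName = 'vDS-Workload-Networks'
$Ports = 8
$LoadBalancing = 'LoadBalanceLoadBased'
$ActiveUP = 'dvUplink1', 'dvUplink2'
$VDPGS = @(
    [PSCustomObject]@{ PG = 'dvPG-Guest-VM-1';  VLANID = 20 }
    [PSCustomObject]@{ PG = 'dvPG-Guest-VM-2';  VLANID = 21 }
    [PSCustomObject]@{ PG = 'dvPG-Secure-VM-1'; VLANID = 25 }
)
ForEach ($VDPG in $VDPGS) {
    # Create the Port Group on the existing Distributed Switch
    Get-VDSwitch -Name $vDSName | New-VDPortGroup -Name $VDPG.PG -VLanId $VDPG.VLANID -NumPorts $Ports
    # 'Get & Set' the teaming and load balancing policy on the new Port Group
    Get-VDSwitch -Name $vDSName | Get-VDPortgroup $VDPG.PG | Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -LoadBalancingPolicy $LoadBalancing -ActiveUplinkPort $ActiveUP
}
```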
Below is the output from PowerShell after running the script above:
Terraform
Now let’s take a look at using Terraform to achieve the same result. Terraform is an infrastructure as code tool, used to manage infrastructure in the form of configuration files and state:
provider "vsphere" {
vsphere_server = "vCenter Server FQDN"
user = "Domain\\Username"
password = "Password"
}
data "vsphere_datacenter" "datacenter" {
name = "dc-smt-01"
}
data "vsphere_distributed_virtual_switch" "vds" {
name = "vDS-Workload-Networks"
datacenter_id = data.vsphere_datacenter.datacenter.id
}
resource "vsphere_distributed_port_group" "pg20" {
name = "dvPG-Guest-VM-1"
distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.vds.id
number_of_ports = 8
vlan_id = 20
}
resource "vsphere_distributed_port_group" "pg21" {
name = "dvPG-Guest-VM-2"
distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.vds.id
number_of_ports = 8
vlan_id = 21
}
resource "vsphere_distributed_port_group" "pg25" {
name = "dvPG-Secure-VM-1"
distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.vds.id
number_of_ports = 8
vlan_id = 25
}
Let’s break this down.
First we are specifying which Terraform provider we want to use, in this case the vSphere provider, along with some parameters for Terraform to connect to your vCenter instance: the VCSA FQDN and credentials.
We then have two ‘data’ blocks. These are used to get information about an existing resource, such as the Distributed Switch and the Datacenter it resides in. You could loosely consider this similar to populating variables in the PowerShell example.
Next we have three ‘resource’ blocks. Each block represents one of the three Port Groups we want to configure. It provides parameters for Name, number of ports and vlan ID for each, along with a reference to the Distributed Switch from the ‘data’ block.
Now, when you run ‘terraform apply’ to apply the code, here is the output:
terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# vsphere_distributed_port_group.pg20 will be created
+ resource "vsphere_distributed_port_group" "pg20" {
+ active_uplinks = (known after apply)
+ allow_forged_transmits = (known after apply)
+ allow_mac_changes = (known after apply)
+ allow_promiscuous = (known after apply)
+ auto_expand = true
+ block_all_ports = (known after apply)
+ check_beacon = (known after apply)
+ config_version = (known after apply)
+ directpath_gen2_allowed = (known after apply)
+ distributed_virtual_switch_uuid = "50 33 5e 01 05 1e 32 66-ea f7 7c 42 ce fa f1 96"
+ egress_shaping_average_bandwidth = (known after apply)
+ egress_shaping_burst_size = (known after apply)
+ egress_shaping_enabled = (known after apply)
+ egress_shaping_peak_bandwidth = (known after apply)
+ failback = (known after apply)
+ id = (known after apply)
+ ingress_shaping_average_bandwidth = (known after apply)
+ ingress_shaping_burst_size = (known after apply)
+ ingress_shaping_enabled = (known after apply)
+ ingress_shaping_peak_bandwidth = (known after apply)
+ key = (known after apply)
+ lacp_enabled = (known after apply)
+ lacp_mode = (known after apply)
+ name = "dvPG-Guest-VM-1"
+ netflow_enabled = (known after apply)
+ network_resource_pool_key = "-1"
+ notify_switches = (known after apply)
+ number_of_ports = 8
+ port_private_secondary_vlan_id = (known after apply)
+ standby_uplinks = (known after apply)
+ teaming_policy = (known after apply)
+ tx_uplink = (known after apply)
+ type = "earlyBinding"
+ vlan_id = 20
+ vlan_range {
+ max_vlan = (known after apply)
+ min_vlan = (known after apply)
}
}
# vsphere_distributed_port_group.pg21 will be created
+ resource "vsphere_distributed_port_group" "pg21" {
+ active_uplinks = (known after apply)
+ allow_forged_transmits = (known after apply)
+ allow_mac_changes = (known after apply)
+ allow_promiscuous = (known after apply)
+ auto_expand = true
+ block_all_ports = (known after apply)
+ check_beacon = (known after apply)
+ config_version = (known after apply)
+ directpath_gen2_allowed = (known after apply)
+ distributed_virtual_switch_uuid = "50 33 5e 01 05 1e 32 66-ea f7 7c 42 ce fa f1 96"
+ egress_shaping_average_bandwidth = (known after apply)
+ egress_shaping_burst_size = (known after apply)
+ egress_shaping_enabled = (known after apply)
+ egress_shaping_peak_bandwidth = (known after apply)
+ failback = (known after apply)
+ id = (known after apply)
+ ingress_shaping_average_bandwidth = (known after apply)
+ ingress_shaping_burst_size = (known after apply)
+ ingress_shaping_enabled = (known after apply)
+ ingress_shaping_peak_bandwidth = (known after apply)
+ key = (known after apply)
+ lacp_enabled = (known after apply)
+ lacp_mode = (known after apply)
+ name = "dvPG-Guest-VM-2"
+ netflow_enabled = (known after apply)
+ network_resource_pool_key = "-1"
+ notify_switches = (known after apply)
+ number_of_ports = 8
+ port_private_secondary_vlan_id = (known after apply)
+ standby_uplinks = (known after apply)
+ teaming_policy = (known after apply)
+ tx_uplink = (known after apply)
+ type = "earlyBinding"
+ vlan_id = 21
+ vlan_range {
+ max_vlan = (known after apply)
+ min_vlan = (known after apply)
}
}
# vsphere_distributed_port_group.pg25 will be created
+ resource "vsphere_distributed_port_group" "pg25" {
+ active_uplinks = (known after apply)
+ allow_forged_transmits = (known after apply)
+ allow_mac_changes = (known after apply)
+ allow_promiscuous = (known after apply)
+ auto_expand = true
+ block_all_ports = (known after apply)
+ check_beacon = (known after apply)
+ config_version = (known after apply)
+ directpath_gen2_allowed = (known after apply)
+ distributed_virtual_switch_uuid = "50 33 5e 01 05 1e 32 66-ea f7 7c 42 ce fa f1 96"
+ egress_shaping_average_bandwidth = (known after apply)
+ egress_shaping_burst_size = (known after apply)
+ egress_shaping_enabled = (known after apply)
+ egress_shaping_peak_bandwidth = (known after apply)
+ failback = (known after apply)
+ id = (known after apply)
+ ingress_shaping_average_bandwidth = (known after apply)
+ ingress_shaping_burst_size = (known after apply)
+ ingress_shaping_enabled = (known after apply)
+ ingress_shaping_peak_bandwidth = (known after apply)
+ key = (known after apply)
+ lacp_enabled = (known after apply)
+ lacp_mode = (known after apply)
+ name = "dvPG-Secure-VM-1"
+ netflow_enabled = (known after apply)
+ network_resource_pool_key = "-1"
+ notify_switches = (known after apply)
+ number_of_ports = 8
+ port_private_secondary_vlan_id = (known after apply)
+ standby_uplinks = (known after apply)
+ teaming_policy = (known after apply)
+ tx_uplink = (known after apply)
+ type = "earlyBinding"
+ vlan_id = 25
+ vlan_range {
+ max_vlan = (known after apply)
+ min_vlan = (known after apply)
}
}
Plan: 3 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
vsphere_distributed_port_group.pg20: Creating...
vsphere_distributed_port_group.pg21: Creating...
vsphere_distributed_port_group.pg25: Creating...
vsphere_distributed_port_group.pg25: Creation complete after 0s [id=dvportgroup-2669728]
vsphere_distributed_port_group.pg21: Creation complete after 0s [id=dvportgroup-2669730]
vsphere_distributed_port_group.pg20: Creation complete after 0s [id=dvportgroup-2669729]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
For more information on the vSphere provider from Terraform, check out this link.
You will have noticed that I have explicitly defined the number of ports in both the PowerShell and Terraform examples. This is purely to match up with the default value that is set when using the vSphere Client; 8. By default the port allocation automatically expands as required, so this is for consistency rather than anything else.
If you are someone who relies heavily on a GUI as part of your work, I hope this has given you some ideas on how you can perhaps leverage other options, especially when looking to build or configure in bulk.
Some time back I wrote about setting up and enabling a HyTrust Key Management setup for vSphere to make use of VM and vSAN encryption. Following the release of vSphere 7.0 Update 2, VMware have introduced native key management capabilities! This is a great feature as you no longer require a potentially expensive separate key management solution to make use of vSphere’s encryption offerings.
Let’s take a look at this new capability by heading over to the Key Providers menu on your vCenter object, and selecting ‘Add Native Key Provider’:
Give your provider a name:
It then needs backing up! There is an option to do this next to the ‘Add’ option, or in the flow graphic at the bottom:
It is recommended to protect this with a password. Make sure you keep it safe, along with the key itself, after it downloads when you hit ‘Back Up Key Provider’; you won’t be able to restore the provider without it should you have a need to. Without the provider, any VMs or data encrypted with it will be lost.
Once it’s backed up and safely stored you will have an active KMS! If you have more than one key provider, you can choose to set this one as the default. Any VMs encrypted after the default is changed will use the new provider; any VMs already encrypted will continue to be encrypted with the original key.
If you head over to vSAN services, you will now have your native key provider available and can enable Data-At-Rest encryption as well as Data-In-Transit encryption:
Likewise, if you edit the settings of a VM via the VM Options tab you will be able to enable VM encryption:
There you have it, a native Key Management capability, built into vSphere 7.0 Update 2.
Having recently had to do some work with RDM perennial reservations I looked into ways to make this less of a manual headache. There are plenty of examples out there for doing this, which I took as a basis to make a PowerShell function. If anything it was a great way to refresh my PowerShell skills and an opportunity to learn some new skills.
Note: Although this has been tested in my environment, please make sure you test it appropriately before running against a production environment!
Lets take a look…
Get-PerennialReservation
This function targets a vSphere cluster, gets all RDM disks that are connected to VMs and then queries each host in the cluster to check if the disk/storage device is perennially reserved or not.
There are multiple ways to use it, whether that is by specifying the target cluster using the -Cluster parameter or by piping it from Get-Cluster. You can also specify a specific canonical name or a comma separated string of them, if you just want the status of a single/select disk(s) using the -CanonicalName parameter. There is also an Export flag to export the results to CSV, if you wish to make use of the data outside of PowerShell. You can get the full usage information by running the following command once you have loaded the function into your PowerShell session:
Set-PerennialReservation
This function again targets a vSphere cluster, gets all RDM disks that are connected to VMs and sets the IsPerenniallyReserved flag to ‘True’ on all hosts.
There are multiple ways to use it, like the Get function: specifying the target cluster using the -Cluster parameter or piping it from Get-Cluster. You can still specify a specific canonical name, or a comma separated string of them, if you just want to set the flag of a single/select disk(s) using the -CanonicalName parameter. There is still an Export flag that will provide you an output to CSV. You can get the full usage information by running the following command once you have loaded the function into your PowerShell session:
To complete the set there is a Remove function. This function again targets a vSphere cluster, but this time you need to pass in the canonical name you wish to set the IsPerenniallyReserved flag to ‘False’ for.
To use this one, you need to specify the target cluster using the -Cluster parameter and a specific canonical name, or a comma separated string of them, using the -CanonicalName parameter. There is still an Export flag that will provide you an output to CSV. You can get the full usage information by running the following command once you have loaded the function into your PowerShell session:
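For context, these functions are wrapping the per-host esxcli storage core device setconfig call. A rough one-off equivalent using PowerCLI’s Get-EsxCli V2 interface might look like the sketch below; the host name and device ID are placeholders, and the argument names are my assumption from the V2 interface, so verify against your PowerCLI version:

```powershell
# Sketch: set and verify the perennially-reserved flag on a single host (placeholders throughout)
$esxcli = Get-EsxCli -VMHost 'smt-lab-esx-01.smt-lab.local' -V2
$setArgs = $esxcli.storage.core.device.setconfig.CreateArgs()
$setArgs.device = 'naa.xxxxxxxxxxxxxxxx'      # canonical name of the RDM device
$setArgs.perenniallyreserved = $true
$esxcli.storage.core.device.setconfig.Invoke($setArgs)
# Confirm the flag is now set on this host
$esxcli.storage.core.device.list.Invoke(@{device = 'naa.xxxxxxxxxxxxxxxx'}) |
    Select-Object Device, IsPerenniallyReserved
```

The functions simply repeat this per device, per host across the cluster, which is exactly the manual headache they are designed to remove.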
There are times as a vSphere admin, you are going to want to run ESXCLI commands against multiple ESXi Hosts from a central location. This could be for configuration / administration, reporting, patching or a number of other things.
Recently I have been testing different values in the /DataMover/MaxHWTransferSize advanced setting. To make life easier, I wanted a way to change multiple hosts quickly and easily. To do this, I customised a script that Luc Dekens posted as a solution to a problem someone was having that can be used to send ESXCLI commands to multiple hosts using PowerCLI and plink.exe. This slightly modified version uses a CSV file as a source containing my hosts FQDN and the username and password I will be connecting with.
Plink, which is part of the PuTTY suite, can be found here.
When using this script, you need to either run it from a directory containing the plink executable, copy plink to where you want to run the script, or adjust the script to include the path to the plink executable… whichever takes your fancy.
Disclaimer: Always complete your own testing in an appropriate environment and refer to the vendors official documentation!
$Hosts = Import-Csv C:\ESXiHosts.csv
$Command = 'esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize'
Foreach ($H in $Hosts) {
    # Start the SSH service if it is not already running
    $SSHService = Get-VMHostService -VMHost $H.HostName | Where-Object {$_.Key -eq 'TSM-SSH'}
    if ($SSHService.Running) {
        Write-Host "****************************" -ForegroundColor Blue
        Write-Host "WARNING: SSH already enabled, this will be stopped on completion of this script" -ForegroundColor Yellow
    }
    Else {
        Write-Host "Starting SSH Service on Host $($H.HostName)" -ForegroundColor Green
        Start-VMHostService -HostService $SSHService -Confirm:$false > $null
    }
    # Running the defined ESXCLI Command(s)
    Write-Host "Running remote SSH commands on $($H.HostName)." -ForegroundColor Green
    Echo Y | ./plink.exe $H.HostName -pw $H.Password -l $H.UserName $Command
    # Stop the SSH service
    $SSHService = Get-VMHostService -VMHost $H.HostName | Where-Object {$_.Key -eq 'TSM-SSH'}
    if ($SSHService.Running) {
        Write-Host "Stopping SSH Service on Host $($H.HostName)" -ForegroundColor Green
        Stop-VMHostService -HostService $SSHService -Confirm:$false > $null
        Write-Host "****************************" -ForegroundColor Blue
    }
}
Write-Host "Complete $(Get-Date)" -ForegroundColor Green
You can run as many commands as you need by declaring another ‘Command’ variable at the beginning of the script and adding another line to the ‘Running the defined ESXCLI Command(s)’ section.
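For example, adding a second command would mean one extra variable at the top and one extra plink line in the loop. The vmkping command and the $Command2 name below are purely hypothetical illustrations (the gateway address is the one from my lab):

```powershell
# Hypothetical second command declared alongside the first in the variables section
$Command2 = 'vmkping -c 1 10.200.15.254'
# ...and a matching line added in the 'Running the defined ESXCLI Command(s)' section
Echo Y | ./plink.exe $H.HostName -pw $H.Password -l $H.UserName $Command2
```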
When run, it will then cycle through each of the ESXi hosts from your CSV file, enable SSH (if it’s not already enabled), accept the host key, run the commands you have specified and finally turn the SSH service off.
Here you can see it has set the MaxHWTransferSize to 16384 on each host.
You will see the Recent Task pane show the SSH Service starts and stops.
The commands passed in can be anything you need. All you need to do is change the commands that are defined in the variables section, for example, restarting the management agents.
Recently I decided it was time to add a second vCenter 7.0 Appliance to my main lab environment after the lab containing my SRM and vSphere Replication installation ceased to exist…
I thought I would take the CLI route as its been a while, and thought I’d share!
To begin, you need to decide what you are deploying. There are four deployment options available to you, which you can see listed below. To see them, mount the vCenter ISO image and browse to vcsa-cli-installer\templates\install, where you will find four templates:
Embedded on ESXi
Embedded on VC
Embedded replication on ESXi
Embedded replication on VC.
Note there is no longer a distributed option here, as this is deprecated in 7.0.
For my lab I will be using the third option, ‘Embedded replication on ESXi’. Firstly because I’m deploying to a standalone host and not to an existing vCenter; secondly because I already have an existing VCSA and SSO Domain. This new VCSA will be added, or linked, to the existing VCSA for my ‘Recovery’ site in my Site Recovery Manager (SRM) setup.
If you are looking to deploy your first VCSA, onto a standalone host, you will want to use the ‘Embedded on ESXi’ template.
Once you have decided on the template that suits your scenario, you add some details to it, such as the ESXi host information you are deploying to, networking information, NTP and, in my case, SSO details, as I will be adding it to an existing SSO Domain. One important value is the deployment size (deployment_option in the example below).
A useful command that can be run to help you decide what size appliance is suitable for your needs is:
vcsa-deploy --supported-deployment-sizes
This outputs the vCenter sizing options to assist you, showing the resource requirements as well as the number of hosts and VMs each size can support.
For my lab, ‘tiny’ covers my needs.
Here is the json file I used for the deployment in my lab. I have excluded the passwords for obvious reasons, but the file can be run like this and you will be prompted for the passwords in the terminal.
{
"__version": "2.13.0",
"__comments": "Sample template to deploy a vCenter Server Appliance with an embedded Platform Services Controller as a replication partner to another embedded vCenter Server Appliance, on an ESXi host.",
"new_vcsa": {
"esxi": {
"hostname": "smt-lab-esx-04.smt-lab.local",
"username": "root",
"password": "",
"deployment_network": "vSS_PG_Management",
"datastore": "smt-lab-vmfs-02a"
},
"appliance": {
"__comments": [
"You must provide the 'deployment_option' key with a value, which will affect the VCSA's configuration parameters, such as the VCSA's number of vCPUs, the memory size, the storage size, and the maximum numbers of ESXi hosts and VMs which can be managed. For a list of acceptable values, run the supported deployment sizes help, i.e. vcsa-deploy --supported-deployment-sizes"
],
"thin_disk_mode": true,
"deployment_option": "tiny",
"name": "smt-lab-vcsa-02"
},
"network": {
"ip_family": "ipv4",
"mode": "static",
"system_name": "smt-lab-vcsa-02.smt-lab.local",
"ip": "10.200.15.249",
"prefix": "24",
"gateway": "10.200.15.254",
"dns_servers": [
"10.200.15.10"
]
},
"os": {
"password": "",
"ntp_servers": "0.uk.pool.ntp.org",
"ssh_enable": true
},
"sso": {
"password": "",
"domain_name": "vsphere.local",
"first_instance": false,
"replication_partner_hostname": "smt-lab-vcsa-01.smt-lab.local",
"sso_port": 443
}
},
"ceip": {
"description": {
"__comments": [
"++++VMware Customer Experience Improvement Program (CEIP)++++",
"VMware's Customer Experience Improvement Program (CEIP) ",
"provides VMware with information that enables VMware to ",
"improve its products and services, to fix problems, ",
"and to advise you on how best to deploy and use our ",
"products. As part of CEIP, VMware collects technical ",
"information about your organization's use of VMware ",
"products and services on a regular basis in association ",
"with your organization's VMware license key(s). This ",
"information does not personally identify any individual. ",
"",
"Additional information regarding the data collected ",
"through CEIP and the purposes for which it is used by ",
"VMware is set forth in the Trust & Assurance Center at ",
"http://www.vmware.com/trustvmware/ceip.html . If you ",
"prefer not to participate in VMware's CEIP for this ",
"product, you should disable CEIP by setting ",
"'ceip_enabled': false. You may join or leave VMware's ",
"CEIP for this product at any time. Please confirm your ",
"acknowledgement by passing in the parameter ",
"--acknowledge-ceip in the command line.",
"++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
]
},
"settings": {
"ceip_enabled": true
}
}
}
Once you have prepared your file, there are a couple of commands you can run from a PowerShell prompt to validate your configuration before deploying, saving you some time should mistakes have been made. The first being:
.\vcsa-deploy.exe install --accept-eula --acknowledge-ceip --verify-template-only <Path to json File>
This completes some basic checks to ensure your json file is correct, here is a successful output:
Secondly:
.\vcsa-deploy.exe install --accept-eula --acknowledge-ceip --precheck-only <Path to json File>
This will perform a more in depth validation, checking things like the credentials for your SSO domain, DNS or whether the IP or name you plan to use for your VCSA is in use already.
Note: Make sure you have your DNS setup correctly and is resolving the appliance FQDN!
It will also provide warnings if it thinks you might not be using an appropriate template. I originally specified a host that was already managed by vCenter, so it warned me like so:
You will get a similar output to the first command should you pass all the tests. If not, you will need to resolve the failures to ensure a successful deployment.
The Install!
Once you are confident you have everything in place, including DNS, and your configuration files are correct, you are ready to install:
.\vcsa-deploy.exe install --accept-eula --acknowledge-ceip --no-ssl-certificate-verification <Path to json File>
Here is a cut down version of the output you will see during the deployment:
====== [START] Start executing Task: To validate CLI options at 12:46:25 ======
Command line arguments verfied.
[SUCCEEDED] Successfully executed Task 'CLIOptionsValidationTask: Executing CLI
optionsValidation task' in TaskFlow 'template_validation' at 12:46:26
[START] Start executing Task: To validate the syntax of the template. at
12:46:27
Template syntax validation for template
'M:\Software\VMware\vCenter\embedded_vCSA_replication_on_ESXi.json' succeeded.
Syntax validation for all templates succeeded.
====== [START] Start executing Task: Perform precheck tasks. at 12:46:39 ======
[START] Start executing Task: Verify that the provided credentials for the
target ESXi/VC are valid at 12:46:45
The certificate of server 'smt-lab-esx-04.smt-lab.local' will not be verified
because you have provided either the '--no-ssl-certificate-verification' or
'--no-esx-ssl-verify' command parameter, which disables verification for all
certificates. Remove this parameter from the command line if you want server
certificates to be verified.
================== [START] Start executing Task: at 12:47:47 ==================
= [SUCCEEDED] Successfully executed Task '' in TaskFlow 'install' at 12:47:47 =
[START] Start executing Task: Check whether the datastore's free space
accommodate the VCSA's deployment option at 12:47:51
[SUCCEEDED] Successfully executed Task 'Running precheck: TargetDsFreespace' in
TaskFlow 'install' at 12:47:51
==========VCSA Deployment Progress Report========== Task: Install
required RPMs for the appliance.(RUNNING 5/100) - Setting up storage
VCSA Deployment is still running
==========VCSA Deployment Progress Report========== Task: Install
required RPMs for the appliance.(SUCCEEDED 100/100) - Task has completed
successfully. Task: Run firstboot scripts.(SUCCEEDED 100/100) - Task has
completed successfully.
Successfully completed VCSA deployment. VCSA Deployment Start Time:
2020-12-28T13:19:19.291Z VCSA Deployment End Time: 2020-12-28T14:18:27.103Z
[SUCCEEDED] Successfully executed Task 'MonitorDeploymentTask: Monitoring
Deployment' in TaskFlow 'embedded_vCSA_replication_on_ESXi' at 14:18:45
Monitoring VCSA Deploy task completed
The certificate of server 'smt-lab-vcsa-02.smt-lab.local' will not be verified
because you have provided either the '--no-ssl-certificate-verification' or
'--no-esx-ssl-verify' command parameter, which disables verification for all
certificates. Remove this parameter from the command line if you want server
certificates to be verified.
== [START] Start executing Task: Join active domain if necessary at 14:18:59 ==
Domain join task not applicable, skipping task
[SUCCEEDED] Successfully executed Task 'Running deployment: Domain Join' in
TaskFlow 'embedded_vCSA_replication_on_ESXi' at 14:18:59
[START] Start executing Task: Provide the login information about new
appliance. at 14:19:10
Appliance Name: smt-lab-vcsa-02
System Name: smt-lab-vcsa-02.smt-lab.local
System IP: 10.200.15.249
Log in as: Administrator@vsphere.local
[SUCCEEDED] Successfully executed Task 'ApplianceLoginSummaryTask: Provide
appliance login information.' in TaskFlow 'embedded_vCSA_replication_on_ESXi' at
14:19:10
=================================== 14:19:16 ===================================
Once complete, you will have a second vCenter appliance deployed in Linked Mode with the original. Here it is once I had configured a datacenter and a cluster with two hosts.