Author Archives: Stephan

NSX-T Manager Certificate Replacement

I decided it was time to add VMware NSX-T to my HomeLab. I had been putting it off for a while but I couldn’t avoid it any longer!

Once I had fired up my NSX Manager nodes and cluster (I am using version 3.1), I looked at installing certificates. I chose to use a single certificate for all three NSX Managers and the cluster, using Subject Alternative Names (SANs), to simplify the process; it also means I don't need to renew four certificates each time.

As this is a different process to other VMware products, I have put together a quick run-through of how to achieve it.

Firstly, we need to generate the CSR from one of the NSX Manager nodes using openssl. SSH to one of your nodes and run the following command to create a new file called ‘ssl.conf’:

vim ssl.conf

Then populate this file with the below text, changing the values to suit your environment. I have left my values in to help with reading the file. If you are using a single NSX manager in your lab, you can remove the lines for DNS.3, DNS.4, IP.3 and IP.4.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[ req_distinguished_name ]
countryName = GB
stateOrProvinceName = Labshire
localityName = Lab City
organizationName = SMT-Lab
organizationalUnitName = SMT-Lab
commonName = vm-nsx-00.smt-lab.local

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = vm-nsx-00.smt-lab.local
DNS.2 = vm-nsx-01.smt-lab.local
DNS.3 = vm-nsx-02.smt-lab.local
DNS.4 = vm-nsx-03.smt-lab.local
IP.1 =
IP.2 =
IP.3 =
IP.4 =

Now to generate the CSR, run the following, replacing the file names to suit:

openssl req -out vm-nsx-00.smt-lab.local.csr -newkey rsa:2048 -nodes -keyout vm-nsx-00.smt-lab.local.key -config ssl.conf -sha256

This will generate two files in the current working directory: your CSR and the private key. Using something like WinSCP, copy the files off the NSX Manager to a location of your choice.
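Before copying them off, it can be worth confirming that the SANs actually made it into the request. A quick check (using the CSR file name from the example above) is:

```
openssl req -in vm-nsx-00.smt-lab.local.csr -noout -text | grep -A 2 "Subject Alternative Name"
```

You should see all of the DNS names (and any IPs) from your ssl.conf listed in the output.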

Head off to your CA and issue the certificate using the CSR.

Now you need to append the root and issuing (if you have an issuing CA) certificates to the certificate you just created. This completes the chain. Also have the private key handy, as you are going to need it.
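If you are doing this on a Linux box (or via WSL), building the chain is just a concatenation. The file names below are hypothetical, so substitute your own; the order matters — your certificate first, then the issuing CA, then the root CA:

```
cat vm-nsx-00.smt-lab.local.crt issuing-ca.crt root-ca.crt > vm-nsx-00.smt-lab.local-chain.crt
```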

We are now ready to import the certificate. Head to System > Certificates > Import and select Import Certificate.

Give it a name, browse to the certificate file that now includes the certificate chain, followed by browsing to the private key file. Be sure to change the ‘Service Certificate’ slider to ‘No’ and then click Import.

Once imported you can select it and see that it includes the certificates in the chain.

Now to assign them! Firstly, click on the identifier next to the name and copy the value. This is what will be used to target the certificate in the next steps.

To validate and replace the certificates in NSX we need to use the API. Using a tool like Postman, first validate the certificate by running the following as a GET request. Note that you need to provide credentials for the NSX Managers on the Authorization tab.
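Based on the NSX-T 3.x API, the validation call takes the following shape — replace the FQDN with your manager or cluster address and the certificate ID with the identifier you copied earlier:

```
GET https://vm-nsx-00.smt-lab.local/api/v1/trust-management/certificates/<certificate-id>?action=validate
```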


A result of '"status" : "ok"' is what we are looking for here.

Now it's confirmed valid, let's replace the certificate on the NSX Manager cluster by running the following POST request:
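Again hedged against the NSX-T 3.x API, the cluster certificate is applied with a call of this shape, using the same certificate ID:

```
POST https://vm-nsx-00.smt-lab.local/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=<certificate-id>
```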


Then it's time to apply it to all nodes by running the request below against each node in turn:
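Assuming the node FQDNs from the SAN list in the example above, the per-node calls would look like this (each one is run against the node it applies to):

```
POST https://vm-nsx-01.smt-lab.local/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>
POST https://vm-nsx-02.smt-lab.local/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>
POST https://vm-nsx-03.smt-lab.local/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>
```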




And that completes the replacement. If you browse to either your cluster address or individual nodes, you will see your new certificate in place.

You can find the full VMware documentation on this here.

Thanks for reading!

Sessions I’ll Be Attending at VMworld 2021

VMworld 2021 is right around the corner! This year we are again unable to attend in person due to COVID-19; however, this still gives everyone the chance to attend, as it's an online event!

It is running from the 5th to the 7th of October this year.

Like many, I have been marking various sessions as favourites in the catalog ready for when they can be booked. Here are a few that are on my list this year.

Firstly, Frank’s session on NUMA has been on my list for all 3 years I have been attending. For anyone who uses vSphere, this is a must.

60 Minutes of Non-Uniform Memory Access (NUMA) 3rd Edition [MCL1853]

Pass Type: General and Tech+ Passes
Session by: Frank Denneman

“Although we enrich the stack with multiple layers of abstraction, obtaining consistent performance boils down to understanding the fundamentals. This requires the admin and the architect to focus on individual host components again. In this session, we dive into the impact the Multi-chip Module (MCM) has on scheduler behavior and workload sizing. Learn the underlying configuration of a virtual machine and discover the connection between the General-Purpose Graphics Processing Unit (GPGPU) and the NUMA node. Determine how the cores-per-socket impact a virtual NUMA configuration. We will look at the impact of heterogeneous clusters on workload performance and how you can detect faux-wide virtual machine configurations. You will understand how your knowledge of NUMA concepts in your cluster can help the developer by aligning the Kubernetes nodes to the physical infrastructure with the help of VM Service.”

The next two sessions are focused on performance. I'm personally always looking for ways to improve performance via configuration and tuning, but also for ways to identify performance issues.

Extreme Performance Series: vSphere Advanced Performance Boot Camp [MCL2033]

Pass Type: Tech+ Pass Only
Session by: Mark Achtemichuk & Valentin Bondzio

“The VMware vSphere Advanced Performance Boot Camp provides the most advanced technical performance-oriented training available about vSphere performance design, tuning and troubleshooting. Hosted by VMware Certified Design Expert Mark Achtemichuk, we will cover a broad range of topics on all resource dimensions, including the VMware ESXi scheduler, memory management, storage and network optimization. The student will become empowered to identify the location of performance issues, diagnose their root cause, and remediate a wide variety of performance conundrums using the many techniques practiced by the most seasoned vSphere veterans and VMware internal experts. Armed with the knowledge provided in the class will allow you to confidently approach virtual performance and manage it successfully.”

Not got the Tech+ Pass? Here is an alternative session available on the General Pass – Extreme Performance Series: Performance Best Practices [MCL1635].

Deep Dive: VM Performance and Best Practices [VI2158]

Pass Type: Tech+ Pass Only
Session by: Jimmy Arias

“This session will provide a very detailed and technical explanation of the utilization of resources by VM, how to evaluate the performance indicators using ESXtop, and how to better architect and create solutions for performance issues.”

Having watched a couple of sessions this year by David Klee on performance tuning SQL on vSphere, this session was one of the first on my favourite list. Having supported SQL in some fashion my entire career, I am always looking to learn how to get the best performance possible.

Meet the Experts: Virtualizing Microsoft SQL Server on vSphere – Stories from the Trenches [MCL1318]

Pass Type: Tech+ Pass Only
Session by: Deji Akomolafe & David Klee

“Virtualizing Microsoft SQL Server (the most virtualized mission-critical application) on VMware vSphere has become the standard for SQL Server deployments around the world. As vSphere continues to be the target platform for most SQL Server workloads and, with vSphere now being available in all major public cloud infrastructures, it is a given that you will virtualize your SQL Server workloads. The degree to which you will achieve bare-metal performance is up to how well you align SQL Server with the underlying infrastructure. This session (based on more than two decades of field experience) presents the common pitfalls you need to avoid and those you need to embrace as you run (or plan) your SQL Server instances on premises or in one of the various hybrid cloud options based on vSphere available from AWS, Microsoft, Google, and more.”

Another topic I have really enjoyed getting involved in this year has been Infrastructure as Code (IaC), Packer & Terraform specifically. So this year, I was for sure going to have this session on my list.

Automation Showdown: Imperative vs Declarative [CODE2786]

Pass Type: General and Tech+ Passes
Session by: Luc Dekens & Kyle Ruddy

“The automation landscape has always been a source of rapid innovation. Historically, the languages, whether it’s Perl, Python, vRealize Orchestrator JavaScript, or PowerCLI, may have changed, but the imperative, step-by-step workflows you’ve learned and know have not. However, a new challenger has appeared. Declarative workflows upended the usual processes and even the languages all in the name of infrastructure as code. Human readable, plain text files can be interpreted by products like HashiCorp Terraform and RedHat Ansible to do the heavy lifting of the imperative process. The key is knowing when, how, and where to use each method within your VMware environment. Join Luc and Kyle for this session where they will discuss these different styles of automation, complete with practical examples that you can use in your own environment!”

If like me you like to see things in action, Kyle also has a Live Coding session – Live Coding: Terraforming Your vSphere Environment [CODE2755].

Both of these sessions are available on the General Pass, so there's no excuse to miss out on them!

Finally, a session on Azure VMware Solution (AVS). As I am currently studying for my Microsoft AZ-104 exam, I wanted to start exploring and learning about this offering. Perhaps you are already using Azure or O365 and want to begin looking into the options for extending your vSphere solution to the cloud? If so, this session is definitely worth looking at!

Azure VMware Solution: Deployment Deep Dive [MCL2036]

Pass Type: Tech+ Pass Only
Session by: Jeremiah Megie & Steve Pantol

“In this session, we will discuss planning and deployment of Azure VMware Solution beyond the quick start. We will cover planning for network addressing, connectivity, integrating into an existing Azure hub and spoke or virtual WAN deployment, configuring monitoring and management, and establishing governance controls.”

If you haven’t already, head over to the VMworld website and register for the event! All content can be found in the Content Catalog, so get browsing!

As always, thanks for reading!

Enabling VM Rightsizing in vRealize Operations Manager (vROPS)

One of the many great features of vRealize Operations Manager (vROPS) is the ability to identify and address over- or under-sized virtual machines.

I was asked a short while ago why the option to resize a VM was unavailable or ‘greyed out’ as you can see below.

This feature is something that you need to enable per connection, or ‘Cloud Account’. In this instance, this is my connection to vCenter.

You can check this by heading to Administration > Cloud Accounts and then selecting the three ‘dots’ next to the connection you want to check, or enable it for.

When reviewing the connection configuration, you can see that the ‘Operational Actions’ option is not enabled.

Go ahead and select it.

Now if you head back to the Rightsizing section, you will see that you have the option to resize the VMs (for the connection or Cloud Account you have enabled it for). One thing to note: the account you have used for the credentials on this connection requires the appropriate privileges to modify the VMs!

Once you click resize, you can then confirm the suggested resizing and continue.

Hope you found this useful. Once again thank you for reading!

Getting Started With Packer to Create vSphere Templates – Part 5 – Bringing it Together!

Here we are, Part 5! If you have stuck with me through this series, thank you for taking the time. If not, you can catch up with Parts 1–4 by searching my blog!

I wanted to end this series with something different to just text, code and images. So I am going to show you the end-to-end template deployment process with videos, using user defined variables, but with a few environment variables in the Linux example.

Lets start with a Windows example – Windows 2019 Core

To give some context to the files being referenced for this build, here is the folder structure I will be working with, all of which is available on the link above.

From the root directory of your configuration, run the following:

packer build --var-file=./win2019core.pkrvar.hcl .

The trailing ‘.’ is important as this tells Packer that it needs to reference all of the .hcl files in its current directory.
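Before kicking off a build, it can be worth validating the configuration first. `packer validate` accepts the same var-file and directory arguments as `packer build` and will catch syntax errors and missing variables without touching vSphere:

```
packer validate --var-file=./win2019core.pkrvar.hcl .
```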

And here is the finished article in the content library!

Now let's look at a Linux example that acquires its kickstart configuration file from an HTTP server, rather than it being loaded as removable media – CentOS 8.

This example also makes use of environment and user defined variables!

And again, the finished article.

If you have followed this series throughout, thank you. I hope you have found it useful and it's inspired you to begin your Packer journey! Feel free to reach out via my socials if you have any questions or just want to chat about Packer!


Getting Started With Packer to Create vSphere Templates – Part 4 – Blocks

Welcome to Part 4 of the Packer series! In this part we will look at putting together all the blocks and files we need to deploy a template!

As we have touched upon in earlier parts, we have multiple blocks and files available to us that can be used to make up a complete configuration. We will walk through a complete Source and Build Block here using user defined variables to complete the build. In the final part of this series, I will use a combination of user and environment variables to give you an idea of how you may use this outside of a lab.

Let's start by breaking down a Source block for a Windows 2019 Core template.

source "vsphere-iso" "win-2019-std-core" {
  CPUs            = var.CPUs
  RAM             = var.RAM
  RAM_reserve_all = var.ram_reserve_all
  boot_command    = var.boot_command
  boot_order      = var.boot_order
  boot_wait       = var.boot_wait
  cluster         = var.vsphere_compute_cluster
  content_library_destination {
    destroy = var.library_vm_destroy
    library = var.content_library_destination
    name    = var.template_library_Name
    ovf     = var.ovf
  }
  datacenter           = var.vsphere_datacenter
  datastore            = var.vsphere_datastore
  disk_controller_type = var.disk_controller_type
  firmware             = var.firmware
  floppy_files         = var.config_files
  folder               = var.vsphere_folder
  guest_os_type        = var.guest_os_type
  insecure_connection  = var.insecure_connection
  iso_paths            = [var.os_iso_path, var.vmtools_iso_path]
  network_adapters {
    network      = var.vsphere_portgroup_name
    network_card = var.network_card
  }
  notes          = var.notes
  password       = var.vsphere_password
  communicator   = var.communicator
  winrm_password = var.winrm_password
  winrm_timeout  = var.winrm_timeout
  winrm_username = var.winrm_user
  storage {
    disk_size             = var.disk_size
    disk_thin_provisioned = var.disk_thin_provisioned
  }
  username       = var.vsphere_user
  vcenter_server = var.vsphere_server
  vm_name        = var.vm_name
  vm_version     = var.vm_version
}

All values are passed in via variables in this example. You can see this by the ‘var.<variable_name>’ entry against every configuration line. All variables in this example are user defined variables in a pkrvar.hcl file.

We have configuration for CPU, Memory and disk sizes for instance, then we also have the WinRM username, password and timeout values used for connecting to the operating system after it’s been installed, for use with provisioners.

You can deploy your template as just a ‘normal’ VM Template in the VM and Templates Inventory by using this line:

convert_to_template        = true

Or using a variable:

convert_to_template             = var.convert_to_template

Or you can deploy to Content Libraries by either removing the “convert_to_template” option or setting it to false, and replacing it with this:

  content_library_destination {
    library = var.content_library_destination
    name    = var.template_library_Name
  }

If you already use Content Libraries, then you are likely going to want to continue to do so. Or, if you have multiple vCenters, you may want to make use of subscribed libraries so you only have to deploy the template once.

To go further, you can automatically destroy the original VM after it's been uploaded to the Content Library by adding:

destroy = var.library_vm_destroy

And to take it even further, you can add the following to convert the template to an OVF. OVFs can be updated in the Content Library and will therefore be overwritten when you deploy your template again. This can't be done with a standard VM template.

ovf     = var.ovf

To bring that all together it looks like this:

  content_library_destination {
    destroy = var.library_vm_destroy
    library = var.content_library_destination
    name    = var.template_library_Name
    ovf     = var.ovf
  }

A key line to point out in this Windows example configuration above is the ‘floppy_files’ option. This option is used to mount a floppy disk with any configuration files or media that you need to reference during the operating system installation. This includes your unattended.xml file, any scripts, and any media or drivers such as the VMware Paravirtual drivers for the SCSI controller. Check out Part 2 for more info.

If we were looking at a Linux build, we would see the WinRM options replaced by SSH, like so:

  ssh_password = var.ssh_password
  ssh_timeout  = var.ssh_timeout
  ssh_username = var.ssh_username

A full list of the different configuration options available can be found here.

Now we have defined our source, we want to deploy it using a build block.

build {
  name    = "win-2019-std-core"
  sources = ["source.vsphere-iso.win-2019-std-core"]

  provisioner "powershell" {
    scripts = var.script_files
  }

  provisioner "windows-update" {
    search_criteria = "IsInstalled=0"
    filters = [
      "exclude:$_.Title -like '*Preview*'",
    ]
    update_limit = 25
  }

  post-processor "manifest" {
    output     = "output/out-win-2019-std-core.json"
    strip_path = false
  }
}

What's happening in this block is that we are referencing the source block that contains our configuration, based on the name of the source block we defined earlier, in this case 'source.vsphere-iso.win-2019-std-core'.

In this example we also have two provisioners being used once the operating system has been installed. Firstly, the Windows Update provisioner, which installs the latest Windows updates based on any filters you include. In this example, it's configured to exclude any updates with ‘Preview’ in the title and to install up to 25 updates.

Secondly, we are making use of the Manifest post-processor. This produces an output that includes information such as build time each time it is run.

{
  "builds": [
    {
      "name": "win-2019-std-core",
      "builder_type": "vsphere-iso",
      "build_time": 1617185954,
      "files": null,
      "artifact_id": "windows-2019-std-core",
      "packer_run_uuid": "865be1fd-0dec-1688-8c89-9252e48d0818",
      "custom_data": null
    }
  ],
  "last_run_uuid": "865be1fd-0dec-1688-8c89-9252e48d0818"
}

All of the above makes up a complete build file that can be deployed with any media or variables you have referenced. The full set of files for this example can be found here.

To give you an example of a non-windows Provisioner, here is a Shell Provisioner for a Linux template:

provisioner "shell" {
    execute_command = "echo '${var.ssh_password}' | sudo -S -E bash '{{ .Path }}'"
    scripts         = var.script_files
}
This executes all scripts that are referenced in the script_files variable.

Now, using environment variables, nothing really changes. Your build file will look the same; the only difference is that you won't provide a value for your declared variable in your pkrvar.hcl file, instead adding the variable to your terminal session. Check out Part 3 for more info. In the final part of this series, I will show an example of using both user defined and environment variables.

That concludes a short run-through of the different files in the examples you can find on my GitHub. By no means have I covered everything in those examples or everything you can do with Packer, but this series, along with the examples, should help you on your way with discovering Packer! There is so much more that can be done using this product to create templates on vSphere, as well as multiple other platforms, so do head over to the official Packer documentation to discover more.

In the final part of this series, I am going to try a different content type: videos! In these, we will run through two end-to-end template deployments using default values for variables, user defined variables and environment variables, to show how you could use this as part of a workflow.

If you have gotten this far, thanks for sticking with me and I hope you have enjoyed it and found it useful!


Getting Started With Packer to Create vSphere Templates – Part 3 – Variables

Welcome back to part 3 of my Creating vSphere Templates using Packer series, if you missed part 1 or 2, you can find them here and here. In part 3 we will explore variables!

Why would we use variables? Variables allow you to specify customisations to your templating code without having to edit your actual build files. This can be useful when you are reusing code for multiple templates.

There are multiple types of variables that can be used, but we will talk about two types of input variables in this post: what I will refer to as user defined variables and environment variables. We will cover both, and the use cases for each, during the blog post.

Regardless of whether we use a user defined variable or an environment variable, we still need to declare them. This is done in a variable declaration file, so let's start with that!

Variable Declaration

Following the release of Packer version 1.7, the HashiCorp Configuration Language (HCL) format is now the preferred language over JSON. Everything you will see will be in HCL.

The variable declaration file is a pkr.hcl file used to declare any variables you will be using as part of your configuration, be it user defined or environment variables.

Let's take a look at a few of the variable types you can make use of, as well as some of the options you can set.

Variable Type

Here are a few common variable types. You don't have to define a type at all, but you could then pass the wrong type of data into your config.

  • String – E.g. The templates name or the name of a datastore.
  • Boolean – E.g. A true or false value for whether you are using thin or thick provisioned disks.
  • List – E.g. A list of DNS server IP addresses.

We will see examples of these later on.
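As a quick sketch ahead of those examples, declarations for each of these types might look like this in HCL (the variable names echo ones used elsewhere in this series; 'dns_servers' is a hypothetical addition):

```hcl
# A string - e.g. a datastore name
variable "vsphere_datastore" {
  type = string
}

# A boolean - e.g. thin vs thick provisioned disks
variable "disk_thin_provisioned" {
  type = bool
}

# A list - e.g. DNS server IP addresses
variable "dns_servers" {
  type = list(string)
}
```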

Default Value

You can set default values for variables. These values will be used if no other variable value is found in either your pkrvar.hcl file or as an environment variable. Using default values can help reduce the amount of repeat configuration if you use a shared variable definition file.


Description

Another useful option is the ability to provide a description for a variable. This can be useful if you need to add any additional information about the variable, or why a particular default has been set.


Sensitive

You can also mark variables as sensitive for values such as keys, passwords or usernames; however, you can mark any variable as sensitive if you have a need to. When a variable is marked as sensitive, it will not be displayed in any of Packer's output.

User Defined Variables

Let's take a look at a few examples of declared variables in the variables.pkr.hcl file, as well as any values then set for those variables in the user variables file. You will see a couple of examples of variables that have default, type and sensitive options set, to give you an idea of some of the use cases.

Let's start with a basic user defined variable:

Variable Declaration – variables.pkr.hcl:

variable "vsphere_datastore" {}
variable "vsphere_portgroup_name" {}

Variable Definition – template.pkrvar.hcl:

vsphere_datastore      = "ds-vsan-01"
vsphere_portgroup_name = "dvPG_Demo_DHCP_149"

So in this example, we are declaring that we are going to use variables called ‘vsphere_datastore’ and ‘vsphere_portgroup_name’. We then have values defined for these variables in our pkrvar.hcl file. The value can be of any data type, as no type has been defined.

Variable Declaration – variables.pkr.hcl:

variable "content_library_destination" {
  type    = string
  default = "Images"
}

Variable Definition – template.pkrvar.hcl (nothing defined = the default value would be used):

content_library_destination = "ISOs"

In this example we have declared a variable with the type ‘String’, and also provided a default value. The configuration will use this default if no other value is defined either via a user variable or environment variable, but will be overridden should a variable value be set.

Variable Declaration – variables.pkr.hcl:

variable "vsphere_server" {
  type        = string
  default     = "vm-vcsa-01.smt-lab.local"
  description = "vCenter Server FQDN"
}

Variable Definition – template.pkrvar.hcl (nothing defined = the default value would be used):

vsphere_server = "vcsa-02.smt-lab.local"

Here is an example again using a type and default values, but also providing a description to provide some additional information. Like the previous example, not providing a variable value either in the pkrvar.hcl file or in the terminal session as an environment variable, would result in the default value being used.

Variable Declaration – variables.pkr.hcl:

variable "vsphere_user" {
  type      = string
  default   = "packer_build@smt-lab.local"
  sensitive = true
}

Variable Definition – template.pkrvar.hcl:

Nothing defined – the default value is used.

In this final example we are using the sensitive option. This will stop the value being displayed in any Packer output. Again, it’s using a default value, so you do not need to define a value in the pkrvar.hcl file unless you want to use a different value to this default.

Environment Variables

Now let’s take a look at environment variables. These are especially useful if you want to use Packer as part of a workflow or automation pipeline, or to pass in secrets (passwords or keys) into the workflow from a secret management tool.

You still declare all your variables in your variables.pkr.hcl file as you would for user defined variables, but instead of providing a value in your pkrvar.hcl file, you create environment variables in your terminal session, in this case, PowerShell.

Packer will look for variables in the session with the prefix of PKR_VAR_. If Packer finds any variables with this prefix, it knows they are for its use.

You do not need to add this prefix anywhere in your configuration as Packer knows to ignore the prefix when matching the variable name.

For example, let's set the vSphere connection password in the PowerShell session we are using. This can be done by running the following to set the variable:

$env:PKR_VAR_vsphere_password = "VMware123!"

This example will match up to the variable declaration:

variable "vsphere_password" {}

You do not need to provide a value in your pkrvar.hcl file as Packer will read the value from the ‘PKR_VAR_vsphere_password’ environment variable.
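If your terminal session is Bash rather than PowerShell, the equivalent (using the same variable name and example value as above) would be:

```shell
# Bash equivalent of the PowerShell example above:
# Packer picks up any environment variable with the PKR_VAR_ prefix
export PKR_VAR_vsphere_password="VMware123!"
```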

NOTE: If you also provide a user defined variable in pkrvar.hcl, this will take precedence over the environment variable.

You can find HashiCorps documentation on variables here, have a read to discover even more options.

Referencing a Variable from Build Blocks

Now we have taken a brief look at some of the ways to declare and define variables, let's take a look at how you use them in your source block!

Here are some examples:

  username       = var.vsphere_user
  vcenter_server = var.vsphere_server
  vm_name        = var.vm_name
  vm_version     = var.vm_version

There are two components here. Firstly, ‘var.’, which indicates that a variable is being referenced; secondly, the name of the variable you wish to reference. Each variable referenced will need to exist in variables.pkr.hcl, with either a default value specified or a user or environment variable set. Whether you are using environment or user defined variables, the syntax is the same. Remember that you do not need to include ‘PKR_VAR_’ in the variable name in the source block when referencing an environment variable; it's only needed as a prefix when actually setting the variable.

That concludes my brief overview of user defined and environment variables. Do check out the link to HashiCorp's official documentation above; you can also find an example of a variable declaration file here, and a pkrvar.hcl file here, on my GitHub.

In Part 4 we will put all the blocks and files together to complete the configuration before moving onto the final part of the series, where we will deploy some templates!

Thanks for reading!

Getting Started With Packer to Create vSphere Templates – Part 2 – Answer Files and Scripts

Welcome to part 2 of my Getting Started with Packer series, if you missed part 1, you can find it here. In part 2, we will take a look through an important part of creating your vSphere templates; Answer Files and scripts.

Firstly, we will look at a couple of example scripts that can be used to configure your operating system before it's turned into your template. We will then move on to answer files, which allow an automated, non-interactive installation of your operating system. These answer files provide configuration details during the operating system installation.

Let’s get started!

Scripts, Drivers and Media

Scripts can be referenced during the installation of the operating system via the answer file, as VMware Tools is in the Windows example below, or they can be run by a provisioner, via PowerShell or Shell, after the operating system install has completed. If media is required during the installation of the operating system, such as disk controller drivers or VMware Tools, it needs to be made available to the operating system during installation. This can be achieved in multiple ways: floppy disk, CD-ROM, an HTTP server, or a combination. Either way, you are going to need them available; more on how to make them available later in the series, but for now let's look at a couple of examples.


Disabling TLS (Windows)

Here is an example script for disabling TLS 1.0 and 1.1 on Windows using PowerShell. This could be run during the installation via the answer file, or via the PowerShell provisioner. If running during the installation of the OS, it must be mounted as media during the installation. If it's being run via the provisioner, it can be referenced directly from the working directory of the machine you are running Packer from.

#Disable TLS 1.0
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols" -Name "TLS 1.0"
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0" -Name "Server"
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0" -Name "Client"
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client" -Name "Enabled" -Value 0
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client" -Name "DisabledByDefault" -Value 1
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" -Name "Enabled" -Value 0
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server" -Name "DisabledByDefault" -Value 1
#Disable TLS 1.1
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols" -Name "TLS 1.1"
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1" -Name "Server"
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1" -Name "Client"
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -Name "Enabled" -Value 0
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client" -Name "DisabledByDefault" -Value 1
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -Name "Enabled" -Value 0
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" -Name "DisabledByDefault" -Value 1

This is a simple script that creates the required registry entries to disable TLS versions 1.0 & 1.1.

Updating Installed Packages (Linux)

Now let’s look at a CentOS example. Here is an example Shell script for updating all installed packages which, again, can be run via the answer file (kickstart.cfg) or via the Shell Provisioner.

# Update existing packages
sudo yum update -y


Depending on the type of disk controller you plan on using for your templates and subsequent virtual machines, you may need to make drivers available during the operating system installation. An example of this is the driver for the ParaVirtual SCSI (PVSCSI) disk controller. These drivers aren’t available during a Windows installation by default and need to be provided.

These can be mounted via floppy or another method. They just need to be available during the operating system installation. I stick to floppy currently as I don’t have to do anything other than reference the folder containing the drivers, along with my answer file and required scripts:

floppy_files         = var.config_files

This is the floppy_files config line referencing the variable ‘config_files’. That variable references the path and file name of each file I wish to make available to the VM.

Here is detail of that variable as an example. It is referencing files in two directories, config and scripts, within my template parent directory.

config_files            = ["config/autounattend.xml","scripts/pvscsi","scripts/install-vm-tools.cmd","scripts/enable-winrm.ps1"]

If you don’t provide drivers where needed, your operating system installation will fail.
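To show how those two pieces fit together, here is a minimal sketch of a Packer HCL fragment that declares the variable and consumes it in a vsphere-iso source block. The source name ‘windows2019’ and the omitted builder arguments are illustrative only, not a complete working definition:

```hcl
# Illustrative fragment only: a vsphere-iso source consuming the
# config_files variable as floppy content. All other required
# builder arguments (vCenter connection, ISO, hardware, etc.)
# are omitted for brevity.
variable "config_files" {
  type    = list(string)
  default = ["config/autounattend.xml", "scripts/pvscsi", "scripts/install-vm-tools.cmd", "scripts/enable-winrm.ps1"]
}

source "vsphere-iso" "windows2019" {
  # ...connection, ISO and hardware settings...
  floppy_files = var.config_files
}
```

Keeping the file list in a variable like this means the same source block can be reused across templates by swapping the variable values.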


Depending on what you intend to install on your templates, you will need to make any install media or install scripts available. Like above, you can either mount any media to the VM using the floppy_files option and run the installs from the answer file, or via the Provisioner referencing your local working directory.

Examples of media or installations could be security products such as antivirus or Data Loss Prevention agents, or management/monitoring agents such as System Center Configuration Manager (SCCM) or System Center Operations Manager (SCOM).

There is no right or wrong answer as to what you should include in your templates; this is something you need to decide based upon your needs and environment. Although I would say, keep them as light as possible and use the right tool for the job. Consider using configuration management tools when it’s the right time, too!

Answer Files

As we touched upon above, answer files are used to provide configuration details during the operating system install. In this blog, we will take a look at two types of answer file: a Windows autounattend.xml and a CentOS kickstart.cfg.

Let’s begin with the Windows answer file. You can create a Windows answer file using the Windows System Image Manager (Windows SIM), which you can find more information on here.

There are multiple sections within this file from the locale settings, disk partition configurations, the edition of Windows and even a section to stop the administrator account from expiring.

Here is a cut-down example of a Windows answer file; you can find a complete example on my GitHub:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
    <settings pass="windowsPE">
        <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="" xmlns:xsi="">
        <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="" xmlns:xsi="">
                <Disk wcm:action="add">
                        <CreatePartition wcm:action="add">
                        <CreatePartition wcm:action="add">
    <settings pass="specialize">
        <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="" xmlns:xsi="">
                <RunSynchronousCommand wcm:action="add">
                    <Description>Disable Network Discovery</Description>
                    <Path>cmd.exe /c a:\disable-network-discovery.cmd</Path>
    <settings pass="oobeSystem">
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="" xmlns:xsi="">
                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c wmic useraccount where "name='Administrator'" set PasswordExpires=FALSE</CommandLine>
                    <Description>Disable password expiration for Administrator user</Description>
                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c a:\install-vm-tools.cmd</CommandLine>
                    <Description>Install VMware Tools</Description>
                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\enable-winrm.ps1</CommandLine>
                    <Description>Enable WinRM</Description>
    <cpi:offlineImage cpi:source="wim:c:/wim/install.wim#Windows Server 2019 SERVERSTANDARDCORE" xmlns:cpi="urn:schemas-microsoft-com:cpi" />

Key parts of this file are the installation of VMTools and the enabling of WinRM:

<SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c a:\install-vm-tools.cmd</CommandLine>
                    <Description>Install VMware Tools</Description>
<SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\enable-winrm.ps1</CommandLine>
                    <Description>Enable WinRM</Description>

VMware Tools is important to make sure the correct drivers are installed, ensuring you can get a network connection, and WinRM (along with the appropriate firewall rules) needs to be enabled to allow Packer to carry out any post-install configuration via the PowerShell Provisioner block. If WinRM isn’t enabled and working, you won’t be able to complete any post-install configuration actions!

You will notice both these actions are achieved by running a script from the floppy drive (a:\&lt;scriptname&gt;). As touched upon earlier, these files can be made available to the virtual machine as a floppy disk (other options are available) while it is built, and are subsequently removed when the build is complete.

One other setting to mention: the Administrator password is encrypted; you don’t want to be leaving this in plain text!


Let’s now take a look at a Linux kickstart.cfg file, again cut down, but a complete annotated example can be found here:

lang en_GB
keyboard --vckeymap=uk --xlayouts='gb'
network --onboot yes --bootproto=dhcp --activate
rootpw --iscrypted $1$JlSBrxl.$ksXaF7TIE.70iV12//V4R0
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --permissive
timezone --utc Europe/London --isUtc
bootloader --location=mbr --append="crashkernel=auto rhgb quiet" --password=$1$JlSBrxl.$ksXaF7TIE.70iV12//V4R0
autopart --type=lvm
clearpart --linux --initlabel
firstboot --disabled
eula --agreed
services --enabled=NetworkManager,sshd
user --name=linux_user --iscrypted --password=$1$JlSBrxl.$ksXaF7TIE.70iV12//V4R0 --groups=wheel
%packages --ignoremissing --excludedocs
%end

%post
chkconfig ntpd on
chkconfig sshd on
chkconfig ypbind on
chkconfig iptables off
chkconfig ip6tables off
chkconfig yum-updatesd off
chkconfig haldaemon off
chkconfig mcstrans off
chkconfig sysstat off
echo "linux_user        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers.d/linux_user
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
%end

reboot --eject

Although this may look completely different, it is still doing similar things to a Windows answer file.

We are still detailing locale settings and encrypted passwords:

rootpw --iscrypted $1$JlSBrxl.$ksXaF7TIE.70iV12//V4R0
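If you need to generate one of these MD5-crypt hashes yourself, openssl can do it from the command line. A minimal sketch, assuming openssl is installed; the salt ‘JlSBrxl.’ and password ‘Sup3rSecret!’ are placeholder values you should replace with your own:

```shell
# Generate an MD5-crypt hash suitable for the kickstart --iscrypted options.
# 'JlSBrxl.' is an example salt and 'Sup3rSecret!' a placeholder password.
openssl passwd -1 -salt JlSBrxl. 'Sup3rSecret!'
```

The output takes the form $1$&lt;salt&gt;$&lt;hash&gt; and can be pasted straight into the rootpw, user or bootloader lines.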

There is also a networking section. In this case I am specifying that I want the operating system to use DHCP:

network --onboot yes --bootproto=dhcp --activate

The packages section is also quite useful. Here you can specify any packages you want to install during the operating system installation.

%packages --ignoremissing --excludedocs
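As a sketch of how that section might be fleshed out (the package names below are illustrative; open-vm-tools is a common inclusion for vSphere templates, but pick what suits your environment):

```
%packages --ignoremissing --excludedocs
@core
open-vm-tools
sudo
%end
```

Group names prefixed with @ pull in a whole package group, while plain names install individual packages.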

In Part 3 we will dive into variables in more detail!

Thanks for reading!

Getting Started With Packer to Create vSphere Templates – Part 1

Virtual machine templates, why? Templates are a great way to achieve consistent, repeatable and fast virtual machine (VM) deployments, be it in an on-premises vSphere environment or a cloud-based one. Having up-to-date VM templates for each of the operating systems you use is key to being able to deploy infrastructure quickly and easily across multiple platforms.

In this series of blog posts, I will be focusing on deploying virtual machine templates in vSphere, specifically vSphere 7, using a product called Packer by HashiCorp. Packer is an Infrastructure as Code tool specifically for template management.

There is so much that can be done with Packer. I aim to be able to give you enough information to be able to start your journey with Packer.

Throughout this series, I will reference two templates as examples. A Windows (Windows 2019 Core), and a Linux (CentOS 7) template to give you an idea of the differences, and will also give you a basis that you can apply to other operating systems. But to start we of course need to know how to install Packer and understand the components! Let’s get started!

Installing Packer

Firstly, you are going to need to download the Packer executable from the Packer website, here. The latest version at the time of writing is 1.7.0. This is an exciting release for many reasons, but specifically because it has moved over to HCL (HashiCorp Configuration Language) from JSON! This brings it closer in line with other HashiCorp products, such as Terraform, which already use HCL.

You have a choice of downloads for multiple operating systems. Everything in this blog series will be done on a Windows 10 machine.

Now you need to unzip the download and copy ‘packer.exe’ to either an existing PATH directory or create a new one. For simplicity here, I have copied the executable to ‘C:\Windows\System32’.

Another option for installing on Windows is to use Chocolatey by running the following:

choco install packer

All installation options can be found here.

Once done, you can confirm it’s working by opening a PowerShell terminal and running the Packer executable:


The Command Line

Packer has a simple command line to build your templates; you will have seen the available options when you ran ‘packer’ to check your install.

Let’s take a look at a few of them that we might see during this series:

  • build: Builds the template you have defined.
  • fmt: This is a nice command that will format your code. Anyone who likes their code tidy and consistently formatted will like this one!
  • hcl2_upgrade: For anyone that has been using Packer with JSON configuration files, this is a great starting point to get your code converted to HCL. Be aware it’s not perfect in my experience and will need to be manually tweaked, but it gets you on your way.
  • validate: This checks whether your template is valid. It will check to make sure your syntax is correct and has all mandatory values set for any resources you make use of.
  • version: This is a quick easy way to check which version of Packer you are currently using.

As we saw above, you can find brief descriptions for the remaining options by running ‘packer’ from the command line.


There are multiple blocks that can be used to build your virtual machine templates. You can find a complete list here, but let’s take a look at some of the ones you will see throughout this series.

Source Blocks

There are two types of source blocks, top level that can be used and reused by multiple builder blocks, and then there are nested source blocks which can be used to inject build specific content.

Build Blocks

Build blocks are used to build your templates, in this case by referencing a source block. It can reference any top level source blocks you have referenced or source blocks nested within your build and merge them to produce a complete configuration.

Provisioner blocks and Post-Processor blocks are also referenced in the build block. More on what they do below…

Snippet of a build block referencing a top-level source block

Provisioner Blocks

Provisioners are how you interact with your template once the operating system is installed. They use either SSH or WinRM to communicate with the operating system.

We will be focusing on three provisioners throughout the series; Shell for Linux, PowerShell for Windows, and a community managed provisioner called ‘Windows Update Provisioner’.

Both PowerShell and Shell can be used to run scripts, commands, copy files (you can also use the File Provisioner to do this), install software, basically anything you want. The Windows Update provisioner, is exactly what it sounds like. It’s a way of installing the latest Windows patches. More on that later.

There are multiple HashiCorp supported provisioners available which you can find here.

Post-Processor Blocks

Finally, Post-Processors: these run once the build is complete, but it’s not mandatory to use them. I haven’t really used them yet, apart from producing a manifest file, which you will see included later in the series.

Information on the available Post-Processors can be found here.

Folder Structure, Configuration Files and Scripts

There are many ways to lay out the configuration files for your templates, in whichever directory structure you wish. This is the way I have found logical for me; by separating the configuration out into multiple files (mainly the three highlighted in bold below), it makes it easier to reuse your code.

Folder Structure

--> OperatingSystemName

You can have a set of folders per operating system.

Configuration Files, Scripts and Drivers

All Packer configuration files use the file extension .pkr.hcl, apart from your user-defined variables file, which uses the .pkrvar.hcl extension. Let’s take a look at each file.

Variables Declaration file (example – variables.pkr.hcl): This file is where you declare all the variables you want to reference in your source, build or provisioner blocks. This includes user-defined variables and environment variables.

User Defined Variables file (example – win2019.pkrvar.hcl): This file is where you will define your user variable values. This could include values for template name, CPU, RAM and disk size, for instance. These variable values are in plain text, so you don’t want to be keeping sensitive values such as passwords in this file in any scenario outside of a lab. These can be handled by environment variables, which we will see in later parts of the series.

‘Build’ file (example – win2019.pkr.hcl): This is where you define your template using a Source Block, as mentioned earlier, and build it using a Build Block. In this case we are going to be using the ‘vsphere-iso’ source.

Operating System Answer File (example – autounattend.xml): This is the answer file needed to complete the installation of your operating system. For Windows this would be an autounattend.xml file and for CentOS, a kickstart.cfg file.

Scripts and Drivers: Finally you will need any scripts, drivers or media ready to reference in either the answer files or for use by a provisioner. The output file is not a prerequisite, as this is generated by the post processor at the end of the build.

In a later part of this series I will break down each of the components and blocks, and explain the content of a Windows and Linux template build in further detail.

So what next? In part 2, we will take a closer look at the operating system answer files and some example scripts & drivers that can be used or are required.

Thanks for reading!

Upgrading Site Recovery Manager (SRM) 8.3.1 to 8.4

I recently started looking at prerequisites for a vSphere 7 upgrade, by reviewing any associated upgrades that might be needed. VMware Site Recovery Manager was one product that needed to be upgraded prior to this. I decided I would fire up a quick nested setup in my HomeLab to run through the process beforehand and share it!

This nested lab consists of two ESXi 6.7 nested hosts, two vCenter 6.7 VCSA’s and two SRM 8.3.1 appliances, with the VCSA’s and SRM appliances having custom CA certificates installed.

I made use of @lamw’s VirtuallyGhetto Nested ESXi Appliances for the host deployment via the subscribed content library he offers. (Super easy to deploy nested hosts quickly if you haven’t come across this before!)

Now on to the upgrade.

Firstly, make sure you have sufficiently backed up your environment! Take a backup of your SRM configuration by using the Export/Import SRM Configuration Tool within SRM. Once you click export, it will allow you to download the config backup to your local machine. Then take a snapshot of the SRM appliances.

During the upgrade, SRM does not retain any advanced settings that you configured in the previous installation, so make sure you have made a note of any modified advanced settings such as timeouts etc before beginning.

Note: protection groups and recovery plans that are not in a valid state will not be preserved!

Other important checks before you begin:

Verify that there are no pending cleanup operations on recovery plans and that there are no configuration issues for the virtual machines that Site Recovery Manager protects.

  • All recovery plans are in the Ready state.
  • The protection status of all the protection groups is OK.
  • The protection status of all the individual virtual machines in the protection groups is OK.
  • The recovery status of all the protection groups is Ready.

Now, mount the SRM 8.4 ISO to the appliance you are going to upgrade first, and log into the SRM VAMI. Browse to the update section and edit the update source to be CD-ROM.

You will then get the option to install 8.4.

Providing you are in an appropriate window to take your SRM solution offline, have no recoveries in progress and have checked the list of important steps above, hit install and follow the prompts.

If you are upgrading other VMware products too, make sure you visit this site to review the order for upgrading other components, such as vSphere Replication.

Once the upgrade is complete, log back into the SRM VAMI. You will see a prompt to reconfigure the connection to vCenter/PSC.

Hit the ‘RECONFIGURE’ button and follow the wizard to reconnect to your vCenter and PSC.

Once complete, refresh your browser and log back in. You will now see your successfully upgraded SRM appliance running 8.4 and connected to your vCenter/PSC.

Sometimes clearing your browser cache is needed should you get oddities…

Now repeat the process for your partner SRM appliance.

Once complete, you should now have two upgraded SRM appliances!

From here you may need to update the Storage Replication Adapters (SRAs) if you are using array-based replication. Check the VMware Compatibility Matrix – here.

You can find VMware’s official documentation here.

Thanks for reading!

Upgrade vRealize Operations Manager (vROPS) 7.5 to 8.4

Recently I tested a vRealize Operations Manager (vROPS) upgrade from version 7.5 to 8.4 ahead of a vCenter 7.0.2 upgrade and thought I would share the process.

Something worth noting with this upgrade: vROPS 7.5 is based on SUSE Linux, while 8.4 is based on Photon OS.

Before we install the 8.4 update, make sure you back up any customised content and install the vRealize Operations Manager Pre-Upgrade to 8.4.0 Assessment Tool! This will inform you of any content that is being removed that could affect your metrics/content and advise of any upgrade issues.

Make sure you download the correct upgrade assessment and upgrade .pak file. You will find options for 7.x to 8.4 and 8.x to 8.4.

From the vROPS admin console, head over to the Software Update section.

Upload the appropriate pre-upgrade assessment .PAK and complete the wizard.

Follow the rest of the wizard through to the end and click Install.

Now you can view the report by following the instructions detailed in this article. This will tell you what’s going to break… Make sure you extract all the ZIP files within the download, otherwise you just get a ‘Loading…’ message!

It looks something like this, although this is a blank setup for testing purposes.

Now it’s time to upload the actual update, in the same manner as the pre-upgrade assessment.

Again, follow the rest of the wizard through to the end and click Install.

Should you hit a problem with the installer hanging on step 4 of 9, firstly make sure you are able to log into your root account via SSH. If not, reset the password using this procedure. If you are still getting stuck after this, take a look at this article.

Once complete, you will be running vRealize Operations Manager 8.4!

Thanks for reading.
