Category archive: ESXi

Reset VMware ESXi root password

The root account is the only built-in login account on vSphere ESXi. There is no additional account that can serve as a backdoor to log on to vSphere ESXi when the root password is lost. When a vSphere ESXi host is added to a vCenter instance, management of the host is primarily done via vCenter. Troubleshooting ESXi is done primarily on the command line over an SSH connection. By default the SSH service is stopped. To start the SSH service you have to access the server via vCenter: Host > Configure > System > Services. When you don't have the root password for the vSphere ESXi host, you have to follow the procedure below.

This procedure uses the Host Profiles functionality, which is only available with an Enterprise license. If you have lost the root password but don't have an Enterprise license, your only option is to reinstall the host.

Read more

VMworld 2021 Top 10 sessions to watch

It is that time of the year again to start looking forward to VMworld 2021. Due to the ongoing Covid-19 pandemic, VMworld 2021 will again be "fully virtual".

The upside of a virtual event is that you don't need to walk across a big conference complex to get from one session to another. You can follow the conference from the comfort of your own chair and desk. Pour your own drink of choice, sit back, relax and take in all the information on VMware's latest and greatest from your own home. Because VMworld 2021 will be fully virtual, like last year, it will be easier for people to attend, since you don't need to arrange travel (flight/hotel) to attend VMworld.

Read more

VIBS Error vSphere ESXi upgrade

Recently I was upgrading a vSphere ESXi host from version 6.5.0 (7388607) to version 7.0.1. vCenter for this environment had already been upgraded. At first I tried to start the upgrade via VMware Lifecycle Manager, but that resulted in an error indicating that the vCenter/Lifecycle Manager and ESXi versions were not working well together. To make progress I accessed the server via its Integrated Lights-Out (iLO) interface (HPE), mounted the HPE ESXi image through iLO and booted the server.

During the upgrade process the installer finds the drive where ESXi is installed. The next step is that the installer scans the current installation to see if an upgrade is possible. At this point the installer throws the following error.


From the list the installer shows here, it is clear that these VIBs are for storage drivers that are no longer in use by ESXi.

The correct way to resolve these errors is to remove the unused storage drivers from ESXi. The next step is to reboot (F11) the server. When ESXi is completely loaded, I connect via SSH (I use the MobaXterm client).


With the following command we retrieve the name of the package:

esxcli software vib list | grep

The output shows that the package is called net-mst.

With the following command we remove this VIB.

esxcli software vib remove -n net-mst

After we remove all the VIBs mentioned in the error above, the VMware vSphere ESXi upgrade can be restarted.
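When the installer lists several leftover VIBs, the list-and-remove steps can be scripted. Below is a minimal sketch, not an official procedure: only `net-mst` comes from this post, the sample listing is shaped like (but not copied from) real `esxcli software vib list` output, and `DRYRUN=1` makes the script echo the removal commands instead of executing them.

```shell
# Sketch: collect VIB names from (captured) `esxcli software vib list` output
# and remove each one. Set DRYRUN=0 on a live host to actually remove them.
DRYRUN=1

list_stale_vibs() {
  # `esxcli software vib list` prints a header row, a separator row of dashes,
  # then one VIB per line with the name in the first column.
  awk 'NR > 2 { print $1 }'
}

remove_vib() {
  if [ "$DRYRUN" = "1" ]; then
    echo "esxcli software vib remove -n $1"
  else
    esxcli software vib remove -n "$1"
  fi
}

# Sample output in the shape of `esxcli software vib list` (illustrative only).
sample_output="Name      Version  Vendor  Acceptance Level  Install Date
--------  -------  ------  ----------------  ------------
net-mst   4.0.0    MEL     PartnerSupported  2018-01-01"

printf '%s\n' "$sample_output" | list_stale_vibs | while read -r vib; do
  remove_vib "$vib"
done
```

On a live host you would pipe `esxcli software vib list` straight into the filter instead of using the captured sample.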

Advanced Cross vCenter vMotion

VMware released vSphere version 7.0 U1c – 17327586 in December 2020. Next to the cool new features included in this version (this blog is all about one of those cool features), another very important reason to download and install this version of vSphere is that it closes a major security issue in previous versions. You can find more info on this here.

New features in this version of vSphere include the following:

  • Physical NIC statistics
  • Advanced Cross vCenter vMotion
  • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines
  • Third-party plug-ins to manage services on the vSAN Data Persistence platform

The VMware release notes have the following to say about this new feature:

With vCenter Server 7.0 Update 1c, in the vSphere Client, you can use the Advanced Cross vCenter vMotion feature to manage the bulk migration of workloads across vCenter Server systems in different vCenter Single Sign-On domains. Advanced Cross vCenter vMotion does not depend on vCenter Enhanced Linked Mode or Hybrid Linked Mode and works for both on-premise and cloud environments. Advanced Cross vCenter vMotion facilitates your migration from VMware Cloud Foundation 3 to VMware Cloud Foundation 4, which includes vSphere with Tanzu Kubernetes Grid, and delivers a unified platform for both VMs and containers, allowing operators to provision Kubernetes clusters from vCenter Server. The feature also allows smooth transition to the latest version of vCenter Server by simplifying workload migration from any vCenter Server instance of 6.x or later.

In this blog we will describe the process of importing VMs from a 6.7 vCenter into the updated 7.0.1 vCenter, making use of the Cross vCenter vMotion technology. To prepare the environment for Cross vCenter vMotion, the vMotion network has to be configured with a gateway.


On the receiving side we tried to vmkping the sending host over the vMotion VMkernel port. When this failed, we added a route to the foreign network via the gateway. When we retried the vmkping, it was successful.

On the sending side we also configured the vMotion network with a gateway entry.
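On both sides this boils down to the same two steps. A minimal sketch is shown below; the vmk number, networks and addresses are made-up examples, not values from this environment, and `DRYRUN=1` only echoes the commands instead of running them.

```shell
# Sketch: add a static route for the remote vMotion subnet, then verify
# reachability over the vMotion VMkernel interface. Example values only.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

VMOTION_VMK="vmk1"          # vMotion-enabled VMkernel port (assumed)
REMOTE_NET="10.20.30.0/24"  # vMotion subnet on the other vCenter's hosts (example)
GATEWAY="10.10.10.1"        # gateway on the local vMotion network (example)
PEER_IP="10.20.30.11"       # vMotion IP of a host on the other side (example)

# Route vMotion traffic for the foreign subnet via the gateway.
run esxcli network ip route ipv4 add --network "$REMOTE_NET" --gateway "$GATEWAY"

# Verify reachability over the vMotion VMkernel interface.
run vmkping -I "$VMOTION_VMK" "$PEER_IP"
```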


To start the process of performing a Cross vCenter vMotion, we right-click on the cluster or ESXi host.


Click on Import VMs


Select source vCenter


Select the VMs you want to move.


Select the host to transfer the compute to.


Select the destination storage.


Select networks.


Select vMotion priority.

Ready to complete; click Finish.

The 7.0.1 environment also makes use of NSX-T network virtualization. Why is this important to mention? If you want to perform a rollback, you can't move a VM that is connected to an NSX-T managed port group to a non-NSX-T managed port group. To remediate this issue, create a non-NSX-T port group with the same VLAN and connect the VM you want to roll back to that port group.
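Creating such a standard (non-NSX-T) port group can also be done from the ESXi command line. The sketch below uses a hypothetical vSwitch, port group name and VLAN ID, not values from this environment; `DRYRUN=1` echoes the commands instead of executing them.

```shell
# Sketch: create a standard port group and tag it with the same VLAN ID as the
# NSX-T segment the VM currently uses. All names/IDs here are examples.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

VSWITCH="vSwitch0"       # standard vSwitch on the source host (assumed)
PORTGROUP="PG-Rollback"  # hypothetical port group name
VLAN_ID=100              # must match the VLAN of the NSX-T segment (example)

run esxcli network vswitch standard portgroup add -p "$PORTGROUP" -v "$VSWITCH"
run esxcli network vswitch standard portgroup set -p "$PORTGROUP" --vlan-id "$VLAN_ID"
```

After the VM is reconnected to this port group, the Cross vCenter vMotion back to the source side no longer hits the NSX-T restriction.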

vSAN Hybrid / All Flash

As a VMware partner we (my employer, PQR) conduct VMware Health Checks. To perform a Health Check on a vSphere (or EUC, NSX-T) environment, VMware provides a tool to check whether the environment matches the VMware best practices: the VMware Health Analyzer. The VMware Health Analyzer is a Photon appliance that you install in the client environment. There is also a Windows-installed version of the VMware Health Analyzer, but my preference is the appliance version. I also have the appliance running in my own environment, so after collecting data at a customer site I can load that information into my own appliance; this means I don't need a connection to the customer to create my Health Check report. The current version of the VMware Health Analyzer is:

Next to the VMware Health Analyzer, the consultant checking the VMware environment will also use his own knowledge to check the environment and to interpret the data presented by the VMware Health Analyzer.

VMware Health Analyzer
The screenshot above is from a lab environment.

Recently we did a Health Check on a vSphere 6.7 environment for a large company. The environment consists of six vSphere hosts in a single vSAN cluster. Before the Health Check, the customer decided to expand the environment with four extra hosts. The original vSAN cluster, consisting of those six vSphere servers, is a hybrid vSAN; the disk groups on the four new servers are all-flash. This has resulted in a combined vSAN with hybrid and all-flash disk groups, a setup that is not supported by VMware. When we investigated the servers of the hybrid vSAN, we noticed that the disks in those servers are also all-flash, but marked as HDD.
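How ESXi itself classifies the devices can also be checked on the command line, by looking at the "Is SSD" flag in `esxcli storage core device list`. The sketch below parses captured output in that shape; the device names and values are made up for illustration.

```shell
# Sketch: pair each storage device with its "Is SSD" flag from (captured)
# `esxcli storage core device list` output. Device names here are examples.
sample_output="naa.5000000000000001
   Display Name: Local Disk (naa.5000000000000001)
   Is SSD: false
naa.5000000000000002
   Display Name: Local Disk (naa.5000000000000002)
   Is SSD: true"

# Lines without leading spaces are device identifiers; the indented
# "Is SSD:" line that follows tells how ESXi classifies that device.
printf '%s\n' "$sample_output" | awk '
  /^[^ ]/   { dev = $1 }
  /Is SSD:/ { print dev, $NF }'
```

A device showing `false` here while physically being flash is exactly the mismatch described above.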

Disk group “Hybrid” servers


Disk Group All Flash servers


For performance reasons we highly recommend using an all-flash vSAN instead of a hybrid vSAN.

Advantages of an all-flash vSAN:

  1. Make use of space efficiency: Deduplication and compression;
  2. Provide organizations with the ability to run business critical applications and OLTP databases using vSAN, enabled by fast, predictable throughput and lower latency;
  3. Give customers the ability to scale and support a significantly larger number of VMs and virtual desktops using the same compute and network resources;
  4. Increase business agility and productivity by enabling IT to provision services faster, increasing user satisfaction and executing on faster backup and disaster recovery for production deployments;
  5. Combine the benefits of vSAN and flash to deliver a lower TCO using less power, cooling, data center floor space and other resources per virtual machine, virtual desktop or transaction;
  6. While data de-staging happens from cache to capacity, flushing of data happens far faster in an all-flash vSAN than in a hybrid (HDD + SSD) vSAN, helping define better SLAs.

Converting the disk groups and converting the vSAN from hybrid to all-flash has a large impact and must be well prepared before being executed.
We proposed the following method.

  1. Remove three “new” servers from the current vSAN cluster;
  2. Build a new All Flash vSAN Cluster with these three servers;
  3. Add the new vSAN cluster to the VMware Horizon environment;
  4. Empty the remaining seven servers one by one, and add them to the new all-flash vSAN;
  5. Once the old cluster is empty, delete it.

Thanks to Ronald de Jong

vSphere Cluster Services (vCLS)

In vSphere 7.0 Update 1 (released in October 2020) a new feature was introduced called vSphere Cluster Services (vCLS). The purpose of vCLS is to ensure that cluster services, such as vSphere DRS and vSphere HA, are available to maintain the resources and health of the workloads running in the cluster. vCLS is independent of vCenter Server availability.

vCLS uses agent virtual machines to maintain cluster services health. vCLS runs in every cluster, even when cluster services like vSphere DRS and vSphere HA are not enabled.

The architecture of the vCLS control plane consists of a maximum of three virtual machines, also called system or agent VMs. The vCLS VMs are placed on separate hosts in a cluster. In a smaller environment (fewer than three hosts) the number of vCLS VMs equals the number of hosts. SDDC (Software-Defined Datacenter) admins do not need to maintain the life cycle of these vCLS VMs.

The architecture for the vSphere Cluster Services is displayed in this image.

(Image source: vSphere 7 Update 1 – vSphere Clustering Service (vCLS), VMware vSphere Blog)

The vCLS VMs that form the cluster quorum state are self-correcting. This means that when the vCLS VMs are not available, vSphere Cluster Services will try to create, update or power on the vCLS VMs automatically.


There are three health states for the cluster services:

  • Healthy: vSphere Cluster Services health is green when at least one vCLS VM is running in the cluster. To maintain vCLS VM availability, a cluster quorum of three vCLS VMs is deployed.

  • Degraded: This is a transient state when at least one of the vCLS VMs is not available but DRS maintains functionality. The cluster can also be in this state when vCLS VMs are being redeployed or powered on after some impact to the running vCLS VMs.

  • Unhealthy: A vCLS unhealthy state occurs when DRS loses its functionality due to the vCLS control plane not being available.

The vCLS VMs are automatically placed in their own folder within the cluster.


The vCLS VMs are small, with minimal resources. If no shared storage is available, the vCLS VMs are created on local storage. If a cluster is created before shared storage is configured on the ESXi hosts (for instance vSAN), it is strongly recommended to move the vCLS VMs to the shared storage once it is available.

The vCLS VMs run a customized Photon OS. The image below shows the resources of a vCLS VM.


The 2 GB virtual disk is thin provisioned. The vCLS VM has no NIC; it does not need one to communicate, because vCLS uses a VMCI/vSocket interface to communicate with the hypervisor.

The health of vCLS VMs, including their power state, is managed by vSphere ESX Agent Manager (EAM). In case of a power-on failure of vCLS VMs, or if the first instance of DRS for a cluster is skipped due to lack of quorum of vCLS VMs, a banner appears on the cluster summary page along with a link to a Knowledge Base article to help troubleshoot the error state. Because vCLS VMs are treated as system VMs, you do not need to back up or snapshot these VMs. The health state of these VMs is managed by vCenter services.

Tags: VMware, vSphere, vCLS

Configure vMotion in vSphere environment

Recently I expanded my lab environment with a second vSphere host. One of the advantages of having two vSphere hosts is that you can move machines from one vSphere host to the other. If you perform this move while the machine is powered down, you don't need any additional configuration. However, if you want to move a running machine from one vSphere host to the other without losing connectivity to that VM, you need vMotion. First, let me explain what vMotion is.

vMotion in vSphere allows a running virtual machine to move between two different vSphere hosts. During vMotion, the memory of the VM is sent from the running VM to the new VM (the instance on another host that will become the running VM after the vMotion). The content of memory changes all the time, so vSphere uses an iterative scheme: the content is sent to the other VM, then vSphere checks which data has changed and sends that, each time in smaller blocks. At the last moment it very briefly 'freezes' the existing VM, transfers the last changes in memory, starts the new VM and removes the old one. This process minimizes the time during which the VM is suspended.
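The iterative copy described above can be illustrated with a toy calculation; the memory size, dirty fraction and cut-over threshold below are made-up numbers for illustration, not VMware internals.

```shell
# Toy simulation of vMotion's iterative pre-copy: each round copies the
# remaining dirty memory, while a fraction of it gets dirtied again.
MEM_MB=4096        # total VM memory to copy (example)
DIRTY_RATE=8       # assume 1/8 of what was copied gets modified again
THRESHOLD_MB=16    # when the remainder is this small, freeze and switch over

remaining=$MEM_MB
round=0
while [ "$remaining" -gt "$THRESHOLD_MB" ]; do
  round=$((round + 1))
  echo "round $round: copying ${remaining} MB"
  # While this round was being copied, a fraction of it was modified again.
  remaining=$((remaining / DIRTY_RATE))
done
echo "final stun: copying last ${remaining} MB and switching over"
```

The point of the toy numbers: the amount left to copy shrinks quickly each round, so the final "stun" only has to transfer a tiny remainder, which is why the VM is suspended so briefly.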

Read more

Managing vSphere with PowerCLI (creating Tags)

VMware vSphere PowerCLI is a command-line tool for automating vSphere and vCloud management.

VMware PowerCLI is a very powerful command-line tool that lets you automate nearly all aspects of vSphere management, including, among others, network, storage and guest OS.

PowerCLI is distributed as PowerShell modules, and includes over 500 PowerShell cmdlets.

The first version of PowerShell was released in November 2006 for Windows XP, Windows Server 2003 and Windows Vista. We have come a long way since then. PowerShell is an important part of today’s IT landscape. By using PowerShell you can manage systems from different vendors in a unified way. I find myself using PowerShell almost every day in my work.

Read more

How to remain in control of your Horizon environment

In this blog post I want to share a simple piece of advice that will help you maintain your VMware Horizon environment. Image management is an important part of managing a VMware Horizon environment. If you are using Instant Clones (the future-proof way of delivering VMware Horizon VDIs in your environment), then during the image publishing phase (referred to as the priming phase) VMware Horizon creates the following VMs: a CP-Template and a CP-Replica (both are powered off; there is one of each per datastore per desktop pool) and a CP-Parent (this machine is powered on; there is one per ESXi host per datastore per desktop pool).

Read more

VMware ESXi : Thick disk to Thin disk

At the end of last year I reinstalled my home lab environment from Microsoft Hyper-V to VMware ESXi. I converted my VMs to the ESXi format with the VMware offline converter. This process went smoothly; to be honest, way smoother than expected. I worked a lot with VMware in the past, but since I started working at PQR I have worked with VMware more and more. I really like the management features in VMware vCenter.
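One common command-line way to turn a thick disk into a thin one on ESXi is to clone it with vmkfstools. Below is a minimal sketch, not this post's exact procedure; the datastore paths are examples, and `DRYRUN=1` echoes the commands instead of running them.

```shell
# Sketch: clone a thick VMDK to a thin-provisioned copy with vmkfstools.
# Paths are example values; adjust to your datastore and VM.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi; }

SRC="/vmfs/volumes/datastore1/myvm/myvm.vmdk"        # thick source disk (example)
DST="/vmfs/volumes/datastore1/myvm/myvm-thin.vmdk"   # thin destination (example)

# Clone the thick disk to a new thin-provisioned disk.
run vmkfstools -i "$SRC" -d thin "$DST"

# After re-attaching the thin copy to the VM and verifying it boots,
# the original thick disk can be deleted.
run vmkfstools -U "$SRC"
```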
Read more