Archive for the ‘VMware’ Category

Fixing an interrupted NSX-T Manager upgrade

Monday, April 26th, 2021

The process for upgrading the NSX-T Managers in an environment is an automated process that works through the three managers and finishes the moment all of them are running the new desired version. Recently I was upgrading an NSX-T Data Center environment in my lab from version 3.1.0.0.017107177 to version 3.1.1.0.0.17483065. The Edge nodes and Transport Nodes had already been upgraded successfully, but while we were in the middle of upgrading the managers the upgrade got interrupted and the NSX-T Managers rebooted before the upgrade was finished.

After all the nodes were back up again I was not able to log on to the management environment; the designated Virtual IP (VIP) appeared to be down. When I connected to the first NSX-T Manager machine I was presented with a message indicating that the upgrade had not fully completed. When I executed the command get upgrade progress-status at the prompt I was presented with the following output:

[Screenshot: get upgrade progress-status output on the first NSX-T Manager]

The output shows that all the upgrade steps were completed successfully on this node. When I connected to the second NSX-T Manager machine I got the same output.

I then connected to the third NSX-T Manager. This one had not completed its upgrade, which caused the other NSX-T Managers to remain in the upgrading state and the management VIP to remain unavailable.

[Screenshot: get upgrade progress-status output on the third NSX-T Manager, showing the upgrade was not completed]

I first executed the command get upgrade-bundle playbooks to see the available upgrade bundles on the NSX-T Manager machine. To resume the NSX-T Manager upgrade I then executed the following command: resume upgrade-bundle VMware-NSX-appliance-3.1.1.0.0.17483186 playbook

The upgrade process resumed and completed successfully in a matter of minutes, after which the environment became functional and the management VIP was accessible again.
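
The appliance CLI is the place to fix this, but the same status information can also be read remotely with PowerShell. A rough sketch, assuming PowerShell 7, a hypothetical manager FQDN and that the /api/v1/upgrade/status-summary endpoint of the NSX-T REST API is available in your version:

  # Rough sketch: read the upgrade status over the NSX-T REST API instead of the
  # appliance CLI. Assumes PowerShell 7 (for -Authentication/-SkipCertificateCheck),
  # a hypothetical manager FQDN, and that /api/v1/upgrade/status-summary exists in
  # your NSX-T release.
  $cred = Get-Credential -UserName admin -Message 'NSX-T admin password'
  Invoke-RestMethod -Uri 'https://nsxmgr-01.lab.local/api/v1/upgrade/status-summary' `
      -Authentication Basic -Credential $cred -SkipCertificateCheck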


Advanced Cross vCenter vMotion

Thursday, April 1st, 2021

VMware released vSphere version 7.0 U1c – 17327586 in December 2020. Next to the cool new features included in this version (this blog is all about one of those features), another very important reason to download and install this version of vSphere is that it closes a major security issue in previous versions. You can find more info on this here.

New features in this version of vSphere include the following:

  • Physical NIC statistics
  • Advanced Cross vCenter vMotion
  • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines
  • Third-party plug-ins to manage services on the vSAN Data Persistence platform

The VMware release notes have the following to say about this new feature:

With vCenter Server 7.0 Update 1c, in the vSphere Client, you can use the Advanced Cross vCenter vMotion feature to manage the bulk migration of workloads across vCenter Server systems in different vCenter Single Sign-On domains. Advanced Cross vCenter vMotion does not depend on vCenter Enhanced Linked Mode or Hybrid Linked Mode and works for both on-premise and cloud environments. Advanced Cross vCenter vMotion facilitates your migration from VMware Cloud Foundation 3 to VMware Cloud Foundation 4, which includes vSphere with Tanzu Kubernetes Grid, and delivers a unified platform for both VMs and containers, allowing operators to provision Kubernetes clusters from vCenter Server. The feature also allows smooth transition to the latest version of vCenter Server by simplifying workload migration from any vCenter Server instance of 6.x or later.

In this blog we will describe the process of importing VMs from a 6.7 vCenter to the updated 7.0.1 vCenter, making use of the cross vCenter technology. To prepare the environment for cross vCenter vMotion the vMotion network has to be configured with a gateway.


On the receiving side we tried to vmkping the sending host over the vMotion VMkernel port. When this failed, we added a route to the foreign network via the gateway. When we retried the vmkping it was successful.
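
Adding such a static route doesn't have to be done by hand on every host; a minimal PowerCLI sketch, with hypothetical host, subnet and gateway values (adjust to your own vMotion networks):

  # Hypothetical names and addresses; adjust to your own environment.
  Connect-VIServer vcenter01.lab.local

  $esx = Get-VMHost -Name 'esx01.lab.local'

  # Add a static route so this host can reach the remote vMotion subnet via the gateway.
  New-VMHostRoute -VMHost $esx -Destination 192.168.20.0 -PrefixLength 24 -Gateway 192.168.10.1

  # Verify the routing table on the host.
  Get-VMHostRoute -VMHost $esx | Select-Object Destination, PrefixLength, Gateway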

On the sending side we also configured the vMotion network with a gateway entry.


To start the process of performing a cross vCenter vMotion we right-click on the cluster or ESXi host.


Click on Import VMs.


Select the source vCenter.


Select the VMs you want to move.


Select the host to transfer the compute to.


Select the destination storage.


Select networks.


Select vMotion priority.

Ready to complete, click Finish.
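
The wizard above is the new GUI workflow. The same migration can also be scripted with PowerCLI's Move-VM when both vCenter Servers are connected in one session; a rough sketch with hypothetical server, VM, host, datastore and portgroup names:

  # Requires PowerCLI multiple-server mode (the default) so both connections stay active.
  $srcVC = Connect-VIServer vcenter67.lab.local
  $dstVC = Connect-VIServer vcenter70.lab.local

  $vm       = Get-VM -Name 'web01' -Server $srcVC
  $destHost = Get-VMHost -Name 'esx01.lab.local' -Server $dstVC
  $destDS   = Get-Datastore -Name 'vsanDatastore' -Server $dstVC
  $destPG   = Get-VDPortgroup -Name 'VM-Network' -Server $dstVC

  # Live-migrate compute, storage and network to the other vCenter in one operation.
  Move-VM -VM $vm -Destination $destHost -Datastore $destDS `
      -NetworkAdapter (Get-NetworkAdapter -VM $vm) -PortGroup $destPG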

The 7.0.1 environment also makes use of NSX-T network virtualization. Why is this important to mention? If you want to perform a rollback you can't move a VM that is connected to an NSX-T managed portgroup to a non-NSX-T managed portgroup. To remediate this issue you should create a non-NSX-T portgroup with the same VLAN and add the VM you want to roll back to that portgroup first.
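
Creating such a plain VLAN-backed portgroup (and re-attaching the VM to it) can also be done with PowerCLI; a minimal sketch with hypothetical switch, portgroup, VLAN and VM names:

  # Create a regular (non-NSX) distributed portgroup on the destination VDS
  # with the same VLAN ID as the NSX-T segment the VM is currently attached to.
  $vds = Get-VDSwitch -Name 'DSwitch-Compute'
  New-VDPortgroup -VDSwitch $vds -Name 'Rollback-VLAN120' -VlanId 120

  # Re-attach the VM to the new portgroup before migrating it back.
  Get-VM -Name 'web01' | Get-NetworkAdapter |
      Set-NetworkAdapter -Portgroup (Get-VDPortgroup -Name 'Rollback-VLAN120') -Confirm:$false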

Upgrade NSX-T Edge Nodes

Thursday, April 1st, 2021

VMware NSX-T delivers virtual networking in a software-defined datacenter. In this article we are going to take a look at a VMware NSX-T environment that is ready for upgrading; in this blog we will upgrade the seven NSX-T Edge nodes. Let's first take a look at the function of Edge nodes within the NSX-T architecture. NSX Edge nodes are service appliances that run centralized network services that cannot be distributed to the hypervisors. An NSX Edge node can belong to one overlay transport zone and multiple VLAN transport zones.

Today we are performing an upgrade of the Edge nodes of an NSX-T environment. We are upgrading seven Edge nodes from version 3.1.0.0.017107177 to version 3.1.1.0.0.17483065. Before the upgrade we first perform a pre-check of the environment, to make sure it is ready for the upgrade.

[Screenshot: upgrade pre-check results]

The above image shows that during the pre-check there were six NSX-T Edge nodes with issues that could prevent a successful upgrade. Before we go any further we are going to investigate what those issues are.


By clicking on one of the affected NSX-T Edge nodes we can see that this node has two issues.


When we click on the blue "2" with the exclamation mark next to it we can drill down further to identify the current issues. The two alarms indicate that password expiration is approaching for both the admin and the root account.


To remediate this issue we will change the passwords for the admin and root accounts. To accomplish this we connect to the NSX-T Edge node as root via SSH and execute the following commands:

  • /etc/init.d/nsx-edge-api-server stop
  • passwd admin
  • passwd root
  • touch /var/vmware/nsx/reset_cluster_credentials
  • /etc/init.d/nsx-edge-api-server start

The Edge-TN-07 is now without errors; we proceed by checking the other NSX-T Edge nodes and perform the same actions on those nodes.


The other NSX-T Edge nodes are now also without errors.


In the upgrade window we select the Edge Node cluster and we start the upgrade.


Grab a drink (coffee) and wait for the progress bar to fill up to 100%.


In the upgrade overview window we can see that all seven NSX-T Edge nodes are now upgraded.

Awarded vExpert 2021

Thursday, February 11th, 2021

VMware vExpert is an honorary title VMware grants to outstanding advocates of the company’s products.

The vExpert title is held in high regard within the community due to the expertise of the selected vExperts. The vExpert honorees share their knowledge to enable and empower customers around the world in adopting VMware’s software-defined hybrid cloud technology.

The vExpert award is for individuals, not for companies, and the title lasts for one year. Employees of both customers and partners can receive the vExpert award. VMware started the vExpert program in 2009.

I am honored, happy and very proud to have been named vExpert 2021. I look forward to participating in the vExpert program and to continuing to share knowledge about VMware products and their different use cases.

vSAN Hybrid / All Flash

Wednesday, February 3rd, 2021

As a VMware partner we (my employer PQR) conduct VMware Health Checks. To perform a Health Check on a vSphere (or EUC, NSX-T) environment, VMware provides a tool that checks whether the environment matches the VMware best practices: the VMware Health Analyzer. The VMware Health Analyzer is a Photon appliance that you install in the client environment; there is also a Windows-installed version, but my preference is to use the appliance. I also have the appliance running in my own environment, so after collecting data at a customer site I can load that information into my own appliance, which means I don’t need a connection to the customer to create my Health Check report. The current version of the VMware Health Analyzer is 5.5.2.0. Next to the VMware Health Analyzer, the consultant checking the VMware environment will also use his own knowledge to assess the environment and to interpret the data presented by the tool.

[Screenshot: VMware Health Analyzer, taken from a lab environment]

Recently we did a Health Check on a vSphere 6.7 environment for a large company. The environment consists of six vSphere hosts with a single vSAN cluster. Before the Health Check the customer decided to expand the environment with four extra hosts. The original vSAN cluster consisting of those six vSphere servers is a hybrid vSAN, while the disk groups on the four new servers are all-flash. This has resulted in a vSAN cluster that combines hybrid and all-flash disk groups, a setup that is not supported by VMware. When we investigated the servers of the hybrid vSAN we noticed that the disks in those servers are in fact also all flash, but marked as HDD.
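
A quick way to double-check how ESXi sees those disks is with PowerCLI; a minimal sketch, assuming an existing Connect-VIServer session and a hypothetical cluster name:

  # List every local disk per host and whether ESXi reports it as flash (SSD),
  # to spot flash devices that are incorrectly marked as HDD.
  Get-Cluster 'vSAN-Cluster' | Get-VMHost | ForEach-Object {
      $esx = $_
      Get-ScsiLun -VMHost $esx -LunType disk |
          Select-Object @{N = 'Host'; E = { $esx.Name } },
                        CanonicalName, CapacityGB,
                        @{N = 'IsFlash'; E = { $_.ExtensionData.Ssd } }
  }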

[Screenshot: disk group on the “hybrid” servers]

[Screenshot: disk group on the all-flash servers]

For performance reasons we highly recommend using an all-flash vSAN instead of a hybrid vSAN.

Advantages of an All Flash vSAN:

  1. Make use of space efficiency: Deduplication and compression;
  2. Provide organizations with the ability to run business-critical applications and OLTP databases on vSAN, enabled by fast, predictable throughput and lower latency;
  3. Give customers the ability to scale and support a significantly larger number of VMs and virtual desktops using the same compute and network resources;
  4. Increase business agility and productivity by enabling IT to provision services faster, increasing user satisfaction and executing on faster backup and disaster recovery for production deployments;
  5. Combine the benefits of vSAN and flash to deliver a lower TCO using less power, cooling, data center floor space and other resources per virtual machine, virtual desktop or transaction;
  6. While data is de-staged from cache to capacity, flushing of data happens far faster in an all-flash vSAN than in a hybrid (HDD + SSD) vSAN, helping define better SLAs.

Converting the disk groups and the vSAN cluster from hybrid to all-flash has a large impact and must be well prepared before it is executed.
We proposed the following method (a rough PowerCLI sketch of the host moves follows the list).

  1. Remove three “new” servers from the current vSAN cluster;
  2. Build a new All Flash vSAN Cluster with these three servers;
  3. Add the new vSAN cluster to the VMware Horizon environment;
  4. Empty the remaining 7 servers one by one, and add them to the new All Flash vSAN.
  5. If the old cluster is empty, delete it.
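
A rough PowerCLI sketch of the host moves in steps 1 and 4, with hypothetical host and cluster names and a full vSAN data evacuation:

  # Evacuate one host from the old hybrid cluster and move it to the new all-flash cluster.
  $esx        = Get-VMHost -Name 'esx07.lab.local'
  $newCluster = Get-Cluster -Name 'vSAN-AllFlash'

  # Put the host in maintenance mode and migrate all vSAN components off it first.
  Set-VMHost -VMHost $esx -State Maintenance -VsanDataMigrationMode Full -Confirm:$false

  # Move the evacuated host into the new cluster and bring it back online.
  Move-VMHost -VMHost $esx -Destination $newCluster
  Set-VMHost -VMHost $esx -State Connected -Confirm:$false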

Thanks to Ronald de Jong

vSphere Cluster Services (vCLS)

Tuesday, February 2nd, 2021

In vSphere 7.0 Update 1 (released in October 2020) a new feature was introduced called vSphere Cluster Services (vCLS). The purpose of vCLS is to ensure that cluster services, such as vSphere DRS and vSphere HA, are available to maintain the resources and health of the workloads running in the cluster, independent of vCenter Server availability.

vCLS uses agent virtual machines to maintain cluster services health. vCLS runs in every cluster, even when cluster services like vSphere DRS and vSphere HA are not enabled.

The architecture of the vCLS control plane consists of a maximum of three virtual machines, also called system or agent VMs. The vCLS machines are placed on separate hosts in a cluster. In a smaller environment (fewer than three hosts) the number of vCLS VMs will be equal to the number of hosts. SDDC (Software Defined Datacenter) admins do not need to maintain the lifecycle of these vCLS VMs.

The architecture for the vSphere Cluster Services is displayed in this image.

[Image: vSphere Cluster Services architecture, from the VMware vSphere blog]

The vCLS VMs that form the cluster quorum are self-correcting: when vCLS VMs are not available, vSphere Cluster Services will automatically try to create, update or power on the vCLS VMs.
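
You can quickly check on these agent VMs with PowerCLI; a minimal sketch, assuming an existing session and a hypothetical cluster name:

  # List the vCLS agent VMs in a cluster, with their power state and the host they run on.
  Get-Cluster 'Cluster-01' | Get-VM -Name 'vCLS*' |
      Select-Object Name, PowerState, VMHost, Folder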


There are three health states for the cluster services:

  • Healthy: The vSphere Cluster Services health is green when at least one vCLS VM is running in the cluster. To maintain vCLS VM availability, a cluster quorum of three vCLS VMs is deployed.

  • Degraded: This is a transient state when at least one of the vCLS VMs is not available, but DRS maintains functionality. The cluster can also be in this state when the vCLS VMs are being re-deployed or powered on again after an impact to the running vCLS VMs.

  • Unhealthy: A vCLS unhealthy state occurs when DRS loses its functionality because the vCLS control plane is not available.

The vCLS VMs are automatically placed in their own folder within the cluster.


The vCLS VMs are small, with minimal resources. If no shared storage is available the vCLS VMs are created on local storage. If a cluster is created before shared storage is configured on the ESXi hosts (for instance vSAN), it is strongly recommended to move the vCLS VMs to shared storage once it is available.
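
Relocating them is a normal storage vMotion; a minimal PowerCLI sketch, assuming a hypothetical shared datastore name:

  # Storage vMotion the vCLS agent VMs from local storage to the shared datastore.
  $sharedDS = Get-Datastore -Name 'vsanDatastore'
  Get-VM -Name 'vCLS*' | Move-VM -Datastore $sharedDS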

The vCLS VMs are running a customized Photon OS. In the image below you see the resources of a vCLS VM.

[Screenshot: vCLS VM resource configuration]

The 2 GB virtual disk is thin provisioned. The vCLS VM has no NIC; it does not need one because vCLS leverages a VMCI/vSocket interface to communicate with the hypervisor.

The health of the vCLS VMs, including their power state, is managed by the vSphere ESX Agent Manager (EAM). In case of a power-on failure of vCLS VMs, or if the first instance of DRS for a cluster is skipped due to lack of a vCLS VM quorum, a banner appears on the cluster summary page along with a link to a Knowledge Base article to help troubleshoot the error state. Because vCLS VMs are treated as system VMs, you do not need to back up or snapshot these VMs. The health state of these VMs is managed by vCenter services.

Tags: VMware, vSphere, vCLS

Configure vMotion in vSphere environment

Tuesday, November 24th, 2020

Recently I’ve expanded my lab environment with a second vSphere host. One of the advantages of having two vSphere hosts is that you can move machines from one vSphere host to the other. If you perform this move while the machine is powered down you don’t need any additional configuration. However, if you want to move a running machine from one vSphere host to the other without losing connectivity to this VM, you need vMotion. First let me explain what vMotion is.

vMotion in vSphere allows a running virtual machine to move between two different vSphere hosts. During vMotion the memory of the VM is sent from the running VM to the new VM (the instance on another host that will become the running VM after the vMotion). The content of memory is changing all the time, so vSphere uses a system where the content is sent to the other VM, after which it checks what data has changed and sends that as well, each time in smaller blocks. At the last moment it very briefly ‘freezes’ the existing VM, transfers the last changes in the memory content, starts the new VM and removes the old one. This process minimizes the time during which the VM is suspended.
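
Enabling vMotion on a VMkernel adapter and performing the migration can also be scripted with PowerCLI; a minimal sketch, assuming hypothetical host, adapter and VM names:

  # Assumes an existing Connect-VIServer session.
  # Enable vMotion on the vmk1 VMkernel adapter of both hosts.
  foreach ($hostName in 'esx01.lab.local', 'esx02.lab.local') {
      Get-VMHostNetworkAdapter -VMHost $hostName -Name 'vmk1' -VMKernel |
          Set-VMHostNetworkAdapter -VMotionEnabled:$true -Confirm:$false
  }

  # Live-migrate a running VM from the first host to the second.
  Move-VM -VM (Get-VM -Name 'web01') -Destination (Get-VMHost -Name 'esx02.lab.local')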

(more…)

Managing vSphere with PowerCLI (creating Tags)

Thursday, November 19th, 2020

VMware vSphere PowerCLI is a command-line tool for automating vSphere and vCloud management.

VMware PowerCLI is a very powerful command-line tool that lets you automate close to all aspects of vSphere management, including, among others, network, storage and the guest OS.

PowerCLI is distributed as PowerShell modules, and includes over 500 PowerShell cmdlets.
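
As a small taste of what this post covers, here is a minimal sketch of creating and assigning a tag with PowerCLI (the category, tag and VM names are hypothetical):

  # Assumes an existing Connect-VIServer session.
  # Create a tag category that applies to virtual machines only.
  New-TagCategory -Name 'Environment' -Cardinality Single -EntityType VirtualMachine
  New-Tag -Name 'Production' -Category 'Environment'

  # Assign the tag to a VM and list its tag assignments.
  New-TagAssignment -Tag 'Production' -Entity (Get-VM -Name 'web01')
  Get-TagAssignment -Entity (Get-VM -Name 'web01')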

The first version of PowerShell was released in November 2006 for Windows XP, Windows Server 2003 and Windows Vista. We have come a long way since then. PowerShell is an important part of today’s IT landscape. By using PowerShell you can manage systems from different vendors in a unified way. I find myself using PowerShell almost every day in my work.

(more…)

How to remain in control of your Horizon environment

Sunday, October 11th, 2020

In this blog post I want to share a simple piece of advice that will help you maintain your VMware Horizon environment. Image management is an important part of managing a VMware Horizon environment. If you are using Instant Clones (the future-proof way of delivering VMware Horizon VDIs), then during the image publishing phase (referred to as the priming phase) VMware Horizon creates the following VMs: a CP-Template and a CP-Replica (both are turned off and there is one per datastore per desktop pool) and a CP-Parent (this machine is turned on and there is one per ESXi host per datastore per desktop pool).
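
A quick PowerCLI sketch to keep an eye on these internal Instant Clone VMs, assuming they carry Horizon's usual 'cp-' name prefix (adjust the filter if your naming differs):

  # Assumes an existing Connect-VIServer session.
  # List the internal Instant Clone VMs (templates, replicas and parents).
  Get-VM -Name 'cp-*' | Select-Object Name, PowerState, VMHost, Folder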

(more…)

Update a VMware App Volumes AppStack

Wednesday, July 8th, 2020


In this blog article I will describe the process of updating a VMware App Volumes AppStack.

VMware acquired CloudVolumes in August 2014 and released it with the name App Volumes in December 2014. App Volumes is free to owners of the Horizon View Enterprise bundle and can also be purchased as a standalone product.

VMware App Volumes has proven to be a very powerful product to deliver applications to both VMware Horizon as well as Citrix Virtual Apps & Desktops.

(more…)