VMware Cloud Foundation 4.5.1 on Dell EMC VxRail Release Notes
This document contains the following sections:
- Introduction
- What's New
- New URL and Authentication Method for the VMware Depot
- VMware Cloud Foundation Bill of Materials (BOM)
- Documentation
- Limitations
- Installation and Upgrade Information
- Resolved Issues
- Known Issues
Introduction
VMware Cloud Foundation 4.5.1 | 11 MAY 2023 | Build 21682411

Check for additions and updates to these release notes.
What's New
The VMware Cloud Foundation (VCF) 4.5.1 on Dell EMC VxRail release includes the following:
- Improved process for installing third-party CA-signed certificates: You can now import or copy server certificate and certificate authority files into SDDC Manager. Files with .crt, .cer, .pem, .p7b, and .p7c extensions are supported (a quick local file check is sketched after this list).
- NFS server: Removed NFS server from SDDC Manager which fixes a critical security vulnerability.
- VCF+ UX: User experience improvements for VCF+ to handle LCM notifications and other UX enhancements.
- Postgres 13.9 update: Updated Postgres to 13.9, the latest available version.
- BOM updates: Updated Bill of Materials with new product versions.
- NSX-T Data Center 3.2.2.1, which includes new features and enhancements as part of NSX 3.2.2 and critical bug fixes.
- VMware vCenter Server 7.0 Update 3l, which contains critical bug fixes.
- VMware ESXi 7.0 Update 3l, which contains critical bug fixes.
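If you want to sanity-check certificate files locally before importing them into SDDC Manager, the following is a minimal sketch using the third-party Python cryptography package. The file name is a placeholder, and this check is not part of the SDDC Manager workflow itself.

```python
# Hypothetical local check: inspect a CA-signed certificate file before
# importing it into SDDC Manager. Assumes the "cryptography" package is installed.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.serialization import pkcs7


def load_certificates(path: str):
    """Return the certificate(s) in a .crt/.cer/.pem or .p7b/.p7c file."""
    data = Path(path).read_bytes()
    suffix = Path(path).suffix.lower()
    if suffix in (".p7b", ".p7c"):
        # PKCS#7 bundles may be DER- or PEM-encoded; try both.
        try:
            return pkcs7.load_der_pkcs7_certificates(data)
        except ValueError:
            return pkcs7.load_pem_pkcs7_certificates(data)
    # .crt/.cer/.pem files may also be DER- or PEM-encoded.
    try:
        return x509.load_pem_x509_certificates(data)
    except ValueError:
        return [x509.load_der_x509_certificate(data)]


for cert in load_certificates("server.crt"):  # placeholder file name
    print(cert.subject.rfc4514_string(), "expires", cert.not_valid_after)
```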
New URL and Authentication Method for the VMware Depot
You connect to the VMware Depot to download installation, upgrade, and patch bundles using the:
- SDDC Manager UI
- Async Patch Tool
- Bundle Transfer Utility
In March 2025, Broadcom introduced changes to the VMware Depot URL and authentication method. To download bundles, you must update your environment to access the new URL using the new authentication method. See KB 390098 for details.
Until you update your environment, you will see error messages related to the depot.
VMware Cloud Foundation Bill of Materials (BOM)
The VMware Cloud Foundation software product comprises the following software bill of materials (BOM). The components in the BOM are interoperable and compatible.
Software Component | Version | Date | Build Number |
---|---|---|---|
Cloud Builder VM | 4.5.1 | 11 MAY 2023 | 21682411 |
SDDC Manager | 4.5.1 | 11 MAY 2023 | 21682411 |
VxRail Manager | 7.0.450 | 09 MAY 2023 | NA |
VMware vCenter Server Appliance | 7.0 Update 3l | 30 MAR 2023 | 21477706 |
VMware vSAN Witness Appliance | 7.0 Update 3l | 30 MAR 2023 | 21424296 |
VMware NSX-T | 3.2.2.1 | 30 MAR 2023 | 21487560 |
VMware vRealize Suite Lifecycle Manager | 8.8.2* | 12 JUL 2022 | 20080494 |
* After deploying vRealize Suite Lifecycle Manager 8.8.2, you must install vRealize Suite Lifecycle Manager 8.8.2 Product Support Pack 6.
- VMware ESXi and VMware vSAN are part of the VxRail BOM.
- You can use vRealize Suite Lifecycle Manager to deploy vRealize Automation, vRealize Operations Manager, vRealize Log Insight, and Workspace ONE Access (formerly known as VMware Identity Manager). vRealize Suite Lifecycle Manager determines which versions of these products are compatible and only allows you to install/upgrade to supported versions. See vRealize Suite Upgrade Paths on VMware Cloud Foundation 4.4.x+.
- vRealize Log Insight content packs are installed when you deploy vRealize Log Insight.
- The vRealize Operations Manager management pack is installed when you deploy vRealize Operations Manager.
- You can access the latest versions of the content packs for vRealize Log Insight from the VMware Solution Exchange and the vRealize Log Insight in-product marketplace store.
Documentation
For the VMware Cloud Foundation 4.5.1 on Dell EMC VxRail documentation set, see the VMware Cloud Foundation documentation page.
Limitations
The following limitations apply to this release:
- vSphere Lifecycle Manager images are not supported on VMware Cloud Foundation on Dell EMC VxRail.
- Customer-supplied vSphere Distributed Switch (vDS) is a new feature supported by VxRail Manager 7.0.010 and later that allows customers to create their own vDS and provide it as an input to be utilized by the clusters they build using VxRail Manager. VMware Cloud Foundation on Dell EMC VxRail does not support clusters that utilize a customer-supplied vDS.
Installation and Upgrade Information
You can install VMware Cloud Foundation 4.5.1 on Dell EMC VxRail as a new release or perform a sequential or skip-level upgrade to VMware Cloud Foundation 4.5.1 from VMware Cloud Foundation 4.2.1 or later. If your environment is at a version earlier than 4.2.1, you must upgrade the management domain and all VI workload domains to VMware Cloud Foundation 4.2.1 and then upgrade to VMware Cloud Foundation 4.5.1.
If your VMware Cloud Foundation instance includes vRealize Suite Lifecycle Manager, you may need to install a Product Support Pack to support VMware Cloud Foundation 4.5.1. Check the vRealize Suite Lifecycle Manager release notes to see which Product Support Pack is required for your current version of vRealize Suite Lifecycle Manager.
Before you upgrade a vCenter Server, take a file-based backup. See Manually Back Up vCenter Server.
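As a convenience, a file-based backup can also be started through the vCenter Server appliance REST API. The sketch below is an assumption based on the vSphere Automation API session and appliance recovery endpoints; the host name, credentials, and SFTP target are placeholders, and Manually Back Up vCenter Server remains the supported procedure.

```python
# Minimal sketch only: trigger a vCenter file-based backup via the appliance
# REST API. Endpoint and payload shape are assumptions from the vSphere
# Automation API; all names and credentials are placeholders.
import requests

VCSA = "https://vcenter.example.com"  # hypothetical vCenter FQDN

# Authenticate and obtain an API session token.
session = requests.post(
    f"{VCSA}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "password"),
    verify=False,  # lab use only
)
token = session.json()["value"]

# Start a file-based backup job to an SFTP target (assumed reachable).
job = requests.post(
    f"{VCSA}/rest/appliance/recovery/backup/job",
    headers={"vmware-api-session-id": token},
    json={
        "piece": {
            "parts": ["common"],
            "location_type": "SFTP",
            "location": "backup-host.example.com/vcsa-backups",
            "location_user": "backupuser",
            "location_password": "backup-password",
        }
    },
    verify=False,
)
print(job.json())
```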
Scripts that rely on SSH being activated on ESXi hosts will not work after upgrading to VMware Cloud Foundation 4.5 and later, since VMware Cloud Foundation 4.5 deactivates the SSH service by default. Update your scripts to account for this new behavior. See KB 86230 for information about activating and deactivating the SSH service on ESXi hosts.
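If a script must have SSH available, it can re-activate the service itself. The following is a hedged pyVmomi sketch; the ESXi host name and credentials are placeholders, and KB 86230 describes the supported options.

```python
# Sketch: re-activate the SSH service on an ESXi host with pyVmomi, for
# scripts that assumed SSH was running by default.
import ssl

from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=ctx)

# Direct ESXi connection: the single host sits under the default datacenter.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
service_system = host.configManager.serviceSystem

# "TSM-SSH" is the ESXi service key for SSH.
service_system.StartService(id="TSM-SSH")
# To restore the VCF 4.5 default afterward, stop it again:
# service_system.StopService(id="TSM-SSH")

Disconnect(si)
```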
Resolved Issues
The following issues have been resolved:
- VxRail parallel cluster upgrade fails for one or more clusters.
- VxRail bundle is available for an upgrade even though the NSX-T upgrade is still in progress in a VMware Cloud Foundation on VxRail 4.x environment.
Known Issues
For VMware Cloud Foundation 4.5.1 known issues, see VMware Cloud Foundation Release Notes. Some of the known issues may be for features that are not available on VMware Cloud Foundation on Dell EMC VxRail.
VMware Cloud Foundation 4.5.1 on Dell EMC VxRail known issues appear below:
Upgrading a VxRail cluster to 7.0.450 may fail
As part of the VxRail cluster upgrade, ESXi hosts get rebooted. An intermittent issue can cause some ESXi hosts to remain disconnected after reboot. If this happens, the upgrade fails.
Workaround: Use the vSphere Client to reconnect the disconnected ESXi hosts and retry the upgrade from SDDC Manager.
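If many hosts are affected, a scripted reconnect may be faster. The following pyVmomi sketch is an alternative to the documented workaround, not a replacement for it; the vCenter address and credentials are placeholders.

```python
# Sketch: reconnect any disconnected ESXi hosts via pyVmomi.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.runtime.connectionState == vim.HostSystem.ConnectionState.disconnected:
        print(f"Reconnecting {host.name}")
        host.ReconnectHost_Task()  # reuses the stored connection details
view.Destroy()
Disconnect(si)
```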
Failed VxRail first run prevents new cluster/workload domain creation
When you use the Workflow Optimization Script to add a cluster or create a workload domain, the script performs a VxRail first run to discover and configure the ESXi hosts. If the VxRail first run fails, some objects associated with the failed task remain in the vCenter Server inventory and prevent new cluster/workload domain creation.
Workaround: Remove the inventory objects associated with the failed task using the vSphere Client.
- Log in to the vSphere Client.
- In the Hosts and Clusters inventory, right-click the failed cluster and select Delete.
- In the Networking inventory, right-click the network objects created for the failed cluster and select Delete.
- In the Storage inventory, right-click the datastore object created for the failed cluster and select Delete Datastore.
After the inventory is cleaned up, you can retry adding a cluster or creating a workload domain.
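The same cleanup can be scripted with pyVmomi, as in the sketch below. The cluster and switch names are placeholders for the objects left by the failed task; datastore removal is easiest in the vSphere Client, so only the cluster and its distributed switch are handled here.

```python
# Sketch: remove leftover inventory objects from a failed VxRail first run.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_by_name(si, vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()


ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Placeholder names for the objects created by the failed task.
cluster = find_by_name(si, vim.ClusterComputeResource, "failed-cluster-01")
if cluster:
    cluster.Destroy_Task()

dvs = find_by_name(si, vim.DistributedVirtualSwitch, "failed-cluster-01-vds")
if dvs:
    dvs.Destroy_Task()

Disconnect(si)
```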
Incorrect date and time is shown for upgrades in the update history for a workload domain
Viewing the update status on the Update History tab for a workload domain may show an incorrect date and time for an upgrade.
Workaround: None. This does not affect upgrades or any other functionality and will be addressed in a future release.
VxRail Manager upgrade shows as failed in SDDC Manager but completed in vSphere Client
The upgrade might time out while waiting for VxRail to return the new version after the upgrade completes.
Workaround: Retry the VxRail Manager upgrade.
Add VxRail hosts validation fails
When adding VxRail hosts to a workload domain or cluster that uses Fibre Channel (FC) storage, the task may fail with the message "No shared datastore can be found on host". This can happen if you used the Workflow Optimization Script to deploy the workload domain or cluster and chose an FC datastore name other than the default name.
Workaround: Use the VMware Host Client to rename the FC datastore on the new VxRail hosts to match the name you entered when creating the workload domain or cluster. Once the FC datastore name of the new hosts matches the existing FC datastore name, retry the Add VxRail Hosts operation.
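The rename can also be done programmatically, as in this hedged pyVmomi sketch. It connects directly to the new ESXi host; the host and datastore names are placeholders.

```python
# Sketch: rename the FC datastore on a new VxRail host to match the name
# used when the workload domain or cluster was created.
import ssl

from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="new-vxrail-host.example.com", user="root",
                  pwd="password", sslContext=ctx)

# Direct ESXi connection: the single host sits under the default datacenter.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
for ds in host.datastore:
    if ds.name == "default-fc-datastore":  # placeholder: name on the new host
        ds.Rename_Task(newName="wld01-fc-datastore")  # placeholder: existing name
        break

Disconnect(si)
```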
SDDC Manager UI buttons or links may be grayed out (inaccessible) after parallel add host or add cluster operations
If you run parallel add host operations to different clusters or parallel add cluster operations to different workload domains, the system may not release system locks after the operations complete. These locks prevent certain operations until the locks are released.
Workaround: Contact VMware Support.
Adding hosts to a cluster fails
When you add hosts to a VxRail cluster, the hosts being added must have their vmnics in the same order as the existing hosts in the cluster. If the vmnics of the new hosts are in a different order, then validation fails and the hosts cannot be added to the cluster.
Workaround: Modify the vmnic order in the new hosts to match that of the existing hosts and retry the add hosts task.
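To compare vmnic order before retrying, you can list the physical NICs per host. This pyVmomi sketch is illustrative only; the vCenter address and credentials are placeholders.

```python
# Sketch: print the vmnic order (device name and PCI address) on each host
# so new hosts can be compared against the existing cluster.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    nics = [(pnic.device, pnic.pci) for pnic in host.config.network.pnic]
    print(host.name, nics)
view.Destroy()
Disconnect(si)
```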
VxRail Manager system precheck does not provide error information in /v1/system/prechecks/ API
The /v1/system/prechecks/ API does not populate the errors attribute in cases where the precheck status is marked as WARNING. For example:

```
{
  'name': 'VXM_SYSTEM_PRECHECK',
  'description': 'Perform Stage - Perform VxRail System Precheck',
  'status': 'WARNING',
  'creationTimestamp': '2022-09-21T09:34:55.044Z',
  'completionTimestamp': '2022-09-21T09:45:20.283Z',
  'errors': []
}
```

Workaround: None.
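If you poll prechecks from automation, you can at least flag these warnings for manual follow-up. The sketch below assumes the public VCF API token endpoint and a precheck task endpoint of the form /v1/system/prechecks/tasks/{id}; the host, credentials, and task ID are placeholders.

```python
# Sketch: flag WARNING subtasks in a precheck task, since their "errors"
# attribute is left empty and details must be checked manually.
import requests

SDDC = "https://sddc-manager.example.com"  # placeholder SDDC Manager FQDN

token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "password"},
    verify=False,  # lab use only
).json()["accessToken"]

task = requests.get(
    f"{SDDC}/v1/system/prechecks/tasks/11111111-2222-3333-4444-555555555555",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()

for subtask in task.get("subTasks", []):
    if subtask.get("status") == "WARNING":
        # "errors" is empty here, so check the SDDC Manager UI or the
        # VxRail Manager logs for the warning details.
        print("WARNING without details:", subtask.get("name"))
```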
vSAN/vMotion network disruption can occur when using the workflow optimization script
When you use the workflow optimization script to create a new VI workload domain or add a new cluster to an existing workload domain, you can cause a network disruption on existing vSAN/vMotion networks if:
- The IP range for the new vSAN network overlaps with the IP range for an existing vSAN network.
- The IP range for the new vMotion network overlaps with the IP range for an existing vMotion network.
Workaround: None. Make sure to provide vSAN/vMotion IP ranges that do not overlap with existing vSAN/vMotion networks.
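A simple pre-flight check can catch overlapping ranges before you run the workflow optimization script. The sketch below uses the Python standard library; the subnets shown are placeholders for your existing and proposed vSAN/vMotion networks.

```python
# Sketch: verify that a proposed vSAN or vMotion subnet does not overlap
# any existing one before creating the new workload domain or cluster.
import ipaddress

existing = [
    ipaddress.ip_network("172.16.10.0/24"),  # placeholder: existing vSAN
    ipaddress.ip_network("172.16.11.0/24"),  # placeholder: existing vMotion
]

new_vsan = ipaddress.ip_network("172.16.10.128/25")  # placeholder: proposed vSAN

for net in existing:
    if new_vsan.overlaps(net):
        print(f"Overlap detected: {new_vsan} overlaps {net} - choose another range")
```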