VCDA – vCenter to vCenter with no VCD

VCDA

VMware Cloud Director Availability (VCDA) 4.4 introduced a new deployment topology where you do not need VMware Cloud Director (VCD) as an endpoint. Previously, the VCDA on-prem OVA could only replicate or migrate VMs to VCDA running in VCD via a Cloud Provider.

The nested lab was set up with the following for each of Site A and Site B. Each VCSA is its own SSO domain.
The network is flat: no firewalls, and the same subnet for all VMs and ESXi vmk0.
No NSX-V or NSX-T. Standard Switch only. No ESXi clusters.

  • VMware-Cloud-Director-Availability-Provider-4.5.0.5855303-21231906fd_OVF10.ova
  • VCSA 7.0.3 21290409
  • ESXi 7.0.3 20328353

The “New Role” is selected: “As a provider, deploy one vCenter Replication Management Appliance. This appliance contains all services required for vSphere DR and migration between vCenter Server sites without VMware Cloud Director.”

Tip – When you deploy the VCDA OVA, pay attention: the IP address must be in CIDR notation. In my lab, I appended a /24, e.g. 192.168.0.221/24
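The CIDR requirement can be sanity-checked before deployment with Python’s standard ipaddress module. This is a generic sketch using the lab address from above; it shows why the bare address without the /24 is not what the wizard wants:

```python
import ipaddress

# Validate that the appliance address includes a prefix length (CIDR notation),
# using the lab value from above.
iface = ipaddress.ip_interface("192.168.0.221/24")

print(iface.ip)       # 192.168.0.221 -- the appliance address
print(iface.network)  # 192.168.0.0/24 -- the subnet it lives in
print(iface.netmask)  # 255.255.255.0

# A bare address without the /24 still parses, but defaults to a /32,
# i.e. a one-host "network" -- not the subnet you intended.
bare = ipaddress.ip_interface("192.168.0.221")
print(bare.network)   # 192.168.0.221/32
```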

Created a vcda@vsphere.local account that holds the “Administrator” role and is a member of the SSO “Administrators” group in vCenter.

Logged into the VCDA /ui/portal/initial-config website to “Run the initial setup wizard”.
The Lookup Service address is the VCSA of Site A.
Tip – Filling in https://site-a-vcsa.my.lab and hitting Tab auto-filled :443/lookupservice/sdk. Result: https://site-a-vcsa.my.lab:443/lookupservice/sdk
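The auto-fill behavior amounts to appending the default Lookup Service port and path to whatever host you typed. A minimal sketch of that rule (the FQDN is my lab’s; `lookup_service_url` is a hypothetical helper, not a VCDA API):

```python
# Sketch of the wizard's auto-fill rule: given a bare host, append the
# default vCenter Lookup Service port and path.
def lookup_service_url(host: str) -> str:
    return f"https://{host}:443/lookupservice/sdk"

url = lookup_service_url("site-a-vcsa.my.lab")
print(url)  # https://site-a-vcsa.my.lab:443/lookupservice/sdk
```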

It was odd that the “Public Service Endpoint address” defaulted to the IP address of the VCDA appliance, since DNS is working.
I edited it to the FQDN site-a-vcda.my.lab, even though the banner warned “A connectivity problem with the tunnel has been detected.”

I repeated the setup process above on the second VCDA VM, “Site B”.

On the Site A VCDA, Peer Sites – https://site-a-vcda.my.lab/ui/portal/sites/vcenter
Added the Site B VCDA – https://site-b-vcda.my.lab:8048

Popup after accepting the Site B cert – “Additional actions required. Visit the Manager Service at https://site-b-vcda.my.lab:8048 to complete the pairing operation.”
The peer Site B shows “No configured Replicator Services”.

  • Logged into the Site B Peer Sites – https://site-b-vcda.my.lab/ui/portal/sites/vcenter
  • Paired it with Site A – https://site-a-vcda.my.lab:8048
  • Now both Peer Sites pages show each other with no errors.
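Before pairing, it can save a failed attempt to confirm the peer’s Manager Service port (8048, per the URLs above) is reachable. A hedged sketch using only the standard library — the hostnames are my lab’s and will differ in yours:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check both appliances' Manager Service ports before running the pairing wizard.
for peer in ("site-a-vcda.my.lab", "site-b-vcda.my.lab"):
    print(peer, port_reachable(peer, 8048))
```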

Site A vCenter – set up an Alpine Linux VM “site-a-alpine” with open-vm-tools: https://wiki.alpinelinux.org/wiki/Install_Alpine_on_VMware_ESXi

Site A vCenter – on the VM “site-a-alpine”, Actions / Configure Migration to Site B.
Picked “Compress replication traffic” – enabling compression will reduce network traffic at the expense of CPU.

  • In the Site B VCDA plugin –
  • Site B shows how much CPU/MEM/disk is needed.
  • Only took 5 seconds to transfer 96 MB.
  • Incoming replications – Migrate the alpine VM
  • Instances handling after recovery – Default
  • Mapped to a prod port group.
  • Site A VC for the VM events.
    Initiate guest OS shutdown, then Reload virtual machine by VSPHERE.LOCAL\vcda
  • Site B VC for the VM –
    Register virtual machine, then Reload virtual machine, 2x Reconfigure virtual machine, Power On virtual machine
  • I can ssh into the VM. The VC shows a banner “No connection to VR Server: Unknown” for the VM.
  • VCDA on Site B shows the VM’s recovery state as “Failed-Over”.
  • Since I am happy with the VM migration to Site B, I will Delete the replication instead of clicking Reverse. “Are you sure you want to delete the selected replications?
    This will permanently stop replication traffic and remove all retained instances at the destination.
    There is one or more active test fail overs. Test images will also be deleted”
  • The banner “No connection to VR Server: Unknown” for the VM is now gone.
  • Site A still has the VM in a powered off state for me to delete.
  1. The VM used 203 MB on Site A, but now uses 1.21 GB (per the vCenter view) on Site B.
  2. The datastore folder is “v2v-replicas-vm-17-1677291077224” instead of the VM name. A Storage vMotion would need to occur to correct the folder name.
  3. Dropping the vCPU count from 8 to 2 on the VCDA VM didn’t peg the CPU during a migration or initial replication (granted, this is a tiny VM).