Install Home Assistant on ESXi-ARM

One of the use cases for ESXi-ARM in my home lab environment is running Home Assistant, a great open-source platform for home automation. After assembling the Pi cluster (see my previous blog post) and installing ESXi on ARM (ESXi-ARM) using the “Fling on Raspberry Pi” documentation, it’s time to install Home Assistant.

In this example, I install the following components:

  • Ubuntu for ARM
  • VMware Tools
  • Docker
  • Home Assistant in a Docker container.

Pre-requisites

  1. Download ubuntu-20.04.1-live-server-arm64.iso, Link
  2. Make a connection to the vCenter Server (https://<vcenter server>/ui) or the local ESXi server (https://<esx-server>/ui)
  3. Upload the Ubuntu ISO to a datastore

Home Assistant VM Creation

Create a new virtual machine with the following specifications:

  • Right click Host Select Create/Register VM

  • Virtual Machine name: HomeAssistant-001
  • Select a compute resource: select an ESXi server
  • Select storage: <Select the datastore>
  • Select compatibility: ESXi 7.0 and later
  • Select a guest OS:
    • Guest OS Family: Linux
    • Guest OS Version: Ubuntu Linux (64-bit)
  • Customize hardware:
    • CPUs: 2
    • Memory: 2048 MB
    • Hard disk 1: 30 GB
    • Network adapter 1: Select the port group
      • Adapter type: E1000E
    • CD/DVD Drive 1: Datastore ISO file
      • Browse to the Ubuntu ISO
      • Connect at Power On: checked
    • Video Card: Default settings
  • Next
  • Finish
  • Power on the VM
  • Open a console session

Ubuntu Installation

  • Select: Install Ubuntu Server

  • Choose your preferred language: English
  • Keyboard configuration: Select the layout and variant: English (US)
  • Network connections: ens160 is the Ethernet NIC of the VM. Select the IPv4 method: DHCP or a manual fixed IP address
  • Configure proxy: leave this blank if you are not using a proxy server
  • Ubuntu mirror: Use the mirror address suggested
  • Filesystem setup: Use an Entire Disk
    • Filesystem summary: Done
    • Confirm destructive action. Are you sure you want to continue: Continue
  • Profile setup: Fill in the following fields (remember the username and password)
    • Your name: <your name>
    • Your server’s name: <server name>
    • Pick a username: <username>
    • Choose a password: <password>
    • Confirm a password: <password>
  • SSH Setup: Install the OpenSSH server
    • Import SSH identity: No

  • Featured Server Snaps: Select none
  • The installation of Ubuntu begins

  • Once the installation is complete, reboot the system

After the installation of the Ubuntu OS, perform the following post configuration actions:

  • In a console session, find the IP address with the ip a command. VMware Tools is not installed yet, so the IP address is not visible in the VM properties.
  • Connect to Ubuntu using an SSH session, with PuTTY for example
  • Install the latest Ubuntu updates and upgrades
sudo apt update && sudo apt upgrade -y
sudo reboot
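If you want to script the IP lookup from the console step above, a small helper can pull the first IPv4 address out of the ip command's output. This is just a sketch; the sample output below is illustrative, not taken from a real VM.

```shell
#!/bin/sh
# Hypothetical helper: extract the first IPv4 address from
# `ip -4 addr show <iface>`-style text on stdin.
first_ipv4() {
  awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

# Illustrative sample of what `ip a` prints for ens160:
sample='2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.50/24 brd 192.168.1.255 scope global ens160'

printf '%s\n' "$sample" | first_ipv4   # prints 192.168.1.50
```

On the VM itself you would pipe the real output in, for example: ip -4 addr show ens160 | first_ipv4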

Open VMware Tools installation

In the SSH session, install the package dependencies and clone the open-vm-tools Git repository:

sudo -i
apt install -y automake pkg-config libtool libmspack-dev libglib2.0-dev libpam0g-dev libssl-dev libxml2-dev libxmlsec1-dev libx11-dev libxext-dev libxinerama-dev libxi-dev libxrender-dev libxrandr-dev libgtk2.0-dev libgtk-3-dev libgtkmm-3.0-dev
git clone https://github.com/vmware/open-vm-tools.git
cd open-vm-tools/open-vm-tools/

Build and install the open-vm-tools:

autoreconf -i 
./configure
make
make install
ldconfig

Create a vmtoolsd.service file:

cat > /etc/systemd/system/vmtoolsd.service << EOF
[Unit]
Description=Open VM Tools
After=network-online.target

[Service]
ExecStart=/usr/local/bin/vmtoolsd
Restart=always
RestartSec=1s

[Install]
WantedBy=multi-user.target
EOF

Reload systemd and enable the VMware Tools daemon at startup:

systemctl daemon-reload
systemctl enable vmtoolsd.service
systemctl start vmtoolsd.service

Check if VMware Tools is running:

systemctl status vmtoolsd.service

Docker installation

In the SSH session run the following commands:

apt install software-properties-common -y
apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat
curl -fsSL get.docker.com | sh

Home Assistant installation

In the SSH session run the following command:

curl -sL "https://raw.githubusercontent.com/home-assistant/supervised-installer/master/installer.sh" | bash -s -- -m raspberrypi4

After the installation, it takes about a minute for all seven Home Assistant Docker containers to come up. You can check this with the docker ps command.
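If you want to script that check, you can count the Home Assistant containers in the docker ps output. The container names below are typical for a supervised install but are an assumption; the sample listing is illustrative.

```shell
#!/bin/sh
# Hypothetical check: count Home Assistant containers from
# `docker ps --format '{{.Names}}'`-style output on stdin.
# Name patterns (hassio_*, homeassistant) are assumed from a
# typical supervised install.
count_ha_containers() {
  grep -c -E '^(hassio_|homeassistant$)'
}

# Illustrative sample of the seven container names:
sample='homeassistant
hassio_supervisor
hassio_dns
hassio_audio
hassio_cli
hassio_multicast
hassio_observer'

printf '%s\n' "$sample" | count_ha_containers   # prints 7
```

On the VM itself: docker ps --format '{{.Names}}' | count_ha_containers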

When the Docker containers are up, you can make a web browser connection to:

http://<ip address>:8123

There will be a “Preparing Home Assistant” page for a few minutes. Once completed, you’re ready to create a user or restore a snapshot of your existing configuration. After this, you can start with your home automation projects.
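Instead of refreshing the browser, you can poll port 8123 from a shell until Home Assistant answers. This is a sketch: the function name is mine, and it assumes bash, whose /dev/tcp redirection opens a TCP connection.

```shell
#!/usr/bin/env bash
# Hypothetical readiness check: wait until port 8123 accepts connections.
wait_for_ha() {
  host=$1
  attempts=${2:-60}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # bash-only: opening /dev/tcp/<host>/<port> succeeds once the
    # port accepts a TCP connection
    if (exec 3<>"/dev/tcp/${host}/8123") 2>/dev/null; then
      echo "Home Assistant is answering on http://${host}:8123"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for port 8123 on ${host}" >&2
  return 1
}
```

Usage: wait_for_ha 192.168.1.50 and then open the URL in a browser.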

How to build an ESXi on ARM Pi cluster?

Shortly after VMworld 2020, VMware released (after years of announcing and demoing) the ESXi on ARM fling (*1). On social media and in the community, ESXi on ARM is a very hot topic. The ESXi-ARM fling makes it possible to run the VMware ESXi hypervisor on ARM platforms such as:

    • Avantek Workstation and server (Ampere eMAG)
    • Lenovo ThinkSystem HR330A and HR350A (Ampere eMAG)
    • SolidRun Honeycomb LX2
    • Raspberry Pi (rPi) 4b model (4GB and 8GB only).

Because it supports the Raspberry Pi 4B model, it is very interesting for home labbers.

(*1) A fling shows an early stage of software to the VMware community. There is no official support available (only community support). The ESXi on ARM fling can be downloaded from the following location: link.

Use cases

Some use cases for ESXi On ARM are:

  • vSAN Witness node, link
  • Automation environment for PowerCLI, Terraform, and Packer, link.
  • Security at the edge
  • Other home lab projects such as running Home Assistant (blog post is coming).

For my home lab environment, I wanted to build an ESXi ARM cluster for my IoT stuff (such as Home Assistant) with two Pi nodes attached to my existing QNAP NAS. With the two ESXi ARM nodes, a vCenter Server, and shared storage, cluster functions such as vMotion, High Availability, DRS, and even FT are available. How cool is that!

Every day there are new use cases created in the community. That’s one reason why ESXi on ARM is such a cool technology!

My Environment build

Here is a simple diagram of my setup:

 

Bill of materials (BOM)

In this blog article, I will mention the bill of materials (BOM). I use the following components:

#  Component                                  ~Cost €          Link (cheapest Pi shop in the Netherlands)
1  Raspberry Pi 4 Model B with 8 GB memory    87,50 (per Pi)   Link
2  Raspberry Pi 15W USB-C Power Supply         9,95 (per Pi)   Link
3  Argon One Pi 4 case                        28,95 (per Pi)   Link
4  Official Raspberry Pi USB keyboard         17,95            Link
5  Micro SD card, 32 GB                       13,95 (per Pi)   Link
6  Delock USB 3.2 16 GB flash drive            8,99 (per Pi)   I reused the USB drives
7  Micro-HDMI to HDMI cable, 1.5 m             7,95            Link

1. Raspberry Pi 4 Model B with 8GB memory.

This Pi model has the following specifications:

    • 1.5GHz quad-core ARM Cortex-A72 CPU
    • VideoCore VI graphics
    • 4kp60 HEVC decode
    • True Gigabit Ethernet
    • 2 × USB 3.0
    • 2 × USB 2.0 ports
    • 2 × micro-HDMI ports (1 × 4kp60 or 2 × 4kp30)
    • USB-C for input power, supporting 5V 3A operation
    • 8 GB LPDDR4 memory

2. Raspberry Pi 15W USB-C Power Supply.

The power supply uses USB-C to power the Pi. Make sure to use a decent power supply such as this one.

3. Argon One Pi 4 case

This case (which looks like the Tesla Cybertruck) has an aluminum enclosure for passive cooling and a fan inside for active cooling. Proper cooling is very important because the Pi can get hot when running VMware ESXi. You can control the fan by software or enable the always-on mode. In software mode, the fan runs at 10% when the CPU temperature reaches 55 degrees, at 55% at 60 degrees, and at 100% at 65 degrees. The driver does not work on VMware ESXi; it is designed for the Pi OS. Hopefully, a VIB will become available in the future that makes software control of the fan possible. For VMware ESXi, you need to enable the always-on mode by switching the jumper pin next to the fan.
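The software fan curve can be mirrored in a small script. This is only an illustration of the temperature-to-speed mapping (the Argon One driver itself does not run on ESXi), and the 65-degree threshold for the 100% step is assumed from the case's defaults.

```shell
#!/bin/sh
# Illustrative mapping of the Argon One software fan curve:
# 55C -> 10%, 60C -> 55%, 65C -> 100% (thresholds assumed from the
# case's default configuration; not usable on ESXi itself).
fan_speed() {
  t=$1
  if [ "$t" -ge 65 ]; then
    echo 100
  elif [ "$t" -ge 60 ]; then
    echo 55
  elif [ "$t" -ge 55 ]; then
    echo 10
  else
    echo 0
  fi
}

fan_speed 58   # prints 10
```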

The assembly of the Pi and case is very easy:

  • Next to the fan, you see two cooling blocks (grey ones), one for the CPU and the other for the RAM chip
  • Add some thermal paste to the cooling blocks
  • Plug the PCB board into the Pi and the case. With the PCB board, all the ports and buttons are accessed from the back!
  • Tighten the screws.

The GPIO pins are still available when removing the magnetic cap from the top of the case.

4. Official Raspberry Pi USB keyboard.

This is a 78-key QWERTY keyboard with a built-in 3-port USB hub on the back. It has a small form factor.

5 & 6. Micro SD card and USB disk.

The SD card is for storing the UEFI firmware that is required to boot the VMware ESXi-ARM installer. I used a 32 GB SD card. The USB drive is the target for installing VMware ESXi-ARM.

7. Micro-HDMI to HDMI cable 1,5m.

The following components were already in my home lab environment and will be reused:

  • Netgear switch
  • 2 x Delock USB 3.2 16 GB flash drives
  • 2 x UTP CAT 5e cables
  • QNAP NAS

After assembling the case, connect the USB drive, SD card, power supply, monitor, keyboard, and UTP cable, and you’re ready to install the VMware ESXi for ARM fling.

In the next ESXi on ARM blog, I will highlight the ESXi on ARM installation process and how to install and configure Home Assistant.


Thanks to the Raspberry Store for the quick delivery.

Quick Tip: The local Windows 10 taskbar is in front during an RDP session

Sometimes the local taskbar of my Windows 10 laptop stays in front during a full-screen RDP session, blocking access to the remote taskbar. This is quite annoying. In the picture below, you can see the local taskbar of my Windows 10 laptop in front during a full-screen RDP session.

Accessing the remote taskbar is only possible when you don’t run RDP in full-screen mode. The fix is to reboot the local Windows 10 device or to kill and restart the “explorer.exe” process. You can do this manually using the Windows Task Manager or automated from the command line. The syntax is as follows:

C:\Windows\System32\cmd.exe /c "C:\Windows\System32\taskkill.exe /F /IM explorer.exe & start explorer"