Running Windows 10 for ARM64 in a QEMU virtual machine

[Screenshot: /images/2020-08-04_scrot.png]

Since the early development stages of Windows 10, Microsoft has been releasing a version of Windows that runs on 64-bit ARM (AArch64) CPUs. Despite some hardware shipping with Windows 10 ARM [1] [2] [3], this port has received little attention, and you can barely find programs that run on it.

Naturally, I wanted to try this out to see if it worked. And it turned out it does!

Getting the ISO

I'm not aware of any Microsoft page that lets you download an ARM64 ISO, so this part relies on community-made solutions instead.

I looked for the right ESD download link and used an ESD-to-ISO conversion script to get a bootable ISO (both found on the MDL forums).

Alternatively, adguard's download page provides similar scripts that download and pack an ISO for you. However, in my experience these take a long time (hours) to finish.

There's one more important point:

I had no success booting version 2004 or 20H2 (specifically: 19041.388 / 19041.423), so I went with version 1909 (18363.592) instead.

Installation

Before we begin we also need:

  • the virtio driver ISO

  • an appropriately sized disk image (qemu-img create -f qcow2 disk.qcow2 64G)

  • QEMU_EFI.fd extracted from the edk2.git-aarch64 RPM found here
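
A minimal sketch for the extraction, assuming bsdtar is available; the RPM filename is a placeholder for whichever version you download:

# extract the RPM contents into the current directory
bsdtar -xf edk2.git-aarch64-<version>.noarch.rpm
# the firmware image should end up under usr/share/edk2.git/aarch64/
cp usr/share/edk2.git/aarch64/QEMU_EFI.fd .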

The qemu command line then looks as follows:

isoname=18363.592.200109-2016.19H2_RELEASE_SVC_REFRESH_CLIENTBUSINESS_VOL_A64FRE_DE-DE.ISO
virtio=~/Downloads/virtio-win-0.1.185.iso
qemu-system-aarch64 -M virt -cpu cortex-a53 -smp 4 -m 4096 \
        -device qemu-xhci -device usb-kbd -device usb-tablet \
        -drive file=disk.qcow2,if=virtio \
        -nic user,model=virtio \
        -drive file="$isoname",media=cdrom,if=none,id=cdrom -device usb-storage,drive=cdrom \
        -drive file="$virtio",media=cdrom,if=none,id=drivers -device usb-storage,drive=drivers \
        -bios QEMU_EFI.fd -device ramfb

You can then follow the installation process as normal. Before partitioning the disks, the setup will ask you to load disk drivers; these can be found at viostor/w10/ARM64 on the virtio cdrom.

QEMU video output

The above command line already takes these limitations into account; the following sections are for explanation only.

A previous blog post on running Windows 10 ARM in QEMU used a patched EDK2 build to bring back support for standard VGA. It's not clear to me why EDK2 removed the support if it was working, but this is not a solution I wanted to use either way.

It turns out [4] that the options on ARM are limited to virtio-gpu and ramfb. Virtio graphics are Linux-only, so that leaves ramfb.

Attaching disks with QEMU

Since the virt machine has no SATA controller, we cannot attach a hard disk to the VM the usual way; I went with virtio here instead. It would also have been possible to do this over usb-storage, which works out of the box and would have saved us all the work with the virtio drivers (except for networking [5]).

This also means something else (which cost me quite some time): you cannot use -cdrom.

If you do, EDK2 will boot the Windows CD fine, but setup will ask you to load drivers early on (because it cannot find its own CD). None of the virtio drivers can fix this situation, leaving you stuck with no clear indication of what went wrong.

After installation

The onboarding process has a few hiccups (in particular device detection); if you retry it a few times, it'll let you continue anyway.

High CPU Usage

After the first boot I noticed two regsvr32.exe processes at 100% CPU that didn't seem to finish in a reasonable time.

Further investigation with Process Explorer [6] showed these belonging to Windows' printing service. Since I don't want to print from this VM anyway, the affected service can simply be stopped and disabled:

sc stop "Spooler"
sc config "Spooler" start= disabled

Networking

We're still missing the network driver from the virtio cdrom. Unfortunately, the NetKVM driver doesn't seem to be properly signed, so you have to enable loading unsigned drivers first (and reboot!):

bcdedit /set testsigning on

Afterwards the right driver can be installed from the Device Manager (NetKVM/w10/ARM64 on the cdrom).
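
Alternatively, the driver can be installed from an elevated command prompt using pnputil; a sketch assuming the virtio cdrom is mounted as drive E::

pnputil /add-driver E:\NetKVM\w10\ARM64\netkvm.inf /install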

General Performance Tweaks

These tweaks aren't specific to Windows 10 ARM or virtual machines, but they are most useful in that scenario.

REM Disable Windows Search Indexing
sc stop "WSearch"
sc config "WSearch" start= disabled
REM Disable Automatic Defragmentation
schtasks /Delete /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /F
REM Disable Pagefile
wmic computersystem set AutomaticManagedPagefile=FALSE
wmic pagefileset delete
REM Disable Hibernation
powercfg -h off

Higher Display Resolution

As of writing, QEMU's ramfb has its resolution locked to 800x600, which even breaks EDK2's setup menu (press F2 or Esc during boot).

Fortunately, this has already been fixed in master [7] and will be part of QEMU 5.1.0. You can compile 5.1.0-rc3 today if you don't want to wait.

In addition to that you need vars-template-pflash.raw from the same edk2 package as earlier (UEFI will store its settings in there).
Add the following to your qemu args: -drive file=vars-template-pflash.raw,if=pflash,index=1

The display resolution can then be set up to 1024x768 under Device Manager > OVMF Platform Configuration.

Wrapping up

With a bit of preparation it is possible to run Windows 10 ARM in a virtual machine. Although the emulation is pretty slow, you could feasibly use this to test one or two programs.

If you have ARM64 hardware with sufficient specs and KVM support, the -enable-kvm flag can get you native execution speed, though I haven't had a chance to see how this performs yet.
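
Untested on my part, but based on the installation command line above, a KVM-accelerated invocation should look roughly like this sketch (-cpu host replaces the emulated cortex-a53):

qemu-system-aarch64 -M virt -enable-kvm -cpu host -smp 4 -m 4096 \
        -device qemu-xhci -device usb-kbd -device usb-tablet \
        -drive file=disk.qcow2,if=virtio \
        -nic user,model=virtio \
        -bios QEMU_EFI.fd -device ramfb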

[1] https://www.samsung.com/au/tablets/galaxy-book-s-w767/SM-W767NZAAXSA/
[2] https://www.lenovo.com/ie/en/laptops/yoga/yoga-c-series/Yoga-C630-13Q50/p/88YGC601090
[3] https://www.microsoft.com/en-us/p/surface-pro-x/8vdnrp2m6hhc
[4] https://www.kraxel.org/blog/2019/09/display-devices-in-qemu/#tldr
[5] A usb-net device does not function and doesn't appear in Windows' device manager at all.
[6] Procexp for ARM64 is available here: http://live.sysinternals.com/ARM64/
[7] https://github.com/qemu/qemu/commit/c326eedc7584b94f6f9f3b8ba61a6e9ff04ad681

Installing the Debian x32 port on a VM or real machine

x32 is an ABI for Linux that uses the x86-64 instruction set but 32-bit longs and pointers (this is called ILP32), thereby limiting the memory available to a single process to 4 GiB. Compared to amd64 it offers significant memory savings, and unlike plain i386 it can make use of all the registers and extensions available to 64-bit code.

Debian has had an x32 port since 2013, but installing it isn't quite straightforward.
To follow this guide you'll need:
  • A Debian netinst CD for the amd64 architecture: https://www.debian.org/CD/netinst/

  • A computer or VM with an x86-64-compatible CPU (for the VM route, see the sketch below)

  • An internet connection on the machine you are installing
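
If you go the VM route, a minimal QEMU setup could look like the following sketch (the ISO filename is a placeholder; the disk intentionally uses the default IDE bus so it shows up as /dev/sda later):

qemu-img create -f qcow2 x32.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=x32.qcow2 \
        -cdrom debian-amd64-netinst.iso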

Booting into rescue mode

In the boot menu, choose "Advanced Options" and select the "Rescue Mode" option.

Press TAB to edit the kernel command line and append the following option before booting: syscall.x32=y

Follow the menus until you're dropped to a shell ("Execute shell in installer environment").

Partitioning

Inside the shell run partman.

This starts the usual partitioning setup seen during Debian installation. Once it's done you will be dropped to the shell again.

Installing the system

Debootstrap

Debootstrap will install the system for us, but unfortunately there is no convenient way to get the debian-ports GPG keys into the rescue environment, so we'll just run with signature checks disabled.

debootstrap --no-check-gpg --arch=x32 unstable /target http://ftp.ports.debian.org/debian-ports/

You might notice that isc-dhcp-client fails to install at this step; this is not critical and will be dealt with later.

Chrooting

The next step is to enter the target system in a chroot:

mount --bind /dev /target/dev
mount --bind /proc /target/proc
mount --bind /sys /target/sys
chroot /target

First we'll do some basic system configuration. The fstab you write needs to match the configured partition layout; in this example the layout is a single root partition (no /boot, no swap space). Using the UUID=... notation is also advisable, but not fun to type out by hand (and irrelevant for a single-disk VM).
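
If you do want UUIDs, the entry can be generated instead of typed; a sketch using blkid (assuming the root partition really is /dev/sda1, replacing the echo line below):

echo "UUID=$(blkid -s UUID -o value /dev/sda1) / ext4 rw 0 1" >/etc/fstab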

Here you should also uninstall the package that failed to install earlier.

passwd root
echo "/dev/sda1 / ext4 rw 0 1" >/etc/fstab
apt purge -y isc-dhcp-client

Repository configuration

The x32 repository does not include a Linux kernel package. This requires us to add the amd64 repository so we can install a kernel, while making sure (via APT pinning) that only linux-image is pulled from there and no other packages.

Configure APT as follows:

apt install -y debian-ports-archive-keyring
cat >/etc/apt/sources.list
deb [arch=x32] http://ftp.ports.debian.org/debian-ports unstable main
deb [arch=amd64] http://deb.debian.org/debian unstable main
^D
cat >/etc/apt/preferences.d/amd64
Package: linux-image*:amd64
Pin: release b=amd64
Pin-Priority: 500

Package: *
Pin: release b=amd64
Pin-Priority: -1
^D
dpkg --add-architecture amd64
apt update
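
As an optional sanity check, APT's policy output should now offer linux-image-amd64 (pin priority 500) while other amd64 packages stay pinned away:

apt policy linux-image-amd64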

Kernel & Bootloader

Install the kernel and bootloader (GRUB):
export TERM=dumb
apt install -y linux-image-amd64 grub-pc

When asked by GRUB, select your primary hard drive (here: /dev/sda) as installation device.

Next, reconfigure grub to boot up the kernel with X32 enabled:
dpkg-reconfigure grub-pc

GRUB will ask for the Linux command line, which needs to be syscall.x32=y.
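
Equivalently, if you prefer editing files over the dialog, the same result can be achieved by hand; a sketch using standard Debian GRUB mechanics:

# prepend syscall.x32=y to the kernel command line in /etc/default/grub
sed -i 's/^GRUB_CMDLINE_LINUX="/&syscall.x32=y /' /etc/default/grub
update-grub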

Cleaning up

The installation failure during debootstrap has left some files misconfigured.

The purpose of policy-rc.d and the modification to start-stop-daemon is to keep services from being started during package installation. Since we're finished installing, these changes need to be undone:

rm /usr/sbin/policy-rc.d

mv /usr/sbin/start-stop-daemon.REAL /usr/sbin/start-stop-daemon

Lastly, umount the chroot and reboot:

umount /target/dev /target/proc /target/sys
umount /target

sync; reboot

Extras

Why does isc-dhcp-client fail to install?

isc-dhcp-client transitively depends on libmaxminddb, which requires pandoc during build.

pandoc itself is not available on x32 since one of its dependencies is also missing.

This is tracked as Debian bug #956041 and is unresolved at the time of writing.

Quick DHCP setup using systemd-networkd

Since isc-dhcp-client would normally provide the DHCP client, we have a problem when we want to connect our freshly installed system to a network.

Fortunately, setting up DHCP with systemd-networkd is quite easy:

cat >/etc/systemd/network/eth.network
[Match]
Name=en*
[Network]
DHCP=ipv4
^D
systemctl enable --now systemd-networkd

Installing standard system software

tasksel can take care of installing software you'd usually find on a standard Debian installation (man pages, locale support, Perl, ...).

The same issue as before applies: isc-dhcp-client and the bind9-related packages are uninstallable, so this requires some manual working around:

eval $(tasksel -t install standard | tr ' ' '\n' | egrep -v 'isc-dhcp-client|^bind9-' | tr '\n' ' ')


Virtualizing Raspbian (or any ARM/Linux distro) headless using QEMU

For testing or development it can be very useful to take a distribution that usually runs on an embedded ARM board, such as the Raspberry Pi, and run it right on your (non-ARM) machine using a virtual machine.

QEMU provides excellent support for emulation of the ARM architecture (both 32 and 64-bit) and can emulate many different real ARM boards.

Why not use QEMU's "raspi2" machine for emulation?

QEMU comes with a raspi2 machine. It emulates the GPU's framebuffer, HWRNG, UART, GPIO and SD controller.

Spot something missing? It doesn't implement USB, which makes it useless for both headless and graphical use, as you can plug in neither a network connection nor a keyboard or mouse.

If you still want to use it, this guide will only help you halfway, but here are the parameters:

-M raspi2 -kernel kernel7l.img -dtb bcm2709-rpi-2-b.dtb -append "root=/dev/mmcblk0 rootwait console=ttyAMA0"

The plan

Instead of (poorly) emulating a real piece of hardware, QEMU also has a virt machine [1] that is designed for virtualization. It gives you a modern system with PCI and also works out of the box with Linux without providing a Device Tree (QEMU generates one internally).

The most straightforward way of getting network and disk access into such a VM is to use virtio-net and virtio-blk respectively, which is what we'll be doing.

Since virtio requires kernel support, chances are the Raspberry Pi kernel wouldn't work anyway, so we'll be using a different one.

I picked Arch Linux ARM's armv7 kernel from here, though any other should work just as well, provided it comes with the appropriate modules. To load the virtio modules during boot we'll need an initramfs, but more on that later.

Extracting Raspbian's root filesystem into a virtual disk image

Start by downloading Raspbian from the Raspberry Pi website, then run the script below or follow the steps manually.

The script will create a copy of the image file, expand the image and its root partition to 10 gigabytes, mount the partition using a loop device and make two adjustments:

  • Remove both SD card partitions from /etc/fstab; these don't exist inside the VM and we will be mounting the rootfs ourselves

  • Disable several startup services that do not work inside the VM

After unmounting the partition it will convert the filesystem into a qcow2-format image for use with QEMU.

#!/bin/bash -e
input=2020-02-13-raspbian-buster-lite.img
[ -f "$input" ]

mkdir mnt
cp --reflink=auto "$input" source.img
truncate -s 10G source.img
echo ", +" | sfdisk -N 2 source.img
dev=$(sudo losetup -fP --show source.img)
[ -n "$dev" ]
sudo resize2fs ${dev}p2
sudo mount ${dev}p2 ./mnt -o rw
sudo sed '/^PARTUUID/d' -i ./mnt/etc/fstab
sudo rm \
        ./mnt/etc/systemd/system/multi-user.target.wants/{hciuart,dphys-swapfile}.service \
        ./mnt/etc/rc?.d/?01{resize2fs_once,rng-tools}
sudo umount ./mnt
sudo chmod a+r ${dev}p2
qemu-img convert -O qcow2 ${dev}p2 rootfs.qcow2
sudo losetup -d $dev
rm source.img; rmdir mnt

The kernel and initramfs

Kernel

Conveniently, the linux-armv7 package is just a tar archive, so you can extract the kernel image using:

tar -xvf linux-armv7*.pkg.tar.xz --strip-components=1 boot/zImage

Making an initramfs

Since virtio support is not compiled into the kernel, and the root filesystem is missing modules for the exact kernel we'll be using (maybe copying them would've been easier?), we need to write an initramfs that can load these modules prior to mounting the rootfs.

Fortunately the Gentoo Wiki has a great article on making a custom one yourself. The basic idea is to extract the required kernel modules into the initramfs, whose init script loads the modules, mounts the root filesystem and actually boots.

The script shown below does the following steps:

  • Extract kernel modules from package

  • Delete some that we won't be needing and that take up a lot of space (optional)

  • Download and install a statically-linked busybox executable

  • Create the init script

  • Pack contents into a cpio archive as required by the Linux kernel

Using a virtio disk and network adapter requires loading the virtio-pci, virtio-blk and virtio-net modules (plus their dependencies). If you need any more, the init script can easily be changed accordingly.

#!/bin/bash -e
pkg=$(echo linux-armv7-*.pkg.tar.xz)
[ -f "$pkg" ]

mkdir initrd; pushd initrd
mkdir bin proc sys dev mnt
tar -xaf "../$pkg" --strip-components=1 usr/lib/modules
rm -rf lib/modules/*/kernel/{sound,drivers/{{gpu,iio,media,staging},net/{ethernet,wireless}}}
wget https://www.busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-armv7l -O bin/busybox
cat >init <<"CONTENTS"
#!/bin/busybox sh
busybox mount -t proc none /proc
busybox mount -t sysfs none /sys
busybox mount -t devtmpfs none /dev

for mod in virtio-pci virtio-blk failover net-failover virtio-net; do
        busybox modprobe $mod
done

busybox mount -o rw /dev/vda /mnt || exit 1

busybox umount /proc
busybox umount /sys
busybox umount /dev

exec busybox switch_root /mnt /sbin/init
CONTENTS
chmod +x bin/busybox init
bsdtar --format newc --uid 0 --gid 0 -cf - -- * | gzip -9 >../initrd.gz
popd; rm -r initrd

Booting the virtual machine

With the initramfs built, we have all the parts needed to actually run the VM: [2]

qemu-system-arm -M virt,highmem=off -m 2048 -smp 4 -kernel zImage -initrd initrd.gz \
-drive file=rootfs.qcow2,if=virtio -nic user,model=virtio \
-append "console=ttyAMA0" -nographic

After roughly a minute of booting you should be greeted by "Raspbian GNU/Linux 10 raspberrypi ttyAMA0" and a login prompt.
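
Since there's no graphical output, it can be handy to reach the guest over SSH instead of the serial console; a sketch using QEMU's user-network port forwarding (assuming SSH is enabled in the guest, which it isn't by default on recent Raspbian):

# replace the -nic option above with:
-nic user,model=virtio,hostfwd=tcp::5555-:22
# then connect from the host:
ssh -p 5555 pi@localhost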

Further steps

This virtualization approach should work for just about any ARM/Linux distribution. I have tested it with Raspbian, Void Linux and Arch Linux ARM (whose rootfs even works without any modifications).

To ensure the kernel performs as expected beyond basic tasks, it's a good idea to extract the modules from the linux-armv7 package into the guest rootfs.
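
One way to do that from the host is to attach the image via qemu-nbd; a sketch assuming the nbd kernel module is available:

sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd0 rootfs.qcow2
mkdir -p mnt
sudo mount /dev/nbd0 mnt
# same extraction as for the initramfs, but into the guest rootfs
sudo tar -xaf linux-armv7-*.pkg.tar.xz --strip-components=1 -C mnt usr/lib/modules
sudo umount mnt
sudo qemu-nbd -d /dev/nbd0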

As with any VM, you can use the full extent of QEMU's features to e.g.:

  • attach a USB controller (-device usb-ehci or nec-usb-xhci)

  • … a SCSI controller (-device virtio-scsi)

  • … audio input/output (-device usb-audio)

  • or even enable graphical output (-device VGA)

AArch64?

With a few adjustments in the right places, this guide also works for emulating an AArch64 kernel and userland, with two caveats:

  • qemu-system-aarch64 will not actually start up in AArch64 mode unless you use -cpu cortex-a53

  • The busybox-armv8l binary available on the website isn't 64-bit; you'll have to build your own

[1] https://wiki.qemu.org/Documentation/Platforms/ARM#Generic_ARM_system_emulation_with_the_virt_machine
[2] Why highmem=off is necessary: https://bugs.launchpad.net/qemu/+bug/1790975

Opening a shell inside non-systemd nspawn containers

If you try to open a shell inside a container that runs e.g. Alpine Linux using machinectl, the following error will appear:

# machinectl shell vpn
Failed to get shell PTY: Protocol error

The reason for this is that the container is not running systemd.

Because systemd-nspawn just uses Linux namespaces [1], nsenter can alternatively be used to access the container. For this, we'll need the PID of the init process inside the container:

# systemctl status systemd-nspawn@vpn
● systemd-nspawn@vpn.service - Container vpn
   Loaded: loaded (/lib/systemd/system/systemd-nspawn@.service; disabled; vendor preset: enabled)
   Active: active (running) since Sun 2019-08-11 19:49:19 UTC; 6 months 3 days ago
 Main PID: 795 (systemd-nspawn)
   Status: "Container running."
   CGroup: /machine.slice/systemd-nspawn@vpn.service
           ├─payload
           │ ├─ 797 /sbin/init
           │ ├─1028 /sbin/syslogd -t
           [...]

In this case the PID of init is 797; you can then spawn a login shell inside the container:

nsenter -t 797 -a /bin/sh -l

All in all, this can be turned into a handy function for your .bashrc:

function center ()
{
        [ -z "$1" ] && { echo "Usage: center <name>" >&2; return 1; }
        # the second entry in the cgroup tasks file is the container's init
        local pid=$(sed -n 2p "/sys/fs/cgroup/pids/machine.slice/systemd-nspawn@$1.service/tasks")
        [ -z "$pid" ] && { echo "Container not running" >&2; return 1; }
        nsenter -t "$pid" -a /bin/sh -l
}
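
If your host uses a different cgroup layout (e.g. the unified cgroup-v2 hierarchy, where the pids controller path above doesn't exist), the init PID can instead be obtained from machinectl itself; a hedged sketch for a replacement lookup line:

pid=$(machinectl show "$1" --property=Leader | cut -d= -f2)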
[1] http://man7.org/linux/man-pages/man7/namespaces.7.html

QEMU Configuration & Usage

This will cover some QEMU options I have found useful beyond the basics.

Machine Type

-M q35 configures a more modern chipset to be emulated. The Q35 chipset supports PCIe and includes an AHCI controller [1].

UEFI
Get UEFI support by replacing the BIOS with -bios ovmf-x64/OVMF-pure-efi.fd (or OVMF-with-csm.fd if legacy boot is desired).
OVMF can be downloaded from https://www.kraxel.org/repos/jenkins/edk2/ (edk2.git-ovmf-x64-...); bsdtar can extract the RPMs.

Many distributions also offer a matching ovmf package in their repos.
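
Note that instead of -bios, OVMF is commonly split into a code part and a writable variable store attached via pflash, which makes UEFI settings persist across boots; a sketch (the file names vary between distro packages):

-drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
-drive if=pflash,format=raw,file=OVMF_VARS.fd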

Attaching disk images using VirtIO

Use -drive file=disk.img,if=virtio for improved disk performance. Windows guests require additional drivers [2] to use this.

Attaching raw disks
-drive file=/dev/sdb,if=virtio,format=raw,cache=none

When attaching entire disks, partitions or logical volumes, cache=none is a good idea.

Share host directory to guest

-drive file=fat:/path/to/dir,snapshot=on creates a read-only virtual FAT-formatted disk image from the given directory.

Multiple CD-ROM images
-drive file=X.iso,index=0,media=cdrom -drive file=Y.iso,index=1,media=cdrom

The index=N parameter is optional but can be used to order drives explicitly.

Bridged Network Adapter
-netdev bridge,br=br0,id=mynet -device virtio-net-pci,netdev=mynet
short syntax: -nic bridge,br=br0,model=virtio
For virtio, Windows needs additional drivers [2].
Aside from virtio-net-pci, QEMU also supports emulating real cards such as:

e1000e (Intel 82574L GbE), which is the default on Q35, e1000 (Intel 82540EM GbE), the default on the older pc machine, or rtl8139 (Realtek RTL-8139C 10/100M)

CPU type
The default is -cpu qemu64.
To get the full CPU feature set in the guest, use -cpu host or the appropriate family, e.g. -cpu Haswell.
Alternatively, flags can be enabled individually: -cpu qemu64,+ssse3,+sse4.2,+avx,+avx2

-cpu kvm64 is legacy and should not be used [3].

VNC

-display vnc=localhost:1,lossy starts a VNC server on port 5901 (display :1) with lossy compression enabled to save bandwidth.

USB Input Devices
-usb -device usb-tablet -device usb-kbd attaches a keyboard and a tablet (as mouse) via USB instead of PS/2.

This improves mouse support, especially when using VNC, and makes grabbing unnecessary in the GUI.

Port forwarding with User networking
When using -nic user (the default), the hostfwd=PROTO::HPORT-:PORT option can be used to forward connections to the guest.

e.g. -nic user,model=virtio,hostfwd=tcp::2222-:22
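
The forwarded port is then reachable on the host side; for example (assuming an SSH server runs in the guest):

# connect from the host:
ssh -p 2222 user@localhost
# multiple forwards can be chained:
-nic user,model=virtio,hostfwd=tcp::2222-:22,hostfwd=tcp::8080-:80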

VGA driver
-vga qxl offers improved performance over the default (std). Windows needs drivers, again [2].

3D acceleration for Linux guests is possible with -vga virtio [4].

Serial console
-serial pty connects the serial port to a PTY, which can then be interacted with using screen.
Alternatively, when -nographic is used, the QEMU monitor and serial port are multiplexed onto stdio.

Ctrl-A c can then be used to switch between the monitor and serial [5].
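
On startup QEMU prints which PTY was allocated ("char device redirected to /dev/pts/N"); you can then attach to it, e.g. (the pts number being whatever QEMU printed):

screen /dev/pts/5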

Emulated SCSI controller
(because it's possible, not because it's useful)

-device lsi,id=lsi -drive file=somewhere.img,if=none,id=disk0 -device scsi-hd,drive=disk0,bus=lsi.0