One Day at a Time

A site for solving the world's problems

One Hackintosh to Rule Them All

This post will chronicle the building of the ULTIMATE HACKINTOSH. It will be able to run macOS, a Linux distro, and Windows all concurrently. The plan is to use the Xen hypervisor as a bare-metal (type 1) hypervisor, which will then run the three OSes as guests. The dom0 of Xen will be Alpine Linux, which comes with Xen support out of the box.

Step 0: Create an Alpine Linux boot USB

This step was more difficult than anticipated. I tried to use the command


dd if=/alpine.iso of=/dev/sdX bs=1M

On the first try, I booted from the USB. While loading the files, I got an error:

Unable to load libSOMETHING.c32

I unplugged the USB and tried to look at the files in the partitions. They were some funky binary-looking things, which didn’t seem right for files like syslinux.cfg. They should be readable text files.

I tried re-creating the USB but my Linux VM gave me an input/output error. I didn’t resolve the error, but I decided to try on my Mac instead. Success! The .iso file was copied to the USB drive.
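For next time, here’s a more defensive version of that dd workflow. It’s a sketch, not what I actually ran: the write is wrapped in a function so nothing touches a disk until you point it at the real device, and the result is verified by hashing the bytes read back. ISO and DEV are placeholders; GNU coreutils assumed.

```shell
# Sketch: write an ISO to a USB stick and verify the write.
# ISO and DEV are placeholders; set DEV to your actual device (triple-check it!).
ISO=${ISO:-alpine.iso}
DEV=${DEV:-/dev/sdX}

write_and_verify() {
  # conv=fsync makes dd flush to the device before it exits.
  dd if="$ISO" of="$DEV" bs=1M conv=fsync status=none && sync
  # Read back exactly as many bytes as the ISO holds and compare hashes.
  bytes=$(stat -c %s "$ISO")
  a=$(sha256sum "$ISO" | cut -d' ' -f1)
  b=$(head -c "$bytes" "$DEV" | sha256sum | cut -d' ' -f1)
  [ "$a" = "$b" ] && echo "USB verified OK"
}
```

Call it with the variables set, e.g. ISO=alpine.iso DEV=/dev/sdb write_and_verify.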

Step 1: Setting up Alpine

Run the steps laid out on the Alpine site, Install to Disk.

A few steps weren’t documented. Namely:

  • HTTP/FTP proxy URL (e.g. ‘http://proxy:8080’ or ‘none’) [‘none’]
    I pressed Enter to use the default, none.

  • Which NTP client to run? (‘busybox’, ‘openntpd’, ‘chrony’ or ‘none’) [chrony]
    This is asking which Network Time Protocol daemon I want to run. According to this site, chrony has decent support for virtualization, so I’ll go with that.

Next step is to power off the computer, remove the USB, and boot that puppy up! If all goes well, you’ll boot to your new Alpine install. 

The Alpine wiki documents how to set up a Xen dom0 with Alpine.

Initial packages:
xen/xen-hypervisor: the Xen software that controls the virtual machines
seabios: an open-source BIOS implementation for legacy (non-UEFI) guests. Not sure if it’s really needed for modern stuff, but good to be safe?
ovmf: enables UEFI support for virtual machines, which sounds useful. Note: ovmf is part of the community repository; adding this repo is covered on the Xen dom0 page.

Next we need to edit the extlinux.conf file to modify the boot preferences. It is located at /boot/extlinux.conf.
We need the UUID of the boot partition. A useful command for this is blkid, which was suggested on Stack Exchange.
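blkid’s output takes some squinting, so here’s a quick sketch of pulling just the UUID field out of one of its lines. The sample line (and its UUID) below is made up; in real life you’d pipe blkid /dev/sda1 into the helper, or just use blkid -s UUID -o value /dev/sda1 directly.

```shell
# blkid prints lines like: /dev/sda1: UUID="..." TYPE="ext4"
# Helper that extracts just the UUID field from such a line:
uuid_of() { sed -n 's/.*[ :]UUID="\([^"]*\)".*/\1/p'; }

# Hypothetical sample line standing in for real blkid output:
sample='/dev/sda1: UUID="3f2aa8fa-1234-4abc-8def-001122334455" TYPE="ext4"'
printf '%s\n' "$sample" | uuid_of   # prints the bare UUID
```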

Brief side note on various acronyms used in file systems/computers.

Old   Meaning                     Year    New    Meaning                                 Year Introduced
BIOS  Basic Input/Output System   1975    UEFI   Unified Extensible Firmware Interface   2010
MBR   Master Boot Record          1983    GPT    GUID Partition Table                    2010
                                          GUID   Globally Unique Identifier              1980s
                                          UUID   Universally Unique Identifier           1980s

Back to setting up Xen. You may wish to set some boot configuration items for dom0, such as the RAM allocation (dom0_mem), whether to pin the dom0 vCPUs (virtual CPUs) to their respective pCPUs (physical CPUs) (dom0_vcpus_pin), and the max number of vCPUs (dom0_max_vcpus). This xenbits site has a list and descriptions of boot options, and this site, Xen Boot Options, has some more in-depth descriptions.

According to our install website, it seems like these options should be placed in /boot/extlinux.conf. But intuition and this Red Hat site suggest it should be in /boot/grub/grub.conf. I guess we’ll see. Here’s an example of the boot options applied, from the Red Hat site:

title Red Hat Enterprise Linux Server (2.6.18-3.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-3.el5 dom0_mem=800M
        module /vmlinuz-2.6.18-3.el5xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        module /initrd-2.6.18-3.el5xen.img

It seems that extlinux has already added a Xen-LTS option to the file, so I don’t need to add a new one. The proper kernel modules are already in /etc/modules, AND the proper services are already registered with OpenRC, the init system used by Alpine. On to the next thing!
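For reference, a dom0-tuned entry in /boot/extlinux.conf would look roughly like the sketch below, following the syslinux mboot.c32 multiboot convention. The label, memory sizes, and kernel file names here are placeholders, not my actual config:

LABEL xen-lts
  MENU LABEL Xen + Linux lts
  KERNEL /boot/syslinux/mboot.c32
  APPEND /boot/xen.gz dom0_mem=2048M dom0_max_vcpus=2 dom0_vcpus_pin --- /boot/vmlinuz-lts root=UUID=PLACEHOLDER ro --- /boot/initramfs-lts

The --- separators hand off from the Xen hypervisor to the dom0 kernel and then to its initramfs.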

Step 2: Changing OSes

I was having trouble with Alpine because it was not listing any IOMMU groups in /sys/kernel/iommu_groups. AFAIK, you need these Input-Output Memory Management Unit groups to control the I/O for graphics cards, audio devices, USB controllers, etc. (see here). If no groups are assigned, you can’t pass the devices through to the virtual machines. So we will try the process again with Debian instead of Alpine, following the install guides on the Xen website and the blog Minimal Debian install.
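The check itself can be scripted. This little sketch lists every IOMMU group and the PCI devices in it; empty output is exactly the failure mode above (no groups, so no passthrough). The helper takes the sysfs path as an optional argument purely for illustration.

```shell
# List each IOMMU group and the PCI devices assigned to it.
# No output means the kernel has no IOMMU groups (the problem hit here).
list_iommu_groups() {
  root=${1:-/sys/kernel/iommu_groups}
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue
    rel=${dev#"$root"/}            # e.g. "13/devices/0000:09:00.0"
    echo "group ${rel%%/*}: ${rel##*/}"
  done
}
list_iommu_groups
```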

Most of the Debian install process is pretty straightforward. There are some important considerations for the partition scheme. I chose to set it up manually, as suggested by the Minimal Debian website. A few notes:

  • To set up the boot partition, you need to set “Use as:” to “EFI System Partition”. This will enable the Bootable Flag.

The remaining steps are pretty straightforward. Another note, on the Software Selection screen, use the space bar to select/de-select your choices. I am using just the “standard system utilities” and none of the other options. System should install as expected.

Step 3: Necessary Changes

Once the system reboots, log in as yourself. Then type:
su -
to enter the root account. Next we need to install sudo and add your user account to the sudo group. Type:
apt-get install sudo
(installs the sudo command)
Then: usermod -aG sudo yourusername
Reboot the machine by typing: shutdown -r now

There is a problem with AMD CPUs and the latest version of the Linux kernel, 4.19, particularly with Secure Encrypted Virtualization (SEV), a feature that encrypts VM memory. Since I’m not too worried about it, and there is an annoying message that pops up on the login screen, I looked for a way to disable the message. Found a resource in this Reddit thread.
The solution is to download a package and modify a file:
sudo apt-get install sysfsutils
sudo vi /etc/sysfs.conf
Add a line: module/kvm_amd/parameters/sev = 0
Then reboot. That didn’t work. Grr.
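Another approach people suggest (an assumption on my part; I haven’t verified it on this box) is to set the parameter at module load time via modprobe.d rather than through sysfs:

# /etc/modprobe.d/kvm-amd.conf (hypothetical file name)
options kvm_amd sev=0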

Step 3.4: Changing OSes again!

This time Debian didn’t work. I was getting frustrated by two big issues. Debian was using an older kernel (as expected) that didn’t have native support for my fancy schmancy Intel AX200G Wifi chip. Support for the chip was included in an iwlwifi module, which in theory could be backported, but I had very little success with that. I also tried upgrading the Debian kernel to 5.4, but then I needed the kernel headers to properly install the iwlwifi module, or the nvidia-drivers, or both. I don’t remember. It was a hot mess, and not going anywhere fast. So off to a new shiny OS! This time, Fedora (oooh aaaah).

My research found Fedora is a step back from the bleeding edge, but still more modern than good ol’ stable Debian. So it had the proper kernel for Wifi support. Additionally, I didn’t have a hard time getting XServer installed. Steps are below, but I was ecstatic when I got a terminal window to appear.

Fedora comes in two flavors: Workstation and Server. Workstation comes with all the bells and whistles, but has a lot of unnecessary fluff. Server is leaner, but lacks a desktop manager. I stuck with Server for the performance aspect. It also took me on a bit of a ride, but less so than Debian.

  1. Make a bootable USB of the Fedora ISO
  2. Boot the USB, and partition the target disk into three partitions:
    1. BIOS boot
    2. ext4 (where you will mount /)
    3. swap
  3. In the Software Selection page, choose Custom, and then the radio button for the least amount of server junk. I forget what it’s called at the moment
  4. Install that puppy!
  5. Once it’s running, time to get the goods
yum update
vi /etc/selinux/config # set SELINUX=disabled
yum groupinstall 'Virtualization'
yum install xen

6. Then change grub to choose the default boot option as the xen modified kernel. The #### should be the name of the entry you want. For me it was Fedora, with Xen hypervisor 

grub2-mkconfig -o /boot/grub2/grub.cfg
grep ^menuentry /boot/grub2/grub.cfg | cut -d "'" -f2
grub2-set-default "########"

On to the next part! I copied my Windows domU config onto a flash drive, and overwrote it. Or something. So time to remake that sucker! I’ll back it up properly next time. Fedora doesn’t play nicely with exFAT out of the box, and doesn’t want to go the usual route of “dnf install fuse-exfat”. I need exFAT to read the flash drive with the Windows ISO. The fix is to first get this guy:

wget https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-XX.noarch.rpm
dnf install rpmfusion-free-release-XX.noarch.rpm
dnf install fuse-exfat

Above, XX is the Fedora version number you have. Tip came from this site.

Step 3.5: domU Preliminaries

I checked the settings in the BIOS for virtualization. I found the relevant setting in Advanced CPU Settings, and set SVM to Enabled.

Then I set up network bridging to make sure the ethernet port is bridged to the domUs. Open /etc/network/interfaces. At the end, add a section such as:
# Adding bridge for Xen
auto xenbr0
iface xenbr0 inet dhcp
bridge_ports eth0

You may also need to modify the primary interface section further up, changing dhcp to manual:
# The primary network interface
allow-hotplug eth0
iface eth0 inet manual
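After editing the file, restart networking and sanity-check that the bridge (not eth0) holds the address. These commands are a sketch, assuming iproute2 is installed:

/etc/init.d/networking restart
ip addr show xenbr0          # the bridge should now carry the IP
ip link show master xenbr0   # eth0 should appear as a bridge member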

Then we can start building the Logical Volumes (LVs). The idea here is that every Physical Volume (PV) will back an LV controlled by the dom0. Each PV will get its own Volume Group (VG) to make management easier.

Create physical volumes by first giving each of the other SSDs a Linux partition using fdisk:
fdisk /dev/sdx
n    (new partition, accept the defaults)
w    (write the changes and exit)
Repeat for the other SSDs you have. I also converted the last portion of my M.2 drive to a Linux partition. This will give me 3 physical volumes. Install lvm2 to manage the logical volumes. Then create those volumes using the command:
pvcreate /dev/sdx
for each physical volume.

Documentation seems to imply that Xen likes to have LVM set up. So since there are 2 SSDs and 1 underutilized M.2 drive, we’ll set up the LVs using those. Each physical volume will be the only member of its VG, and each of the 3 VGs will hold one LV. That way, everyone is happy, the LVs stay separate, and there will be much rejoicing.

After running pvcreate, run these commands to create the VG and LV respectively:
vgcreate volumeGroupName /dev/sdx#
lvcreate -n volumeName -L <size>G volumeGroupName
(use a G or M suffix on the size for gigabytes or megabytes)

And there you have it.
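Putting the whole flow together for one disk, it looks something like this. Device and volume names are hypothetical:

# /dev/sdb is one of the SSDs; its single partition becomes a PV,
# that PV becomes its own VG, and the VG holds one LV for a guest.
fdisk /dev/sdb            # n (new partition), w (write)
pvcreate /dev/sdb1
vgcreate vgWindows /dev/sdb1
lvcreate -n Windows10 -L 100G vgWindows
lvs                       # confirm the new logical volume exists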

Step 4: Creating the Windows 10 HVM on dom1

The doms are configured using a config file (no kidding!) that follows a template like this:

name = "Windows7"
builder = "hvm"
vcpus = 4
memory = "4096"
maxmem = "8192"
vif = ['bridge=br0']
disk = ['phy:/dev/vgWindows,hda,rw','file:/home/mohsen/windows7.iso,hdc:cdrom,r']
vnc = "1"
vnclisten = "172.30.9.20"
vncconsole = "0"
boot = "dc"
stdvga = 1
videoram = 32
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"

The disk line is confusing, so let’s break it down. It consists of either key=value pairs or positional arguments, a la passing arguments to a function. This info was obtained from this site.

disk = [target, format, vdev, access]

  • target: the backing device or image for the virtual disk. We specify ‘phy:’ because it is a physical device. Even though we are using the logical volume….
  • format: format of the image file. ‘raw’ is used as default, as opposed to qcow, qcow2, or vhd
  • vdev: name of the virtual device as seen by the guest. Also known as “guest drive designation”. We use ‘hda’
  • access: access control info (‘ro’ for read-only, ‘rw’ for read/write)

Other arguments (not required, but enlightening)

  • backend: specifies the backend device this domain will attach to. Defaults to dom0. Other domains are possible, but require more work.
  • backendtype: controls which Xen driver is used to attach to the backend domain. Defaults to phy, but could also be tap or qdisk. phy can only support the raw format.

The disk line is causing problems, notably in the log in /var/log/xen/. It complains: Could not open ‘/dev/vgWindows’: Is a directory

Fixed by going down one more level, to /dev/vgWindows/Windows10
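So with that fix, the disk line from the config above becomes:

disk = ['phy:/dev/vgWindows/Windows10,hda,rw','file:/home/mohsen/windows7.iso,hdc:cdrom,r']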


Now installing Xorg to see if that will fix the VNC issues.

Xorg didn’t solve it, but it helped. There is a problem with the default nouveau drivers. If I run dmesg | grep nouveau, I get the errors:
nouveau 0000:09:00.0: unknown chipset (166000a1)
nouveau: probe of 0000:09:00.0 failed with error -12

Also, I had to install Xorg using apt-get install xorg. But if I run startx, I get the error: [drm] Failed to open DRM device for pci:0000:09:00.0: -19

Xen complained with the command:
xl create -V domain-U
that vncserver did not exist. I fixed this by installing tigervnc-viewer (apt-get install tigervnc-viewer).
Then the vncviewer could not connect to the display, despite the number of combinations I tried (127.0.0.1:5900, :5901, localhost:5900, etc.).

I also decided a desktop/window manager might be necessary. I picked icewm because it is about as minimal as it gets. But I couldn’t run startx!

All in all, it seems that the nouveau drivers (and thus X) don’t know how to talk with the NVidia 2060 Super. The solution is to add the NVidia drivers!

The usual command is:
add-apt-repository ppa:graphics-drivers/ppa
But add-apt-repository is not included in Debian by default, so it gets installed with apt-get install software-properties-common.

Step N-1: The Fedora route

Fedora has led me down the path to glory!!!
Thus far in the Fedora adventure we have a Xen kernel running. Now to do something with it! I installed an X server using the commands:

dnf install xterm
dnf install xorg-x11-xinit
dnf install xorg-x11-server-Xorg
dnf install icewm

Then to start the Xserver, I run xinit. An xterm window pops up. In the window, run icewm to get the nice Ice Window Manager to spring alive.
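To skip the manual icewm step, a one-line ~/.xinitrc works (assuming icewm is on the PATH):

# ~/.xinitrc
exec icewm

Then xinit starts straight into the window manager.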

Next was getting the Windows10 dom1 to load. My first try seemed to bring up the machine, but I couldn’t see the darn thing! I installed TigerVNC with the command:

dnf install tigervnc

Which gives you the TigerVNC viewer.
Then, to run the dom1 you have 2 options:

xl create Windows10
xl vncviewer Windows10

OR!

xl create -V Windows10

That will launch TigerVNC and attach to the VM. 

The working config file came from this nice guy over on StackOverflow

Now I can see Windows! But I get the error: 

Recovery
This 64-bit application couldn’t load because your PC doesn’t have a 64-bit processor. If you’re using Windows To Go, make sure your USB device has a version of Windows that’s compatible with the PC you’re trying to use.
File: \windows\system32\boot\winload.exe
Error code: 0x000035a

But at least I can see it!!!

Step N:

Some references for getting Apple up and running. People have had success with QEMU, so there may be hope in converting a working QEMU image into a proper LVM volume for Xen. Check these out:

Resources on using a physical drive rather than doing LVM and installing from an ISO file, because Windows wasn’t having any of that nonsense.

Make a bootable ISO from USB in Linux
Windows 10 in qemu
Xen with Mac guest
GPU Passthrough to HVM
Xen Storage Options
Raw disk on Xen
Google search “Xen disk on physical disk”


© 2024 One Day at a Time
