How to Deploy OKD Minishift onto a Publicly-Hosted VM

April 7, 2020

If you are familiar with minikube, a lightweight implementation of the Kubernetes ecosystem, then you may have also heard of Minishift. Designed as a development platform and delivered as a single utility, Minishift is the all-in-one implementation of Red Hat OKD (Origin Kubernetes Distribution, formerly called OpenShift Origin), the upstream project of Red Hat OpenShift. Being highly versatile, it can be deployed on a variety of platforms:

  1. OS X: xhyve (default), VirtualBox, VMware Fusion
  2. GNU/Linux: KVM (default), VirtualBox
  3. Windows: Hyper-V (default), VirtualBox, VMware Workstation (using a Linux guest OS with KVM; requires nested virtualization)

All of these examples have one thing in common: the Minishift utility deploys the ecosystem as an image onto a VM from the host machine, where the host machine is not itself a virtual machine. In the case where the host machine is a virtual machine (as in the VMware Workstation example above), the underlying hardware and virtualization platform need to support nested virtualization, which is the ability to run one VM within another, or to run a hypervisor within a VM. This capability isn't always available.
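A quick way to check whether a given Linux machine (physical or virtual) exposes hardware virtualization extensions is to inspect the CPU flags. This one-liner is only a sketch: it reports what the CPU advertises to the OS, not what your hypervisor or cloud provider permits.

```shell
# Count CPUs advertising hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V). A result of 0 inside a VM means
# nested virtualization is not available to that guest.
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```

On most public-cloud instance types this prints 0, which is exactly the situation this article's generic-driver approach is designed for.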


Installation becomes challenging when you don't have access to the host machine. This article walks through the process of installing Minishift directly onto an already existing VM. Although we will discuss this in the context of an AWS Linux VM, the process can be applied to any environment.

We are going to set up an environment comprising two Linux VMs. The first is the control node, from which the installation will take place. The second will host the Minishift environment. For compatibility reasons, we want the Minishift node to run either RHEL or one of its derivatives (CentOS, Fedora, Oracle Linux, etc.).

Minishift environment in AWS with control node and minishift instances

For the purposes of this tutorial we are going with CentOS for the Minishift node and Amazon Linux 2 for the control node. You can use this CloudFormation template (CFT) to deploy the necessary AWS resources. Keep in mind that this template can only be run in us-east-1, so if you want to set this up in a different region, either modify the template with the appropriate AMIs or create the instances manually.
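For readers building the instances by hand, the heart of such a template is just two EC2 instances. The sketch below is a hypothetical minimal version, not the tutorial's actual CFT; the AMI IDs, instance types, and key name are placeholders you would substitute for your region.

```yaml
# Hypothetical minimal CFT sketch: two instances, values are placeholders.
Resources:
  ControlNode:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx      # Amazon Linux 2 AMI for your region
      InstanceType: t2.micro
      KeyName: your-keypair
  MinishiftNode:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-yyyyyyyy      # CentOS 7 AMI for your region
      InstanceType: t2.large     # Minishift defaults to 2 vCPUs / 4 GB RAM
      KeyName: your-keypair
```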

Environment Setup

  1. Log in to both nodes using your SSH client and the private key specified in the CFT deployment.
    1. The username for the Minishift node is centos
    2. The username for the control node is ec2-user
  2. Make sure the packages on both nodes are updated
    $sudo yum -y update
  3. Enable the control node to log in to the Minishift node using SSH keys
    1. On the control node,
      1. Generate SSH keys, accepting the default file location
        $ssh-keygen
      2. Copy the contents of the public key file generated at /home/ec2-user/.ssh/id_rsa.pub to your clipboard
    2. On the Minishift node,
      1. Enable the SSH daemon to authenticate using public/private keys by setting PubkeyAuthentication and PermitRootLogin to yes in the /etc/ssh/sshd_config configuration file
      2. Restart the SSH daemon
        $sudo systemctl restart sshd
      3. Edit the public keys file found at /root/.ssh/authorized_keys. On a new line, paste in the contents copied previously from the file
    3. Test the connection from the control node by logging in
      $ssh root@<ip_address>
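After step 3.2.1, the relevant portion of /etc/ssh/sshd_config on the Minishift node should read as follows; every other directive in the file stays as shipped.

```
# /etc/ssh/sshd_config (excerpt) -- only these two directives need changing
PubkeyAuthentication yes
PermitRootLogin yes
```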

Minishift Setup

  1. Log in to the control node and download the Minishift utility (v1.34.2 at the time of writing) from the Minishift GitHub releases page
  2. Decompress the file
    $tar zxvf minishift-1.34.2-linux-amd64.tgz
  3. Add the utility to the path. You should make this change persistent by adding it to your Bash login scripts
    $export PATH=$PATH:<path to minishift utility>
  4. Enable the minishift generic driver
    $minishift config set vm-driver generic
  5. Verify configuration
    $minishift config view
  6. Deploy Minishift
    $minishift start --remote-ipaddress <minishift_ip> --remote-ssh-user root --remote-ssh-key /home/ec2-user/.ssh/id_rsa
  7. Check the status of minishift
    $minishift status
  8. Log in to the Minishift web console. The console URL is printed at the end of the minishift start output
  9. The oc utility can be found in /home/ec2-user/.minishift/cache/oc/v3.11.0/linux. Add this utility to the path. You should make this change persistent by adding it to your Bash login scripts
    $export PATH=$PATH:<path to oc utility>
  10. Verify that the oc utility is functioning correctly
    $oc status
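To make both PATH changes from steps 3 and 9 survive logouts, the exports can live in ~/.bashrc. In this excerpt the Minishift directory is an assumption (it presumes the archive was extracted into the ec2-user home directory); adjust both paths to match your layout.

```shell
# ~/.bashrc additions (the minishift path is an assumption;
# the oc path comes from step 9 above)
export PATH=$PATH:/home/ec2-user/minishift-1.34.2-linux-amd64
export PATH=$PATH:/home/ec2-user/.minishift/cache/oc/v3.11.0/linux
```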

Minishift Hello World Example

Here is a simple “Hello, World!” example that you can use to test the installation.

(Option 1) Direct Deploy

$oc new-app openshift/hello-openshift

(Option 2) Manifest Definition

Start by creating a new file that will hold the container manifest

$vi hello-openshift-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 300m
            memory: 512Mi
Snippet to create container manifest file

Deploy the file
$oc create -f hello-openshift-deployment.yml

Issue an oc get pods to see a list of active pods. If you can see the hello-openshift pod, then you have succeeded in setting up your environment.

If you've been playing with Kubernetes for a while, then you know that setting the appropriate resource specs (the resources block in the manifest above), and keeping them updated for a specific container over its lifecycle, can prove challenging. The difficulty lies not just in knowing what to set them to in the first place, but also in coming back and manually implementing changes. In the age of automation, manually keeping these specs up to date seems unnecessary. Is there a better way?

Check out our upcoming tutorial, ‘Scaling OpenShift Container Resources using Ansible,’ where I will show you how to take an automation technology like Ansible and use it to continuously and automatically keep the resource specifications optimized.