If you can’t use oc debug to get to your nodes, you’ll need to provision a Bastion host (jump box) to reach them. This post walks through the steps to provision the Bastion host and explains what you’ll need to know on the AWS side. At the end of the post, we will cover automating this job with GitHub Actions.

Pre-Reqs for Success

The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster.

My OpenShift Environment

I provisioned my OpenShift cluster with the IPI installer; it is a plain, standard installation. I still have the SSH keys I associated with the cluster when I provisioned it.

What parameters are required for the Bastion Host?

I have three control plane (master) nodes, one in each availability zone of the us-east-2 region.

overview-nodes

If we look at our subnets, we can see we have six in total: a public and a private subnet for each availability zone.

overview-subnets

Our Bastion host needs to be in a public subnet so we can access it. That leaves us three public subnets, one per availability zone, to pick from. For my purposes, I used the us-east-2a availability zone.

We can also get this information via the AWS CLI, parsing it with jq.

My cluster name is ocp-17-4fdbz, so my query looks like:

unique_cluster_id=ocp-17-4fdbz

subnet_id=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=$unique_cluster_id-public-us-east-2a | jq -r '.Subnets[].SubnetId')

Next, we need to get the security group:

security_group_id=$(aws ec2 describe-security-groups --filters Name=tag:Name,Values="$unique_cluster_id-worker-sg" | jq -r '.SecurityGroups[].GroupId')

A key pair needs to be associated with the Bastion host. When I provisioned my OpenShift cluster, I sent all my metadata, including a dynamically generated SSH key, to an S3 bucket. So I am using that SSH key for my Bastion node: I pulled it down from S3 and imported it into AWS as a key pair.
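As a sketch, the pull from S3 might look like the following; the bucket name is hypothetical, since the post doesn’t name the actual metadata bucket.

```shell
# Bucket name is hypothetical -- substitute your own metadata bucket.
unique_cluster_id=ocp-17-4fdbz
key_file=./ssh-keys/$unique_cluster_id.pub

# Pull the public key down from the metadata bucket.
aws s3 cp "s3://<insert metadata bucket>/ssh-keys/$unique_cluster_id.pub" "$key_file"
```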

aws ec2 import-key-pair --key-name $unique_cluster_id --public-key-material fileb://./ssh-keys/$unique_cluster_id.pub

Be sure to use the fileb:// prefix, or you will get an SSH key format error with AWS CLI version 2.

Finally, we need to decide which AMI to use for our Bastion Host. This is just a matter of preference. I use an amazon-linux AMI, but you can use anything you want.
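If you’d rather not browse the console for an AMI ID, AWS publishes the latest Amazon Linux AMI IDs as public SSM parameters. A minimal sketch, assuming the Amazon Linux 2 x86_64 parameter:

```shell
# AWS publishes the latest Amazon Linux 2 AMI ID as a public SSM parameter.
ami_param=/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2

# Resolves to the current AMI ID for whatever region the CLI is configured for.
ami_id=$(aws ssm get-parameters \
    --names "$ami_param" \
    --query 'Parameters[0].Value' \
    --output text)

echo "$ami_id"
```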

Now we have all the information we need to provision our Bastion Host! We can use aws ec2 run-instances to kick off the deployment.

aws ec2 run-instances \
    --image-id <insert AMI id> \
    --instance-type t2.micro \
    --subnet-id $subnet_id \
    --security-group-ids $security_group_id \
    --associate-public-ip-address \
    --key-name <insert key pair name> \
    --tag-specifications \
        "ResourceType=instance,Tags=[{Key=Name,Value=$unique_cluster_id-bastion},{Key=$unique_cluster_id,Value=owned}]" \
        "ResourceType=volume,Tags=[{Key=Name,Value=$unique_cluster_id-bastion-disk1},{Key=$unique_cluster_id,Value=owned}]"
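Once the instance is up, you can look up its public IP by the Name tag we assigned and SSH in. This is a sketch: ec2-user is the default login for Amazon Linux AMIs (adjust for a different AMI), and the key path assumes the key pulled from S3 earlier.

```shell
unique_cluster_id=ocp-17-4fdbz

# Look up the Bastion's public IP via the Name tag set in run-instances.
public_ip=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=$unique_cluster_id-bastion" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].PublicIpAddress' \
    --output text)

# ec2-user is the default login user on Amazon Linux AMIs.
ssh -i ./ssh-keys/$unique_cluster_id ec2-user@$public_ip
```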

Making a GitHub Actions job to deploy the Bastion host

Since we already have the AWS CLI commands, putting them into a GitHub Actions workflow is quick and low-effort.

GitHub Actions Deploy Bastion Host workflow
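A minimal sketch of such a workflow follows. The workflow name, secret names, and manual trigger are my assumptions; adapt them to your repository. It authenticates with the aws-actions/configure-aws-credentials action and then runs the same CLI commands from above.

```yaml
# Sketch only -- secret names, inputs, and region are assumptions.
name: deploy-bastion

on:
  workflow_dispatch:
    inputs:
      unique_cluster_id:
        description: "Cluster infrastructure ID (e.g. ocp-17-4fdbz)"
        required: true

jobs:
  deploy-bastion:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2

      - name: Provision Bastion host
        env:
          unique_cluster_id: ${{ github.event.inputs.unique_cluster_id }}
        run: |
          subnet_id=$(aws ec2 describe-subnets \
            --filters Name=tag:Name,Values=$unique_cluster_id-public-us-east-2a \
            | jq -r '.Subnets[].SubnetId')

          security_group_id=$(aws ec2 describe-security-groups \
            --filters Name=tag:Name,Values="$unique_cluster_id-worker-sg" \
            | jq -r '.SecurityGroups[].GroupId')

          aws ec2 run-instances \
            --image-id <insert AMI id> \
            --instance-type t2.micro \
            --subnet-id $subnet_id \
            --security-group-ids $security_group_id \
            --associate-public-ip-address \
            --key-name <insert key pair name> \
            --tag-specifications \
              "ResourceType=instance,Tags=[{Key=Name,Value=$unique_cluster_id-bastion},{Key=$unique_cluster_id,Value=owned}]" \
              "ResourceType=volume,Tags=[{Key=Name,Value=$unique_cluster_id-bastion-disk1},{Key=$unique_cluster_id,Value=owned}]"
```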