This repository provides the Pulumi infrastructure and configuration files for deploying a Kubernetes cluster to glab.
The glab infrastructure deployment process follows the "infrastructure as code" model as much as possible, with Pulumi as the primary technology driving both the methodology and the actual deployment. Rather than manually specifying in code which resources to create for a cluster, the chosen approach relies on setting specific Stack configuration values, which in turn determine how a cluster is deployed. Pulumi's concept of a Stack exists to enforce the DRY principle, and this deployment leverages it by using the configuration data within a stack to determine how to create and deploy the cluster. See below for usage instructions and additional commentary on the stack configuration data.
Usage varies depending on whether you are creating a new stack or using an existing one. For details on how to configure a stack, see the configuration section below.
Use the Pulumi CLI to create a new stack:
```
$> pulumi stack init <stack name>
```

Configure the stack using the `Pulumi.<stack name>.yaml` file. It's recommended to copy an existing stack configuration and then make the necessary tweaks.
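For example, assuming a hypothetical existing stack named prod, its configuration could be copied as a starting point for the new stack:

```
$> cp Pulumi.prod.yaml Pulumi.<stack name>.yaml
```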
Once the stack is configured, a new cluster can be created with:

```
./up.sh <stack name>
```

For an existing stack, simply specify the stack you want to bring up:

```
./up.sh <stack name>
```

Running this deployment process requires a few environment variables to be defined:
- VAULT_ADDR: The URL to the Vault server
- VAULT_TOKEN: A token with sufficient privileges to create tokens for signing SSH host keys
- VSPHERE_SERVER: URL for the vCenter server
- VSPHERE_USER: Username for a vSphere user with permissions to create virtual machines
- VSPHERE_PASSWORD: Password for the above vSphere account
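For example, the variables might be exported in the shell before running up.sh (all values below are placeholders):

```
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="<token with SSH signing privileges>"
export VSPHERE_SERVER="vcenter.example.com"
export VSPHERE_USER="deploy@vsphere.local"
export VSPHERE_PASSWORD="<password>"
```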
When a cluster is turned up for a stack, it generates output data which is later used to configure the cluster with Kubespray. The output structure is as follows:
- cluster
  - name: The name of the cluster (used to name the actual cluster configured by Kubespray)
  - node_count: The total number of nodes created for this cluster
  - masters: A list of hostnames for nodes which are configured to be masters
  - workers: A list of hostnames for nodes which are configured to be workers
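For illustration, the cluster output for a hypothetical three-node stack might look roughly like this (the hostnames depend entirely on the node name format and domain defined in the stack configuration):

```
{
  "name": "dev",
  "node_count": 3,
  "masters": ["master01-dev"],
  "workers": ["worker01-dev", "worker02-dev"]
}
```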
The up.sh script uses this output to dynamically create the Ansible inventory which Kubespray uses for bootstrapping
the cluster:
```
pulumi stack output cluster | python inv.py > /tmp/kubespray/inventory/glab/inventory.ini
```

Rather than modifying the underlying code for each infrastructure change, Pulumi is configured to use a set of configuration data provided by the Stack. This configuration data informs Pulumi about the environment in which the cluster will be created as well as the specifications of the cluster, such as its name and number of nodes.
The best method for configuring a new stack is to copy the contents of an existing stack's `Pulumi.<stack name>.yaml` file into the new stack's file and then make the necessary modifications. Each possible configuration parameter is described below:
- glab:cluster
  - masters: The number of masters to create for the cluster (counts against the node total)
  - name: The name of the cluster (used to name the actual cluster configured by Kubespray)
  - nodes: The number of nodes to create for this cluster (minimum is 3)
- glab:env
  - datacenter: The name of the vSphere datacenter where the cluster will be deployed
  - domain: The domain name used to generate hostnames for the Ansible inventory file
  - network
    - domains: A list of default search domains used to configure a node's network interface
    - dns_servers: A list of DNS servers used to configure a node's network interface
    - name: The name of the vSphere network/port group which will be assigned to a node's NIC
    - subnet: The subnet to use when generating static IP addresses for nodes
  - name: The name of this environment
  - node
    - master
      - name: The format string which will be used to name master nodes (ex. master{index}-{env})
        - index: The index of the node (i.e. 01)
        - env: The name of the environment
      - network_offset: The offset applied when generating static IP addresses (i.e. if the subnet is 192.168.1.0 and the offset is 20, the first master node will have a static IP address of 192.168.1.21)
      - cpus: The number of CPUs to assign to master nodes
      - memory: The amount of memory (in MB) to assign to master nodes
    - worker: Same as above, except targeted at worker nodes
  - pools: A list of resource pools which nodes will be deployed against
    - type: The type of resource pool, either host or cluster
    - name: The name of the cluster or host as defined in vSphere
    - datastore: The datastore that will be used for nodes deployed in this resource pool
    - weight: The weight which determines how nodes are distributed. Higher weights result in the pool having more nodes deployed against it.
  - template: The name of the VM template which will be cloned when creating nodes
  - vault_address: The address of a Vault server which will be used when nodes sign their SSH host keys on boot via cloud-init
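As an illustration only, a `Pulumi.<stack name>.yaml` built from these parameters might look something like the following. All values and names here are hypothetical; the exact schema is defined by the Pulumi program in this repository and any existing stack file should be treated as the reference:

```
config:
  glab:cluster:
    name: dev
    masters: 1
    nodes: 3
  glab:env:
    name: dev
    datacenter: lab-dc
    domain: glab.example.com
    network:
      name: vm-network
      subnet: 192.168.1.0
      domains:
        - glab.example.com
      dns_servers:
        - 192.168.1.10
    node:
      master:
        name: master{index}-{env}
        network_offset: 20
        cpus: 2
        memory: 4096
      worker:
        name: worker{index}-{env}
        network_offset: 40
        cpus: 4
        memory: 8192
    pools:
      - type: cluster
        name: lab-cluster
        datastore: datastore1
        weight: 1
    template: ubuntu-2004-template
    vault_address: https://vault.example.com:8200
```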
The process followed for creating a cluster is contained within up.sh and can be summarized as follows:
- Issue `pulumi up` to deploy the infrastructure for the cluster
- Pull down the Kubespray repository to a temporary directory
- Generate an Ansible inventory using the Stack outputs
- Run Kubespray to bootstrap the cluster
- Pull down the kube admin config file for working with the new cluster
After completion, a fully bootstrapped Kubernetes cluster will be available.
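A rough sketch of that flow in shell form is shown below. This is an assumption-laden illustration only; the actual up.sh in this repository is the source of truth for paths, versions, and flags:

```
#!/usr/bin/env bash
# Hypothetical sketch of the up.sh flow; not the real script.
set -euo pipefail

STACK="$1"

# 1. Deploy the cluster infrastructure for the selected stack
pulumi stack select "$STACK"
pulumi up --yes

# 2. Pull down Kubespray to a temporary directory (URL and location assumed)
git clone https://github.com/kubernetes-sigs/kubespray.git /tmp/kubespray
mkdir -p /tmp/kubespray/inventory/glab

# 3. Generate the Ansible inventory from the stack outputs
pulumi stack output cluster | python inv.py > /tmp/kubespray/inventory/glab/inventory.ini

# 4. Run Kubespray to bootstrap the cluster (flags assumed)
(cd /tmp/kubespray && ansible-playbook -i inventory/glab/inventory.ini -b cluster.yml)

# 5. Pull down the kube admin config for the new cluster
#    (the exact mechanism is specific to up.sh and not shown here)
```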
There is currently no automated way to modify an existing cluster. It's possible to modify the Stack configuration and then apply the changes using `pulumi up`; however, any nodes added to or removed from the cluster will not be configured correctly. There are future plans to work with Kubespray to dynamically add and remove nodes from the cluster. Making changes to the resource sizes (i.e. adding more CPUs or RAM) is possible using the aforementioned method.