diff --git a/vagrant-pxe-airgap-harvester/Makefile b/vagrant-pxe-airgap-harvester/Makefile
deleted file mode 100644
index c516095..0000000
--- a/vagrant-pxe-airgap-harvester/Makefile
+++ /dev/null
@@ -1,6 +0,0 @@
-#!make
-check-prereqs:
-	./makefile-helper.sh;
-
-clean:
-	vagrant destroy -f;
diff --git a/vagrant-pxe-airgap-harvester/README.md b/vagrant-pxe-airgap-harvester/README.md
deleted file mode 100644
index 90d91a1..0000000
--- a/vagrant-pxe-airgap-harvester/README.md
+++ /dev/null
@@ -1,120 +0,0 @@
-Harvester iPXE Boot Using Vagrant Libvirt
-=========================================
-
-Introduction
-------------
-
-**Note:** if you do not need an air-gapped environment, it is strongly encouraged to run the `vagrant-pxe-harvester` ipxe-example instead of this one - this one is not only **more** resource intensive but also takes a **substantially** longer time to provision, and it *currently* has some known issues.
-
-**Also Note:** this setup is currently intended to run with `rancher_config.run_single_node_rancher`, `rancher_config.run_single_node_air_gapped_rancher`, and `harvester_network_config.offline` all set to `true` in `settings.yml` - there are known issues with the setup not working if those values are changed.
-
-Utilizing [Vagrant][vagrant], [KVM][kvm], and [Ansible][ansible] to create a
-ready-to-play virtual Harvester & Rancher environment for evaluation and testing
-purposes. Two Vagrant VMs are created by default - a PXE server and a
-single- or multi-node Harvester cluster - plus an optional VM for a Rancher instance with a baked-in Docker registry, K3s, and a Helm-installed Rancher.
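The workflow the deleted README describes - install host dependencies, edit `settings.yml`, then run `./setup_harvester.sh -c` - can be sketched as a small pre-flight check. This is a hedged sketch, not the project's actual `makefile-helper.sh`: the tool names are taken from the README's prerequisites, and the real helper script may check more or different things.

```shell
# Sketch of a pre-flight check before running ./setup_harvester.sh -c.
# Tool names (vagrant, ansible, sshpass) come from the README's
# prerequisites; the project's real checks live in makefile-helper.sh.
missing=""
for tool in vagrant ansible sshpass; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done

if [ -n "$missing" ]; then
  echo "missing prerequisites:$missing"
else
  echo "prerequisites satisfied"
fi

# Provisioning leverages sshpass and expects this file to exist.
mkdir -p "${HOME}/.ssh" && touch "${HOME}/.ssh/known_hosts"
```

With the prerequisites in place and `settings.yml` edited as desired, the environment is brought up with `./setup_harvester.sh -c`.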
-
-Sample Host Loadout
-------------
-A sample host load-out for an Ubuntu-based distro (22.04 LTS), running this with 1 to 4 Harvester nodes, can look something like the following:
-- Installing dependencies such as:
-  - `sshpass qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager libvirt-dev wget tmux apt-transport-https ca-certificates curl software-properties-common ansible neovim libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev ruby-libvirt qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base libguestfs-tools`
-  - `sudo usermod -a -G libvirt $(whoami)`
-  - `sudo vim /etc/libvirt/libvirtd.conf`:
-    - `unix_sock_group = "libvirt"`
-    - `unix_sock_rw_perms = "0770"`
-  - `sudo systemctl restart libvirtd.service`
-  - logging out and logging back in so the group membership takes effect
-  - `ansible-galaxy collection install community.general`
-  - `vagrant plugin install vagrant-libvirt`
-  - ensuring a `known_hosts` file is present at `~/.ssh/known_hosts`, since provisioning leverages `sshpass`; create it with `touch ~/.ssh/known_hosts` if it is not present
-- then finally running `./setup_harvester.sh -c`, provided any desired edits have been made to the `settings.yml` file prior to kicking off the script
-
-
-
-Prerequisites
-------------
-
-- ansible-base >= 2.10.0 & ansible-core >= 2.11.0
-- Vagrant \>= 2.0.3.
-- vagrant-libvirt plugin \>= 0.0.43 and \<= 0.8.2 (NOTE: anything above 0.8.2 currently breaks the Vagrantfile loadout with libvirt)
-- KVM (i.e. qemu-kvm), preferably the latest and greatest. This
-  environment was tested with qemu-kvm 2.11.
-- Host with at least 16 CPU, 64GB RAM, and 500GB free disk space.
(note: running this is **very** resource intensive depending on the number of nodes)
-- To run with Rancher, airgapped or not, you need `sshpass` \>= 1.06-1 installed
-- Ansible Galaxy's Community General collection must be installed: `ansible-galaxy collection install community.general`
-- Libvirt's 'default' network will need to be enabled / autostarted [libvirt default network](https://wiki.libvirt.org/Networking.html#id2)
-
-Quick Start
-----------
-1. You can edit `settings.yml` to change:
-   - `rancher_config.run_single_node_rancher` to `true` to enable running a single-node non-airgapped Rancher instance; if you would like to airgap that Rancher instance, set `rancher_config.run_single_node_air_gapped_rancher` to `true` as well
-   - `node_disk_size` for `rancher_config` is thin-provisioned (so it won't truly take up the entire space), but running air-gapped needs a minimum of 300G
-   - other options should be self-documenting
-2. Then you can run `./setup_harvester.sh`. **NOTE:** you will more than likely want to pass the flag `-c` or `--clean-ssh-known-hosts` so that stale host information is cleaned up when the additional configuration runs on each Harvester node after standing up Rancher, e.g. `./setup_harvester.sh -c`. Since this only cleans your local known_hosts file, the flag is optional - you may prefer to manage that file yourself instead of having this script modify it
-3. You can then navigate to `https://<harvester_vip>:30443` to access the Harvester UI. **NOTE:** by default `harvester_vip` is 192.168.2.131; however, it is configurable in `settings.yml`.
-4. You can also navigate to `rancher_config.rancher_install_domain` (from `settings.yml`) to access the air-gapped Rancher UI.
-
-Running RKE2 Air-Gapped
---------
-1.
When setting up an air-gapped RKE2 instance on Harvester, you will want to utilize an image that has the `qemu-guest-agent` already installed (something like [opensuse-leap-nocloud](https://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.4/images/openSUSE-Leap-15.4.x86_64-1.0.1-NoCloud-Build2.163.qcow2)), as package downloads and outbound communication are cut off
-1. You'll want to follow the [test-steps-section](https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/) starting on step 5/6
-
-Known Issues
--------------
-- Sometimes provisioning will fail, possibly due to any number of factors; currently, the only way to recover is to remove everything with something like `vagrant destroy -f` and then rerun the setup harvester script
-- If provisioning fails at `TASK [rancher : Run the equivalent of "apt-get update" as a separate step, first]`, this is usually due to a problem with the outbound network calls needed to download packages - there may be an issue with the network the VM is on, with network bandwidth, and/or with the mirrors used to download the packages for the VM
-- This has mostly been tested with Rancher v2.6.X - anything beyond that may encounter unforeseen provisioning issues
-- Setting `rancher_config.run_single_node_rancher`, `rancher_config.run_single_node_air_gapped_rancher`, or `harvester_network_config.offline` to anything other than `true` in `settings.yml` causes issues with provisioning
-- Shutting down or restarting VMs (even for something like an air-gapped upgrade) currently breaks the load-out: if any of the VMs are shut down or restarted after provisioning, the `rke2-coredns` edits become lost and will need to be manually re-applied [rke2-coredns comment](https://github.com/harvester/harvester/issues/3731#issuecomment-1487866103)
-- It is best to manually apply `containerd-registry` edits (via settings in
Harvester) as a safeguard prior to importing Harvester into Rancher:
-```json
-{
-  "Mirrors": {
-    "docker.io": {
-      "Endpoints": ["myregistry.local:5000"],
-      "Rewrites": null
-    }
-  },
-  "Configs": {
-    "myregistry.local:5000": {
-      "Auth": null,
-      "TLS": {
-        "CAFile": "",
-        "CertFile": "",
-        "KeyFile": "",
-        "InsecureSkipVerify": true
-      }
-    }
-  },
-  "Auths": null
-}
-```
-- For hostname resolution it is best to ensure your `/etc/hosts` file includes the hostnames associated with the IP (cross-ref: `settings.yml`):
-```
-192.168.2.34 rancher-vagrant-vm.local myregistry.local
-```
-- Fully-air-gapped access to the private docker registry running on `settings.yml`'s `rancher_config.registry_domain` may end up not resolving; you can navigate to the `rancher_config.node_harvester_network_ip` at `https://myregistry.local:5000/v2/_catalog?n=500`.
-- rke1/rke2 downstream provisioning using Harvester might pose some problems; they're actively being investigated. Some elements of [the "Test steps" in fully-airgapped Harvester w/ fully-airgapped Rancher integration may work](https://harvester.github.io/tests/manual/harvester-rancher/68-fully-airgapped-rancher-integrate-harvester-no-proxy/)
-- A k9s tar.gz is installed, if needed, on the `rancher_config.node_harvester_network_ip` at `/home/vagrant/k9s_Linux_x86_64.tar.gz` - you can access the Rancher instance shell via `ssh vagrant@192.168.2.34` with password `vagrant` - you could change the ownership of the tar.gz file to `vagrant`, extract it, and then use it in conjunction with K3s.
Something like `sudo chmod 755 /etc/rancher/k3s/k3s.yaml`, then within the `/home/vagrant` directory, running `./k9s --kubeconfig /etc/rancher/k3s/k3s.yaml`
-
-
-Troubleshooting
--------------
-- validate links to:
-  - Rancher's private docker-registry: `https://myregistry.local:5000/v2/_catalog?n=500` (audit the catalog, looking for items like `rancher-agent`; that image is crucial in allowing Harvester to be imported into Rancher)
-  - check local cluster events on the Harvester VIP: `https://192.168.2.131/dashboard/c/local/explorer#cluster-events` (if importing Harvester into Rancher, check whether a `Pulling image "myregistry.local:5000/rancher/rancher-agent"` message exists in the cluster events)
-- check disk space; depending on the number of nodes, provisioning **can** fail due to not enough disk space (again, this is resource intensive)
-- if there are issues connecting to the Rancher instance, validate `/etc/hosts` for hostname resolution
-- if there are issues surrounding `ssl`, check that the `containerd-registry` edits have been made to allow for an `insecure` registry
-- if any VMs were restarted by "accident" and routing is not working with a previously imported Harvester & Rancher, try applying the rke2-coredns edits manually [rke2-coredns comment](https://github.com/harvester/harvester/issues/3731#issuecomment-1487866103) again
-
-Acknowledgements
-----------------
-
-- The Vagrant iPXE environment idea was borrowed from
-  .
-
-
-[ansible]: https://www.ansible.com
-[kvm]: https://www.linux-kvm.org
-[vagrant]: https://www.vagrantup.com
diff --git a/vagrant-pxe-airgap-harvester/Vagrantfile b/vagrant-pxe-airgap-harvester/Vagrantfile
deleted file mode 100644
index 4626731..0000000
--- a/vagrant-pxe-airgap-harvester/Vagrantfile
+++ /dev/null
@@ -1,126 +0,0 @@
-# vi: set ft=ruby ts=2 :
-
-require 'yaml'
-
-VAGRANTFILE_API_VERSION = "2"
-
-# check for required plugins
-_required_plugins_list = %w{vagrant-libvirt}
-exit(1) unless _required_plugins_list.all?
do |plugin|
-  Vagrant.has_plugin?(plugin) || (
-    STDERR.puts "Required plugin '#{plugin}' is missing; please install using:"
-    STDERR.puts "  % vagrant plugin install #{plugin}"
-    false
-  )
-end
-
-# ensure libvirt is the default provider in case the vagrant box config
-# doesn't specify it
-ENV['VAGRANT_DEFAULT_PROVIDER'] = "libvirt"
-
-@root_dir = File.dirname(File.expand_path(__FILE__))
-@settings = YAML.load_file(File.join(@root_dir, "settings.yml"))
-
-Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
-
-  # containerd can take more than 60 seconds to shut down on SUSE platforms,
-  # so increase the timeout to 120 seconds
-  config.vm.graceful_halt_timeout = 120
-
-  config.vm.define :pxe_server do |pxe_server|
-
-    pxe_server.vm.box = 'generic/debian10'
-    pxe_server.vm.hostname = 'pxe-server'
-    pxe_server.vm.network 'private_network',
-      ip: @settings['harvester_network_config']['dhcp_server']['ip'],
-      libvirt__network_name: 'harvester',
-      # don't enable DHCP as this node will have its own DHCP server for iPXE
-      # boot
-      libvirt__dhcp_enabled: false
-
-    pxe_server.vm.provider :libvirt do |libvirt|
-      libvirt.cpu_mode = 'host-passthrough'
-      libvirt.memory = '4096'
-      libvirt.cpus = '2'
-    end
-
-    # Use ansible to install server
-    pxe_server.vm.provision :ansible do |ansible|
-      ansible.playbook = 'ansible/setup_pxe_server.yml'
-      ansible.verbose = "vvv"
-      ansible.extra_vars = {
-        settings: @settings
-      }
-    end
-  end
-
-  config.vm.define :rancher_box do |rancher_box|
-    rancher_box.vm.box = 'generic/ubuntu2004'
-    rancher_box.vm.hostname = @settings['rancher_config']['rancher_install_domain']
-    #rancher_box.vm.network "public_network", auto_config: true
-    # default network is eth0
-    # private network becomes eth1
-    # rancher_box.vm.network 'public_network',
-    #   dev: 'virbr0',
-    #   network_name: "rancher-public",
-    #   auto_config: true
-    rancher_box.vm.network 'private_network',
-      libvirt__network_name: 'harvester',
-      mac:
@settings['rancher_config']['mac_address_harvester_network']
-    rancher_box.vm.provider :libvirt do |libvirt|
-      libvirt.cpu_mode = 'host-passthrough'
-      libvirt.memory = @settings['rancher_config']['memory']
-      libvirt.cpus = @settings['rancher_config']['cpu']
-      # libvirt.channel :type => 'unix', :target_name => 'org.qemu.guest_agent.0', :target_type => 'virtio'
-      # libvirt.qemu_use_agent = true
-      libvirt.storage :file,
-        size: @settings['rancher_config']['node_disk_size'],
-        type: 'qcow2',
-        bus: 'virtio',
-        device: 'vdb',
-        serial: 'bdef2c36-cfab-4f75-b0f5-7bdae75417ce'
-      libvirt.boot 'hd'
-      libvirt.nic_model_type = 'e1000'
-    end
-    # We need to override what vagrant would typically use to connect to ssh with ansible,
-    # so we provide an additional inventory for vagrant that ties to the file/host,
-    # ensuring we can connect over the harvester net, not the temporary eth0 that will be cut
-    rancher_box.vm.provision :ansible do |ansible|
-      ansible.verbose = "vvv"
-      ansible.inventory_path = "inventories/vagrant"
-      ansible.playbook = 'ansible/setup_rancher_node.yml'
-      ansible.extra_vars = {
-        settings: @settings
-      }
-    end
-  end
-
-
-  cluster_node_index = @settings['harvester_cluster_nodes'] - 1
-  (0..cluster_node_index).each do |node_number|
-    vm_name = "harvester-node-#{node_number}"
-    config.vm.define vm_name, autostart: false do |harvester_node|
-      harvester_node.vm.hostname = "harvester-node-#{node_number}"
-      harvester_node.vm.network 'private_network',
-        libvirt__network_name: 'harvester',
-        mac: @settings['harvester_network_config']['cluster'][node_number]['mac']
-
-      harvester_node.vm.provider :libvirt do |libvirt|
-        libvirt.cpu_mode = 'host-passthrough'
-        libvirt.memory = @settings['harvester_network_config']['cluster'][node_number].key?('memory') ? @settings['harvester_network_config']['cluster'][node_number]['memory'] : @settings['harvester_node_config']['memory']
-        libvirt.cpus = @settings['harvester_network_config']['cluster'][node_number].key?('cpu') ?
@settings['harvester_network_config']['cluster'][node_number]['cpu'] : @settings['harvester_node_config']['cpu']
-        libvirt.storage :file,
-          size: @settings['harvester_network_config']['cluster'][node_number].key?('disk_size') ? @settings['harvester_network_config']['cluster'][node_number]['disk_size'] : @settings['harvester_node_config']['disk_size'],
-          type: 'qcow2',
-          bus: 'virtio',
-          device: 'vda'
-        boot_network = {'network' => 'harvester'}
-        libvirt.boot 'hd'
-        libvirt.boot boot_network
-        # NOTE: default to UEFI boot. Comment this out for legacy BIOS.
-        libvirt.loader = '/usr/share/qemu/OVMF.fd'
-        libvirt.nic_model_type = 'e1000'
-      end
-    end
-  end
-end
diff --git a/vagrant-pxe-airgap-harvester/ansible.cfg b/vagrant-pxe-airgap-harvester/ansible.cfg
deleted file mode 100644
index dce6fa5..0000000
--- a/vagrant-pxe-airgap-harvester/ansible.cfg
+++ /dev/null
@@ -1,4 +0,0 @@
-[defaults]
-stdout_callback = yaml
-interpreter_python = auto_silent
-host_key_checking = False
\ No newline at end of file
diff --git a/vagrant-pxe-airgap-harvester/ansible/adjust_harvester_nodes.yml b/vagrant-pxe-airgap-harvester/ansible/adjust_harvester_nodes.yml
deleted file mode 100644
index 368d874..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/adjust_harvester_nodes.yml
+++ /dev/null
@@ -1,61 +0,0 @@
----
-- name: set Harvester Node IP fact for new host group built
-  set_fact:
-    harvester_node_ip: "{{ harvester_network_config['cluster'][node_number | int]['ip'] }}"
-
-- name: edit hosts with the docker registry domain on harvester node
-  shell: |
-    echo "" >> /etc/hosts && echo "{{ rancher_config.node_harvester_network_ip }} {{ rancher_config.registry_domain }} {{ rancher_config.rancher_install_domain }}" >> /etc/hosts
-
-- name: copy rke2 registries over
-  template:
-    src: roles/harvester/templates/registries-edit.yaml.j2
-    dest: /etc/rancher/rke2/registries.yaml
-  register: copy_rke2_registries_yaml_result
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
ignore_unreachable: yes
-
-- name: restart rke2 service on node
-  systemd:
-    name: rke2-server.service
-    state: restarted
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
-  ignore_unreachable: yes
-  register: rke2_harvester_node_restart_result
-
-
-- name: copy rke2-coredns-rke2-coredns configmap edit over
-  template:
-    src: roles/harvester/templates/configmap-rke2-coredns-rke2-coredns.yaml.j2
-    dest: /etc/rancher/rke2/patch-configmap-rke2-coredns-rke2-coredns.yaml
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
-  ignore_unreachable: yes
-
-
-- name: patch configmap rke2-coredns-rke2-coredns with updated content
-  shell: |
-    /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml patch configmap/rke2-coredns-rke2-coredns -n kube-system \
-      --patch-file /etc/rancher/rke2/patch-configmap-rke2-coredns-rke2-coredns.yaml
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
-  ignore_unreachable: yes
-
-
-- name: copy rke2-coredns-rke2-coredns deployment edit over
-  template:
-    src: roles/harvester/templates/deployment-rke2-coredns-rke2-coredns.yaml.j2
-    dest: /etc/rancher/rke2/deployment-rke2-coredns-rke2-coredns.yaml
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
-  ignore_unreachable: yes
-
-
-- name: patch deployment rke2-coredns-rke2-coredns with updated content
-  shell: |
-    /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml patch deployment/rke2-coredns-rke2-coredns -n kube-system --patch-file /etc/rancher/rke2/deployment-rke2-coredns-rke2-coredns.yaml
-  delegate_to: "{{ harvester_node_ip }}"
-  ignore_errors: yes
-  ignore_unreachable: yes
\ No newline at end of file
diff --git a/vagrant-pxe-airgap-harvester/ansible/boot_harvester_node.yml b/vagrant-pxe-airgap-harvester/ansible/boot_harvester_node.yml
deleted file mode 100644
index ed48b5c..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/boot_harvester_node.yml
+++ /dev/null
@@ -1,29 +0,0 @@
----
-- name: create "Booting
Node {{ node_number }}" message
-  shell: >
-    figlet "Booting Node {{ node_number }}" 2>/dev/null || echo "Booting Node {{ node_number }}"
-  register: figlet_result
-
-- name: print "Booting Node {{ node_number }}"
-  debug:
-    msg: "{{ figlet_result.stdout }}"
-
-- name: set Harvester Node IP fact
-  set_fact:
-    harvester_node_ip: "{{ harvester_network_config['cluster'][node_number | int]['ip'] }}"
-
-- name: boot Harvester Node {{ node_number }}
-  shell: >
-    VAGRANT_LOG=info vagrant up harvester-node-{{ node_number }}
-  register: harvester_node_boot_result
-
-- name: wait for Harvester Node {{ harvester_node_ip }} to become ready
-  uri:
-    url: "https://{{ harvester_node_ip }}"
-    validate_certs: no
-    status_code: 200
-    timeout: 120
-  register: auth_modes_lookup_result
-  until: auth_modes_lookup_result.status == 200
-  retries: 35
-  delay: 120
diff --git a/vagrant-pxe-airgap-harvester/ansible/prepare_harvester_nodes.yml b/vagrant-pxe-airgap-harvester/ansible/prepare_harvester_nodes.yml
deleted file mode 100644
index e1373b8..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/prepare_harvester_nodes.yml
+++ /dev/null
@@ -1,50 +0,0 @@
----
-- name: Adjust Harvester Nodes If Needed
-  hosts: harvesternodes
-  connection: local
-  gather_facts: false
-  become: yes
-  ignore_unreachable: yes
-  ignore_errors: yes
-
-
-  tasks:
-    - name: making adjustments
-      shell: |
-        echo "starting to make adjustments to harvester nodes..."
-      ignore_errors: yes
-      ignore_unreachable: yes
-      when: rancher_config.run_single_node_rancher | bool
-
-
-
-    - name: adjust Harvester nodes for Rancher
-      include_tasks: adjust_harvester_nodes.yml
-      vars:
-        node_number: "{{ item }}"
-      with_sequence: 0-{{ harvester_cluster_nodes|int - 1 }}
-      ignore_errors: yes
-      ignore_unreachable: yes
-      when: rancher_config.run_single_node_rancher | bool
-
-    - name: Output Additional Info
-      block:
-        - name: Remind viewer of etc hosts
-          ansible.builtin.debug:
-            msg: "Please remember, in order for hostname resolution to work for Rancher you may need to update your /etc/hosts file with something like: {{ rancher_config.node_harvester_network_ip }} {{ rancher_config.rancher_install_domain }}"
-          ignore_errors: yes
-
-        - name: Output The Rancher URL
-          ansible.builtin.debug:
-            msg: "The Rancher URL should be: https://{{ rancher_config.rancher_install_domain }}"
-          ignore_errors: yes
-
-        - name: Output The Harvester URL
-          ansible.builtin.debug:
-            msg: "The Harvester URL should be: https://{{ harvester_network_config.vip.ip }}"
-          ignore_errors: yes
-
-        - name: Output Additional Info
-          ansible.builtin.debug:
-            msg: "Additionally, if you have set this up on a remote server, be mindful that you will not be able to access the IPv4 address directly - you may look into something like 'sshuttle -r user@111.222.333.444 -x IP.V4.CIDR.BLOCK -vv' to funnel requests over an sshuttle tunnel - note you will also need to update your /etc/hosts first"
-          ignore_errors: yes
\ No newline at end of file
diff --git a/vagrant-pxe-airgap-harvester/ansible/reinstall_harvester_node.yml b/vagrant-pxe-airgap-harvester/ansible/reinstall_harvester_node.yml
deleted file mode 100644
index 681aba6..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/reinstall_harvester_node.yml
+++ /dev/null
@@ -1,27 +0,0 @@
----
-- name: Reinstall Harvester Node
-  hosts: localhost
-  connection: local
-  gather_facts: false
-
-  tasks:
-    - name: create "Reinstalling
Harvester Node" message
-      shell: >
-        figlet "Reinstalling Harvester Node {{ node_number }}" 2>/dev/null || echo "Reinstalling Harvester Node {{ node_number }}"
-      register: figlet_result
-
-    - name: print "Reinstalling Harvester Node" message
-      debug:
-        msg: "{{ figlet_result.stdout }}"
-
-    - name: boot Harvester nodes
-      include_tasks: boot_harvester_node.yml
-
-    - name: create "Installation Completed" message
-      shell: >
-        figlet "Installation Completed" 2>/dev/null || echo "Installation Completed"
-      register: figlet_result
-
-    - name: print "Installation Completed"
-      debug:
-        msg: "{{ figlet_result.stdout }}"
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/ipxe.conf b/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/ipxe.conf
deleted file mode 100644
index f82fb4b..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/ipxe.conf
+++ /dev/null
@@ -1,44 +0,0 @@
-option space ipxe;
-option ipxe-encap-opts code 175 = encapsulate ipxe;
-option ipxe.priority code 1 = signed integer 8;
-option ipxe.keep-san code 8 = unsigned integer 8;
-option ipxe.skip-san-boot code 9 = unsigned integer 8;
-option ipxe.syslogs code 85 = string;
-option ipxe.cert code 91 = string;
-option ipxe.privkey code 92 = string;
-option ipxe.crosscert code 93 = string;
-option ipxe.no-pxedhcp code 176 = unsigned integer 8;
-option ipxe.bus-id code 177 = string;
-option ipxe.san-filename code 188 = string;
-option ipxe.bios-drive code 189 = unsigned integer 8;
-option ipxe.username code 190 = string;
-option ipxe.password code 191 = string;
-option ipxe.reverse-username code 192 = string;
-option ipxe.reverse-password code 193 = string;
-option ipxe.version code 235 = string;
-option iscsi-initiator-iqn code 203 = string;
-# Feature indicators
-option ipxe.pxeext code 16 = unsigned integer 8;
-option ipxe.iscsi code 17 = unsigned integer 8;
-option ipxe.aoe code 18 = unsigned integer 8;
-option ipxe.http code 19 = unsigned integer 8;
-option ipxe.https code 20 =
unsigned integer 8;
-option ipxe.tftp code 21 = unsigned integer 8;
-option ipxe.ftp code 22 = unsigned integer 8;
-option ipxe.dns code 23 = unsigned integer 8;
-option ipxe.bzimage code 24 = unsigned integer 8;
-option ipxe.multiboot code 25 = unsigned integer 8;
-option ipxe.slam code 26 = unsigned integer 8;
-option ipxe.srp code 27 = unsigned integer 8;
-option ipxe.nbi code 32 = unsigned integer 8;
-option ipxe.pxe code 33 = unsigned integer 8;
-option ipxe.elf code 34 = unsigned integer 8;
-option ipxe.comboot code 35 = unsigned integer 8;
-option ipxe.efi code 36 = unsigned integer 8;
-option ipxe.fcoe code 37 = unsigned integer 8;
-option ipxe.vlan code 38 = unsigned integer 8;
-option ipxe.menu code 39 = unsigned integer 8;
-option ipxe.sdi code 40 = unsigned integer 8;
-option ipxe.nfs code 41 = unsigned integer 8;
-# disable proxydhcp delay
-option ipxe.no-pxedhcp 1;
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/isc-dhcp-server b/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/isc-dhcp-server
deleted file mode 100644
index 3fb715c..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/files/isc-dhcp-server
+++ /dev/null
@@ -1,21 +0,0 @@
-# Defaults for isc-dhcp-server initscript
-# sourced by /etc/init.d/isc-dhcp-server
-# installed at /etc/default/isc-dhcp-server by the maintainer scripts
-
-#
-# This is a POSIX shell fragment
-#
-
-# Path to dhcpd's config file (default: /etc/dhcp/dhcpd.conf).
-#DHCPD_CONF=/etc/dhcp/dhcpd.conf
-
-# Path to dhcpd's PID file (default: /var/run/dhcpd.pid).
-#DHCPD_PID=/var/run/dhcpd.pid
-
-# Additional options to start dhcpd with.
-# Don't use options -cf or -pf here; use DHCPD_CONF/ DHCPD_PID instead
-#OPTIONS=""
-
-# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
-# Separate multiple interfaces with spaces, e.g. "eth0 eth1".
-INTERFACESv4="eth1"
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/handlers/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/handlers/main.yml
deleted file mode 100644
index c225e24..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/handlers/main.yml
+++ /dev/null
@@ -1,7 +0,0 @@
----
-- name: restart dhcp
-  systemd:
-    name: isc-dhcp-server
-    state: restarted
-    daemon_reload: yes
-    enabled: yes
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/tasks/main.yml
deleted file mode 100644
index 694e23c..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/tasks/main.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- name: install isc-dhcp-server
-  apt:
-    name: isc-dhcp-server
-    state: present
-    update_cache: yes
-
-- name: configure isc-dhcp-server interface
-  copy:
-    src: isc-dhcp-server
-    dest: /etc/default/
-  notify: restart dhcp
-
-- name: configure ipxe.conf
-  copy:
-    src: ipxe.conf
-    dest: /etc/dhcp/
-  notify: restart dhcp
-
-- name: configure dhcpd.conf
-  template:
-    src: dhcpd.conf.j2
-    dest: /etc/dhcp/dhcpd.conf
-  notify: restart dhcp
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/templates/dhcpd.conf.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/templates/dhcpd.conf.j2
deleted file mode 100644
index fe20c27..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/dhcp/templates/dhcpd.conf.j2
+++ /dev/null
@@ -1,52 +0,0 @@
-ddns-update-style none;
-
-# option domain-name "example.org";
-include "/etc/dhcp/ipxe.conf";
-option arch code 93 = unsigned integer 16;
-option user-class code 77 = string;
-
-default-lease-time 600;
-max-lease-time 7200;
-authoritative;
-log-facility local7;
-
-subnet {{ settings['harvester_network_config']['dhcp_server']['subnet'] }} netmask {{ settings['harvester_network_config']['dhcp_server']['netmask'] }} {
-  range {{ settings['harvester_network_config']['dhcp_server']['range'] }};
-  option
domain-name-servers {{ settings['harvester_network_config']['dhcp_server']['ip'] }}, 8.8.8.8;
-  {% if settings['harvester_network_config']['offline'] %}
-  option routers {{ settings['harvester_network_config']['dhcp_server']['ip'] }};
-  {% else %}
-  option routers {{ settings['harvester_network_config']['dhcp_server']['subnet'][:-1] }}1;
-  {% endif %}
-  next-server {{ settings['harvester_network_config']['dhcp_server']['ip'] }};
-
-  if exists user-class and option user-class = "iPXE" {
-    filename "http://{{ settings['harvester_network_config']['dhcp_server']['ip'] }}/harvester/${net1/mac}";
-  } elsif option arch != 00:00 {
-    filename "ipxe/ipxe.efi";
-  } else {
-    filename "ipxe/undionly.kpxe";
-  }
-}
-
-{% if settings['rancher_config']['run_single_node_rancher'] %}
-host rancher_node {
-  hardware ethernet {{ settings['rancher_config']['mac_address_harvester_network'] }};
-  fixed-address {{ settings['rancher_config']['node_harvester_network_ip'] }};
-  server-name "rancher-single-node";
-}
-{% endif %}
-
-host harvest_vip {
-  hardware ethernet {{ settings['harvester_network_config']['vip']['mac'] }};
-  fixed-address {{ settings['harvester_network_config']['vip']['ip'] }};
-}
-
-{% for node_number in range(settings['harvester_cluster_nodes']) %}
-host harvest_node_{{ node_number }} {
-  hardware ethernet {{ settings['harvester_network_config']['cluster'][node_number]['mac'] }};
-  fixed-address {{ settings['harvester_network_config']['cluster'][node_number]['ip'] }};
-  server-name "harvester-node-{{ node_number }}";
-}
-
-{% endfor %}
\ No newline at end of file
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/_download_media.yml b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/_download_media.yml
deleted file mode 100644
index c492087..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/_download_media.yml
+++ /dev/null
@@ -1,16 +0,0 @@
----
-- name: parse Harvester media download URL
-  set_fact:
harvester_download_url_facts: "{{ harvester_media_url | urlsplit }}"
-
-- name: copy Harvester media from local directory
-  copy:
-    src: "{{ harvester_download_url_facts['path'] }}"
-    dest: /var/www/harvester/{{ media_filename }}
-  when: harvester_download_url_facts['scheme']|lower == 'file'
-
-- name: download Harvester media
-  get_url:
-    url: "{{ harvester_media_url }}"
-    dest: /var/www/harvester/{{ media_filename }}
-  when: harvester_download_url_facts['scheme']|lower != 'file'
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/main.yml
deleted file mode 100644
index e0c258d..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/tasks/main.yml
+++ /dev/null
@@ -1,74 +0,0 @@
----
-- name: create Harvester config dir
-  file:
-    path: /var/www/harvester
-    state: directory
-
-- name: copy config-create.yaml
-  template:
-    src: "config-create.yaml.j2"
-    dest: /var/www/harvester/config-create.yaml
-    owner: www-data
-    mode: 0640
-
-# NOTE(gyee): Ansible pre-processes the with_sequence variable so we have to
-# make sure the end sequence is at least 1 even if we have only one Harvester node
-- name: set node sequence fact
-  set_fact:
-    end_sequence: "{{ settings['harvester_cluster_nodes'] - 1 if settings['harvester_cluster_nodes'] > 1 else 1 }}"
-
-- name: copy config-join.yaml
-  template:
-    src: "config-join.yaml.j2"
-    dest: /var/www/harvester/config-join-{{ item }}.yaml
-    owner: www-data
-    mode: 0640
-  vars:
-    node_number: "{{ item }}"
-  with_sequence: "start=1 end={{ end_sequence }}"
-
-- name: chown dir
-  file:
-    path: /var/www/harvester/
-    owner: www-data
-    recurse: yes
-
-- name: create boot entry for the first node
-  template:
-    src: ipxe-create.j2
-    dest: /var/www/harvester/{{ settings['harvester_network_config']['cluster'][0]['mac']|lower }}
-  vars:
-    boot_interface: "{{ settings['harvester_network_config']['cluster'][0]['vagrant_interface'] }}"
-
-- name: create
boot entry for the cluster members
-  template:
-    src: ipxe-join.j2
-    dest: /var/www/harvester/{{ settings['harvester_network_config']['cluster'][item|int]['mac']|lower }}
-  vars:
-    node_number: "{{ item }}"
-    boot_interface: "{{ settings['harvester_network_config']['cluster'][item|int]['vagrant_interface'] }}"
-  with_sequence: "start=1 end={{ end_sequence }}"
-
-- name: download Harvester kernel
-  include_tasks: _download_media.yml
-  vars:
-    harvester_media_url: "{{ settings['harvester_kernel_url'] }}"
-    media_filename: "harvester-vmlinuz-amd64"
-
-- name: download Harvester ramdisk
-  include_tasks: _download_media.yml
-  vars:
-    harvester_media_url: "{{ settings['harvester_ramdisk_url'] }}"
-    media_filename: "harvester-initrd-amd64"
-
-- name: download Harvester ISO
-  include_tasks: _download_media.yml
-  vars:
-    harvester_media_url: "{{ settings['harvester_iso_url'] }}"
-    media_filename: "harvester-amd64.iso"
-
-- name: download Harvester Root FS
-  include_tasks: _download_media.yml
-  vars:
-    harvester_media_url: "{{ settings['harvester_rootfs_url'] }}"
-    media_filename: "harvester-rootfs-amd64.squashfs"
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-create.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-create.yaml.j2
deleted file mode 100644
index 6bf4da3..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-create.yaml.j2
+++ /dev/null
@@ -1,78 +0,0 @@
-# example from https://github.com/harvester/ipxe-examples/blob/main/general/config-create.yaml
-
-{% if (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.3/harvester-v1.0.3-amd64.iso') or (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.2/harvester-v1.0.2-amd64.iso') or (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.1/harvester-v1.0.1-amd64.iso') or (settings['harvester_iso_url'] ==
'https://releases.rancher.com/harvester/v1.0.0/harvester-v1.0.0-amd64.iso') %} -token: {{ settings['harvester_config']['token'] }} -os: - hostname: harvester-node-0 - ssh_authorized_keys: -{% for ssh_key in settings['harvester_config']['ssh_authorized_keys'] %} - - {{ ssh_key }} -{% endfor %} - password: {{ settings['harvester_config']['password'] }} - ntp_servers: -{% for ntp_server in settings['harvester_config']['ntp_servers'] %} - - {{ ntp_server }} -{% endfor %} -{% if settings['harvester_network_config']['sftp'] %} - sshd: - sftp: {{ settings['harvester_network_config']['sftp'] }} -{% endif %} -install: - mode: create -{% if settings['harvester_network_config']['cluster'][0]['role'] != 'default' %} - role: {{ settings['harvester_network_config']['cluster'][0]['role'] }} -{% endif %} - networks: - harvester-mgmt: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][0]['mgmt_interface'] }} # The management interface name - method: dhcp - bond0: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][0]['vagrant_interface'] }} - method: dhcp - device: /dev/vda # The target disk to install - iso_url: http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-amd64.iso -# tty: ttyS1,115200n8 # For machines without a VGA console - tty: ttyS0 - vip: {{ settings['harvester_network_config']['vip']['ip'] }} - vip_mode: {{ settings['harvester_network_config']['vip']['mode'] }} - vip_hw_addr: {{ settings['harvester_network_config']['vip']['mac'] }} -{% if settings['harvester_network_config']['offline'] %} -systemSettings: - ui-source: bundled -{% endif %} -{% else %} -scheme_version: 1 -token: {{ settings['harvester_config']['token'] }} -os: - hostname: harvester-node-0 - ssh_authorized_keys: -{% for ssh_key in settings['harvester_config']['ssh_authorized_keys'] %} - - {{ ssh_key }} -{% endfor %} - password: {{ settings['harvester_config']['password'] }} - ntp_servers: -{% for ntp_server in 
settings['harvester_config']['ntp_servers'] %} - - {{ ntp_server }} -{% endfor %} -install: - mode: create - management_interface: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][0]['mgmt_interface'] }} # The management interface name - method: dhcp - device: /dev/vda # The target disk to install - iso_url: http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-amd64.iso -# tty: ttyS1,115200n8 # For machines without a VGA console - tty: ttyS0 - vip: {{ settings['harvester_network_config']['vip']['ip'] }} - vip_mode: {{ settings['harvester_network_config']['vip']['mode'] }} - vip_hw_addr: {{ settings['harvester_network_config']['vip']['mac'] }} -{% if settings['harvester_network_config']['offline'] %} -systemSettings: - ui-source: bundled -{% endif %} -{% endif %} - - diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-join.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-join.yaml.j2 deleted file mode 100644 index f5b7747..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/config-join.yaml.j2 +++ /dev/null @@ -1,63 +0,0 @@ -# example from https://github.com/harvester/ipxe-examples/blob/main/general/config-join.yaml -{% if (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.3/harvester-v1.0.3-amd64.iso') or (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.2/harvester-v1.0.2-amd64.iso') or (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.1/harvester-v1.0.1-amd64.iso') or (settings['harvester_iso_url'] == 'https://releases.rancher.com/harvester/v1.0.0/harvester-v1.0.0-amd64.iso') %} -server_url: https://{{ settings['harvester_network_config']['vip']['ip'] }}:443 -token: {{ settings['harvester_config']['token'] }} -os: - hostname: harvester-node-{{ node_number }} - ssh_authorized_keys: -{% for ssh_key in 
settings['harvester_config']['ssh_authorized_keys'] %} - - {{ ssh_key }} -{% endfor %} - password: {{ settings['harvester_config']['password'] }} - ntp_servers: -{% for ntp_server in settings['harvester_config']['ntp_servers'] %} - - {{ ntp_server }} -{% endfor %} -{% if settings['harvester_network_config']['sftp'] %} - sshd: - sftp: {{ settings['harvester_network_config']['sftp'] }} -{% endif %} -install: - mode: join -{% if settings['harvester_network_config']['cluster'][node_number |int]['role'] != 'default' %} - role: {{ settings['harvester_network_config']['cluster'][node_number |int]['role'] }} -{% endif %} - networks: - harvester-mgmt: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][node_number | int]['mgmt_interface'] }} # The management interface name - method: dhcp - bond0: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][node_number | int]['vagrant_interface'] }} - method: dhcp - device: /dev/vda # The target disk to install - iso_url: http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-amd64.iso -# tty: ttyS1,115200n8 # For machines without a VGA console - tty: ttyS0 -{% else %} -scheme_version: 1 -server_url: https://{{ settings['harvester_network_config']['vip']['ip'] }}:443 -token: {{ settings['harvester_config']['token'] }} -os: - hostname: harvester-node-{{ node_number }} - ssh_authorized_keys: -{% for ssh_key in settings['harvester_config']['ssh_authorized_keys'] %} - - {{ ssh_key }} -{% endfor %} - password: {{ settings['harvester_config']['password'] }} - ntp_servers: -{% for ntp_server in settings['harvester_config']['ntp_servers'] %} - - {{ ntp_server }} -{% endfor %} -install: - mode: join - management_interface: - interfaces: - - name: {{ settings['harvester_network_config']['cluster'][node_number | int]['mgmt_interface'] }} # The management interface name - method: dhcp - device: /dev/vda # The target disk to install - iso_url: http://{{ 
hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-amd64.iso -# tty: ttyS1,115200n8 # For machines without a VGA console - tty: ttyS0 -{% endif %} diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/configmap-rke2-coredns-rke2-coredns.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/configmap-rke2-coredns-rke2-coredns.yaml.j2 deleted file mode 100644 index b54409e..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/configmap-rke2-coredns-rke2-coredns.yaml.j2 +++ /dev/null @@ -1,9 +0,0 @@ -data: - Corefile: ".:53 {\n errors \n health {\n lameduck 5s\n }\n ready - \n kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {\n pods - insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus - \ 0.0.0.0:9153\n hosts /etc/coredns/customdomains.db {{ rancher_config.rancher_install_domain }} {\n - \ fallthrough\n }\n forward . /etc/resolv.conf\n cache 30\n loop - \n reload \n loadbalance \n}" - customdomains.db: | - {{ rancher_config.node_harvester_network_ip }} {{ rancher_config.rancher_install_domain }} \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/deployment-rke2-coredns-rke2-coredns.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/deployment-rke2-coredns-rke2-coredns.yaml.j2 deleted file mode 100644 index 404a449..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/deployment-rke2-coredns-rke2-coredns.yaml.j2 +++ /dev/null @@ -1,13 +0,0 @@ -spec: - template: - spec: - volumes: - - configMap: - defaultMode: 420 - items: - - key: Corefile - path: Corefile - - key: customdomains.db - path: customdomains.db - name: rke2-coredns-rke2-coredns - name: config-volume diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-create.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-create.j2 deleted file mode 
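The config-create and config-join templates above branch on the exact ISO URL: the four v1.0.x release URLs get the legacy `networks:`/`bond0` layout, while anything else gets the `scheme_version: 1` layout. A rough Python sketch of that version check (the helper and the version-parsing approach are hypothetical — the templates themselves enumerate full URLs rather than parsing them):

```python
from urllib.parse import urlsplit

# Release ISO filenames look like harvester-v1.0.3-amd64.iso; the templates
# treat every v1.0.x release as needing the legacy config schema.
LEGACY_VERSIONS = {"v1.0.0", "v1.0.1", "v1.0.2", "v1.0.3"}


def uses_legacy_config(iso_url: str) -> bool:
    """Return True when the ISO filename names a v1.0.x release."""
    filename = urlsplit(iso_url).path.rsplit("/", 1)[-1]
    if not filename.startswith("harvester-") or not filename.endswith("-amd64.iso"):
        return False
    version = filename[len("harvester-"):-len("-amd64.iso")]
    return version in LEGACY_VERSIONS
```

Parsing the version out of the filename keeps the branch working for locally mirrored ISOs, which the URL-equality checks in the templates do not.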
100644 index bebe8b9..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-create.j2 +++ /dev/null @@ -1,6 +0,0 @@ -#!ipxe - -kernel http://{{ settings['harvester_network_config']['dhcp_server']['ip'] }}/harvester/harvester-vmlinuz-amd64 -initrd http://{{ settings['harvester_network_config']['dhcp_server']['ip'] }}/harvester/harvester-initrd-amd64 -imgargs harvester-vmlinuz-amd64 initrd=harvester-initrd-amd64 ip={{ boot_interface }}:dhcp net.ifnames=1 rd.cos.disable rd.live.debug=1 rd.noverifyssl root=live:http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-rootfs-amd64.squashfs console=tty1 harvester.install.automatic=true harvester.install.skipchecks=true harvester.install.config_url=http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/config-create.yaml -boot diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-join.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-join.j2 deleted file mode 100644 index c712c10..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/ipxe-join.j2 +++ /dev/null @@ -1,6 +0,0 @@ -#!ipxe - -kernel http://{{ settings['harvester_network_config']['dhcp_server']['ip'] }}/harvester/harvester-vmlinuz-amd64 -initrd http://{{ settings['harvester_network_config']['dhcp_server']['ip'] }}/harvester/harvester-initrd-amd64 -imgargs harvester-vmlinuz-amd64 initrd=harvester-initrd-amd64 ip={{ boot_interface }}:dhcp net.ifnames=1 rd.cos.disable rd.live.debug=1 rd.noverifyssl root=live:http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/harvester-rootfs-amd64.squashfs console=tty1 harvester.install.automatic=true harvester.install.skipchecks=true harvester.install.config_url=http://{{ hostvars['pxe_server']['ansible_eth0']['ipv4']['address'] }}/harvester/config-join-{{ node_number }}.yaml -boot diff --git 
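The ipxe-create.j2 and ipxe-join.j2 templates above emit the same `kernel`/`initrd`/`imgargs` boot script, differing only in which config file `harvester.install.config_url` points at. A minimal Python sketch of assembling that argument line (the `build_imgargs` helper is illustrative, not part of the playbooks; `pxe_ip` stands in for the template's server-address variables):

```python
def build_imgargs(pxe_ip: str, boot_interface: str, config_file: str) -> str:
    """Assemble the kernel argument line the ipxe-*.j2 templates emit."""
    root = f"live:http://{pxe_ip}/harvester/harvester-rootfs-amd64.squashfs"
    args = [
        "harvester-vmlinuz-amd64",
        "initrd=harvester-initrd-amd64",
        f"ip={boot_interface}:dhcp",          # DHCP on the PXE boot NIC
        "net.ifnames=1",
        "rd.cos.disable",
        "rd.live.debug=1",
        "rd.noverifyssl",
        f"root={root}",                        # squashfs served by the PXE server
        "console=tty1",
        "harvester.install.automatic=true",
        "harvester.install.skipchecks=true",
        f"harvester.install.config_url=http://{pxe_ip}/harvester/{config_file}",
    ]
    return "imgargs " + " ".join(args)
```

The create and join flavors then differ only in the last argument: `config-create.yaml` for the first node versus `config-join-N.yaml` for the others.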
a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/registries-edit.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/registries-edit.yaml.j2 deleted file mode 100644 index 1dcdba5..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/harvester/templates/registries-edit.yaml.j2 +++ /dev/null @@ -1,8 +0,0 @@ -mirrors: - docker.io: - endpoint: - - https://{{ rancher_config.registry_domain }}:5000/ -configs: - {{ rancher_config.registry_domain }}:5000: - tls: - insecure_skip_verify: true \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/http/files/default b/vagrant-pxe-airgap-harvester/ansible/roles/http/files/default deleted file mode 100644 index 212d692..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/http/files/default +++ /dev/null @@ -1,7 +0,0 @@ -server { - server_name localhost; - listen 0.0.0.0:80; - location / { - root /var/www; - } -} diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/http/handlers/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/http/handlers/main.yml deleted file mode 100644 index 28b98b0..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/http/handlers/main.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- -- name: restart nginx - systemd: - name: nginx - state: restarted - daemon_reload: yes - enabled: yes diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/http/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/http/tasks/main.yml deleted file mode 100644 index 7fef9fa..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/http/tasks/main.yml +++ /dev/null @@ -1,19 +0,0 @@ ---- -- name: install nginx - apt: - update_cache: yes - name: nginx - state: present - -- name: configure default site - copy: - src: default - dest: /etc/nginx/sites-available - notify: restart nginx - -- name: enable default site - file: - src: /etc/nginx/sites-available/default - dest: /etc/nginx/sites-enabled/default - state: link - notify: restart nginx diff 
--git a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/https b/vagrant-pxe-airgap-harvester/ansible/roles/https/files/https deleted file mode 100644 index 99a5d4b..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/https +++ /dev/null @@ -1,9 +0,0 @@ -server { - listen 443 ssl; - listen [::]:443 ssl; - include snippets/self-signed.conf; - include snippets/ssl-params.conf; - location / { - root /var/www; - } -} diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/self-signed.conf b/vagrant-pxe-airgap-harvester/ansible/roles/https/files/self-signed.conf deleted file mode 100644 index c56d97f..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/self-signed.conf +++ /dev/null @@ -1,3 +0,0 @@ -# /etc/nginx/snippets/self-signed.conf -ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt; -ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key; diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/ssl-params.conf b/vagrant-pxe-airgap-harvester/ansible/roles/https/files/ssl-params.conf deleted file mode 100644 index 50a2657..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/files/ssl-params.conf +++ /dev/null @@ -1,19 +0,0 @@ -# /etc/nginx/snippets/ssl-params.conf -ssl_protocols TLSv1.2; -ssl_prefer_server_ciphers on; -ssl_dhparam /etc/nginx/dhparam.pem; -ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384; -ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0 -ssl_session_timeout 10m; -ssl_session_cache shared:SSL:10m; -ssl_session_tickets off; # Requires nginx >= 1.5.9 -ssl_stapling on; # Requires nginx >= 1.3.7 -ssl_stapling_verify on; # Requires nginx => 1.3.7 -resolver 8.8.8.8 8.8.4.4 valid=300s; -resolver_timeout 5s; -# Disable strict transport security for now. You can uncomment the following -# line if you understand the implications. 
-# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"; -add_header X-Frame-Options DENY; -add_header X-Content-Type-Options nosniff; -add_header X-XSS-Protection "1; mode=block"; \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/https/meta/main.yaml b/vagrant-pxe-airgap-harvester/ansible/roles/https/meta/main.yaml deleted file mode 100644 index 83cecbc..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/meta/main.yaml +++ /dev/null @@ -1,4 +0,0 @@ ---- -dependencies: - - role: http - when: settings['harvester_network_config']['dhcp_server']['https'] | bool == true diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/https/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/https/tasks/main.yml deleted file mode 100644 index 109b401..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/tasks/main.yml +++ /dev/null @@ -1,41 +0,0 @@ ---- -- name: create config file - template: - src: "openssl.conf.j2" - dest: /var/www/openssl.conf - -- name: generate SSL Key - command: > - openssl req -x509 -nodes -days 365 -config /var/www/openssl.conf -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt - -- name: generate pem for nginx - command: openssl dhparam -dsaparam -out /etc/nginx/dhparam.pem 4096 - -- name: copy configuration - copy: - src: '{{item}}' - dest: '/etc/nginx/snippets/' - loop: - - self-signed.conf - - ssl-params.conf - -- name: configure https site - copy: - src: https - dest: /etc/nginx/sites-available - notify: restart nginx - -- name: enable https site - file: - src: /etc/nginx/sites-available/https - dest: /etc/nginx/sites-enabled/https - state: link - notify: restart nginx - -- name: show CA cert - command: cat /etc/ssl/certs/nginx-selfsigned.crt - register: command_output - -- name: Print to console - debug: - msg: "{{command_output.stdout}}" diff --git 
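The https role above renders an OpenSSL config and then issues a one-shot `openssl req -x509` self-signed certificate pinned to the PXE server's IP, both as CN and as a subjectAltName entry. A minimal Python sketch of assembling such a config for an IP-only certificate (the field values mirror the role's template; the `render_openssl_conf` helper itself is hypothetical):

```python
def render_openssl_conf(server_ip: str) -> str:
    """Render an OpenSSL request config with the server IP as CN and SAN.

    The distinguished-name values here are illustrative placeholders, as in
    the role's own template.
    """
    return "\n".join([
        "[req]",
        "default_bits = 4096",
        "default_md = sha256",
        "distinguished_name = req_distinguished_name",
        "x509_extensions = v3_req",
        "prompt = no",
        "[req_distinguished_name]",
        f"CN = {server_ip}",
        "[v3_req]",
        "keyUsage = keyEncipherment, dataEncipherment",
        "extendedKeyUsage = serverAuth",
        "subjectAltName = @alt_names",
        "[alt_names]",
        f"IP.1 = {server_ip}",  # clients verify against the IP, not a hostname
    ])
```

The SAN entry matters because modern TLS clients ignore the CN; without `IP.1` the Harvester nodes could not validate the cert when fetching media by IP.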
a/vagrant-pxe-airgap-harvester/ansible/roles/https/templates/openssl.conf.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/https/templates/openssl.conf.j2 deleted file mode 100644 index 68c816e..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/https/templates/openssl.conf.j2 +++ /dev/null @@ -1,19 +0,0 @@ -[req] -default_bits = 4096 -default_md = sha256 -distinguished_name = req_distinguished_name -x509_extensions = v3_req -prompt = no -[req_distinguished_name] -C = US -ST = VA -L = SomeCity -O = MyCompany -OU = MyDivision -CN = {{ settings['harvester_network_config']['dhcp_server']['ip'] }} -[v3_req] -keyUsage = keyEncipherment, dataEncipherment -extendedKeyUsage = serverAuth -subjectAltName = @alt_names -[alt_names] -IP.1 = {{ settings['harvester_network_config']['dhcp_server']['ip'] }} \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/files/init.ipxe b/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/files/init.ipxe deleted file mode 100644 index fdd2200..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/files/init.ipxe +++ /dev/null @@ -1,7 +0,0 @@ -#!ipxe -:loop -echo ipxe is working ! 
-sleep 5 && goto load - -:load -chain ipxe-create.ipxe || goto loop diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/tasks/main.yml deleted file mode 100644 index 8b6c1a5..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/ipxe/tasks/main.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- name: create ipxe dir - file: - path: /tftpboot/ipxe - state: directory - -- name: install ipxe firmwares - get_url: - url: '{{ item }}' - dest: /tftpboot/ipxe/ - loop: - - "https://boot.ipxe.org/{{ {'aarch64': 'arm64'}.get(ansible_architecture, ansible_architecture) }}-efi/ipxe.efi" - - "https://boot.ipxe.org/undionly.kpxe" diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/files/default b/vagrant-pxe-airgap-harvester/ansible/roles/proxy/files/default deleted file mode 100644 index 1feab63..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/files/default +++ /dev/null @@ -1,2 +0,0 @@ -http_access allow all -http_port 3128 diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/handlers/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/proxy/handlers/main.yml deleted file mode 100644 index cc370b9..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/handlers/main.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- -- name: restart squid - systemd: - name: squid - state: restarted - daemon_reload: yes - enabled: yes diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/proxy/tasks/main.yml deleted file mode 100644 index b3df316..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/proxy/tasks/main.yml +++ /dev/null @@ -1,12 +0,0 @@ ---- -- name: install squid - apt: - update_cache: yes - name: squid - state: present - -- name: configure squid - copy: - src: default - dest: /etc/squid/squid.conf - notify: restart squid diff --git 
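The ipxe role above downloads firmware from boot.ipxe.org, remapping Ansible's `aarch64` architecture fact onto the `arm64` directory name that site uses via the inline `{'aarch64': 'arm64'}.get(...)` expression. The same mapping as a small Python sketch (the `ipxe_firmware_url` helper is illustrative):

```python
# boot.ipxe.org publishes EFI builds under e.g. x86_64-efi/ and arm64-efi/,
# while Ansible reports the ARM architecture fact as "aarch64".
IPXE_ARCH_ALIASES = {"aarch64": "arm64"}


def ipxe_firmware_url(ansible_architecture: str) -> str:
    """Return the EFI firmware URL for the given architecture fact."""
    arch = IPXE_ARCH_ALIASES.get(ansible_architecture, ansible_architecture)
    return f"https://boot.ipxe.org/{arch}-efi/ipxe.efi"
```

`dict.get(key, default)` keeps every other architecture untouched, so only the one known naming mismatch is special-cased.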
a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/create-self-signed-cert.sh b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/create-self-signed-cert.sh deleted file mode 100644 index eb46411..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/create-self-signed-cert.sh +++ /dev/null @@ -1,157 +0,0 @@ -#!/bin/bash -e - -help () -{ - echo ' ================================================================ ' - echo ' --ssl-domain: primary domain for the SSL certificate; defaults to www.rancher.local if not specified; can be ignored when the server is accessed by IP;' - echo ' --ssl-trusted-ip: SSL certificates normally only trust requests made by domain name; if the server also needs to be reached by IP, add extension IPs to the certificate, separated by commas;' - echo ' --ssl-trusted-domain: to allow access via additional domains, add extension domains (SSL_TRUSTED_DOMAIN), separated by commas;' - echo ' --ssl-size: SSL key size in bits, default 2048;' - echo ' --ssl-cn: country code (2-letter code), default CN;' - echo ' usage example:' - echo ' ./create-self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ ' - echo ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650' - echo ' ================================================================' -} - -case "$1" in - -h|--help) help; exit;; -esac - -if [[ $1 == '' ]];then - help; - exit; -fi - -CMDOPTS="$*" -for OPTS in $CMDOPTS; -do - key=$(echo ${OPTS} | awk -F"=" '{print $1}' ) - value=$(echo ${OPTS} | awk -F"=" '{print $2}' ) - case "$key" in - --ssl-domain) SSL_DOMAIN=$value ;; - --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;; - --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;; - --ssl-size) SSL_SIZE=$value ;; - --ssl-date) SSL_DATE=$value ;; - --ca-date) CA_DATE=$value ;; - --ssl-cn) CN=$value ;; - esac -done - -# CA-related configuration -CA_DATE=${CA_DATE:-3650} -CA_KEY=${CA_KEY:-cakey.pem} -CA_CERT=${CA_CERT:-cacerts.pem} -CA_DOMAIN=cattle-ca - -# SSL-related configuration -SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf} -SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'} -SSL_DATE=${SSL_DATE:-3650} -SSL_SIZE=${SSL_SIZE:-2048} - -## country code (2-letter code), default CN; -CN=${CN:-CN} - -SSL_KEY=$SSL_DOMAIN.key -SSL_CSR=$SSL_DOMAIN.csr -SSL_CERT=$SSL_DOMAIN.crt - -echo -e "\033[32m ---------------------------- \033[0m" -echo -e "\033[32m | Generate SSL Cert | \033[0m" -echo -e "\033[32m ---------------------------- \033[0m" - -if [[ -e ./${CA_KEY} ]]; then - echo -e "\033[32m ====> 1. Found existing CA private key; backing up ${CA_KEY} as ${CA_KEY}-bak, then recreating \033[0m" - mv ${CA_KEY} "${CA_KEY}"-bak - openssl genrsa -out ${CA_KEY} ${SSL_SIZE} -else - echo -e "\033[32m ====> 1. Generating new CA private key ${CA_KEY} \033[0m" - openssl genrsa -out ${CA_KEY} ${SSL_SIZE} -fi - -if [[ -e ./${CA_CERT} ]]; then - echo -e "\033[32m ====> 2. Found existing CA certificate; backing up ${CA_CERT} as ${CA_CERT}-bak, then recreating \033[0m" - mv ${CA_CERT} "${CA_CERT}"-bak - openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}" -else - echo -e "\033[32m ====> 2. Generating new CA certificate ${CA_CERT} \033[0m" - openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}" -fi - -echo -e "\033[32m ====> 3. Generating OpenSSL config file ${SSL_CONFIG} \033[0m" -cat > ${SSL_CONFIG} <<EOM -[req] -req_extensions = v3_req -distinguished_name = req_distinguished_name -[req_distinguished_name] -[ v3_req ] -basicConstraints = CA:FALSE -keyUsage = nonRepudiation, digitalSignature, keyEncipherment -extendedKeyUsage = serverAuth, clientAuth -EOM - -if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then - cat >> ${SSL_CONFIG} <<EOM -subjectAltName = @alt_names -[alt_names] -EOM - # add the extension domains - dns=(${SSL_TRUSTED_DOMAIN}) - dns+=(${SSL_DOMAIN}) - for i in "${!dns[@]}"; do - echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG} - done - - if [[ -n ${SSL_TRUSTED_IP} ]]; then - ip=(${SSL_TRUSTED_IP}) - for i in "${!ip[@]}"; do - echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG} - done - fi -fi - -echo -e "\033[32m ====> 4. Generating server SSL key ${SSL_KEY} \033[0m" -openssl genrsa -out ${SSL_KEY} ${SSL_SIZE} - -echo -e "\033[32m ====> 5. Generating server SSL CSR ${SSL_CSR} \033[0m" -openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG} - -echo -e "\033[32m ====> 6. Generating server SSL cert ${SSL_CERT} \033[0m" -openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \ - -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \ - -days ${SSL_DATE} -extensions v3_req \ - -extfile ${SSL_CONFIG} - -echo -e "\033[32m ====> 7. Certificate generation complete \033[0m" -echo -echo -e "\033[32m ====> 8. Printing results in YAML format \033[0m" -echo "----------------------------------------------------------" -echo "ca_key: |" -cat $CA_KEY | sed 's/^/  /' -echo -echo "ca_cert: |" -cat $CA_CERT | sed 's/^/  /' -echo -echo "ssl_key: |" -cat $SSL_KEY | sed 's/^/  /' -echo -echo "ssl_csr: |" -cat $SSL_CSR | sed 's/^/  /' -echo -echo "ssl_cert: |" -cat $SSL_CERT | sed 's/^/  /' -echo - -echo -e "\033[32m ====> 9. Appending CA certificate to cert file \033[0m" -cat ${CA_CERT} >> ${SSL_CERT} -echo "ssl_cert: |" -cat $SSL_CERT | sed 's/^/  /' -echo - -echo -e "\033[32m ====> 10. Renaming server certificates \033[0m" -echo "cp ${SSL_DOMAIN}.key tls.key" -cp ${SSL_DOMAIN}.key tls.key -echo "cp ${SSL_DOMAIN}.crt tls.crt" -cp ${SSL_DOMAIN}.crt tls.crt \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/docker-compose.yaml b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/docker-compose.yaml deleted file mode 100644 index 87ef2fb..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -registry: - restart: always - image: registry:2 - ports: - - 5000:5000 - environment: - REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt - REGISTRY_HTTP_TLS_KEY: /certs/domain.key - volumes: - - /home/vagrant/registry:/var/lib/registry - - /home/vagrant/certs:/certs diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/get-rancher-scripts.sh b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/get-rancher-scripts.sh deleted file mode 100644 index 4a5123e..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/get-rancher-scripts.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash -# vim get-rancher-scripts -if [[ $# -eq 0 ]] ; then - echo 'This requires you to pass a version for the url like "v2.6.5"' - exit 1 -fi -wget https://github.com/rancher/rancher/releases/download/$1/rancher-images.txt -wget https://github.com/rancher/rancher/releases/download/$1/rancher-load-images.sh -wget 
https://github.com/rancher/rancher/releases/download/$1/rancher-save-images.sh diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/registries-yaml-edit.yaml b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/registries-yaml-edit.yaml deleted file mode 100644 index e61d166..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/files/registries-yaml-edit.yaml +++ /dev/null @@ -1,10 +0,0 @@ - -mirrors: - docker.io: - endpoint: - - "https://myregistry.local:5000/" -configs: - "myregistry.local:5000": - tls: - insecure_skip_verify: true - diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/tasks/main.yml deleted file mode 100644 index ac27cd9..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/tasks/main.yml +++ /dev/null @@ -1,845 +0,0 @@ ---- -# TODO: Rip this apart into smaller tasks yaml files, as to avoid having one massive main.yml file... :/ - - name: Setup Networking - block: - - name: update sshd config - ansible.builtin.shell: | - echo "ListenAddress {{ settings.rancher_config.node_harvester_network_ip }}" >> /etc/ssh/sshd_config - register: shifted_network_sshd - - - name: restart sshd - ansible.builtin.service: - name: ssh - state: restarted - - # since harvester network cant communicate outbound, we need to get packages and such prior - # to network cutover - - name: remove ip route - ansible.builtin.shell: | - ip route del to default via 192.168.2.254 - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - - - name: Initial package setup, partitioning, host file modification, docker mount building - block: - - name: update /etc/hosts - ansible.builtin.shell: > - echo "" >> /etc/hosts && echo "{{ settings.rancher_config.node_harvester_network_ip }} {{ settings.rancher_config.registry_domain }}" >> /etc/hosts \ - && echo "" >> /etc/hosts && echo "{{ settings.rancher_config.node_harvester_network_ip }} {{ 
settings.rancher_config.rancher_install_domain }}" >> /etc/hosts - register: host_update_result - - # - name: flip off eth1 temporarily - # ansible.builtin.shell: > - # ip link set eth1 down - - - name: check connection to google first...so apt update doesn't fail - ansible.builtin.uri: - url: "https://www.google.com" - retries: 5 - delay: 10 - register: outbound_connection_result - until: outbound_connection_result is success - - # NOTE: running 'update_cache' on the builtin apt module is not reliable; it has had mixed success over a series of runs - # we originally tried to implement it that way, but sometimes it would work - # and other times it simply wouldn't - - name: Run the equivalent of "apt-get update" as a separate step, first - ansible.builtin.apt: - update_cache: yes - update_cache_retries: 10 - - - - name: grab base packages for vagrant rancher single node - retries: 30 - delay: 10 - ansible.builtin.apt: - pkg: - - libnss-mdns - - avahi-daemon - - gnupg2 - - vim - - ca-certificates - - curl - - gnupg - - lsb-release - - wget - - openssl - - net-tools - - htop - - software-properties-common - - parted - - qemu-guest-agent - state: present - register: apt_init_result - until: apt_init_result is success - - - name: copy avahi-daemon.conf over - ansible.builtin.template: - src: "avahi-daemon.conf.j2" - dest: /etc/avahi/avahi-daemon.conf - force: yes - - - name: restart avahi-daemon - ansible.builtin.systemd: - name: avahi-daemon.service - state: reloaded - - - name: Output apt_init_result Debug Msg - ansible.builtin.debug: - msg: "{{ apt_init_result.stdout_lines }}" - verbosity: 2 - ignore_errors: yes - - # TODO: Start grouping things with 'blocks'! - that way there can be logical grouping to this massive script, block/rescue/always - - name: Create a new ext4 primary partition - community.general.parted: - device: /dev/vdb - number: 1 - state: present - label: gpt - part_type: primary - part_start: 0% - part_end: 50% - register: partition_formatted_result - - - name: Output partition_formatted_result Debug Msg - ansible.builtin.debug: - msg: "{{ partition_formatted_result.stdout_lines }}" - verbosity: 2 - ignore_errors: yes - - - name: format drive as ext4 - community.general.filesystem: - fstype: ext4 - dev: /dev/vdb1 - register: format_vdb_result - - - name: create mount directory for bigger volume - ansible.builtin.file: - path: /mnt/docker - state: directory - - - name: create a new ext4 secondary partition for registry - community.general.parted: - device: /dev/vdb - number: 2 - state: present - label: gpt - part_start: 50% - part_end: 100% - register: secondary_partition_result - - - name: format secondary drive as ext4 - community.general.filesystem: - fstype: ext4 - dev: /dev/vdb2 - register: secondary_partition_format_result - - - name: mount primary vdb partition device - ansible.builtin.command: mount /dev/vdb1 /mnt/docker - register: mount_result - - - name: make docker var lib directory - ansible.builtin.file: - path: /var/lib/docker - state: directory - - - name: make registry on vagrant home - ansible.builtin.file: - path: /home/vagrant/registry - state: directory - # TODO: Refactor Mount Commands to use builtin module...if possible with consistency - - name: mount secondary vdb partition device - ansible.builtin.command: mount /dev/vdb2 /home/vagrant/registry - register: mount_result_secondary - - - name: docker storage shift to vdb - ansible.builtin.command: mount --rbind /mnt/docker /var/lib/docker - register: docker_vdb_storage_shift_result - - - - name: Download K3s, Docker, Helm and setup - block: - - name: download k3s air gap image - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - 
timeout: 30 - url: https://github.com/k3s-io/k3s/releases/download/{{ settings.rancher_config.k3s_url_escaped_version }}/k3s-airgap-images-amd64.tar - dest: /home/vagrant/k3s-airgap-images-amd64.tar - register: wget_k3s_airgap_tar_result - until: wget_k3s_airgap_tar_result is success - - - - name: download k3s - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/k3s-io/k3s/releases/download/{{ settings.rancher_config.k3s_url_escaped_version }}/k3s - dest: /home/vagrant/k3s - register: wget_k3s_result - until: wget_k3s_result is success - - - name: download k3s install shell file - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://get.k3s.io - dest: /home/vagrant/install.sh - register: wget_k3s_install_shell_result - until: wget_k3s_install_shell_result is success - - - name: make executable k3s install shell - ansible.builtin.file: - dest: /home/vagrant/install.sh - mode: a+x - register: k3s_install_shell_executable_modify_result - - - name: run curl to snag docker linux ubuntu gpg - ansible.builtin.shell: | - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg - retries: 30 - delay: 10 - register: result_of_curl_snag_gpg_docker - until: result_of_curl_snag_gpg_docker is success - - - name: add docker to apt sources lists - ansible.builtin.shell: | - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null - register: result_of_apt_sources_list_docker_add - - - name: Run the equivalent of "apt-get update" as a separate step, second - ansible.builtin.apt: - update_cache: yes - update_cache_retries: 10 - - - name: install docker - retries: 30 - delay: 10 - ansible.builtin.apt: - pkg: - - docker-ce - - docker-ce-cli - - 
containerd.io - - docker-compose - state: present - register: apt_install_docker_deps - until: apt_install_docker_deps is success - - - name: add vagrant to docker - ansible.builtin.user: - name: vagrant - groups: docker - append: yes - register: result_docker_group_add_user_mod - - - name: add helm apt signing key - ansible.builtin.apt_key: - url: https://baltocdn.com/helm/signing.asc - state: present - register: helm_key_result - retries: 30 - delay: 10 - until: helm_key_result is success - - - name: Run the equivalent of "apt-get update" as a separate step, third - ansible.builtin.apt: - update_cache: yes - update_cache_retries: 10 - - - name: snag apt transport https - retries: 30 - delay: 10 - ansible.builtin.apt: - pkg: - - apt-transport-https - state: present - register: apt_transport_https_pkg_result - until: apt_transport_https_pkg_result is success - - - name: modify helm ubuntu srcs - ansible.builtin.shell: | - echo "deb https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list - register: modify_helm_ubuntu_srcs_result - - - - name: Run the equivalent of "apt-get update" as a separate step, fourth - ansible.builtin.apt: - update_cache: yes - update_cache_retries: 10 - - - name: install helm - retries: 30 - delay: 10 - ansible.builtin.apt: - pkg: - - helm - state: present - register: acquire_helm_pkg_status - until: acquire_helm_pkg_status is success - - - name: Set up certs with openssl - block: - - name: Output acquire_helm_pkg_status Debug Msg - ansible.builtin.debug: - msg: "{{ acquire_helm_pkg_status.stdout_lines }}" - verbosity: 2 - ignore_errors: yes - - - name: create certs dir - ansible.builtin.file: - path: /home/vagrant/certs - state: directory - - - name: create registry dir - ansible.builtin.file: - path: /home/vagrant/registry - state: directory - - - name: build openssl registry certs task - ansible.builtin.command: openssl req -newkey rsa:4096 -nodes -sha256 -keyout 
/home/vagrant/certs/domain.key -addext "subjectAltName = DNS:{{ settings.rancher_config.registry_domain }}" -subj '/CN=www.mydom.com/O=My Company Name LTD./C=US' -x509 -days 365 -out /home/vagrant/certs/domain.crt - register: result_openssl_docker_reg_certs - - - name: create certs docker dir - ansible.builtin.file: - path: /etc/docker/certs.d/{{ settings.rancher_config.registry_domain }}:5000 - state: directory - register: result_docker_certs_dir - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: move certs - ansible.builtin.command: cp -v /home/vagrant/certs/domain.crt /etc/docker/certs.d/{{ settings.rancher_config.registry_domain }}:5000/domain.crt - register: certs_moved_result - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: Output certs_moved_result Debug Msg - ansible.builtin.debug: - msg: "{{ certs_moved_result.stdout_lines }}" - verbosity: 2 - ignore_errors: yes - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: Move Over Docker Registry Content and Setup Rancher Images on Registry - block: - - name: copy docker-compose.yaml over - ansible.builtin.copy: - src: files/docker-compose.yaml - dest: /home/vagrant/ - register: copy_docker_compose_result - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: start docker registry - ansible.builtin.command: docker-compose -f /home/vagrant/docker-compose.yaml up -d - register: docker_start_info - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: copy get-rancher-scripts over - ansible.builtin.copy: - src: files/get-rancher-scripts.sh - dest: /home/vagrant/ - register: copy_rancher_script_status - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: make rancher scripts executable - ansible.builtin.file: - dest: /home/vagrant/get-rancher-scripts.sh - mode: a+x - register: rancher_scripts_executable_adj_result - when: 
settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: run rancher script of getting images - ansible.builtin.shell: | - cd /home/vagrant && ./get-rancher-scripts.sh {{ settings.rancher_config.rancher_version }} && ls -alh /home/vagrant - register: result_images - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: make executable script rancher save images - ansible.builtin.file: - dest: /home/vagrant/rancher-save-images.sh - mode: a+x - register: result_save_image_script_executable - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: make executable script rancher load images - ansible.builtin.file: - dest: /home/vagrant/rancher-load-images.sh - mode: a+x - register: result_load_images_script_executable - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: Fetch Cert-Manager with Helm and Donwload Rancher AirGap Images - block: - - name: add cert manager helm repo for rancher - ansible.builtin.shell: | - helm repo add jetstack https://charts.jetstack.io/ - register: helm_repo_cert_manager_add_result - - - name: update helm repo - ansible.builtin.shell: | - helm repo update - register: helm_repo_update_result - - - name: fetch cert manager via helm - ansible.builtin.shell: | - helm fetch jetstack/cert-manager --version {{ settings.rancher_config.cert_manager_version }} - register: helm_fetch_cert_manager_result - - - name: append rancher-images.txt with helm info for cert-manager - ansible.builtin.shell: | - helm template ./cert-manager-{{ settings.rancher_config.cert_manager_version }}.tgz | awk '$1 ~ /image:/ {print $2}' | sed s/\"//g >> /home/vagrant/rancher-images.txt - register: helm_images_added_to_rancher_images_for_cert_manager_result - - - name: helm repo add rancher-latest - ansible.builtin.shell: | - helm repo add rancher-latest https://releases.rancher.com/server-charts/latest - register: helm_repo_add_rancher_charts - - - name: 
update helm repo post rancher charts adding - ansible.builtin.shell: | - helm repo update - register: helm_repo_update_result_post_rancher_charts - - - name: download rancher 2.6.4 - ansible.builtin.shell: | - helm fetch rancher-latest/rancher --version=v2.6.4 - register: download_rancher_result - when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4") - - - name: download rancher desired version - ansible.builtin.shell: | - helm fetch rancher-latest/rancher --version={{ settings.rancher_config.rancher_version }} - register: download_rancher_result - - - name: sort rancher-images.txt - ansible.builtin.command: sort -u /home/vagrant/rancher-images.txt -o /home/vagrant/rancher-images.txt - register: sort_result_of_rancher_images_txt - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: acquire rancher images - ansible.builtin.command: /home/vagrant/rancher-save-images.sh --image-list /home/vagrant/rancher-images.txt - register: rancher_image_acquired_result - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: Grab Rancher 2.6.4 Images When Desired Version is Not 2.6.4 and Load - block: - - name: acquire rancher-images v2.6.4 txt - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/rancher/rancher/releases/download/v2.6.4/rancher-images.txt - dest: /home/vagrant/rancher-images-v264.txt - register: v264_rancher_images_txt - until: v264_rancher_images_txt is success - - - name: acquire rancher-load-images v2.6.4 shell file - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/rancher/rancher/releases/download/v2.6.4/rancher-load-images.sh - dest: /home/vagrant/rancher-load-images-v264.sh - register: v264_rancher_load_images_sh - until: v264_rancher_load_images_sh is success - - - name: acquire rancher-save-images v2.6.4 shell 
file - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/rancher/rancher/releases/download/v2.6.4/rancher-save-images.sh - dest: /home/vagrant/rancher-save-images-v264.sh - register: v264_rancher_save_images_sh - until: v264_rancher_save_images_sh is success - - - name: make executable script rancher load images v2.6.4 - ansible.builtin.file: - dest: /home/vagrant/rancher-load-images-v264.sh - mode: a+x - - - name: make executable script rancher save images v2.6.4 - ansible.builtin.file: - dest: /home/vagrant/rancher-save-images-v264.sh - mode: a+x - - - name: acquire rancher images for v2.6.4 - ansible.builtin.command: /home/vagrant/rancher-save-images-v264.sh --image-list /home/vagrant/rancher-images-v264.txt - - when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4") - - - name: Acquire K9s - block: - - name: Download K9s with - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/derailed/k9s/releases/download/{{ settings.rancher_config.k9s_version }}/k9s_Linux_x86_64.tar.gz - dest: /home/vagrant/k9s_Linux_x86_64.tar.gz - register: download_k9s_result - until: download_k9s_result is success - - - - name: Move over K3s and build Cert-Manager CRDS - block: - - name: copy over k3s - ansible.builtin.shell: | - cd /home/vagrant && chmod +x k3s && cp -v k3s /usr/local/bin/ && chown $USER /usr/local/bin/k3s - register: copy_k3s_result - - - name: create cert-manager dir - ansible.builtin.file: - path: /home/vagrant/cert-manager - state: directory - - - name: download cert manager crds - retries: 30 - delay: 10 - ansible.builtin.get_url: - force: yes - timeout: 30 - url: https://github.com/jetstack/cert-manager/releases/download/{{ settings.rancher_config.cert_manager_version }}/cert-manager.crds.yaml - dest: /home/vagrant/cert-manager/cert-manager-crd.yaml - register: 
download_cert_manager_crds - until: download_cert_manager_crds is success - - - - name: Network Cutover AirGap the VM and Load Images into Registry - block: - # - name: ip link eth1 back up - # ansible.builtin.shell: | - # ip link set eth1 up - # register: eth1_back_up - - # turn off the eth0, which is default network - - name: disable eth0, default network, switch pure to harvester - ansible.builtin.shell: | - ip link set eth0 down - register: ifconfig_eth0_disabling_result - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: ip route enable back for harvester - ansible.builtin.shell: | - ip route replace default via 192.168.2.254 dev eth1 - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - # - name: delete ip route from other interface - # ansible.builtin.shell: | - # ip route delete $(ip -f inet addr show eth0 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')/24 dev eth0 - - # # TODO: implement a better fix, this is to prevent rke-metadata-config calls from taking place - - name: disable rancher.com access - ansible.builtin.shell: | - echo "ALL : .rancher.com" >> /etc/hosts.deny - register: disable_rancher_access - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: ip tables drop output eth0 - ansible.builtin.shell: | - iptables -A OUTPUT -o eth0 -j DROP - register: ip_tbl_output_drp - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: ip tables drop forward eth0 - ansible.builtin.shell: | - iptables -A FORWARD -o eth0 -j DROP - register: ip_tbl_forward_drp - when: settings.rancher_config.run_single_node_air_gapped_rancher | bool - - - name: test disabling didn't break pipe - ansible.builtin.command: echo "testing..." 
-        register: result_of_eth0_disable_test
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: load in rancher images to private registry
-        ansible.builtin.command: /home/vagrant/rancher-load-images.sh --image-list /home/vagrant/rancher-images.txt --registry {{ settings.rancher_config.registry_domain }}:5000
-        register: load_to_private_registry_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: create rancher k3s agent images directory
-        ansible.builtin.file:
-          path: /var/lib/rancher/k3s/agent/images/
-          state: directory
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: copy over airgap images amd64 tar
-        ansible.builtin.shell: |
-          cd /home/vagrant && cp -v ./k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
-        register: airgap_k3s_image_copy_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-
-  - name: Setup K3s Copy Kubeconfig
-    block:
-      - name: install k3s
-        ansible.builtin.shell: |
-          cd /home/vagrant && INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
-        register: install_k3s_result
-
-      - name: copy registries-yaml-edit.yaml over
-        ansible.builtin.copy:
-          src: files/registries-yaml-edit.yaml
-          dest: /etc/rancher/k3s/
-        register: copy_rancher_registries_edit_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: append registries.yaml with edits
-        ansible.builtin.shell: |
-          cat /etc/rancher/k3s/registries-yaml-edit.yaml >> /etc/rancher/k3s/registries.yaml
-        register: append_registries_yaml_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: restart k3s
-        ansible.builtin.service:
-          name: k3s
-          state: restarted
-        register: k3s_restart_result
-
-      - name: copy over kubeconfig to vagrant home
-        ansible.builtin.shell: |
-          mkdir -p /home/vagrant/.kube/config \
-          && cp -v /etc/rancher/k3s/k3s.yaml /home/vagrant/.kube/config \
-          && chown -R vagrant /home/vagrant/.kube/config \
-          && export KUBECONFIG=/home/vagrant/.kube/config
-        register: kubeconfig_copy_result
-
-
-  - name: Begin Helm Installs of Cert Manager and Build Self Signed Certificate
-    block:
-      - name: generate cert-manager yaml files airgapped
-        ansible.builtin.shell: |
-          cd /home/vagrant && helm template cert-manager ./cert-manager-{{ settings.rancher_config.cert_manager_version }}.tgz --output-dir . \
-          --namespace cert-manager \
-          --set image.repository={{ settings.rancher_config.registry_domain }}:5000/quay.io/jetstack/cert-manager-controller \
-          --set webhook.image.repository={{ settings.rancher_config.registry_domain }}:5000/quay.io/jetstack/cert-manager-webhook \
-          --set cainjector.image.repository={{ settings.rancher_config.registry_domain }}:5000/quay.io/jetstack/cert-manager-cainjector \
-          --set startupapicheck.image.repository={{ settings.rancher_config.registry_domain }}:5000/quay.io/jetstack/cert-manager-ctl
-        register: cert_manager_yaml_file_generation_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: generate cert-manager yaml files non-airgapped
-        ansible.builtin.shell: |
-          cd /home/vagrant && helm template cert-manager ./cert-manager-{{ settings.rancher_config.cert_manager_version }}.tgz --output-dir . \
-          --namespace cert-manager
-        register: cert_manager_yaml_file_generation_result
-        when: not settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: create cert-manager namespace
-        ansible.builtin.shell: |
-          kubectl create namespace cert-manager
-        register: cert_manager_namespace_create_result
-
-      - name: apply cert manger crd
-        ansible.builtin.shell: |
-          cd /home/vagrant && kubectl apply -f cert-manager/cert-manager-crd.yaml
-        register: cert_manager_crd_k8s_apply_result
-
-      - name: apply additional cert manager
-        ansible.builtin.shell: |
-          cd /home/vagrant && kubectl apply -R -f ./cert-manager
-
-      - name: copy rancher create-self-signed-cert.sh over
-        ansible.builtin.copy:
-          src: files/create-self-signed-cert.sh
-          dest: /home/vagrant/
-        register: copy_rancher_purposed_create_self_signed_cert_result
-
-      - name: make rancher create-self-signed-cert.sh executable
-        ansible.builtin.file:
-          dest: /home/vagrant/create-self-signed-cert.sh
-          mode: a+x
-        register: rancher_create_self_signed_script_made_executable_result
-
-      - name: create self signed cert
-        ansible.builtin.shell: |
-          cd /home/vagrant && ./create-self-signed-cert.sh --ssl-domain={{ settings.rancher_config.rancher_install_domain }} --ssl-trusted-ip={{ settings.rancher_config.node_harvester_network_ip }}
-        register: self_signed_cert_creation_result
-
-
-  - name: Install Rancher Via Helm
-    block:
-      - name: create cattle-system namespace
-        ansible.builtin.shell: |
-          kubectl create ns cattle-system
-        register: create_cattle_system_namespace_result
-
-      - name: create tls-sa secret
-        ansible.builtin.shell: |
-          cd /home/vagrant && kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=./cacerts.pem
-        register: tls_sa_secret_from_pem_result
-
-      - name: create tls rancher ingress secret
-        ansible.builtin.shell: |
-          cd /home/vagrant && kubectl -n cattle-system create secret tls tls-rancher-ingress \
-          --cert=tls.crt \
-          --key=tls.key
-        register: rancher_ingress_secret_result
-
-      - name: Make /etc/rancher/k3s/k3s.yaml Open To all
-        ansible.builtin.shell: |
-          chmod 755 /etc/rancher/k3s/k3s.yaml
-
-      # UPDATE: 12/07/22, this is still present, re-opening 37779 https://github.com/rancher/rancher/issues/37779#issuecomment-1341919319
-      # TODO: find out the cause of why the: Error: chart requires kubeVersion: < 1.24.0-0 which is incompatible with Kubernetes v1.24.0
-      # ends up being displayed, it's however fixed with `--validate` added to the end of the template command
-      # ATTN: bootstrapPassword is broken in 2.6.5 Rancher, x-ref: https://github.com/rancher/rancher/issues/37779
-      # --set bootstrapPassword={{ rancher_config.bootstrap_password }} \
-      # Seems like the "--no-hooks" might thave been the culprit
-      # UPDATE: even though removing '--no-hooks' in template command with v2.6.5, there are still problems, surfaced more info in #37779
-      # There are some open issues in Rancher that concern this:
-      # - https://github.com/rancher/rancher/issues/37993
-      # - https://github.com/rancher/rancher/pull/37772
-      # - https://github.com/rancher/qa-tasks/issues/392
-      # - https://github.com/rancher/rancher/issues/37779
-      # We will be at versions greater than 2.6.4, installing 2.6.4 then upgrading
-      - name: install Rancher v2.6.4
-        ansible.builtin.command:
-          helm --kubeconfig /etc/rancher/k3s/k3s.yaml template rancher /home/vagrant/rancher-{{ settings.rancher_config.rancher_version_no_prefix }}.tgz --output-dir /home/vagrant --no-hooks --namespace cattle-system \
-          --set hostname={{ settings.rancher_config.rancher_install_domain }} \
-          --set rancherImageTag={{ settings.rancher_config.rancher_version }} \
-          --set rancherImage={{ settings.rancher_config.registry_domain }}:5000/rancher/rancher \
-          --set systemDefaultRegistry={{ settings.rancher_config.registry_domain }}:5000 \
-          --set bootstrapPassword={{ settings.rancher_config.bootstrap_password }} \
-          --set useBundledSystemChart=true \
-          --set replicas={{ settings.rancher_config.rancher_replicas }} \
-          --set ingress.tls.source=secret \
-          --set privateCA=true --validate
-        register: helm_rancher_generate_template_command_result
-        when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version == "v2.6.4")
-
-      - name: install Rancher v2.6.4 before upgrading to desired version
-        ansible.builtin.command:
-          helm --kubeconfig /etc/rancher/k3s/k3s.yaml template rancher /home/vagrant/rancher-2.6.4.tgz --output-dir /home/vagrant --no-hooks --namespace cattle-system \
-          --set hostname={{ settings.rancher_config.rancher_install_domain }} \
-          --set rancherImageTag={{ settings.rancher_config.rancher_version }} \
-          --set rancherImage={{ settings.rancher_config.registry_domain }}:5000/rancher/rancher \
-          --set systemDefaultRegistry={{ settings.rancher_config.registry_domain }}:5000 \
-          --set bootstrapPassword={{ settings.rancher_config.bootstrap_password }} \
-          --set useBundledSystemChart=true \
-          --set replicas=0 \
-          --set ingress.tls.source=secret \
-          --set privateCA=true --validate
-        register: helm_rancher_generate_template_command_result
-        when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4")
-
-      - name: install rancher airgapped
-        ansible.builtin.command: kubectl -n cattle-system apply -R -f /home/vagrant/rancher
-        register: install_rancher_result
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: install rancher non-airgapped
-        ansible.builtin.command:
-          helm --kubeconfig /etc/rancher/k3s/k3s.yaml install rancher rancher-latest/rancher --devel \
-          --version {{ settings.rancher_config.rancher_version }} \
-          --namespace cattle-system \
-          --set hostname={{ settings.rancher_config.rancher_install_domain }} \
-          --set rancherImageTag={{ settings.rancher_config.rancher_version }} \
-          --set bootstrapPassword={{ settings.rancher_config.bootstrap_password }} \
-          --set replicas={{ settings.rancher_config.rancher_replicas }} \
-          --set ingress.tls.source=secret \
-          --set privateCA=true
-        register: install_rancher_result
-        retries: 5
-        delay: 10
-        until: install_rancher_result is success
-        when: not (settings.rancher_config.run_single_node_air_gapped_rancher | bool)
-
-      - name: Output install_rancher_result Debug Msg
-        ansible.builtin.debug:
-          msg: "{{ install_rancher_result.stdout_lines }}"
-        ignore_errors: yes
-
-      - name: Wait For Rollout
-        ansible.builtin.shell: |
-          kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml rollout status deployment/rancher -n cattle-system
-
-      - name: template desired rancher over existing rancher
-        ansible.builtin.command:
-          helm --kubeconfig /etc/rancher/k3s/k3s.yaml template rancher /home/vagrant/rancher-{{ settings.rancher_config.rancher_version_no_prefix }}.tgz --output-dir /home/vagrant --no-hooks --namespace cattle-system \
-          --set hostname={{ settings.rancher_config.rancher_install_domain }} \
-          --set rancherImageTag={{ settings.rancher_config.rancher_version }} \
-          --set rancherImage={{ settings.rancher_config.registry_domain }}:5000/rancher/rancher \
-          --set systemDefaultRegistry={{ settings.rancher_config.registry_domain }}:5000 \
-          --set bootstrapPassword={{ settings.rancher_config.bootstrap_password }} \
-          --set useBundledSystemChart=true \
-          --set replicas={{ settings.rancher_config.rancher_replicas }} \
-          --set ingress.tls.source=secret \
-          --set privateCA=true --validate
-        register: helm_rancher_generate_template_command_result
-        when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4")
-
-      - name: install rancher airgapped desired version over existing 2.6.4
-        ansible.builtin.command: kubectl -n cattle-system apply -R -f /home/vagrant/rancher
-        when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4")
-
-
-      - name: Wait For Second Rollout To Adjust from 2.6.4 to desired version
-        ansible.builtin.shell: |
-          kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml rollout status deployment/rancher -n cattle-system
-        when: (settings.rancher_config.run_single_node_air_gapped_rancher | bool) and (settings.rancher_config.rancher_version != "v2.6.4")
-
-
-  - name: Wait for Rancher To Become Available
-    block:
-      - name: Capture the Rancher node's password
-        ansible.builtin.shell: kubectl get secret --namespace cattle-system bootstrap-secret -o go-template={% raw %}'{{.data.bootstrapPassword|base64decode}}{{"\n"}}'{% endraw %}
-        retries: 30
-        delay: 10
-        register: rancher_node_default_password
-        until: rancher_node_default_password is success
-
-      - name: Wait For Single Rancher Node To Become Available Again Post Rollout Restart
-        ansible.builtin.uri:
-          url: "https://{{ settings.rancher_config.rancher_install_domain }}/dashboard/auth/login"
-          validate_certs: no
-          status_code: 200
-          timeout: 120
-          force: yes
-        register: rancher_url_helm_installed_replicas_result_again
-        until: rancher_url_helm_installed_replicas_result_again.status == 200
-        retries: 30
-        delay: 30
-
-      - name: Set Rancher Node Default Password As Fact
-        ansible.builtin.set_fact: default_rancher_password="{{ rancher_node_default_password.stdout_lines }}"
-
-  - name: Make Coredns Deployment and Configmap Edits
-    block:
-      - name: copy coredns configmap over
-        template:
-          src: "configmap-coredns.yaml.j2"
-          dest: /tmp/configmap-coredns.yaml
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: patch configmap coredns
-        shell: |
-          /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml patch configmap/coredns -n kube-system \
-          --patch-file /tmp/configmap-coredns.yaml
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: copy coredns deployment over
-        template:
-          src: "deployment-coredns.yaml.j2"
-          dest: /tmp/coredns-deployment.yaml
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: patch deployment coredns
-        shell: |
-          /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml patch deployment/coredns -n kube-system \
-          --patch-file /tmp/coredns-deployment.yaml
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
-
-      - name: restart coredns
-        shell: |
-          /usr/local/bin/kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml rollout restart deployment/coredns -n kube-system
-        when: settings.rancher_config.run_single_node_air_gapped_rancher | bool
\ No newline at end of file
diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/avahi-daemon.conf.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/avahi-daemon.conf.j2
deleted file mode 100644
index 02ce4a5..0000000
--- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/avahi-daemon.conf.j2
+++ /dev/null
@@ -1,68 +0,0 @@
-# This file is part of avahi.
-#
-# avahi is free software; you can redistribute it and/or modify it
-# under the terms of the GNU Lesser General Public License as
-# published by the Free Software Foundation; either version 2 of the
-# License, or (at your option) any later version.
-#
-# avahi is distributed in the hope that it will be useful, but WITHOUT
-# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
-# License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with avahi; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
-# USA.
-
-# See avahi-daemon.conf(5) for more information on this configuration
-# file!
- -[server] -#host-name=foo -#domain-name=local -#browse-domains=0pointer.de, zeroconf.org -use-ipv4=yes -use-ipv6=yes -allow-interfaces=eth0,eth1 -#deny-interfaces=eth0 -#check-response-ttl=no -#use-iff-running=no -#enable-dbus=yes -#disallow-other-stacks=no -#allow-point-to-point=no -#cache-entries-max=4096 -#clients-max=4096 -#objects-per-client-max=1024 -#entries-per-entry-group-max=32 -ratelimit-interval-usec=1000000 -ratelimit-burst=1000 - -[wide-area] -enable-wide-area=yes - -[publish] -#disable-publishing=no -#disable-user-service-publishing=no -#add-service-cookie=no -#publish-addresses=yes -publish-hinfo=no -publish-workstation=no -#publish-domain=yes -#publish-dns-servers=192.168.50.1, 192.168.50.2 -#publish-resolv-conf-dns-servers=yes -#publish-aaaa-on-ipv4=yes -#publish-a-on-ipv6=no - -[reflector] -#enable-reflector=no -#reflect-ipv=no - -[rlimits] -#rlimit-as= -#rlimit-core=0 -#rlimit-data=8388608 -#rlimit-fsize=0 -#rlimit-nofile=768 -#rlimit-stack=8388608 -#rlimit-nproc=3 diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/configmap-coredns.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/configmap-coredns.yaml.j2 deleted file mode 100644 index e988d45..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/configmap-coredns.yaml.j2 +++ /dev/null @@ -1,9 +0,0 @@ -data: - Corefile: ".:53 {\n errors \n health {\n lameduck 5s\n }\n ready - \n kubernetes cluster.local cluster.local in-addr.arpa ip6.arpa {\n pods - insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus - \ 0.0.0.0:9153\n hosts /etc/coredns/customdomains.db {{ settings.rancher_config.rancher_install_domain }} {\n - \ fallthrough\n }\n forward . 
/etc/resolv.conf\n cache 30\n loop - \n reload \n loadbalance \n}" - customdomains.db: | - {{ settings.rancher_config.node_harvester_network_ip }} {{ settings.rancher_config.rancher_install_domain }} \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/deployment-coredns.yaml.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/deployment-coredns.yaml.j2 deleted file mode 100644 index b113c38..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/rancher/templates/deployment-coredns.yaml.j2 +++ /dev/null @@ -1,13 +0,0 @@ -spec: - template: - spec: - volumes: - - configMap: - defaultMode: 420 - items: - - key: Corefile - path: Corefile - - key: customdomains.db - path: customdomains.db - name: coredns - name: config-volume diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/handlers/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/tftp/handlers/main.yml deleted file mode 100644 index 117165e..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/handlers/main.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- -- name: restart tftp - systemd: - name: tftpd-hpa - state: restarted - daemon_reload: yes - enabled: yes diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/tasks/main.yml b/vagrant-pxe-airgap-harvester/ansible/roles/tftp/tasks/main.yml deleted file mode 100644 index 3b23e57..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/tasks/main.yml +++ /dev/null @@ -1,16 +0,0 @@ ---- -- name: install tftpd-hpa - apt: - name: tftpd-hpa - state: present - -- name: create tftp root - file: - path: /tftpboot - state: directory - -- name: configure tftp - template: - src: tftpd-hpa.j2 - dest: /etc/default/tftpd-hpa - notify: restart tftp diff --git a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/templates/tftpd-hpa.j2 b/vagrant-pxe-airgap-harvester/ansible/roles/tftp/templates/tftpd-hpa.j2 deleted file mode 100644 index 340a7bb..0000000 --- 
a/vagrant-pxe-airgap-harvester/ansible/roles/tftp/templates/tftpd-hpa.j2 +++ /dev/null @@ -1,4 +0,0 @@ -TFTP_USERNAME="tftp" -TFTP_DIRECTORY="/tftpboot" -TFTP_ADDRESS="{{ settings['harvester_network_config']['dhcp_server']['ip'] }}:69" -TFTP_OPTIONS="--secure" diff --git a/vagrant-pxe-airgap-harvester/ansible/setup_harvester.yml b/vagrant-pxe-airgap-harvester/ansible/setup_harvester.yml deleted file mode 100644 index dcc2c51..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/setup_harvester.yml +++ /dev/null @@ -1,143 +0,0 @@ ---- -- name: Setup Harvester - hosts: localhost - connection: local - gather_facts: false - - tasks: - - name: create "Installing PXE Server" message - shell: > - figlet "Installing PXE Server" 2>/dev/null || echo "Installing PXE Server" - register: figlet_result - - - name: print Installing PXE Server - debug: - msg: "{{ figlet_result.stdout }}" - - - name: install PXE server - shell: > - VAGRANT_LOG=info vagrant up pxe_server - register: pxe_server_installation_result - - - name: display PXE server installation result - debug: - msg: "{{ pxe_server_installation_result.stdout }}" - - - name: get the IP address of pxe_server - shell: | - vagrant ssh-config pxe_server 2>/dev/null | grep HostName | awk '{ print $2 }' - register: get_pxe_server_ip_result - until: get_pxe_server_ip_result != "" - retries: 10 - delay: 60 - - - name: set pxe_server_ip fact - set_fact: - pxe_server_ip: "{{ get_pxe_server_ip_result.stdout }}" - - - name: wait for PXE server HTTP port to get ready - uri: - url: "http://{{ pxe_server_ip }}/harvester/config-create.yaml" - status_code: 200 - register: pxe_server_http_result - until: pxe_server_http_result.status == 200 - retries: 10 - delay: 30 - - - name: spin up single node rancher - shell: | - VAGRANT_LOG=info vagrant up rancher_box - register: rancher_box_vagrant_up_result - when: rancher_config.run_single_node_rancher | bool - - - name: display single node air-gapped rancher install result - debug: 
var=rancher_box_vagrant_up_result.stdout_lines - ignore_errors: yes - - - name: boot Harvester nodes - include_tasks: boot_harvester_node.yml - vars: - node_number: "{{ item }}" - with_sequence: 0-{{ harvester_cluster_nodes|int - 1 }} - - - name: Get the public VIP of the harvester cluster - set_fact: - harvester_public_endpoint: "{{ harvester_network_config.vip.ip }}" - - - name: get original admin token - ansible.builtin.uri: - url: https://{{ harvester_public_endpoint }}/v3-public/localProviders/local?action=login - method: POST - body: '{"username": "{{harvester_dashboard.admin_user}}", "password": "admin","responseType": "json", "description": "grab-initial-token to programmatically set Harvester Rancher UI Admin Password"}' - body_format: json - follow_redirects: all - force: yes - status_code: 201 - use_proxy: no - headers: - Content-Type: "application/json" - validate_certs: no - register: token - until: token.status == 201 - retries: 30 - delay: 30 - - - debug: - msg: | - original token that was grabbed: {{ token.json.token }} - ignore_errors: yes - - - name: set admin token - ansible.builtin.uri: - url: https://{{ harvester_public_endpoint }}/v3/users?action=changepassword - method: POST - body: '{"currentPassword": "admin","newPassword": "{{harvester_dashboard.admin_password}}"}' - body_format: json - follow_redirects: all - force: yes - status_code: 200 - use_proxy: no - validate_certs: no - headers: - Content-Type: "application/json" - Accept: "application/json" - Authorization: "Bearer {{ token.json.token }}" - register: admin_token_set - until: admin_token_set.status == 200 - retries: 20 - delay: 30 - - - name: get new admin token - ansible.builtin.uri: - url: https://{{ harvester_public_endpoint }}/v3-public/localProviders/local?action=login - method: POST - body: '{"username": "{{harvester_dashboard.admin_user}}", "password": "{{harvester_dashboard.admin_password}}","responseType": "json", "description": "get a token using newly set Harvester Rancher
UI Credentials"}' - body_format: json - follow_redirects: all - force: yes - status_code: 201 - use_proxy: no - headers: - Content-Type: "application/json" - validate_certs: no - register: new_token_created - until: new_token_created.status == 201 - retries: 20 - delay: 30 - - - debug: - msg: | - new admin token: {{ new_token_created.json.token }} - ignore_errors: yes - - - name: create "Installation Completed" message - shell: > - figlet "Installation Completed" 2>/dev/null || echo "Installation Completed" - register: figlet_result - when: not rancher_config.run_single_node_rancher | bool - - - name: print "Installation Completed" - debug: - msg: "{{ figlet_result.stdout }}" - when: not rancher_config.run_single_node_rancher | bool - diff --git a/vagrant-pxe-airgap-harvester/ansible/setup_pxe_server.yml b/vagrant-pxe-airgap-harvester/ansible/setup_pxe_server.yml deleted file mode 100644 index 1f513f2..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/setup_pxe_server.yml +++ /dev/null @@ -1,16 +0,0 @@ -- hosts: all - become: yes - tasks: - - name: install kitty terminfo - apt: - name: kitty-terminfo - roles: - - role: dhcp - - role: tftp - - role: https - when: settings['harvester_network_config']['dhcp_server']['https'] - - role: http - - role: ipxe - - role: harvester - - role: proxy - when: settings['harvester_network_config']['offline'] diff --git a/vagrant-pxe-airgap-harvester/ansible/setup_rancher_node.yml b/vagrant-pxe-airgap-harvester/ansible/setup_rancher_node.yml deleted file mode 100644 index 321342b..0000000 --- a/vagrant-pxe-airgap-harvester/ansible/setup_rancher_node.yml +++ /dev/null @@ -1,8 +0,0 @@ -- hosts: all - become: yes - tasks: - - name: just debug - ansible.builtin.debug: - msg: "starting rancher provisioning" - roles: - - role: rancher diff --git a/vagrant-pxe-airgap-harvester/inventories/vagrant b/vagrant-pxe-airgap-harvester/inventories/vagrant deleted file mode 100644 index 0053b64..0000000 --- 
a/vagrant-pxe-airgap-harvester/inventories/vagrant +++ /dev/null @@ -1,10 +0,0 @@ -[rancher_box] -arancherbox ansible_host=192.168.2.34 ansible_private_key=.vagrant/machines/rancher_box/libvirt/private_key - -[vagrant] -arancherbox - -[all:vars] -host_key_checking=False -ansible_connection=ssh -ansible_user=vagrant \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/inventory b/vagrant-pxe-airgap-harvester/inventory deleted file mode 100644 index 422cc70..0000000 --- a/vagrant-pxe-airgap-harvester/inventory +++ /dev/null @@ -1,5 +0,0 @@ -[harvesternodes] -192.168.2.30 ansible_host=192.168.2.30 ansible_user=rancher ansible_connection=smart ansible_password=p@ssword -192.168.2.31 ansible_host=192.168.2.31 ansible_user=rancher ansible_connection=smart ansible_password=p@ssword -192.168.2.32 ansible_host=192.168.2.32 ansible_user=rancher ansible_connection=smart ansible_password=p@ssword -192.168.2.33 ansible_host=192.168.2.33 ansible_user=rancher ansible_connection=smart ansible_password=p@ssword \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/makefile-helper.sh b/vagrant-pxe-airgap-harvester/makefile-helper.sh deleted file mode 100755 index 31f6c79..0000000 --- a/vagrant-pxe-airgap-harvester/makefile-helper.sh +++ /dev/null @@ -1,104 +0,0 @@ -#!/usr/bin/env bash - -main () { - echo -e "\n...running checks...\n"; - - if ! command -v virsh --version &> /dev/null - then - echo "couldn't find virsh - please install" - else - default_network_search_result=$(virsh net-list --persistent --autostart --name | grep -e "default") - if [ "$default_network_search_result" = "default" ]; then - echo "a network named 'default' is present, with autostart and persistence, looks great, continuing checks..." - else - echo "we must have a network named 'default' that exists with persistence and autostart" - echo -e "something like: \n" - cat << EOF - <network> -   <name>default</name> -   <uuid>9521fb90-03f5-4ebb-bb62-afb50bc77f2d</uuid> -   <forward mode='nat'/> -   <bridge name='virbr0' stp='on' delay='0'/> -   <ip address='192.168.122.1' netmask='255.255.255.0'> -     <dhcp> -       <range start='192.168.122.2' end='192.168.122.254'/> -     </dhcp> -   </ip> - </network> -EOF - fi - fi - - if !
command -v sshpass &> /dev/null - then - ## todo: make it dynamically install based on distros - echo "sshpass couldn't be found - please install at version 1.0.9 or higher"; - else - echo "sshpass installed, running version check..."; - sshpass_current_version=$(sshpass -V | cut -d ' ' -f 2 | head -1); - requiredver="1.0.9" - if [ "$(printf '%s\n' "$requiredver" "$sshpass_current_version" | sort -V | head -n1)" = "$requiredver" ]; then - echo "Greater than or equal to ${requiredver} - we're going to continue, since this should be fine... for sshpass" - else - echo "Less than ${requiredver} for sshpass" - fi - fi - - # ansible galaxy can't be installed without ansible - if ! command -v ansible-galaxy &> /dev/null - then - ## todo: make it dynamically install based on distros - echo "ansible-galaxy could not be found - please install at version 2.13.0 or higher, you'll need ansible as a dependency"; - else - ## check that the version of ansible-galaxy is at the same or a higher version - echo "ansible-galaxy is installed, running version check..." - check_ansible_galaxy_version=$(ansible-galaxy --version | tr -s ' ' | cut -d ' ' -f 3 | awk '{ printf $0 }' | cut -d ']' -f 1); - requiredver="2.13.0" - if [ "$(printf '%s\n' "$requiredver" "$check_ansible_galaxy_version" | sort -V | head -n1)" = "$requiredver" ]; then - echo "Greater than or equal to ${requiredver} - we're going to continue, since this should be fine... for ansible/ansible-galaxy" - else - echo "Less than ${requiredver} for ansible / ansible-galaxy" - fi - ## check ansible-galaxy has the community.general collection installed at the same or a higher version - result=$(ansible-galaxy collection list | awk '{ printf $0 }' | tr -d '[=-=]' | grep -Eoh "community.general [0-9].[0-9].[0-9]"); - if test -z "$result" - then - echo -e "\nwe need to install community.general...
please install: \n ansible-galaxy collection install community.general; \n at a version 5.4.0 or higher \n"; - else - echo -e "\ncommunity.general is installed, checking version...\n"; - currentver=$(echo $result | tr -s ' ' | cut -d ' ' -f 2); - requiredver="5.4.0" - if [ "$(printf '%s\n' "$requiredver" "$currentver" | sort -V | head -n1)" = "$requiredver" ]; then - echo "Greater than or equal to ${requiredver} - we're going to continue, since this should be fine... for community.general" - else - echo "Less than ${requiredver} for community.general" - fi - fi - fi - - if ! command -v vagrant &> /dev/null - then - # todo: make it dynamically install vagrant on distro - echo "please install vagrant at version: 2.2.19 or higher" - else - current_version_vagrant_libvirt=$(vagrant plugin list | grep -Eoh "vagrant-libvirt \([0-9].[0-9].[0-9], system\)" | tr -d '[=(=]' | tr -d '[=,=]' | cut -d ' ' -f 2); - requiredver="0.7.0" - if [ "$(printf '%s\n' "$requiredver" "$current_version_vagrant_libvirt" | sort -V | head -n1)" = "$requiredver" ]; then - echo "Greater than or equal to ${requiredver} - we're going to continue, since this should be fine... for vagrant's vagrant-libvirt plugin" - else - echo "Less than ${requiredver} for vagrant's vagrant-libvirt plugin" - fi - fi - - echo -e "\n...finished checks...\n"; - exit 0 -} - -echo -e "\n ...starting... \n" - -main \ No newline at end of file diff --git a/vagrant-pxe-airgap-harvester/reinstall_harvester_node.sh b/vagrant-pxe-airgap-harvester/reinstall_harvester_node.sh deleted file mode 100755 index de353d6..0000000 --- a/vagrant-pxe-airgap-harvester/reinstall_harvester_node.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash - -MYNAME=$0 -ROOTDIR=$(dirname $(readlink -e $MYNAME)) - -USAGE="${0}: <node_number> - -Where: - - <node_number>: node to re-install. Node number starts with zero (0). For - example, if you want to re-install the 3rd node, the node - number given should be 2.
-" - -if [ $# -ne 1 ] ; then - echo "$USAGE" - exit 1 -fi - -NODE_NUMBER=$1 -NODE_NAME="harvester-node-${NODE_NUMBER}" - -# check to make sure the node has not been created -NOT_CREATED=$(vagrant status ${NODE_NAME} | grep "^${NODE_NAME}" | grep "not created" || true) - -if [ "${NOT_CREATED}" == "" ] ; then - echo "Harvester node ${NODE_NAME} already created." - exit 1 -fi - -pushd $ROOTDIR -ansible-playbook ansible/reinstall_harvester_node.yml --extra-vars "@settings.yml" --extra-vars "node_number=${NODE_NUMBER}" -popd diff --git a/vagrant-pxe-airgap-harvester/settings.yml b/vagrant-pxe-airgap-harvester/settings.yml deleted file mode 100644 index d81100a..0000000 --- a/vagrant-pxe-airgap-harvester/settings.yml +++ /dev/null @@ -1,185 +0,0 @@ ---- -########################################################################## -# NOTE: this is a YAML file so please pay close attention to the leading # -# spaces as they are significant. # -########################################################################## - -# -# harvester_iso_url -# harvester_kernel_url -# harvester_initrd_url -# -# Harvester media to install. The URL scheme can be either 'http', 'https', or -# 'file'. If the URL scheme is 'file', the given media will be copied from the -# local file system instead of downloading from a remote location. -harvester_iso_url: https://releases.rancher.com/harvester/master/harvester-master-amd64.iso -harvester_kernel_url: https://releases.rancher.com/harvester/master/harvester-master-vmlinuz-amd64 -harvester_ramdisk_url: https://releases.rancher.com/harvester/master/harvester-master-initrd-amd64 -harvester_rootfs_url: https://releases.rancher.com/harvester/master/harvester-master-rootfs-amd64.squashfs - -# -# harvester_cluster_nodes -# -# NOTE: keep in mind that you need at least 3 nodes to make a cluster -# -harvester_cluster_nodes: 4 - -# -# network_config -# -# Harvester network configurations. 
Make sure the cluster IPs are on the same -# subnet as the DHCP server. Pre-assign the IPs and MACs for the Harvester -# nodes. -# -# NOTE: Random MAC addresses are generated with the following command: -# printf '02:00:00:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) -# Thanks to https://stackoverflow.com/questions/8484877/mac-address-generator-in-python -# If any of the generated MAC addresses is in conflict with an existing one in -# your environment, please use the above command to regenerate and replace -# the conflicting one. - -harvester_network_config: - # Run as an airgapped environment that only has internet connectivity through an HTTP proxy. - # The HTTP proxy runs on DHCP server using port 3128 - offline: true - - dhcp_server: - ip: 192.168.2.254 - subnet: 192.168.2.0 - netmask: 255.255.255.0 - range: 192.168.2.50 192.168.2.130 - https: false - # Reserve these IPs for the Harvester cluster. Make sure these are outside - # the range of DHCP so they don't get served out by the DHCP server - # The Harvester cluster IPs are also represented in the 'inventory' file, so editing these - # you would also want to make updates / edits to the inventory file - vip: - ip: 192.168.2.131 - mode: DHCP - mac: 02:00:00:03:3D:61 - sftp: true - cluster: - - ip: 192.168.2.30 - mac: 02:00:00:0D:62:E2 - cpu: 8 - memory: 16354 - disk_size: 500G - vagrant_interface: ens5 - mgmt_interface: ens6 - - ip: 192.168.2.31 - mac: 02:00:00:35:86:92 - cpu: 8 - memory: 16354 - disk_size: 500G - vagrant_interface: ens5 - mgmt_interface: ens6 - - ip: 192.168.2.32 - mac: 02:00:00:2F:F2:2A - cpu: 8 - memory: 16354 - disk_size: 500G - vagrant_interface: ens5 - mgmt_interface: ens6 - - ip: 192.168.2.33 - mac: 02:00:00:A7:E6:FF - cpu: 8 - memory: 16354 - disk_size: 500G - vagrant_interface: ens5 - mgmt_interface: ens6 - -# -# harvester_config -# -# Harvester system configurations. 
- -harvester_config: - # static token for cluster authentication - token: token - - # Public keys to add to authorized_keys of each node. - ssh_authorized_keys: - # Vagrant default unsecured SSH public key & additionally add somewhere to have public keys available - - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key - # Additionally add somewhere to have public keys available - #- github:your_github_username - - # password for the `rancher` user to log in to the Harvester nodes - password: p@ssword - - # NTP servers - ntp_servers: - - 0.suse.pool.ntp.org - - 1.suse.pool.ntp.org - -# -# harvester_dashboard -# -# This sets the admin password for the Harvester Dashboard/web-ui upon provisioning -# -harvester_dashboard: - admin_user: admin - # NOTE: admin_password must be greater than or equal to 12 characters in length - admin_password: testtesttest - - -# -# rancher_config -# -# Rancher configurations -# (see Troubleshooting & Known Issues in README) -rancher_config: - # disk size of the single-instance Rancher node; this is split into two partitions at 50/50 - node_disk_size: 350G - run_single_node_rancher: true - # to run air-gapped Rancher, run_single_node_rancher must be enabled; if set to false, - # it will create a non-air-gapped single-node Rancher instance - # harvester offline must be true to fully air-gap both - run_single_node_air_gapped_rancher: true - # cert-manager version, for the jetstack.io repo - cert_manager_version: v1.7.1 - # URL-escaped k3s version for grabbing k3s - k3s_url_escaped_version: v1.23.6%2Bk3s1 - # K9s version - k9s_version: v0.26.3 - # IMPORTANT: "TRUE" RANCHER
AIRGAPPED ONLY WORKS WITH 2.6.4 -> 2.6.4-rc-* - # IMPORTANT: NOTE - it will work with 2.6.5 & 2.6.6, but it "installs" by first installing - # 2.6.4, then upgrading to either 2.6.5 or 2.6.6 - # the rancher version with its prefix - rancher_version: v2.6.11 - # the rancher version without its prefix - rancher_version_no_prefix: 2.6.11 - # k9s_version: v0.25.18 - # mac address of the harvester network card, needed for dhcp on the harvester network to work - mac_address_harvester_network: 02:29:F9:43:92:95 - # if this ip changes, update it in the inventories/vagrant file for the rancher_box - # we interact with the libvirt vm on this IP via ansible, then shut off eth0 which provides a temp network out - # you'll also need to change the ip listed in the DHCP configuration, since 192.168.2.34 is directly tied to it - node_harvester_network_ip: 192.168.2.34 - cpu: 8 - memory: 20000 - # rancher_install_domain: the domain used for the Rancher install, applied via helm templating and kubectl apply -R -f - # NOTE: as long as it ends in *.local it should work; hostname resolution on the rancher instance comes from avahi-daemon - rancher_install_domain: rancher-vagrant-vm.local - # registry_domain is the docker registry domain that gets set up; it can be anything as long as it ends in ".local" - registry_domain: myregistry.local - # bootstrap password - bootstrap_password: rancher - # replicas desired - rancher_replicas: 3 - - -# -# harvester_node_config -# -# Harvester node-specific configurations.
- -harvester_node_config: - # number of CPUs assigned to each node - cpu: 8 - - # memory size for each node, in MBytes - memory: 16354 - - # disk size for each node - disk_size: 500G diff --git a/vagrant-pxe-airgap-harvester/setup_harvester.sh b/vagrant-pxe-airgap-harvester/setup_harvester.sh deleted file mode 100755 index 5f23eb8..0000000 --- a/vagrant-pxe-airgap-harvester/setup_harvester.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/bin/bash -# the vms' default ipv4 addresses, found in settings.yml -default_vm_ipv4_addrs_arry=( 192.168.2.30 192.168.2.31 192.168.2.32 192.168.2.33 192.168.2.34 ) -# the file location of the ssh known hosts -ssh_known_hosts_file= -# boolean to represent whether or not we will clean up IPs from VMs in the ssh hosts file -cleanup_known_ssh_hosts_bool=false -# Usage info -show_help() { -cat << EOF -Usage: ${0##*/} [-h/--help] [-s/--ssh-known-hosts-file FILE] [-c/--clean-ssh-known-hosts]... -Running the setup_harvester script we can specify whether or not to clean ssh known hosts, -removing the IPs for each associated VM that is running; the list of VM IPs is directly tied to settings.yml. -If there are any changes to settings.yml for IPs you must take into account that this functionality -does not automatically parse the settings.yml file at runtime, as these values are hardcoded. -If the -s | --ssh-known-hosts-file argument is not provided, -we will 'optimistically' use the default known hosts file of ~/.ssh/known_hosts - - -h | --help display this help and exit - -s | --ssh-known-hosts-file FILE the file to be used for ssh known hosts, defaults to ~/.ssh/known_hosts. - -c | --clean-ssh-known-hosts when this flag is passed we will clean up the IPs associated with VMs in your known_hosts file. -EOF } -# bails out of program -die() { - printf '%s\n' "$1" >&2 - exit 1 -} - -while :; do - case $1 in - -h|-\?|--help) - show_help # Display a usage synopsis.
- exit - ;; - -s|--ssh-known-hosts-file) # Takes an option argument; ensure it has been specified. - if [ "$2" ]; then - ssh_known_hosts_file=$2 - shift - else - die 'ERROR: "-s / --ssh-known-hosts-file" requires a non-empty option argument.' - fi - ;; - -c|--clean-ssh-known-hosts) - cleanup_known_ssh_hosts_bool=true # Set the cleanup flag. - ;; - --) # End of all options. - shift - break - ;; - -?*) - printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2 - ;; - *) # Default case: No more options, so break out of the loop. - break - esac - - shift -done - -if [ "$cleanup_known_ssh_hosts_bool" = true ] ; then - echo 'Cleaning up VM IPs from ssh hosts file...' - for ip_to_clean in "${default_vm_ipv4_addrs_arry[@]}" - do - if [ ! -z "${ssh_known_hosts_file}" ]; then - ssh-keygen -f $ssh_known_hosts_file -R $ip_to_clean; - else - ssh-keygen -f ~/.ssh/known_hosts -R $ip_to_clean; - fi - done -fi - -printf 'Starting Ansible Playbooks, running with the configuration in settings.yml...' -echo "" -MYNAME=$0 -ROOTDIR=$(dirname $(readlink -e $MYNAME)) - -pushd $ROOTDIR -VAGRANT_LOG=info DEFAULT_DEBUG=True ansible-playbook ansible/setup_harvester.yml --extra-vars "@settings.yml" && VAGRANT_LOG=info DEFAULT_DEBUG=True ansible-playbook ansible/prepare_harvester_nodes.yml --extra-vars "@settings.yml" -i inventory -ANSIBLE_PLAYBOOK_RESULT=$? -popd -exit $ANSIBLE_PLAYBOOK_RESULT diff --git a/vagrant-pxe-harvester/README.md b/vagrant-pxe-harvester/README.md index 7cfa0c8..850dde3 100644 --- a/vagrant-pxe-harvester/README.md +++ b/vagrant-pxe-harvester/README.md @@ -43,6 +43,31 @@ At the time of writing this, latest version of Vagrant is 2.4.3.
``` > Note: plugin installation does not require sudo +### iPXE binaries shifting to being vendored +- iPXE doesn't publish checksums for its binaries +- we can, however, host the binaries on a file server we control and record a checksum "tag" when we download the files +- we don't necessarily have to build from source +- each file is given a checksum +- we pull that file and verify the checksum after it crosses the wire +- discouraged: falling back to the default boot.ipxe.org downloads (no checksum verification) +- vendoring the downloaded iPXE artifacts with a SHA256 on a local file server (or somewhere else) will work too; `settings.yml` will need to be updated accordingly + +To bump to a newer/latest version of the iPXE binaries, vendor them on the artifact server: + +1. Download the new binaries in a new folder like 2026-03-27: + ```bash + curl -o ipxe-x86_64.efi https://boot.ipxe.org/x86_64-efi/ipxe.efi + curl -o ipxe-arm64.efi https://boot.ipxe.org/arm64-efi/ipxe.efi + curl -o undionly.kpxe https://boot.ipxe.org/undionly.kpxe + ``` + +2. Calculate checksums: + ```bash + sha256sum ipxe-x86_64.efi ipxe-arm64.efi undionly.kpxe + ``` + +3. Update `settings.yml` with the new checksums + ### Troubleshooting A common error while attempting to install vagrant-libvirt plugin is: @@ -97,7 +122,7 @@ It asks to look at `mkmf.log` file for more details. Exact path might differ on If you do see that message, delete the file in question and try installing the plugin again: ```sh rm /opt/vagrant/embedded/lib/libreadline.so.8 && - vagrant plugin install vagrant-libvirt + vagrant plugin install vagrant-libvirt ``` Quick Start @@ -152,10 +177,10 @@ There might be situations where the playbook silently fails to download the nece 2. Check the status of the virtual machines.
One might be in a bad state: ```bash sudo vagrant global-status - id name provider state directory + id name provider state directory ------------------------------------------------------------------------------------------------------------------------ - 3ac6daa pxe_server libvirt running /github.com/harvester/ipxe-examples/vagrant-pxe-harvester - c7a0673 harvester-node-0 libvirt shutoff /github.com/harvester/ipxe-examples/vagrant-pxe-harvester + 3ac6daa pxe_server libvirt running /github.com/harvester/ipxe-examples/vagrant-pxe-harvester + c7a0673 harvester-node-0 libvirt shutoff /github.com/harvester/ipxe-examples/vagrant-pxe-harvester ``` 3. You can then use `virt-manager` to open GUI and check the virtual machine terminal to determine why `harvester-node-0` is in `shutoff` state. 4. Make sure all necessary `harvester-*` files are inside the `pxe_server` folder note the `id` of the vm and ssh to it: diff --git a/vagrant-pxe-harvester/Vagrantfile b/vagrant-pxe-harvester/Vagrantfile index 00cec7a..f75dd19 100644 --- a/vagrant-pxe-harvester/Vagrantfile +++ b/vagrant-pxe-harvester/Vagrantfile @@ -30,6 +30,8 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| # PXE Server config.vm.define :pxe_server do |pxe_server| pxe_server.vm.box = 'generic/debian11' + pxe_server.vm.box_version = '4.3.12' + pxe_server.vm.hostname = 'pxe-server' pxe_server.vm.network 'private_network', ip: @settings['harvester_network_config']['dhcp_server']['ip'], diff --git a/vagrant-pxe-harvester/ansible/roles/harvester/tasks/_download_media.yml b/vagrant-pxe-harvester/ansible/roles/harvester/tasks/_download_media.yml index c492087..9f05649 100644 --- a/vagrant-pxe-harvester/ansible/roles/harvester/tasks/_download_media.yml +++ b/vagrant-pxe-harvester/ansible/roles/harvester/tasks/_download_media.yml @@ -3,14 +3,48 @@ set_fact: harvester_download_url_facts: "{{ harvester_media_url | urlsplit }}" +- name: extract filename from URL for checksum lookup + set_fact: + media_basename: "{{ 
harvester_media_url | basename }}" + +- name: get expected checksum for this media file + set_fact: + expected_checksum: "{{ harvester_checksums[media_basename] | default('') }}" + +- name: verify checksum is available + fail: + msg: "No SHA512 checksum found for {{ media_basename }} in the checksum file" + when: expected_checksum == '' + - name: copy Harvester media from local directory copy: src: "{{ harvester_download_url_facts['path'] }}" dest: /var/www/harvester/{{ media_filename }} when: harvester_download_url_facts['scheme']|lower == 'file' -- name: download Harvester media +- name: verify checksum of copied local file + stat: + path: /var/www/harvester/{{ media_filename }} + checksum_algorithm: sha512 + register: local_file_stat + when: harvester_download_url_facts['scheme']|lower == 'file' + +- name: fail if local file checksum doesn't match + fail: + msg: "Checksum mismatch for {{ media_filename }}. Expected: {{ expected_checksum }}, Got: {{ local_file_stat.stat.checksum }}" + when: + - harvester_download_url_facts['scheme']|lower == 'file' + - local_file_stat.stat.checksum != expected_checksum + +- name: debug - downloading from remote + debug: + msg: "Downloading from: {{ harvester_media_url }} to /var/www/harvester/{{ media_filename }} with checksum verification" + when: harvester_download_url_facts['scheme']|lower != 'file' + +- name: download Harvester media with checksum verification get_url: url: "{{ harvester_media_url }}" dest: /var/www/harvester/{{ media_filename }} + checksum: "sha512:{{ expected_checksum }}" + timeout: "{{ harvester_iso_download_and_sha512_check_timeout_seconds }}" when: harvester_download_url_facts['scheme']|lower != 'file' diff --git a/vagrant-pxe-harvester/ansible/roles/harvester/tasks/main.yml b/vagrant-pxe-harvester/ansible/roles/harvester/tasks/main.yml index 036a568..5fa166e 100644 --- a/vagrant-pxe-harvester/ansible/roles/harvester/tasks/main.yml +++ b/vagrant-pxe-harvester/ansible/roles/harvester/tasks/main.yml @@ 
-33,6 +33,33 @@ owner: www-data recurse: yes +- name: parse Harvester SHA512 URL + set_fact: + harvester_sha512_url_facts: "{{ settings['harvester_sha512_url'] | urlsplit }}" + +- name: download Harvester SHA512 checksum file + get_url: + url: "{{ settings['harvester_sha512_url'] }}" + dest: /tmp/harvester.sha512 + when: harvester_sha512_url_facts['scheme']|lower != 'file' + +- name: copy Harvester SHA512 checksum file from local + copy: + src: "{{ harvester_sha512_url_facts['path'] }}" + dest: /tmp/harvester.sha512 + when: harvester_sha512_url_facts['scheme']|lower == 'file' + +- name: read SHA512 checksum file + slurp: + src: /tmp/harvester.sha512 + register: sha512_content + +- name: parse SHA512 checksums into dictionary (line splitting) + set_fact: + harvester_checksums: "{{ harvester_checksums | default({}) | combine({item.split()[1]: item.split()[0]}) }}" + loop: "{{ (sha512_content['content'] | b64decode).splitlines() }}" + when: item | trim | length > 0 + - name: create boot entry for the first node template: src: ipxe-create.j2 diff --git a/vagrant-pxe-harvester/ansible/roles/ipxe/tasks/main.yml b/vagrant-pxe-harvester/ansible/roles/ipxe/tasks/main.yml index 8b6c1a5..d9ea8f9 100644 --- a/vagrant-pxe-harvester/ansible/roles/ipxe/tasks/main.yml +++ b/vagrant-pxe-harvester/ansible/roles/ipxe/tasks/main.yml @@ -4,10 +4,38 @@ path: /tftpboot/ipxe state: directory -- name: install ipxe firmwares +- name: determine architecture for iPXE EFI binary + set_fact: + ipxe_arch: "{{ {'aarch64': 'arm64'}.get(ansible_architecture, ansible_architecture) }}" + +- name: determine which iPXE EFI binary to use + set_fact: + ipxe_efi_binary: "{{ 'arm64_efi' if ipxe_arch == 'arm64' else 'x86_64_efi' }}" + +# Download from artifact server with checksum verification +- name: download iPXE EFI binary from artifact server + get_url: + url: "{{ settings['ipxe_config']['artifact_server'] }}/{{ settings['ipxe_config']['binaries'][ipxe_efi_binary]['filename'] }}" + dest: 
/tftpboot/ipxe/ipxe.efi + checksum: "sha256:{{ settings['ipxe_config']['binaries'][ipxe_efi_binary]['sha256'] }}" + when: settings['ipxe_config']['artifact_server'] != '' + +- name: download iPXE undionly binary from artifact server + get_url: + url: "{{ settings['ipxe_config']['artifact_server'] }}/{{ settings['ipxe_config']['binaries']['undionly']['filename'] }}" + dest: /tftpboot/ipxe/undionly.kpxe + checksum: "sha256:{{ settings['ipxe_config']['binaries']['undionly']['sha256'] }}" + when: settings['ipxe_config']['artifact_server'] != '' + +# Fallback to boot.ipxe.org without verification; we want to avoid this if at all possible +- name: download iPXE EFI binary from boot.ipxe.org (no checksum verification) + get_url: + url: "https://boot.ipxe.org/{{ ipxe_arch }}-efi/ipxe.efi" + dest: /tftpboot/ipxe/ipxe.efi + when: settings['ipxe_config']['artifact_server'] == '' + +- name: download iPXE undionly binary from boot.ipxe.org (no checksum verification) + get_url: - url: '{{ item }}' - dest: /tftpboot/ipxe/ - loop: - - "https://boot.ipxe.org/{{ {'aarch64': 'arm64'}.get(ansible_architecture, ansible_architecture) }}-efi/ipxe.efi" - - "https://boot.ipxe.org/undionly.kpxe" + url: "https://boot.ipxe.org/undionly.kpxe" + dest: /tftpboot/ipxe/undionly.kpxe + when: settings['ipxe_config']['artifact_server'] == '' diff --git a/vagrant-pxe-harvester/ansible/roles/s3/tasks/main.yml b/vagrant-pxe-harvester/ansible/roles/s3/tasks/main.yml index 173b5da..74ce277 100644 --- a/vagrant-pxe-harvester/ansible/roles/s3/tasks/main.yml +++ b/vagrant-pxe-harvester/ansible/roles/s3/tasks/main.yml @@ -17,7 +17,7 @@ path: /data owner: root group: root - mode: '0755' + mode: "0755" state: directory - name: mount backup target filesystem @@ -30,7 +30,7 @@ - name: start minio container containers.podman.podman_container: name: minio - image: quay.io/minio/minio:latest + image: quay.io/minio/minio@{{ settings['s3']['tagged_release_sha256sum'] }} command: - server - /data diff --git
a/vagrant-pxe-harvester/ansible/setup_rancher.yml b/vagrant-pxe-harvester/ansible/setup_rancher.yml index aa04779..0d7348c 100644 --- a/vagrant-pxe-harvester/ansible/setup_rancher.yml +++ b/vagrant-pxe-harvester/ansible/setup_rancher.yml @@ -8,15 +8,45 @@ any_errors_fatal: true vars: kubeconfig: /etc/rancher/k3s/k3s.yaml - + tasks: - - name: setup k3s - shell: curl -sfL https://get.k3s.io | sh - + - name: Download k3s binary + get_url: + url: "https://github.com/k3s-io/k3s/releases/download/{{ settings.rancher_config.k3s_version }}/k3s" + dest: /usr/local/bin/k3s + mode: "0755" + checksum: "{{ settings.rancher_config.k3s_checksum }}" + + - name: Download k3s install script + get_url: + url: https://get.k3s.io + dest: /tmp/k3s-install.sh + mode: "0755" + + - name: Install k3s using install script with pre-downloaded binary + shell: INSTALL_K3S_SKIP_DOWNLOAD=true /tmp/k3s-install.sh environment: - INSTALL_K3S_CHANNEL: "{{ settings.rancher_config.k3s_channel }}" + INSTALL_K3S_VERSION: "{{ settings.rancher_config.k3s_version }}" + INSTALL_K3S_BIN_DIR: /usr/local/bin + + - name: Download helm tarball + get_url: + url: "https://get.helm.sh/helm-{{ settings.rancher_config.helm_version }}-linux-amd64.tar.gz" + dest: /tmp/helm.tar.gz + checksum: "{{ settings.rancher_config.helm_checksum }}" + + - name: Extract helm tarball + unarchive: + src: /tmp/helm.tar.gz + dest: /tmp + remote_src: yes - - name: setup helm - shell: curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash + - name: Install helm binary + copy: + src: /tmp/linux-amd64/helm + dest: /usr/local/bin/helm + mode: "0755" + remote_src: yes - name: update repo shell: | diff --git a/vagrant-pxe-harvester/settings.yml b/vagrant-pxe-harvester/settings.yml index 5d11185..6e4473d 100644 --- a/vagrant-pxe-harvester/settings.yml +++ b/vagrant-pxe-harvester/settings.yml @@ -12,10 +12,51 @@ # # Harvester media location. URL schema can be either 'http', 'https' or 'file'. 
# +# +# harvester_sha512_url +# +# URL to the SHA512 checksum file for verifying downloaded artifacts. +# All downloaded artifacts (ISO, kernel, initrd, rootfs) will be verified against +# the checksums in this file. The download will fail if checksums don't match. +# +# Examples: +# - For stable releases: https://releases.rancher.com/harvester/v1.7.1/harvester-v1.7.1-amd64.sha512 +# - For master branch: https://releases.rancher.com/harvester/master/harvester-master-amd64.sha512 +# - For dev branches: https://releases.rancher.com/harvester/v1.8/harvester-v1.8-amd64.sha512 +# +harvester_sha512_url: https://releases.rancher.com/harvester/master/harvester-master-amd64.sha512 + harvester_iso_url: https://releases.rancher.com/harvester/master/harvester-master-amd64.iso harvester_kernel_url: https://releases.rancher.com/harvester/master/harvester-master-vmlinuz-amd64 harvester_ramdisk_url: https://releases.rancher.com/harvester/master/harvester-master-initrd-amd64 harvester_rootfs_url: https://releases.rancher.com/harvester/master/harvester-master-rootfs-amd64.squashfs +# Defaults to 45 minutes to allow the ISO to be downloaded and its SHA512 checksum verified +harvester_iso_download_and_sha512_check_timeout_seconds: 2700 +# +# ipxe_config +# +# iPXE bootloader configuration. These binaries are vendored (downloaded once and +# hosted on the artifact server) with SHA256 checksums for verification. 
+# +# Version: iPXE 2.0.0+ (g6d2f6) - downloaded from boot.ipxe.org on 2026-03-27 +# +# Set artifact_server to an empty string to use boot.ipxe.org directly (not recommended) +# +ipxe_config: + # Empty = use boot.ipxe.org directly (not recommended) + artifact_server: "http://10.115.1.6/iso/ipxe-binaries-for-vagrant/2026-03-27/" + + # iPXE binaries and their SHA256 checksums + binaries: + x86_64_efi: + filename: ipxe-x86_64.efi + sha256: 1870e795866bb51b93093379d7d277245d60cdd622315695a5502077e20ed853 + arm64_efi: + filename: ipxe-arm64.efi + sha256: 12dfb3b107e3d2212d44f60ce930d06990bcbf9dde1f7f2bbe2531933d4cb922 + undionly: + filename: undionly.kpxe + sha256: 67ef19d18c6b4b9f00fa0429ade8b6a282bd1c222db2fa4a83e836ffe0c06706 # # harvester_cluster_nodes @@ -48,7 +89,7 @@ harvester_network_config: dns_server: 1.1.1.1 dns_servers: - # This is a list of DNS servers that will be included in your Harvester OS config + # This is a list of DNS servers that will be included in your Harvester OS config - 192.168.0.254 - 8.8.8.8 @@ -124,14 +165,21 @@ harvester_node_config: # # Rancher setup on K3s cluster. # Refer support matrix on https://ranchermanager.docs.rancher.com/versions -# +# rancher_config: enabled: false - version: v2.11.0 - k3s_channel: v1.32 + version: v2.13.0 repo: https://releases.rancher.com/server-charts/latest + # K3s installation + k3s_version: v1.34.5+k3s1 + k3s_checksum: sha256:efaa84416cf59f36f7c1b45bd12988dcf0112288f588a9fd5c0fbca6d309e9d9 + + # Helm installation + helm_version: v4.1.3 + helm_checksum: sha256:02ce9722d541238f81459938b84cf47df2fdf1187493b4bfb2346754d82a4700 + # Reserved IP for Rancher. Do not conflict with DHCP range and other reserved IPs. 
ip: 192.168.0.141 hostname: rancher.192.168.0.141.sslip.io @@ -145,8 +193,10 @@ rancher_config: # The S3 service will be available at http://:9000 s3: enabled: false - capacity: '300G' + capacity: "300G" port: 9000 console_port: 9001 username: admin password: password1234 + # RELEASE.2025-09-07T16-13-09Z + tagged_release_sha256sum: sha256:a1a8bd4ac40ad7881a245bab97323e18f971e4d4cba2c2007ec1bedd21cbaba2