6 changes: 6 additions & 0 deletions asciidoc/components/upgrade-controller.adoc
Expand Up @@ -296,7 +296,9 @@ The Upgrade Plan resource's status can be viewed in the following way:
kubectl get upgradeplan <upgradeplan_name> -n upgrade-controller-system -o yaml
----

[#ex-running-upgrade-plan]
.Running Upgrade Plan example:
====
[,yaml,subs="attributes"]
----
apiVersion: lifecycle.suse.com/v1alpha1
Expand Down Expand Up @@ -376,6 +378,7 @@ status:
observedGeneration: 1
sucNameSuffix: 90315a2b6d
----
====

Here you can view every component that the Upgrade Controller will try to schedule an upgrade for. Each condition follows the template below:

Expand Down Expand Up @@ -412,7 +415,9 @@ An Upgrade Plan scheduled by the Upgrade Controller can be marked as `successful

. The `lastSuccessfulReleaseVersion` property points to the `releaseVersion` that is specified in the Upgrade Plan's configuration. _This property is added to the Upgrade Plan's status by the Upgrade Controller once the upgrade process is successful._
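This final comparison can be scripted against a saved copy of the resource, for example one exported with `kubectl get upgradeplan <upgradeplan_name> -n upgrade-controller-system -o yaml > plan.yaml`. A minimal offline sketch, stubbed with a hypothetical two-field excerpt of such a file (note the `awk` guard: `lastSuccessfulReleaseVersion` contains `releaseVersion` as a substring, so a naive match would pick up both lines):

[,shell]
----
# Hypothetical offline check: the plan is fully upgraded when
# status.lastSuccessfulReleaseVersion equals spec.releaseVersion.
cat > plan.yaml <<'EOF'
spec:
  releaseVersion: 3.5.0
status:
  lastSuccessfulReleaseVersion: 3.5.0
EOF
WANT=$(awk '/releaseVersion:/ && !/lastSuccessful/ {print $2}' plan.yaml)
GOT=$(awk '/lastSuccessfulReleaseVersion:/ {print $2}' plan.yaml)
[ "$WANT" = "$GOT" ] && echo "upgrade complete"
----

In a real cluster the two values come from the live resource rather than a stub, but the comparison is the same.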

[#ex-successful-upgrade-plan]
.Successful `UpgradePlan` example:
====
[,yaml,subs="attributes"]
----
apiVersion: lifecycle.suse.com/v1alpha1
Expand Down Expand Up @@ -493,6 +498,7 @@ status:
observedGeneration: 1
sucNameSuffix: 90315a2b6d
----
====

[#components-upgrade-controller-how-track-helm]
=== Helm Controller
Expand Down
50 changes: 40 additions & 10 deletions asciidoc/components/virtualization.adoc
Expand Up @@ -187,8 +187,21 @@ DESCRIPTION:

Now that KubeVirt and CDI are deployed, let us define a simple virtual machine based on https://get.opensuse.org/tumbleweed/[openSUSE Tumbleweed]. This virtual machine has the simplest of configurations, using standard "pod networking" for a networking configuration identical to any other pod. It also employs non-persistent storage, ensuring the storage is ephemeral, just like in any container that does not have a https://kubernetes.io/docs/concepts/storage/persistent-volumes/[PVC].

[,shell]
----
[,shell, literal]
----
$ cat <<EOF > user-data.yaml
#cloud-config
disable_root: false
ssh_pwauth: True
users:
  - default
  - name: suse
    groups: sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: False
    plain_text_passwd: 'suse'
EOF
$ kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
Expand All @@ -211,7 +224,7 @@ spec:
image: quay.io/containerdisks/opensuse-tumbleweed:1.0.0
name: tumbleweed-containerdisk-0
- cloudInitNoCloud:
userDataBase64: I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScK
userDataBase64: $(base64 -w 0 user-data.yaml)
name: cloudinitdisk
EOF
----
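A note on the `userDataBase64` field: it must carry base64-encoded cloud-init data, not the raw YAML, which is why the file is passed through `base64` before substitution into the manifest. A quick round-trip check, using a throwaway file so the sketch is self-contained:

[,shell]
----
# userDataBase64 expects base64-encoded cloud-init data; encode the file
# explicitly and confirm it decodes back to the original first line.
printf '#cloud-config\nssh_pwauth: True\n' > /tmp/user-data-check.yaml
ENCODED=$(base64 -w 0 /tmp/user-data-check.yaml)
FIRST_LINE=$(echo "$ENCODED" | base64 -d | head -n 1)
echo "$FIRST_LINE"   # prints "#cloud-config"
----

The `-w 0` flag disables line wrapping so the encoded value stays on a single line, which keeps the YAML field intact.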
Expand Down Expand Up @@ -338,11 +351,15 @@ $ chmod a+x /usr/local/bin/virtctl

You can then use the `virtctl` command-line tool to create virtual machines. Let us replicate our previous virtual machine, noting that we are piping the output directly into `kubectl apply`:

[,shell]
[,shell, literal]
----
$ virtctl create vm --name virtctl-example --memory=1Gi \
--volume-containerdisk=src:quay.io/containerdisks/opensuse-tumbleweed:1.0.0 \
--cloud-init-user-data "I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScK" | kubectl apply -f -
--cloud-init-user-data "I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRyd\
WUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IH\
N1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9\
QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRf\
cGFzc3dkOiAnc3VzZScK" | kubectl apply -f -
----
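The long base64 literal above is wrapped with backslash-newline continuations for readability. Inside double quotes the shell removes each `\` followed by a newline, so the split string is byte-for-byte identical to an unbroken one. A small demonstration with a short stand-in value:

[,shell]
----
# Backslash-newline inside double quotes is a line continuation, so a
# wrapped base64 literal decodes the same as a single-line one.
ONE_LINE="aGVsbG8gd29ybGQK"
SPLIT="aGVsbG8g\
d29ybGQK"
[ "$ONE_LINE" = "$SPLIT" ] && echo "identical"
echo "$SPLIT" | base64 -d   # prints "hello world"
----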

This should then show the virtual machine running (it should start a lot quicker this time given that the container image will be cached):
Expand Down Expand Up @@ -429,8 +446,21 @@ In the example environment, another openSUSE Tumbleweed virtual machine is deplo

Let us create this virtual machine now:

[,shell]
----
[,shell, literal]
----
$ cat <<EOF > user-data.yaml
#cloud-config
disable_root: false
ssh_pwauth: True
users:
  - default
  - name: suse
    groups: sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: False
    plain_text_passwd: 'suse'
runcmd:
  - zypper in -y nginx
  - systemctl enable --now nginx
  - echo "It works!" > /srv/www/htdocs/index.htm
EOF
$ kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
Expand All @@ -456,7 +486,7 @@ spec:
image: quay.io/containerdisks/opensuse-tumbleweed:1.0.0
name: tumbleweed-containerdisk-0
- cloudInitNoCloud:
userDataBase64: I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScKcnVuY21kOgogIC0genlwcGVyIGluIC15IG5naW54CiAgLSBzeXN0ZW1jdGwgZW5hYmxlIC0tbm93IG5naW54CiAgLSBlY2hvICJJdCB3b3JrcyEiID4gL3Nydi93d3cvaHRkb2NzL2luZGV4Lmh0bQo=
userDataBase64: $(base64 -w 0 user-data.yaml)
name: cloudinitdisk
EOF
----
Expand Down Expand Up @@ -524,7 +554,7 @@ The extension allows you to directly interact with KubeVirt Virtual Machine reso
2. Navigate to the *KubeVirt > Virtual Machines* page and click `Create from YAML` in the upper right of the screen.
3. Fill in or paste a virtual machine definition and press `Create`. Use the virtual machine definition from the Deploying Virtual Machines section as inspiration.

image::virtual-machines-page.png[]
image::virtual-machines-page.png[scaledwidth=100%]

==== Virtual Machine Actions

Expand All @@ -538,7 +568,7 @@ The "Virtual machines" list provides a `Console` drop-down list that allows to c

In some cases, it takes a short while before the console is accessible on a freshly started virtual machine.

image::vnc-console-ui.png[]
image::vnc-console-ui.png[scaledwidth=100%]

== Installing with Edge Image Builder

Expand Down
9 changes: 5 additions & 4 deletions asciidoc/edge-book/releasenotes.adoc
Expand Up @@ -192,10 +192,11 @@ The following table describes the individual components that make up the 3.5.0 r
|======
| Name | Version | Helm Chart Version | Artifact Location (URL/Image)
| SUSE Linux Micro | 6.2 (latest) | N/A | https://www.suse.com/download/sle-micro/[SUSE Linux Micro Download Page] +
SL-Micro.x86_64-6.2-Base-SelfInstall-GM.install.iso (sha256 76390cec4537821d90975d924bf9d7575c9912da89fa460383eff0c623fd403e) +
SL-Micro.x86_64-6.2-Base-RT-SelfInstall-GM.install.iso (sha256 8f4d03c6953966174efb4c4b176cbe6dd5cab6e2a406033fc5c2bf6884da8b5d) +
SL-Micro.x86_64-6.2-Base-GM.raw.xz (sha256 bea3ac202b54ee29fabc28568f50c86d34939cf861679f21a91dfbae9c54c92e) +
SL-Micro.x86_64-6.2-Base-RT-GM.raw.xz (sha256 b9af7763c8d63143e31e5f80eb8d06fcc9477a576de8e358ab2e9835ab39d305) +
| SUSE Linux Micro | 6.2 (latest) | N/A | Checksums and signatures are available for download at the https://www.suse.com/download/sle-micro/[SUSE Linux Micro Download Page] +
SL-Micro.x86_64-6.2-Base-SelfInstall-GM.install.iso +
SL-Micro.x86_64-6.2-Base-RT-SelfInstall-GM.install.iso +
SL-Micro.x86_64-6.2-Base-GM.raw.xz +
SL-Micro.x86_64-6.2-Base-RT-GM.raw.xz +
| SUSE Multi-Linux Manager | 5.0.6 | N/A | https://www.suse.com/download/suse-manager/[SUSE Multi-Linux Manager Download Page]
| K3s | 1.34.2 | N/A | https://github.com/k3s-io/k3s/releases/tag/v1.34.2%2Bk3s1[Upstream K3s Release]
| RKE2 | 1.34.2 | N/A | https://github.com/rancher/rke2/releases/tag/v1.34.2%2Brke2r1[Upstream RKE2 Release]
Expand Down
6 changes: 3 additions & 3 deletions asciidoc/edge-book/welcome.adoc
Expand Up @@ -36,7 +36,7 @@ SUSE Edge is comprised of both existing SUSE and Rancher components along with a

==== Management Cluster

image::suse-edge-management-cluster.svg[scaledwidth=100%]
image::suse-edge-management-cluster.png[scaledwidth=100%]

* *Management*: This is the centralized part of SUSE Edge that is used to manage the provisioning and lifecycle of connected downstream clusters. The management cluster typically includes the following components:
** Multi-cluster management with <<components-rancher,Rancher Prime>>, enabling a common dashboard for downstream cluster onboarding and ongoing lifecycle management of infrastructure and applications, also providing comprehensive tenant isolation and `IDP` (Identity Provider) integrations, a large marketplace of third-party integrations and extensions, and a vendor-neutral API.
Expand All @@ -49,7 +49,7 @@ image::suse-edge-management-cluster.svg[scaledwidth=100%]

==== Downstream Clusters

image::suse-edge-downstream-cluster.svg[scaledwidth=100%]
image::suse-edge-downstream-cluster.png[scaledwidth=100%]

* *Downstream*: This is the distributed part of SUSE Edge that is used to run the user workloads at the edge, i.e. the software running at the edge location itself, and it typically comprises the following components:
** A choice of Kubernetes distributions, with secure and lightweight distributions like <<components-k3s,K3s>> and <<components-rke2,RKE2>> (`RKE2` is hardened, certified and optimized for usage in government and regulated industries).
Expand All @@ -60,7 +60,7 @@ image::suse-edge-downstream-cluster.svg[scaledwidth=100%]

=== Connectivity

image::suse-edge-connected-architecture.svg[scaledwidth=100%]
image::suse-edge-connected-architecture.png[scaledwidth=100%]

The above image provides a high-level architectural overview for *connected* downstream clusters and their attachment to the management cluster. The management cluster can be deployed on a wide variety of underlying infrastructure platforms, both on-premises and in the cloud, depending on networking availability between the downstream clusters and the target management cluster. The only requirement for this to function is that the API and callback URLs are accessible over the network that connects the downstream cluster nodes to the management infrastructure.

Expand Down
Binary file added asciidoc/images/product-atip-requirements1.png
Binary file added asciidoc/images/suse-edge-downstream-cluster.png
Binary file added asciidoc/images/suse-edge-management-cluster.png
2 changes: 1 addition & 1 deletion asciidoc/product/atip-requirements.adoc
Expand Up @@ -48,7 +48,7 @@ The hardware requirements for SUSE Telco Cloud are as follows:

As a reference for the network architecture, the following diagram shows a typical network architecture for a Telco environment:

image::product-atip-requirements1.svg[scaledwidth=100%]
image::product-atip-requirements1.png[scaledwidth=100%]

The network architecture is based on the following components:

Expand Down
8 changes: 4 additions & 4 deletions asciidoc/quickstart/eib.adoc
Expand Up @@ -162,7 +162,7 @@ This will output something similar to:

[,console]
----
$6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
$6$G392FCbxVgn[...]Y7zTXnC1
----
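Before pasting a hash into the definition file, it can be worth sanity-checking its shape: a SHA-512 crypt string consists of three `$`-delimited fields, `$6$<salt>$<digest>`. A sketch of such a check, using a clearly fake placeholder value rather than a real hash:

[,shell]
----
# Hypothetical format check: SHA-512 crypt hashes look like $6$<salt>$<digest>.
HASH='$6$examplesalt$exampledigestvalue'   # placeholder, not a usable hash
case "$HASH" in
  '$6$'*'$'*) echo "looks like SHA-512 crypt" ;;
  *) echo "unexpected format" >&2; exit 1 ;;
esac
----

A hash in any other format (for example an MD5 `$1$` hash) would be rejected here, which catches copy-paste mistakes before the image is built.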

We can then add a section in the definition file called `operatingSystem` with a `users` array inside it. The resulting file should look like:
Expand All @@ -178,7 +178,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgn[...]Y7zTXnC1
----

[NOTE]
Expand Down Expand Up @@ -291,7 +291,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgn[...]Y7zTXnC1
packages:
packageList:
- nvidia-container-toolkit
Expand Down Expand Up @@ -354,7 +354,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgn[...]Y7zTXnC1
packages:
packageList:
- nvidia-container-toolkit
Expand Down
2 changes: 1 addition & 1 deletion asciidoc/quickstart/elemental.adoc
Expand Up @@ -19,7 +19,7 @@ This approach can be useful in scenarios where the devices that you want to cont

== High-level architecture

image::quickstart-elemental-architecture.svg[scaledwidth=100%]
image::quickstart-elemental-architecture.png[scaledwidth=100%]

== Resources needed

Expand Down
2 changes: 1 addition & 1 deletion asciidoc/quickstart/metal3.adoc
Expand Up @@ -32,7 +32,7 @@ cluster bare-metal servers, including automated inspection, cleaning and provisi

== High-level architecture

image::quickstart-metal3-architecture.svg[scaledwidth=100%]
image::quickstart-metal3-architecture.png[scaledwidth=100%]

== Prerequisites

Expand Down