Add volume snapshot test with vm part #2531
Conversation
Pull request overview
Adds an e2e integration test covering volume snapshots in the context of a VM workflow (write data → snapshot volume → restore to new volume → boot new VM from restored volume → verify data), addressing Issue #472’s request for snapshot support coverage.
Changes:
- Updated the `ubuntu_vm` fixture to inject an SSH public key via cloud-init so tests can SSH into the VM.
- Added a VM+volume snapshot/restore test (`test_snapshot_of_volume_on_vm`) under `TestVolumeWithVM`.
- Introduced a helper (`vm_shell_do`) to SSH into a VM via a host jumphost and run actions/assertions.
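For context, the flow in the overview (write data → snapshot volume → restore → boot new VM → verify) can be sketched as a client-agnostic helper. Everything below — the method names, `snapshot_restore_flow`, and `FakeClient` — is a hypothetical illustration, not the actual `harvester_e2e_tests` API:

```python
# Hedged sketch of the e2e flow: write data in the VM, snapshot its volume,
# restore the snapshot to a new volume, boot a new VM from it, verify data.
# Every client method here is a hypothetical stand-in, NOT the real
# harvester api_client interface.

def snapshot_restore_flow(client, vm_name, vol_name):
    marker = "e2e-marker-data"
    client.vm_write_file(vm_name, "/data/marker.txt", marker)
    snap = client.create_volume_snapshot(vol_name, f"{vol_name}-snap")
    new_vol = client.restore_volume_snapshot(snap, f"{vol_name}-restored")
    new_vm = client.create_vm_from_volume(f"{vm_name}-restored", new_vol)
    # The restored VM must see the data written before the snapshot.
    return client.vm_read_file(new_vm, "/data/marker.txt") == marker


class FakeClient:
    """In-memory fake used to exercise the flow without a cluster."""

    def __init__(self, vm_volumes):
        self.vm_volumes = dict(vm_volumes)  # vm name -> attached volume name
        self.volumes = {}                   # volume name -> {path: content}
        self.snapshots = {}                 # snapshot name -> frozen volume data

    def vm_write_file(self, vm, path, data):
        self.volumes.setdefault(self.vm_volumes[vm], {})[path] = data

    def create_volume_snapshot(self, vol, snap_name):
        self.snapshots[snap_name] = dict(self.volumes.get(vol, {}))
        return snap_name

    def restore_volume_snapshot(self, snap_name, new_vol):
        self.volumes[new_vol] = dict(self.snapshots[snap_name])
        return new_vol

    def create_vm_from_volume(self, new_vm, vol):
        self.vm_volumes[new_vm] = vol
        return new_vm

    def vm_read_file(self, vm, path):
        return self.volumes[self.vm_volumes[vm]][path]
```

Run against a real cluster, the same five steps would go through `api_client` plus the SSH helper; the fake only demonstrates the shape of the assertion.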
Force-pushed from 12ef35e to 1b9b624
lanfon72 left a comment
It also looks like we should refactor the class `TestVolumeWithVM`.
The 3 test cases `pause_vm`, `stop_vm` and `delete_vm` are duplicated in `test_3_vm_functions`; it looks like we are not testing additional volumes in those test cases.
The remaining test cases, `test_delete_volume_on_existing_vm` and `test_delete_volume_on_deleted_vm`, should be moved to `test_3_vm_functions` as well.
Maybe we can move them into the class `TestVMWithVolumes` in `test_3_vm_functions` or `test_3_vm`?
@albinsun the class was added by you, could you work on this?
```python
# source
assert ubuntu_image["id"] == annotations['harvesterhci.io/imageId'], (code, data)


def test_snapshot_of_volume_on_vm(self, api_client, ubuntu_vm, ubuntu_image,
```
I think this test case should move to `test_4_vm_snapshot`, as it heavily depends on VM. (Then we won't need to re-implement `vm_shell_do` again.)
`pause_vm`, `stop_vm` and `delete_vm` are functions for the test cases `test_delete_volume_on_existing_vm` and `test_delete_volume_on_deleted_vm`.
These 2 test cases focus on deleting a volume that is used by a VM, so I put them in the `test_1_volumes` suite.
As volumes and VMs frequently work together, it's easy to have overlapped scenarios.
I'm OK to refactor it once we have a clear guideline for how to classify volume-related test cases into `test_1_volumes`, `test_3_vm_functions` or `test_3_vm`.
The test cases under `TestVMWithVolumes` focus on volumes, not VMs, so I prefer to keep those test cases in `test_1_volumes`.
For `vm_shell_do`, maybe we can move it under `VirtualMachineManager`?
@albinsun @asc5543, we described how to determine where a test case belongs in https://github.com/harvester/tests/blob/main/harvester_e2e_tests/integrations/README.md; as you mentioned, those test cases heavily depend on VM, so they should be VM-related test cases.
That means if the VM's basic functionality is not working (for example, it can't reach running status), those test cases' results will be affected.

> pause_vm, stop_vm and delete_vm are functions for test cases

But `delete_vm` is not used, and `pause_vm`/`stop_vm` can only be used within the class. Considering we would move them to `test_3_vm`, we can use the existing fixture `vm_checker.wait_status_stopped` to reduce the maintenance effort. (We don't have a fixture for the paused status; maybe it would be good to have one?)
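A paused-status checker could mirror `wait_status_stopped` with a generic polling loop. A minimal sketch, assuming an injected `get_status` callable — the `wait_status` name, parameters, and defaults below are hypothetical, not the repo's actual `vm_checker` API:

```python
import time


def wait_status(get_status, expected, timeout=300, interval=5,
                sleep=time.sleep, clock=time.monotonic):
    """Poll get_status() until it returns `expected` or `timeout` elapses.

    A hypothetical wait_status_paused fixture would call this with
    expected="Paused". sleep/clock are injectable so the loop can be
    unit-tested without real waiting.
    """
    deadline = clock() + timeout
    while True:
        status = get_status()
        if status == expected:
            return True, status
        if clock() >= deadline:
            return False, status
        sleep(interval)
```

Making `sleep` and `clock` parameters keeps the retry logic identical for stopped, paused, or any future status while staying trivially testable.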
> For vm_shell_do, maybe we can move it under VirtualMachineManager?

I didn't get the point. Do you mean move it to `api_client`'s manager? Could you explain more about your thought?
> @albinsun @asc5543, we described how to determine where a test case belongs in https://github.com/harvester/tests/blob/main/harvester_e2e_tests/integrations/README.md; as you mentioned, those test cases heavily depend on VM, so they should be VM-related test cases.
> That means if the VM's basic functionality is not working (for example, it can't reach running status), those test cases' results will be affected.

I see. I'm okay to refactor it to `test_4_vm_snapshot` per the definition in https://github.com/harvester/tests/blob/main/harvester_e2e_tests/integrations/README.md.

> I didn't get the point. Do you mean move it to `api_client`'s manager? Could you explain more about your thought?

Yes, move it to `VirtualMachineManager`, which is called as `api_client.vms`.
Such as implementing a function in `VirtualMachineManager` like

```python
class VirtualMachineManager(BaseManager):
    ...

    def shell_do(self, name, namespace, ...):
        # Do what vm_shell_do does
        ...
```
And in integration test cases, we can just call

```python
api_client.vms.vm_shell_do(vm_name, api_client,
                           host_shell, vm_shell,
                           ubuntu_image['ssh_user'], ssh_keypair,
                           action, wait_timeout)
```
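For reference, the jumphost-SSH step at the core of `vm_shell_do` can be sketched as plain command construction using OpenSSH's `-J` (ProxyJump) flag. All parameter names below are hypothetical; this is not the helper's real signature:

```python
def build_jump_ssh_cmd(jump_user, jump_host, vm_user, vm_ip, keyfile, remote_cmd):
    """Build an `ssh -J` argv that reaches a VM through a host jumphost.

    OpenSSH's -J (ProxyJump) tunnels the connection through
    jump_user@jump_host, which is how a test runner outside the cluster
    network can reach a VM-internal IP.
    """
    return [
        "ssh",
        "-i", keyfile,
        # Host keys are ephemeral in e2e runs, so skip the known-hosts checks.
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-J", f"{jump_user}@{jump_host}",
        f"{vm_user}@{vm_ip}",
        remote_cmd,
    ]
```

The resulting argv could be executed with `subprocess.run(cmd, capture_output=True)`; the actual helper reportedly drives the same two hops via the `host_shell`/`vm_shell` fixtures instead.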
I guess it might not be a good idea.
Managers reflect our UI's pages (hosts, images, volumes, vms, cluster network, etc.) and models align with the backend API's data models; `vm_shell_do` is more like human behavior. If we moved it into managers, the related fixtures would also need to be placed there.
Moved test_snapshot_of_volume_on_vm from test_1_volumes to test_4_vm_snapshot
Force-pushed from 0741580 to fc3891c
Force-pushed from 6f79bdc to b4811be
Force-pushed from 1a7313d to 2e21315
Signed-off-by: asc5543 <asc5543@gmail.com>
Which issue(s) this PR fixes:
Issue #472
What this PR does / why we need it:
Add volume snapshot test with vm part.
Additional documentation or context
Tests pass on v1.8.0-rc5 with the latest changes.