Analysis
As explained in the Method section, the measurements were taken on the server. We defined two scenarios in order to compare the two transcoding modes with respect to the considered parameters.
The goal is to capture the evolution of three parameters (CPU load, disk usage and bandwidth) over time while five clients watch the same video. The difference between the scenarios lies in the transcoding mode the server used:
- In Sc1, the video was not transcoded before the clients requested it: it was transcoded in real time while they were watching it. Afterwards, the transcoded video was erased from storage so the same experiment could be conducted again (function explained in the [Clear streams](Clear streams) section).
- In Sc2, the video was already transcoded and available on the server's storage when the clients requested it. The content was streamed in its entirety, then the clients disconnected before the next experiment started.
Here are the curves we expected to obtain.

Figure 1 - Expected measures
In Sc1, we expect the used storage space to increase progressively while the video is being transcoded. Likewise, the CPU load should be high during this period, as transcoding requires heavy computation. Concerning the bandwidth, we expected to be able to correlate the curve with the clients' buffer lengths; however, as we did not measure them, we can only expect intense traffic activity at the beginning and occasionally during the transcoding, as more chunks become available.
Considering our expectations, we virtually separated each experiment into 4 steps:
- Clients request the video
- Server starts the transcoding
- Server finishes the transcoding
- Clients finish the video
Using the time.sh script (cf. Method), we timestamped the 1st, 3rd and 4th steps in order to correlate these events with the results.
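Since time.sh itself is not reproduced here, the sketch below illustrates the kind of event timestamping it performs, rewritten in Python for readability. The event names match the steps above; the in-memory log is a stand-in for whatever file the real script writes to.

```python
# Illustrative re-creation of the time.sh timestamping: record a wall-clock
# time for each key experiment event so it can later be matched against the
# measurement curves. The in-memory log is a hypothetical stand-in.
import time

EVENTS = [
    "Clients request the video",        # step 1
    "Server finishes the transcoding",  # step 3
    "Clients finish the video",         # step 4
]

def timestamp(event: str, log: list) -> None:
    """Append a (unix time, event name) pair to the log."""
    log.append((time.time(), event))

log = []
for event in EVENTS:
    timestamp(event, log)

for t, name in log:
    print(f"{t:.3f}  {name}")
```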
The following curves display the superposition of the same parameter measured during the four experiments. They were shifted horizontally so that the event “Clients request the video” occurs at the same time in all of them.
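Concretely, the horizontal adjustment amounts to shifting each run's time axis so the request event sits at t = 0. A minimal sketch, with made-up sample times:

```python
# Align several measurement runs on a common event ("Clients request the
# video"). The timestamps below are illustrative, not real measurements.

def align_run(times, event_time):
    """Shift a run's timestamps so the event occurs at t = 0."""
    return [t - event_time for t in times]

# Two hypothetical runs; the request happened at t = 5 s in the first run
# and t = 8 s in the second.
run1_times = [0, 1, 2, 3, 4, 5, 6, 7]
run2_times = [0, 2, 4, 6, 8, 10, 12]

aligned1 = align_run(run1_times, event_time=5)
aligned2 = align_run(run2_times, event_time=8)

print(aligned1[5], aligned2[4])  # the event now sits at 0 in both runs
```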

Figure 2 - CPU load in % from the 4 experiments
The CPU load plateaus at around 80% after the clients request the video.
This corresponds to the transcoding happening on the server.
The load takes a few seconds to skyrocket; this may correspond to the time the server needs to process the requests and evaluate the transcoding parameters.
After some time, the load comes back down to lower values, probably because the video is by then fully transcoded.

Figure 3 - Disk usage in % from the 4 experiments
As expected, the used space increases progressively after the clients request the video. The stair-shaped curve can be explained by two factors: either the video is transcoded into multiple chunks, or the variation is so small that the tool we used could not be more precise than a hundredth of a percent. The second explanation seems more likely, as there would be more chunks than visible stairs.
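A back-of-the-envelope calculation supports the resolution explanation. All sizes below are assumptions chosen for illustration, not measured values:

```python
# Can a 0.01 % disk-usage resolution merge several chunks into one stair?
# Disk and chunk sizes are assumptions for illustration, not measured values.

disk_size_bytes = 50 * 1024**3   # assume a 50 GiB disk
chunk_size_bytes = 1 * 1024**2   # assume ~1 MiB per transcoded chunk

resolution = 0.0001              # 0.01 % of the disk, as a fraction
per_chunk_fraction = chunk_size_bytes / disk_size_bytes

# Number of chunks written before the reported percentage moves one step:
chunks_per_stair = resolution / per_chunk_fraction
print(f"one visible stair every ~{chunks_per_stair:.1f} chunks")
```

Under these assumptions, roughly five chunks land inside each visible step, which is consistent with seeing fewer stairs than chunks.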

Figure 4 - Network out in kbps from the 4 experiments
Packets are sent from the server throughout the viewing. The shapes of the four curves are very similar; however, some peaks appear at different times.
All the experiments start with a large outgoing data transfer: as the first chunks are sent and the clients' buffers are filling, the demand can be high.

Figure 5 - Network out in kbps and CPU load in % from the 1st experiment
Both curves are highly correlated, which strongly suggests a causal link between the CPU processing the video and the server sending large amounts of data. It is important to note that the CPU load increases before the first peak of network output: indeed, the chunks need to be created before they can be sent.
When the CPU load decreases, we do not observe any change in the network curve. This is not surprising, as the server only serves the clients' demands as fast as it can; the end of the plateau only means the transcoding ended, while some chunks remain to be sent to the clients.
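The visual correlation can be checked numerically, for instance with a Pearson coefficient. The two series below are synthetic stand-ins for the measured curves, not real data:

```python
# Quantify the correlation between CPU load and outgoing traffic.
# The two series are synthetic stand-ins shaped like the measured curves.
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic curves: network roughly follows CPU with a short delay,
# mimicking "chunks must be created before they can be sent".
cpu = [5, 10, 80, 82, 81, 80, 30, 10, 8, 5]
net = [0, 2, 20, 900, 950, 920, 900, 400, 100, 50]

print(f"r = {pearson(cpu, net):.2f}")
```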

Figure 6 - Disk usage in % and CPU load in % from the 1st experiment
Disk usage and CPU load are closely correlated. When the video has to be transcoded and served to the users at the same time, disk usage plateaus at its maximum. Then, each time the server needs to send a chunk, a small spike appears.

Figure 7 - Disk usage in % and Network out in kbps from the 1st experiment
(WIP)
Unlike in the first scenario, for Sc2 we expected the storage usage to stay at the same level during and after the test phase, since with offline transcoding the video chunks are already transcoded. Concerning the bandwidth, we expected to be able to correlate the curve with the clients' buffer lengths; however, as we did not measure them, we can only expect intense traffic activity at the beginning and much less at the end, as the clients would have downloaded all the chunks by then.
The following curves display the superposition of the same parameter measured during the four experiments. They were shifted horizontally so that the event “Clients request the video” occurs at the same time in all of them.

Figure 8 - CPU load in % from the 4 experiments
Since the video is already transcoded, there is no large spike in CPU load. However, there are a few small spikes, which we attribute to video chunks being sent to the clients.

Figure 9 - Disk usage in % from the 4 experiments
As predicted, the curve of this diagram is perfectly flat. This is expected given the way offline transcoding works: nothing new is written to storage during playback.

Figure 10 - Network out in kbps from the 4 experiments
As in the first scenario, packets are sent from the server throughout the viewing. The shapes of the four curves are very similar; however, some peaks appear at different times.
All the experiments start with a large outgoing data transfer: as the first chunks are sent and the clients' buffers are filling, the demand can be high.

Figure 11 - Network out in kbps and CPU load in % from the 3rd experiment
(WIP)