Code used in paper DeepJSCC-f: Deep Joint Source-Channel Coding of Images with Feedback, appearing in IEEE Journal on Selected Areas in Information Theory (JSAIT).
Authors: David Burth Kurka and Deniz Gündüz
This guide will help you set up Kurka's deepJSCC-feedback repository in Google Colab.
To set up the development environment, we will begin by checking our current working directory.
import os
print(os.getcwd())
!ls
By default, Google Colab runs in /content. To save our progress, we will store all the data in your Google Drive.
Run this code to mount your Google Drive:
from google.colab import drive
drive.mount('/content/drive')
Once the drive is mounted, you will see a new drive folder in the Files section on the left side of the window. You now need to change your current working directory by running this code cell:
%cd /content/drive/My Drive/Colab Notebooks/
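To confirm the directory change took effect, a quick sanity check can be run (a minimal sketch; the `in_drive` helper and the default "Colab Notebooks" path are my assumptions, not part of the repository):

```python
import os

# Hypothetical helper: check that the current working directory sits
# under the mounted Drive folder (default "Colab Notebooks" path assumed).
DRIVE_DIR = "/content/drive/My Drive/Colab Notebooks"

def in_drive(cwd, drive_dir=DRIVE_DIR):
    """Return True if cwd is drive_dir or one of its subdirectories."""
    return os.path.normpath(cwd).startswith(os.path.normpath(drive_dir))

print(in_drive(os.getcwd()))
```

If this prints False, re-run the `%cd` cell above before continuing.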
YOU SHOULD RUN THIS STEP ONLY IF YOU HAVE NOT FETCHED THIS REPOSITORY BEFORE.
This command will clone the repository into your Google Drive.
! git clone https://github.com/kurka/deepJSCC-feedback.git
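If the folder already exists from a previous session, cloning again will fail. A small sketch that pulls instead, e.g. in a `%%bash` cell (the folder name matches the repository; assuming the current directory is your "Colab Notebooks" folder on Drive):

```shell
# Clone the repository on the first run; on later runs, just pull the
# latest changes instead.
if [ -d deepJSCC-feedback ]; then
    git -C deepJSCC-feedback pull
else
    git clone https://github.com/kurka/deepJSCC-feedback.git
fi
```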
Next, we will assign ourselves a GPU.
Click Edit in the menu at the top left of the window, then click Notebook Settings, change the Hardware accelerator setting to GPU, and click Save.
Congratulations! You have done a lot, but we still need a few more things.
Now install the compatible libraries by running the following commands in a cell:
!pip install ConfigArgParse
!pip install tensorflow==1.15
!pip install tensorflow-compression
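Once the installs finish, a quick check that the packages import can save debugging later (a minimal sketch; the `check_packages` helper is mine, not part of the repository, and the import names below are assumptions about how the pinned packages register themselves):

```python
import importlib

def check_packages(names):
    """Map each package name to its version string ("unknown" if the
    module has no __version__ attribute), or None if it fails to import."""
    status = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            status[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            status[name] = None
    return status

print(check_packages(["configargparse", "tensorflow", "tensorflow_compression"]))
```

Any entry reported as None means the corresponding `!pip install` above did not succeed.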
Once the installation is complete, make sure you are in the repository directory:
!ls
# change directory to the repository
%cd /content/drive/My Drive/Colab Notebooks/deepJSCC-feedback/
Once the TensorFlow installation is finished, confirm that a GPU has been allocated to you by running the code below; the output should be 1.
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
Now, if you have followed all the steps correctly, you should be able to run the following command.
!python jscc.py --help
OUTPUT SHOULD LOOK LIKE THIS
usage: jscc.py [-h] [-c MY_CONFIG] --conv_depth CONV_DEPTH --n_layers N_LAYERS
[--channel {awgn,fading,fading-real}] [--model_dir MODEL_DIR]
[--eval_dir EVAL_DIR] [--delete_previous_model]
[--channel_snr_train CHANNEL_SNR_TRAIN]
[--channel_snr_eval CHANNEL_SNR_EVAL] [--feedback_noise]
[--feedback_snr_train FEEDBACK_SNR_TRAIN]
[--feedback_snr_eval FEEDBACK_SNR_EVAL]
[--learn_rate LEARN_RATE] [--run_eval_once]
[--train_epochs TRAIN_EPOCHS]
[--batch_size_train BATCH_SIZE_TRAIN]
[--batch_size_eval BATCH_SIZE_EVAL]
[--epochs_between_evals EPOCHS_BETWEEN_EVALS]
[--dataset_train {cifar,imagenet,kodak}]
[--dataset_eval {cifar,imagenet,kodak}]
[--data_dir_train DATA_DIR_TRAIN]
[--data_dir_eval DATA_DIR_EVAL]
[--pretrained_base_layer PRETRAINED_BASE_LAYER]
[--target_analysis]
If the help output is produced without any error, you can run this simple command to start the model. The model runs to some extent, but if you see additional errors, you will need to make some changes in the jscc.py file accordingly. I have tested this model with 3 layers and it was working fine; however, it crashed when I changed the image dataset. For example, the imagenet dataset is not working properly.
!python jscc.py --dataset_train kodak --conv_depth 3 --n_layers 3
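The flags from the help output above can be combined further; for instance, a run over a fading channel at a different SNR might look like the cell below (a sketch: only the flag names come from the `--help` output, while the SNR and dataset values are illustrative and untested):

```
!python jscc.py --dataset_train cifar --dataset_eval cifar \
    --conv_depth 3 --n_layers 3 \
    --channel fading --channel_snr_train 10 --channel_snr_eval 10
```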
OUTPUT SHOULD LOOK LIKE THIS
#######################################
Current execution paramenters:
batch_size_eval: 128
batch_size_train: 128
channel: awgn
channel_snr_eval: 1
channel_snr_train: 1
conv_depth: 3.0
data_dir_eval: /tmp/train_data
data_dir_train: /tmp/train_data
dataset_eval: cifar
dataset_train: kodak
delete_previous_model: False
epochs_between_evals: 30
eval_dir: /tmp/train_logs/eval
feedback_noise: False
feedback_snr_eval: 20
feedback_snr_train: 20
learn_rate: 0.0001
model_dir: /tmp/train_logs
my_config: None
n_layers: 3
pretrained_base_layer: None
run_eval_once: False
target_analysis: False
train_epochs: 10000
...................................
...................................
...................................