Changes from all commits
65 commits
32d8035
course started
mrartemev-oss Jan 21, 2021
1972a1e
added custom config
mrartemev-oss Jan 21, 2021
965b387
Update README.md
MaximArtemev Jan 21, 2021
57a3fd4
added intro seminar
mrartemev-oss Jan 21, 2021
65d1256
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
mrartemev-oss Jan 21, 2021
261e299
test upload new seminar
MaximArtemev Jan 21, 2021
c773019
new seminar file
MaximArtemev Jan 21, 2021
15cd208
Delete seminars/seminar-0-pytorch_intro/seminar-1-autoencoders directory
MaximArtemev Jan 21, 2021
17adbba
lecture-2
Jan 21, 2021
f190b88
Seminar 2 added
HolyBayes Jan 27, 2021
7a860ae
GAN-lecture
Jan 29, 2021
a34d2a1
Seminar notebooks fixed
HolyBayes Feb 3, 2021
c44dac9
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
HolyBayes Feb 3, 2021
ae8a56c
Seminar 3
HolyBayes Feb 4, 2021
63cdf53
lec-4
Feb 7, 2021
c9b3c49
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
Feb 7, 2021
12a8bb2
Create .gitkeep
MaximArtemev Feb 10, 2021
b171c36
hw-1
MaximArtemev Feb 10, 2021
984c0b2
Seminar 4 added
HolyBayes Feb 11, 2021
1b53d3a
References updated
HolyBayes Feb 11, 2021
d97f06b
VAE-lec
Feb 12, 2021
2946f97
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
Feb 12, 2021
2722869
GP fixed
HolyBayes Feb 12, 2021
18f3e18
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
HolyBayes Feb 12, 2021
254292e
Fixed train test split
leshanbog Feb 12, 2021
0753e7a
Merge pull request #2 from leshanbog/fix_train_test_split
MaximArtemev Feb 12, 2021
ec103e9
Fixed accuracy calculation
leshanbog Feb 12, 2021
97146e8
Merge pull request #3 from leshanbog/fix_accuracy_calc
MaximArtemev Feb 12, 2021
1e6db34
Seminar 5
HolyBayes Feb 18, 2021
3e7a933
Solved seminar
HolyBayes Feb 19, 2021
a279c27
Create .gitkeep
MaximArtemev Feb 24, 2021
ee7433f
added hw-2
MaximArtemev Feb 24, 2021
637a56a
Seminar 6
HolyBayes Feb 25, 2021
884c560
Code added
HolyBayes Feb 25, 2021
df8b0ba
lectures
Mar 3, 2021
297608b
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
Mar 3, 2021
26be24c
Seminar 7
HolyBayes Mar 3, 2021
7e6058e
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
HolyBayes Mar 3, 2021
8f5d248
nflib added
HolyBayes Mar 3, 2021
8cbfdb4
sound generation
Mar 9, 2021
ae0f2b2
NF
Mar 9, 2021
0e0c6d5
hw1
arinaruck Mar 10, 2021
1c8ddc6
Create .gitkeep
MaximArtemev Mar 11, 2021
e099776
Add files via upload
MaximArtemev Mar 11, 2021
20f54bd
Create .gitkeep
MaximArtemev Mar 13, 2021
9c60a47
added hw3
MaximArtemev Mar 13, 2021
30a852d
Update README.md
dendee1 Mar 16, 2021
c1e14cd
Update README.md
dendee1 Mar 16, 2021
79aaa37
NF2
Mar 16, 2021
928ff3f
Exam program added
dendee1 Mar 17, 2021
fd31a5a
Seminar 9
HolyBayes Mar 18, 2021
6020988
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
HolyBayes Mar 18, 2021
adfffd9
Seminar 9 + HW 4
HolyBayes Mar 18, 2021
731d990
Seminar 9 updated
HolyBayes Mar 18, 2021
cf17da7
HW4 updated
HolyBayes Mar 18, 2021
4d68791
Homework 4 updated
HolyBayes Mar 18, 2021
0777830
instance norm -> batch norm
arinaruck Mar 18, 2021
9cfbaec
Merge branch 'spring-2021' of https://github.com/HSE-LAMBDA/DeepGener…
arinaruck Mar 18, 2021
0078b0e
hw3 + hw4 added
arinaruck Apr 4, 2021
db31ba1
Update README.md
arinaruck Apr 27, 2021
cc24b3f
Update README.md
arinaruck Apr 27, 2021
4c1a240
Update README.md
arinaruck Apr 27, 2021
fbd7f9b
Update README.md
arinaruck Apr 27, 2021
32d79e5
Update README.md
arinaruck Apr 29, 2021
d395391
Update README.md
arinaruck Apr 29, 2021
3 changes: 3 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
.gitignore
.gitconfig

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
Binary file removed Exam.pdf
Binary file added Exam2021.pdf
61 changes: 60 additions & 1 deletion README.md
@@ -1 +1,60 @@
# DeepGenerativeModels
# Deep Generative Models

Course for students of the HSE University AMI program ([Moscow](https://www.hse.ru/en/ba/ami/) and [Saint-Petersburg](https://spb.hse.ru/en/ba/appmath/) campuses) and the [Yandex School of Data Analysis](https://yandexdataschool.com/).

Lecturer: Denis Derkach

Seminars: Maksim Artemev, Artem Ryzhikov

Assistants: Aleksander Markovich, Sergey Chervontsev

Notion: https://www.notion.so/mrartemevstudents/Generative-models-HSE-4ba9fa3db4f341d98cfa2bfe2c04ad1f

# Results:

<details>
<summary> HW1 Autoencoders </summary>

## Report (in Russian)
[Click here](https://wandb.ai/arinaruck/gen%20models%20hw1/reports/-1-AE--Vmlldzo0OTA1OTU)
## Results:
Omniglot reconstruction

![image](https://user-images.githubusercontent.com/22507422/116169116-a84d0c80-a70c-11eb-9ba5-d6502b66fe9f.png)

</details>

<details>
<summary> HW2 StarGAN</summary>

## Report (in Russian)
[Click here](https://wandb.ai/arinaruck/gen%20models%20hw2/reports/-2--Vmlldzo1MjIxOTU)
## Results:
Data example:

![image](https://user-images.githubusercontent.com/22507422/116218664-6c896580-a753-11eb-8a7e-5420e066f6b3.png)


CelebA feature translation

![image](https://user-images.githubusercontent.com/22507422/116219337-2c76b280-a754-11eb-86fb-14f3568109ed.png)


</details>

<details>
<summary> HW3 VAE + Glow </summary>

## Report (in Russian)
[Click here](https://wandb.ai/arinaruck/gen%20models%20hw3/reports/HW3--Vmlldzo1NDYzNDQ)
## Results:
Data example:

![image](https://user-images.githubusercontent.com/22507422/116218664-6c896580-a753-11eb-8a7e-5420e066f6b3.png)

VAE Generation

![image](https://user-images.githubusercontent.com/22507422/116169679-eeef3680-a70d-11eb-8819-57820207d793.png)

Glow

![image](https://user-images.githubusercontent.com/22507422/116169635-d848df80-a70d-11eb-987c-045b63decbc3.png)

</details>
Binary file added homework/1-AE/.DS_Store
1 change: 1 addition & 0 deletions homework/1-AE/.gitkeep
@@ -0,0 +1 @@

12 changes: 12 additions & 0 deletions homework/1-AE/.idea/1-AE.iml


5 changes: 5 additions & 0 deletions homework/1-AE/.idea/inspectionProfiles/profiles_settings.xml


4 changes: 4 additions & 0 deletions homework/1-AE/.idea/misc.xml


8 changes: 8 additions & 0 deletions homework/1-AE/.idea/modules.xml


6 changes: 6 additions & 0 deletions homework/1-AE/.idea/vcs.xml


38 changes: 38 additions & 0 deletions homework/1-AE/autoencoder.py
@@ -0,0 +1,38 @@
import torch
import torch.nn as nn
from base import DownsampleBlock, UpsampleBlock


class AutoEncoder(nn.Module):
def __init__(self, hidden_size):
super().__init__()
self.hidden_size = hidden_size
self.encoder = nn.Sequential(
            DownsampleBlock(1, 32),            # 64 x 64 x 1 -> 32 x 32 x 32
            DownsampleBlock(32, 64),           # 32 x 32 x 32 -> 16 x 16 x 64
            DownsampleBlock(64, 128),          # 16 x 16 x 64 -> 8 x 8 x 128
            DownsampleBlock(128, 256),         # 8 x 8 x 128 -> 4 x 4 x 256
            DownsampleBlock(256, 128),         # 4 x 4 x 256 -> 2 x 2 x 128
            DownsampleBlock(128, hidden_size)  # 2 x 2 x 128 -> 1 x 1 x hidden_size
)

self.decoder = nn.Sequential(
            UpsampleBlock(self.hidden_size, 16),  # 1 x 1 x hidden_size -> 2 x 2 x 16
            UpsampleBlock(16, 32),                # 2 x 2 x 16 -> 4 x 4 x 32
            UpsampleBlock(32, 64),                # 4 x 4 x 32 -> 8 x 8 x 64
            UpsampleBlock(64, 128),               # 8 x 8 x 64 -> 16 x 16 x 128
            UpsampleBlock(128, 256),              # 16 x 16 x 128 -> 32 x 32 x 256
            UpsampleBlock(256, 256),              # 32 x 32 x 256 -> 64 x 64 x 256
nn.Conv2d(256, 1, kernel_size=1),
nn.Tanh()
)

def forward(self, x):
z = self.encoder(x)
x = self.decoder(z)
return x

def get_latent_features(self, x):
z = self.encoder(x)
return z
35 changes: 35 additions & 0 deletions homework/1-AE/base.py
@@ -0,0 +1,35 @@
import torch
import torch.nn as nn


class DownsampleBlock(nn.Module):
def __init__(self, channels_in, channels_out):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(channels_in, channels_out, kernel_size=3, padding=1, stride=2),
nn.BatchNorm2d(channels_out),
nn.LeakyReLU(0.2),
nn.Conv2d(channels_out, channels_out, kernel_size=3, padding=1),
nn.BatchNorm2d(channels_out),
nn.LeakyReLU(0.2)
)

def forward(self, x):
return self.net(x)


class UpsampleBlock(nn.Module):
def __init__(self, channels_in, channels_out):
super().__init__()
self.net = nn.Sequential(
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
nn.Conv2d(channels_in, channels_out, kernel_size=3, padding=1),
nn.BatchNorm2d(channels_out),
nn.LeakyReLU(0.2),
nn.Conv2d(channels_out, channels_out, kernel_size=3, padding=1),
nn.BatchNorm2d(channels_out),
nn.LeakyReLU(0.2)
)

def forward(self, x):
return self.net(x)
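Each DownsampleBlock halves the spatial size via its first convolution (kernel 3, stride 2, padding 1), which is why six blocks take a 64 x 64 input down to 1 x 1. A quick sanity check of that arithmetic (plain Python; `conv_out` is a hypothetical helper written for this sketch, not part of the repo):

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    """Standard conv output-size formula: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

size = 64
sizes = [size]
for _ in range(6):           # six DownsampleBlocks in the AutoEncoder encoder
    size = conv_out(size)    # each stride-2 conv halves an even input size
    sizes.append(size)

print(sizes)  # [64, 32, 16, 8, 4, 2, 1]
```

The nn.Upsample(scale_factor=2) in UpsampleBlock inverts this exactly, doubling the size at each stage back to 64 x 64.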
93 changes: 93 additions & 0 deletions homework/1-AE/calculate_fid.py
@@ -0,0 +1,93 @@
import numpy as np
import torch
from scipy import linalg


def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
"""Numpy implementation of the Frechet Distance.
The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)
and X_2 ~ N(mu_2, C_2) is
d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).

Stable version by Dougal J. Sutherland.

Params:
    -- mu1   : The sample mean over activations of the classifier
               (as returned by 'calculate_activation_statistics')
               for generated samples.
-- mu2 : The sample mean over activations, precalculated on an
representative data set.
-- sigma1: The covariance matrix over activations for generated samples.
-- sigma2: The covariance matrix over activations, precalculated on an
representative data set.

Returns:
-- : The Frechet Distance.
"""

mu1 = np.atleast_1d(mu1)
mu2 = np.atleast_1d(mu2)

sigma1 = np.atleast_2d(sigma1)
sigma2 = np.atleast_2d(sigma2)

assert mu1.shape == mu2.shape, \
'Training and test mean vectors have different lengths'
assert sigma1.shape == sigma2.shape, \
'Training and test covariances have different dimensions'

diff = mu1 - mu2

# Product might be almost singular
covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
if not np.isfinite(covmean).all():
msg = ('fid calculation produces singular product; '
'adding %s to diagonal of cov estimates') % eps
print(msg)
offset = np.eye(sigma1.shape[0]) * eps
covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))

# Numerical error might give slight imaginary component
if np.iscomplexobj(covmean):
if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
m = np.max(np.abs(covmean.imag))
raise ValueError('Imaginary component {}'.format(m))
covmean = covmean.real

tr_covmean = np.trace(covmean)

return (diff.dot(diff) + np.trace(sigma1) +
np.trace(sigma2) - 2 * tr_covmean)
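When the two covariances are equal, the trace terms cancel and the distance reduces to the squared mean difference. A minimal numpy check of the formula in the docstring above (a standalone re-derivation for illustration, not code from this diff):

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    # Same formula as above: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*sqrt(C1*C2))
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1.dot(sigma2))
    if np.iscomplexobj(covmean):  # numerical noise can add a tiny imaginary part
        covmean = covmean.real
    return diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)


d = 4
mu = np.zeros(d)
cov = np.eye(d)

# Identical Gaussians -> distance 0
print(round(float(frechet_distance(mu, cov, mu, cov)), 6))

# Equal covariances, mean shifted by (1, 0, 0, 0) -> ||diff||^2 = 1
shifted = mu.copy()
shifted[0] = 1.0
print(round(float(frechet_distance(mu, cov, shifted, cov)), 6))
```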


@torch.no_grad()
def calculate_activation_statistics(dataloader, model, classifier):
    classifier.eval()
    model.eval()
    device = next(model.parameters()).device
    input_acts = []
    output_acts = []

    for image, _ in dataloader:
        input_img = image.to(device)
        output_img = model(input_img)
        input_acts.append(classifier.get_activations(input_img).cpu().numpy())
        output_acts.append(classifier.get_activations(output_img).cpu().numpy())

    # Stack into (num_examples, classifier.hidden); unlike preallocating with
    # len(dataloader) * batch_size, this also handles a smaller final batch.
    input_acts = np.concatenate(input_acts, axis=0)
    output_acts = np.concatenate(output_acts, axis=0)
    mu1, sigma1 = input_acts.mean(axis=0), np.cov(input_acts, rowvar=False)
    mu2, sigma2 = output_acts.mean(axis=0), np.cov(output_acts, rowvar=False)
    return mu1, sigma1, mu2, sigma2
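The statistics returned here are just the sample mean and covariance of the stacked activation rows. In numpy terms (a toy illustration with random data, not repo code):

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(512, 128))   # 512 examples, 128-dim activations

mu = acts.mean(axis=0)               # per-dimension sample mean, shape (128,)
sigma = np.cov(acts, rowvar=False)   # rows are observations, shape (128, 128)

print(mu.shape, sigma.shape)  # (128,) (128, 128)
```

Note the `rowvar=False`: by default np.cov treats rows as variables, which would silently produce a 512 x 512 matrix here.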


@torch.no_grad()
def calculate_fid(dataloader, model, classifier):
m1, s1, m2, s2 = calculate_activation_statistics(dataloader, model, classifier)
fid_value = calculate_frechet_distance(m1, s1, m2, s2)

return fid_value.item()
46 changes: 46 additions & 0 deletions homework/1-AE/classifier.py
@@ -0,0 +1,46 @@
import torch
import torch.nn as nn
from base import DownsampleBlock


class Classifier(nn.Module):
def __init__(self, n_classes=10, mode='mnist'):
super().__init__()
self.n_classes = n_classes
        # Flattened feature dimension; must match the encoder output below
        # (2 * 2 * 32 for 'mnist', 4 * 4 * 256 otherwise).
        self.hidden = 2 * 2 * 32 if mode == 'mnist' else 4 * 4 * 256
if mode == 'mnist':
self.encode = nn.Sequential(
DownsampleBlock(1, 16), # 64 x 64 x 1 -> 32 x 32 x 16
DownsampleBlock(16, 32), # 32 x 32 x 16 -> 16 x 16 x 32
DownsampleBlock(32, 64), # 16 x 16 x 32 -> 8 x 8 x 64
DownsampleBlock(64, 32), # 8 x 8 x 64 -> 4 x 4 x 32
DownsampleBlock(32, 32), # 4 x 4 x 32 -> 2 x 2 x 32
nn.Flatten()
)
self.classify = nn.Sequential(
nn.Linear(2 * 2 * 32, 64),
nn.LeakyReLU(0.2),
nn.Linear(64, self.n_classes)
)
else:
self.encode = nn.Sequential(
DownsampleBlock(1, 32), # 64 x 64 x 1 -> 32 x 32 x 32
DownsampleBlock(32, 64), # 32 x 32 x 32 -> 16 x 16 x 64
DownsampleBlock(64, 128), # 16 x 16 x 64 -> 8 x 8 x 128
DownsampleBlock(128, 256), # 8 x 8 x 128 -> 4 x 4 x 256
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.Flatten()
)
self.classify = nn.Sequential(
nn.Linear(4 * 4 * 256, 1024),
nn.LeakyReLU(0.2),
nn.Linear(1024, self.n_classes)
)

def forward(self, x):
x = self.encode(x)
x = self.classify(x)
return x

def get_activations(self, x):
return self.encode(x)