master
YunMao 3 years ago
commit c697e42da4

@@ -0,0 +1,3 @@
{
"python.pythonPath": "C:\\Users\\YunMao\\.conda\\envs\\ml\\python.exe"
}

@@ -0,0 +1,5 @@
import os

# CUDA_VISIBLE_DEVICES must be set before the first CUDA query, or it has no effect
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch
print(torch.cuda.is_available())

@@ -0,0 +1,813 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Coursework1: Convolutional Neural Networks "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## instructions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please submit a version of this notebook containing your answers **together with your trained model** on CATe as CW2.zip. Write your answers in the cells below each question."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting up working environment "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this coursework you will need to train a large network, therefore we recommend you work with Google Colaboratory, which provides free GPU time. You will need a Google account to do so. \n",
"\n",
"Please log in to your account and go to the following page: https://colab.research.google.com. Then upload this notebook.\n",
"\n",
"For GPU support, go to \"Edit\" -> \"Notebook Settings\", and select \"Hardware accelerator\" as \"GPU\".\n",
"\n",
"You will need to install pytorch by running the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install torch torchvision"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this coursework you will implement one of the most commonly used model for image recognition tasks, the Residual Network. The architecture is introduced in 2015 by Kaiming He, et al. in the paper [\"Deep residual learning for image recognition\"](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf). \n",
"<br>\n",
"\n",
"In a residual network, each block contains some convolutional layers, plus \"skip\" connections, which allow the activations to by pass a layer, and then be summed up with the activations of the skipped layer. The image below illustrates a building block in residual networks."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![resnet-block](utils/resnet-block.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Depending on the number of building blocks, resnets can have different architectures, for example ResNet-50, ResNet-101 and etc. Here you are required to build ResNet-18 to perform classification on the CIFAR-10 dataset, therefore your network will have the following architecture:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![resnet](utils/resnet.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 (40 points)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this part, you will use basic pytorch operations to define the 2D convolution, max pooling operation, linear layer as well as 2d batch normalization. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### YOUR TASK"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- implement the forward pass for Conv2D, MaxPool2D, Linear and BatchNorm2d\n",
"- You are **NOT** allowed to use the torch.nn modules"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"\n",
"class Conv2d(nn.Module):\n",
" def __init__(self,\n",
" in_channels,\n",
" out_channels,\n",
" kernel_size,\n",
" stride=1,\n",
" padding=0,\n",
" bias=True):\n",
"\n",
" super(Conv2d, self).__init__()\n",
" \"\"\"\n",
" An implementation of a convolutional layer.\n",
"\n",
" The input consists of N data points, each with C channels, height H and\n",
" width W. We convolve each input with F different filters, where each filter\n",
" spans all C channels and has height HH and width WW.\n",
"\n",
" Parameters:\n",
" - w: Filter weights of shape (F, C, HH, WW)\n",
" - b: Biases, of shape (F,)\n",
" - kernel_size: Size of the convolving kernel\n",
" - stride: The number of pixels between adjacent receptive fields in the\n",
" horizontal and vertical directions.\n",
" - padding: The number of pixels that will be used to zero-pad the input.\n",
" \"\"\"\n",
"\n",
" ########################################################################\n",
" # TODO: Define the parameters used in the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" self.in_channels = in_channels\n",
" self.out_channels = out_channels\n",
" self.kernel_size = kernel_size\n",
" self.stride = stride\n",
" self.padding = padding\n",
"\n",
" self.w = nn.Parameter(torch.Tensor(out_channels, in_channels, kernel_size, kernel_size))\n",
" self.w.data.normal_(-0.1, 0.1)\n",
"\n",
"\n",
" if bias:\n",
" self.b = nn.Parameter(torch.Tensor(outchannel, ))\n",
" self.b.data.normal_(-0.1, 0.1)\n",
" else:\n",
" self.b = None\n",
"\n",
"\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" Input:\n",
" - x: Input data of shape (N, C, H, W)\n",
" Output:\n",
" - out: Output data, of shape (N, F, H', W').\n",
" \"\"\"\n",
"\n",
" ########################################################################\n",
" # TODO: Implement the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" return out"
]
},
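{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of one possible `Conv2d` forward pass (an illustration, not necessarily the intended solution). It zero-pads the input manually, extracts sliding windows with `Tensor.unfold`, and contracts them against the filters with `torch.einsum`, so no `torch.nn` modules are needed. It assumes the attributes (`kernel_size`, `stride`, `padding`, `w`, `b`) defined in `__init__` above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical sketch of Conv2d.forward; assumes the attributes set in __init__ above\n",
"def conv2d_forward(self, x):\n",
"    N, C, H, W = x.shape\n",
"    k, s, p = self.kernel_size, self.stride, self.padding\n",
"    # zero-pad the input manually, avoiding torch.nn\n",
"    if p > 0:\n",
"        x_pad = x.new_zeros(N, C, H + 2 * p, W + 2 * p)\n",
"        x_pad[:, :, p:p + H, p:p + W] = x\n",
"    else:\n",
"        x_pad = x\n",
"    # sliding k x k windows: (N, C, H', W', k, k)\n",
"    patches = x_pad.unfold(2, k, s).unfold(3, k, s)\n",
"    # contract the channel and kernel dims against the filters of shape (F, C, k, k)\n",
"    out = torch.einsum('nchwij,fcij->nfhw', patches, self.w)\n",
"    if self.b is not None:\n",
"        out = out + self.b.view(1, -1, 1, 1)\n",
"    return out"
]
},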
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"class MaxPool2d(nn.Module):\n",
" def __init__(self, kernel_size):\n",
" super(MaxPool2d, self).__init__()\n",
" \"\"\"\n",
" An implementation of a max-pooling layer.\n",
"\n",
" Parameters:\n",
" - kernel_size: the size of the window to take a max over\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Define the parameters used in the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" Input:\n",
" - x: Input data of shape (N, C, H, W)\n",
" Output:\n",
" - out: Output data, of shape (N, F, H', W').\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Implement the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" return out"
]
},
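{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of one possible `MaxPool2d` forward pass, assuming `__init__` stores `self.kernel_size` and that the stride equals the kernel size (PyTorch's default for max pooling):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical sketch of MaxPool2d.forward; assumes self.kernel_size was stored in __init__\n",
"def maxpool2d_forward(self, x):\n",
"    k = self.kernel_size\n",
"    # non-overlapping k x k windows: (N, C, H // k, W // k, k, k)\n",
"    patches = x.unfold(2, k, k).unfold(3, k, k)\n",
"    # take the max over the two window dimensions\n",
"    return patches.max(dim=-1)[0].max(dim=-1)[0]"
]
},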
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"class Linear(nn.Module):\n",
" def __init__(self, in_channels, out_channels, bias=True):\n",
" super(Linear, self).__init__()\n",
" \"\"\"\n",
" An implementation of a Linear layer.\n",
"\n",
" Parameters:\n",
" - weight: the learnable weights of the module of shape (in_channels, out_channels).\n",
" - bias: the learnable bias of the module of shape (out_channels).\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Define the parameters used in the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" Input:\n",
" - x: Input data of shape (N, *, H) where * means any number of additional\n",
" dimensions and H = in_channels\n",
" Output:\n",
" - out: Output data of shape (N, *, H') where * means any number of additional\n",
" dimensions and H' = out_channels\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Implement the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" return out"
]
},
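{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of one possible `Linear` layer, storing the weight as `(in_channels, out_channels)` as in the docstring above; the initialisation scale of 0.01 is an arbitrary choice:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical sketch of Linear; the weight layout follows the docstring above\n",
"def linear_init(self, in_channels, out_channels, bias=True):\n",
"    self.weight = nn.Parameter(torch.randn(in_channels, out_channels) * 0.01)\n",
"    self.bias = nn.Parameter(torch.zeros(out_channels)) if bias else None\n",
"\n",
"def linear_forward(self, x):\n",
"    out = x.matmul(self.weight)  # broadcasts over the leading (N, *) dimensions\n",
"    if self.bias is not None:\n",
"        out = out + self.bias\n",
"    return out"
]
},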
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"class BatchNorm2d(nn.Module):\n",
" def __init__(self, num_features, eps=1e-05, momentum=0.1):\n",
" super(BatchNorm2d, self).__init__()\n",
" \"\"\"\n",
" An implementation of a Batch Normalization over a mini-batch of 2D inputs.\n",
"\n",
" The mean and standard-deviation are calculated per-dimension over the\n",
" mini-batches and gamma and beta are learnable parameter vectors of\n",
" size num_features.\n",
"\n",
" Parameters:\n",
" - num_features: C from an expected input of size (N, C, H, W).\n",
" - eps: a value added to the denominator for numerical stability. Default: 1e-5\n",
" - momentum: momentum the value used for the running_mean and running_var\n",
" computation. Default: 0.1\n",
" - gamma: the learnable weights of shape (num_features).\n",
" - beta: the learnable bias of the module of shape (num_features).\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Define the parameters used in the forward pass #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" During training this layer keeps running estimates of its computed mean and\n",
" variance, which are then used for normalization during evaluation.\n",
" Input:\n",
" - x: Input data of shape (N, C, H, W)\n",
" Output:\n",
" - out: Output data of shape (N, C, H, W) (same shape as input)\n",
" \"\"\"\n",
" ########################################################################\n",
" # TODO: Implement the forward pass #\n",
" # (be aware of the difference for training and testing) #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
"\n",
" return x"
]
},
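{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of one possible `BatchNorm2d`: during training it normalises with per-channel batch statistics and updates the running estimates with the given momentum; during evaluation it uses the running estimates instead, as described in the docstring above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical sketch of BatchNorm2d; assumes these run inside an nn.Module subclass\n",
"def batchnorm2d_init(self, num_features, eps=1e-05, momentum=0.1):\n",
"    self.eps, self.momentum = eps, momentum\n",
"    self.gamma = nn.Parameter(torch.ones(num_features))\n",
"    self.beta = nn.Parameter(torch.zeros(num_features))\n",
"    self.register_buffer('running_mean', torch.zeros(num_features))\n",
"    self.register_buffer('running_var', torch.ones(num_features))\n",
"\n",
"def batchnorm2d_forward(self, x):\n",
"    if self.training:\n",
"        # per-channel statistics over the batch and spatial dimensions\n",
"        mean = x.mean(dim=(0, 2, 3))\n",
"        var = x.var(dim=(0, 2, 3), unbiased=False)\n",
"        self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean.detach())\n",
"        self.running_var.mul_(1 - self.momentum).add_(self.momentum * var.detach())\n",
"    else:\n",
"        mean, var = self.running_mean, self.running_var\n",
"    x_hat = (x - mean.view(1, -1, 1, 1)) / torch.sqrt(var.view(1, -1, 1, 1) + self.eps)\n",
"    return self.gamma.view(1, -1, 1, 1) * x_hat + self.beta.view(1, -1, 1, 1)"
]
},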
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this part, you will train a ResNet-18 defined on the CIFAR-10 dataset. Code for training and evaluation are provided. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Your Task"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Train your network to achieve the best possible test set accuracy after a maximum of 10 epochs of training.\n",
"\n",
"2. You can use techniques such as optimal hyper-parameter searching, data pre-processing\n",
"\n",
"3. If necessary, you can also use another optimizer\n",
"\n",
"4. **Answer the following question:**\n",
"Given such a network with a large number of trainable parameters, and a training set of a large number of data, what do you think is the best strategy for hyperparameter searching? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch.nn import Conv2d, MaxPool2d\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we define ResNet-18:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# define resnet building blocks\n",
"\n",
"class ResidualBlock(nn.Module): \n",
" def __init__(self, inchannel, outchannel, stride=1): \n",
" \n",
" super(ResidualBlock, self).__init__() \n",
" \n",
" self.left = nn.Sequential(Conv2d(inchannel, outchannel, kernel_size=3, \n",
" stride=stride, padding=1, bias=False), \n",
" nn.BatchNorm2d(outchannel), \n",
" nn.ReLU(inplace=True), \n",
" Conv2d(outchannel, outchannel, kernel_size=3, \n",
" stride=1, padding=1, bias=False), \n",
" nn.BatchNorm2d(outchannel)) \n",
" \n",
" self.shortcut = nn.Sequential() \n",
" \n",
" if stride != 1 or inchannel != outchannel: \n",
" \n",
" self.shortcut = nn.Sequential(Conv2d(inchannel, outchannel, \n",
" kernel_size=1, stride=stride, \n",
" padding = 0, bias=False), \n",
" nn.BatchNorm2d(outchannel) ) \n",
" \n",
" def forward(self, x): \n",
" \n",
" out = self.left(x) \n",
" \n",
" out += self.shortcut(x) \n",
" \n",
" out = F.relu(out) \n",
" \n",
" return out\n",
"\n",
"\n",
" \n",
" # define resnet\n",
"\n",
"class ResNet(nn.Module):\n",
" \n",
" def __init__(self, ResidualBlock, num_classes = 10):\n",
" \n",
" super(ResNet, self).__init__()\n",
" \n",
" self.inchannel = 64\n",
" self.conv1 = nn.Sequential(Conv2d(3, 64, kernel_size = 3, stride = 1,\n",
" padding = 1, bias = False), \n",
" nn.BatchNorm2d(64), \n",
" nn.ReLU())\n",
" \n",
" self.layer1 = self.make_layer(ResidualBlock, 64, 2, stride = 1)\n",
" self.layer2 = self.make_layer(ResidualBlock, 128, 2, stride = 2)\n",
" self.layer3 = self.make_layer(ResidualBlock, 256, 2, stride = 2)\n",
" self.layer4 = self.make_layer(ResidualBlock, 512, 2, stride = 2)\n",
" self.maxpool = MaxPool2d(4)\n",
" self.fc = nn.Linear(512, num_classes)\n",
" \n",
" \n",
" def make_layer(self, block, channels, num_blocks, stride):\n",
" \n",
" strides = [stride] + [1] * (num_blocks - 1)\n",
" \n",
" layers = []\n",
" \n",
" for stride in strides:\n",
" \n",
" layers.append(block(self.inchannel, channels, stride))\n",
" \n",
" self.inchannel = channels\n",
" \n",
" return nn.Sequential(*layers)\n",
" \n",
" \n",
" def forward(self, x):\n",
" \n",
" x = self.conv1(x)\n",
" \n",
" x = self.layer1(x)\n",
" x = self.layer2(x)\n",
" x = self.layer3(x)\n",
" x = self.layer4(x)\n",
" \n",
" x = self.maxpool(x)\n",
" \n",
" x = x.view(x.size(0), -1)\n",
" \n",
" x = self.fc(x)\n",
" \n",
" return x\n",
" \n",
" \n",
"def ResNet18():\n",
" return ResNet(ResidualBlock)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading dataset\n",
"We will import images from the [torchvision.datasets](https://pytorch.org/docs/stable/torchvision/datasets.html) library <br>\n",
"First, we need to define the alterations (transforms) we want to perform to our images - given that transformations are applied when importing the data. <br>\n",
"Define the following transforms using the torchvision.datasets library -- you can read the transforms documentation [here](https://pytorch.org/docs/stable/torchvision/transforms.html): <br>\n",
"1. Convert images to tensor\n",
"2. Normalize mean and std of images with values:mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch.optim as optim\n",
"from torch.utils.data import DataLoader\n",
"from torch.utils.data import sampler\n",
"\n",
"import torchvision.datasets as dset\n",
"\n",
"import numpy as np\n",
"\n",
"import torchvision.transforms as T\n",
"\n",
"##############################################################\n",
"# YOUR CODE HERE # \n",
"##############################################################\n",
"\n",
"\n",
"\n",
"##############################################################\n",
"# END OF YOUR CODE #\n",
"##############################################################\n",
"\n",
"\n"
]
},
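{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of transforms satisfying the two requirements above; the names `transform_train` and `transform_test` are our own choice and are assumed by the loading sketch below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: the two transforms requested above (the variable names are our own choice)\n",
"transform_train = T.Compose([\n",
"    T.ToTensor(),\n",
"    T.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]),\n",
"])\n",
"transform_test = transform_train  # the same preprocessing is reused for the test set"
]
},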
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now load the dataset using the transform you defined above, with batch_size = 64<br>\n",
"You can check the documentation [here](https://pytorch.org/docs/stable/torchvision/datasets.html).\n",
"Then create data loaders (using DataLoader from torch.utils.data) for the training and test set"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"##############################################################\n",
"# YOUR CODE HERE # \n",
"##############################################################\n",
"\n",
"data_dir = './data'\n",
"\n",
"\n",
"\n",
"##############################################################\n",
"# END OF YOUR CODE # \n",
"##############################################################\n",
"\n"
]
},
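{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the loading step. Later cells reference `cifar10_test`, `loader_train` and `loader_test`, so those names are kept; `transform_train` and `transform_test` come from the sketch above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: CIFAR-10 datasets and loaders with batch_size = 64\n",
"cifar10_train = dset.CIFAR10(data_dir, train=True, download=True, transform=transform_train)\n",
"cifar10_test = dset.CIFAR10(data_dir, train=False, download=True, transform=transform_test)\n",
"\n",
"loader_train = DataLoader(cifar10_train, batch_size=64, shuffle=True)\n",
"loader_test = DataLoader(cifar10_test, batch_size=64, shuffle=False)"
]
},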
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"USE_GPU = True\n",
"dtype = torch.float32 \n",
"\n",
"if USE_GPU and torch.cuda.is_available():\n",
" device = torch.device('cuda')\n",
"else:\n",
" device = torch.device('cpu')\n",
" \n",
" \n",
"\n",
"print_every = 100\n",
"def check_accuracy(loader, model):\n",
" # function for test accuracy on validation and test set\n",
" \n",
" if loader.dataset.train:\n",
" print('Checking accuracy on validation set')\n",
" else:\n",
" print('Checking accuracy on test set') \n",
" num_correct = 0\n",
" num_samples = 0\n",
" model.eval() # set model to evaluation mode\n",
" with torch.no_grad():\n",
" for x, y in loader:\n",
" x = x.to(device=device, dtype=dtype) # move to device\n",
" y = y.to(device=device, dtype=torch.long)\n",
" scores = model(x)\n",
" _, preds = scores.max(1)\n",
" num_correct += (preds == y).sum()\n",
" num_samples += preds.size(0)\n",
" acc = float(num_correct) / num_samples\n",
" print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))\n",
"\n",
" \n",
"\n",
"def train_part(model, optimizer, epochs=1):\n",
" \"\"\"\n",
" Train a model on CIFAR-10 using the PyTorch Module API.\n",
" \n",
" Inputs:\n",
" - model: A PyTorch Module giving the model to train.\n",
" - optimizer: An Optimizer object we will use to train the model\n",
" - epochs: (Optional) A Python integer giving the number of epochs to train for\n",
" \n",
" Returns: Nothing, but prints model accuracies during training.\n",
" \"\"\"\n",
" model = model.to(device=device) # move the model parameters to CPU/GPU\n",
" for e in range(epochs):\n",
" print(len(loader_train))\n",
" for t, (x, y) in enumerate(loader_train):\n",
" model.train() # put model to training mode\n",
" x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU\n",
" y = y.to(device=device, dtype=torch.long)\n",
"\n",
" scores = model(x)\n",
" loss = F.cross_entropy(scores, y)\n",
"\n",
" # Zero out all of the gradients for the variables which the optimizer\n",
" # will update.\n",
" optimizer.zero_grad()\n",
"\n",
" loss.backward()\n",
"\n",
" # Update the parameters of the model using the gradients\n",
" optimizer.step()\n",
"\n",
" if t % print_every == 0:\n",
" print('Epoch: %d, Iteration %d, loss = %.4f' % (e, t, loss.item()))\n",
" #check_accuracy(loader_val, model)\n",
" print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# code for optimising your network performance\n",
"\n",
"##############################################################\n",
"# YOUR CODE HERE # \n",
"##############################################################\n",
"\n",
"\n",
"\n",
"##############################################################\n",
"# END OF YOUR CODE #\n",
"##############################################################\n",
"\n",
"\n",
"# define and train the network\n",
"model = ResNet18()\n",
"optimizer = optim.Adam(model.parameters())\n",
"\n",
"train_part(model, optimizer, epochs = 10)\n",
"\n",
"\n",
"# report test set accuracy\n",
"\n",
"check_accuracy(loader_test, model)\n",
"\n",
"\n",
"# save the model\n",
"torch.save(model.state_dict(), 'model.pt')"
]
},
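{
"cell_type": "markdown",
"metadata": {},
"source": [
"On hyper-parameter search (task 2 and question 4 above): with this many trainable parameters and a large training set, exhaustive grid search is usually too expensive, so a common strategy is random search over a log-uniform range with short runs on a held-out validation split. A minimal sketch for the learning rate, where the range and number of trials are arbitrary choices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: random search over the learning rate, log-uniform in [1e-4, 1e-2]\n",
"for lr in 10 ** np.random.uniform(-4, -2, size=3):\n",
"    print('trying lr = %.5f' % lr)\n",
"    candidate = ResNet18()\n",
"    train_part(candidate, optim.Adam(candidate.parameters(), lr=float(lr)), epochs=1)\n",
"    check_accuracy(loader_test, candidate)  # ideally use a validation split, not the test set"
]
},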
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code provided below will allow you to visualise the feature maps computed by different layers of your network. Run the code (install matplotlib if necessary) and **answer the following questions**: \n",
"\n",
"1. Compare the feature maps from low-level layers to high-level layers, what do you observe? \n",
"\n",
"2. Use the training log, reported test set accuracy and the feature maps, analyse the performance of your network. If you think the performance is sufficiently good, explain why; if not, what might be the problem and how can you improve the performance?\n",
"\n",
"3. What are the other possible ways to analyse the performance of your network?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**YOUR ANSWER FOR PART 3 HERE**\n",
"\n",
"A:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install matplotlib\n",
"\n",
"import matplotlib.pyplot as plt\n",
"\n",
"plt.tight_layout()\n",
"\n",
"\n",
"activation = {}\n",
"def get_activation(name):\n",
" def hook(model, input, output):\n",
" activation[name] = output.detach()\n",
" return hook\n",
"\n",
"vis_labels = ['conv1', 'layer1', 'layer2', 'layer3', 'layer4']\n",
"\n",
"for l in vis_labels:\n",
"\n",
" getattr(model, l).register_forward_hook(get_activation(l))\n",
" \n",
" \n",
"data, _ = cifar10_test[0]\n",
"data = data.unsqueeze_(0).to(device = device, dtype = dtype)\n",
"\n",
"output = model(data)\n",
"\n",
"\n",
"\n",
"for idx, l in enumerate(vis_labels):\n",
"\n",
" act = activation[l].squeeze()\n",
"\n",
" if idx < 2:\n",
" ncols = 8\n",
" else:\n",
" ncols = 32\n",
" \n",
" nrows = act.size(0) // ncols\n",
" \n",
" fig, axarr = plt.subplots(nrows, ncols)\n",
" fig.suptitle(l)\n",
"\n",
"\n",
" for i in range(nrows):\n",
" for j in range(ncols):\n",
" axarr[i, j].imshow(act[i * nrows + j].cpu())\n",
" axarr[i, j].axis('off')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pyg",
"language": "python",
"name": "pyg"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

Binary file not shown.


Binary file not shown.

