{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from sklearn.model_selection import train_test_split\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[name: \"/device:CPU:0\"\n",
      "device_type: \"CPU\"\n",
      "memory_limit: 268435456\n",
      "locality {\n",
      "}\n",
      "incarnation: 11253827645091602098\n",
      ", name: \"/device:XLA_GPU:0\"\n",
      "device_type: \"XLA_GPU\"\n",
      "memory_limit: 17179869184\n",
      "locality {\n",
      "}\n",
      "incarnation: 13866281303298628756\n",
      "physical_device_desc: \"device: XLA_GPU device\"\n",
      ", name: \"/device:XLA_GPU:1\"\n",
      "device_type: \"XLA_GPU\"\n",
      "memory_limit: 17179869184\n",
      "locality {\n",
      "}\n",
      "incarnation: 13397785900969373157\n",
      "physical_device_desc: \"device: XLA_GPU device\"\n",
      ", name: \"/device:XLA_CPU:0\"\n",
      "device_type: \"XLA_CPU\"\n",
      "memory_limit: 17179869184\n",
      "locality {\n",
      "}\n",
      "incarnation: 7127729592685840599\n",
      "physical_device_desc: \"device: XLA_CPU device\"\n",
      "]\n"
     ]
    }
   ],
   "source": [
    "from tensorflow.python.client import device_lib \n",
    "print(device_lib.list_local_devices())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Dataflow\n",
    "\n",
    "Датафлоу - концепция программирования: программа или модель представляется в форме направленного графа (_вычислений_).\n",
    "\n",
    "Такой подход обладает следущими преимуществами: \n",
    "* Простота параллелизации программы: по графу легко понять, какие операции можно выполнять одновременно\n",
    "* Распределенные вычисления (кластеры видеокарт, CPU, TPU)\n",
    "* Компиляция графа: создается быстрые оптимизированный код для вычислений\n",
    "* Граф вычислений - универсальное представление, которое является портируемым между различными языками и платформами\n",
    "\n",
    "### Мы будет говорить об интерфейсе на python\n",
    "\n",
    "### Граф\n",
    "\n",
    "В python представлен классом `tf.Graph`\n",
    "\n",
    "У графа есть следующие \"основные\" составляющие:\n",
    "* структура графа - ребра и узлы (edges и nodes)\n",
    "* коллекции, связанные с графом (подробности далее)\n",
    "\n",
    "### Узлы и ребра\n",
    "\n",
    "* Узлы графа - это операции `tf.Operation`\n",
    "* Ребра графа - значения, представленные наследниками класса `tf.Tensor`\n",
    "\n",
    "\n",
    "#### Добавление значений в граф\n",
    "\n",
    "* Базовый \"кирпичик\" - функция `tf.constant(x)`, или операция, всегда возвращающая x. \n",
    "\n",
    "> Например, операция `tf.constant(13)` создает `tf.Tensor` (ребро) со значением $13$\n",
    "\n",
    "* Другой базовый элемент - `tf.Variable(x)`, создающий узел, в котором хранится _изменяемое_ значение. Это может быть полезно, например, при обучении модели: в переменной будут храниться веса модели. \n",
    "* Над тензорами можно проводить операции, создавая новые узлы.\n",
    "* Для оптимизации нужно вызвать `tf.train.Optimizer.minimize` - и ко всем операциям в графе будут добавлены операции (и связанные с ними тензоры), вычисляющие градиенты.\n",
    "\n",
    "\n",
    "#### Пример\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor(\"Const:0\", shape=(), dtype=int32)\n",
      "Tensor(\"Const_1:0\", shape=(), dtype=int32)\n",
      "Tensor(\"add:0\", shape=(), dtype=int32)\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant(2)  # Создаем узлы графа c константами\n",
    "b = tf.constant(2)  \n",
    "c = a + b  # Складываем значения - создаем новый узел\n",
    "\n",
    "print(a)\n",
    "print(b)\n",
    "print(c)"
   ]
  },
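  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The deferred execution shown above can be sketched in a few lines of plain Python. This is a toy model for intuition only, not tensorflow internals:\n",
    "\n",
    "```python\n",
    "# Toy dataflow graph: building nodes is cheap bookkeeping;\n",
    "# values appear only when a node is explicitly evaluated.\n",
    "class Node:\n",
    "    def __init__(self, op, inputs=()):\n",
    "        self.op = op          # callable producing this node's value\n",
    "        self.inputs = inputs  # upstream nodes (the incoming edges)\n",
    "\n",
    "    def run(self):\n",
    "        # Evaluate the dependencies first, then apply this node's op\n",
    "        args = [node.run() for node in self.inputs]\n",
    "        return self.op(*args)\n",
    "\n",
    "def constant(x):\n",
    "    return Node(lambda: x)\n",
    "\n",
    "def add(a, b):\n",
    "    return Node(lambda u, v: u + v, inputs=(a, b))\n",
    "\n",
    "a = constant(2)\n",
    "b = constant(2)\n",
    "c = add(a, b)   # nothing is computed yet, just like `a + b` above\n",
    "print(c.run())  # evaluating the graph produces 4\n",
    "```"
   ]
  },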
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Создаем сессию и сохраняем результаты \n",
    "\n",
    "> Скоро разберемся с сессиями и tensorboard"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4\n"
     ]
    }
   ],
   "source": [
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)  # logs - имя директории, где будут храниться результаты\n",
    "    print(sess.run(c))  # Получаем результат вычислений в сессии\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "#### Пример графа\n",
    "<img src=\"files/img/simple_graph.png\">\n",
    "\n",
    "#### Обозначения\n",
    "<img src=\"files/img/legend.png\" width=\"400\">\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### tf.Session\n",
    "\n",
    "> \"Просто так\" значения в графе не вычисляются, нужно создавать и запускать сессию, чтобы получить результаты\n",
    "\n",
    "Несколько фактов:\n",
    "* Класс `tf.Session`\n",
    "* Сайт tensorflow сообщает, что \"сессия инкапсулирует окружение, в котором выполняются `tf.Operation` и вычисляются значения `tf.Tensor`\"\n",
    "* Можно активировать eager mode, в котором вычисления осуществляются \"на лету\", тогда `tf.Session` не нужен\n",
    "* В tensorflow 2.0 от `tf.Session` отказались \n",
    "\n",
    "#### Как корректно использовать tf.Session\n",
    "\n",
    "> Не сосвсем корректно:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "a = tf.constant(1)\n",
    "b = tf.constant(2)\n",
    "c = a + b\n",
    "\n",
    "sess = tf.Session()\n",
    "sess.run(c)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Чего не хватает?\n",
    "\n",
    "Сессия может \"захватывать\" ресурсы, после завершения вычислений их надо освободить. Есть два способа:\n",
    "\n",
    "> * `sess.close()` \n",
    "* `with tf.Session() as sess: ...`\n",
    "* Второй способ закрывает сессию автоматически и предпочтительнее\n"
   ]
  },
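  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `with` form is ordinary Python context management. A minimal sketch (a hypothetical `ToySession`, not the real class) shows why the block form cannot leak resources:\n",
    "\n",
    "```python\n",
    "# Why `with` is preferred: __exit__ always runs, even if the\n",
    "# body raises, so the resource cannot be left open.\n",
    "class ToySession:\n",
    "    def __init__(self):\n",
    "        self.closed = False\n",
    "\n",
    "    def run(self, value):\n",
    "        if self.closed:\n",
    "            raise RuntimeError(\"Attempted to use a closed Session.\")\n",
    "        return value\n",
    "\n",
    "    def close(self):\n",
    "        self.closed = True\n",
    "\n",
    "    def __enter__(self):\n",
    "        return self\n",
    "\n",
    "    def __exit__(self, exc_type, exc, tb):\n",
    "        self.close()  # runs no matter how the block exits\n",
    "        return False  # do not swallow exceptions\n",
    "\n",
    "with ToySession() as sess:\n",
    "    result = sess.run(3)\n",
    "\n",
    "print(result)       # 3\n",
    "print(sess.closed)  # True: closed automatically on exit\n",
    "```"
   ]
  },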
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Session is closed\n"
     ]
    }
   ],
   "source": [
    "sess.close()\n",
    "\n",
    "try:\n",
    "    print(sess.run(c))\n",
    "except RuntimeError:\n",
    "    print(\"Session is closed\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n",
      "Session is closed\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant(1)\n",
    "b = tf.constant(2)\n",
    "c = a + b\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    result = sess.run(c)\n",
    "\n",
    "print(result)\n",
    "\n",
    "try:\n",
    "    sess.run(c)\n",
    "except RuntimeError:\n",
    "    print(\"Session is closed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### Скоупы и имена\n",
    "\n",
    "Важная часть работы tensorflow - скоупы (scope) и имена переменных. Используются для удобства работы с графом\n",
    "\n",
    "#### Имена переменных\n",
    "\n",
    "Все операции, создающие новые операции (`tf.Operation`) или новый `tf.Tensor` - могут получить имя: \n",
    "\n",
    "> `zero = tf.constant(0, name='zero') `\n",
    "    \n",
    "Имена не работают в _eager mode_!\n",
    "\n",
    "Повторяющиеся имена tensorflow \"за вас\" делает различимыми:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'zero_1:0'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "zero_1 = tf.constant(0, name='zero')\n",
    "zero_1.name"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    \n",
    "## Scope\n",
    "\n",
    "(\"Рамки\"? Области действия?)\n",
    "\n",
    "> Нужны для группировки переменных и тензоров. В общем, для наведения порядка в том коде, который вы пишите.\n",
    "\n",
    "* Скоупы организованы иерархически, как вложенные директории\n",
    "* Имена в разных скоупах могут повторяться (как имена файлов во вложеннх директориях)\n",
    "\n",
    "_Добавим еще две переменных внутри скоупов к графу_:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "with tf.name_scope('outer_scope'):\n",
    "    zero_outer = tf.constant(0, name='zero')\n",
    "    \n",
    "    with tf.name_scope('inner_scope'):\n",
    "        inner_scope = tf.constant(0, name='zero')    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "zero\n",
      "zero_1\n",
      "outer_scope/zero\n",
      "outer_scope/inner_scope/zero\n"
     ]
    }
   ],
   "source": [
    "# tf.get_default_graph().as_graph_def() - граф, представленный как JSON\n",
    "for node in tf.get_default_graph().as_graph_def().node:\n",
    "    print(node.name)"
   ]
  },
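  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The naming behaviour above (scope prefixes plus automatic `_1` suffixes) can be imitated in plain Python. This is an illustration of the idea, not tensorflow's actual implementation:\n",
    "\n",
    "```python\n",
    "# Toy name registry: scopes act as path prefixes, and repeated\n",
    "# names get a numeric suffix, mimicking tensorflow's behaviour.\n",
    "registry = {}\n",
    "scope_stack = []\n",
    "\n",
    "def unique_name(name):\n",
    "    full = \"/\".join(scope_stack + [name])\n",
    "    count = registry.get(full, 0)\n",
    "    registry[full] = count + 1\n",
    "    return full if count == 0 else f\"{full}_{count}\"\n",
    "\n",
    "print(unique_name(\"zero\"))  # zero\n",
    "print(unique_name(\"zero\"))  # zero_1\n",
    "scope_stack.append(\"outer_scope\")\n",
    "print(unique_name(\"zero\"))  # outer_scope/zero\n",
    "scope_stack.append(\"inner_scope\")\n",
    "print(unique_name(\"zero\"))  # outer_scope/inner_scope/zero\n",
    "```"
   ]
  },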
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "    \n",
    "Разберем выражение по строкам:\n",
    "    \n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Граф по умолчанию:\n",
      " <tensorflow.python.framework.ops.Graph object at 0x7f4ff86bb588>\n",
      "Проверка типа: точно ли это tf.Graph?\n",
      " True\n"
     ]
    }
   ],
   "source": [
    "print(f\"Граф по умолчанию:\\n {tf.get_default_graph()}\")\n",
    "\n",
    "def_graph = tf.get_default_graph()\n",
    "print(f\"Проверка типа: точно ли это tf.Graph?\\n {isinstance(def_graph, tf.Graph)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "    \n",
    "Полезная команда: \n",
    "\n",
    "`tf.reset_default_graph()`\n",
    "\n",
    "* \"Сбрасывает\" все, что есть в графе по умолчанию\n",
    "* Создает новый граф, пустой\n",
    "* Старый граф удаляется, при этом освобождается память. \n",
    "  * Освобождение памяти полезно, особенно при ограниченных ресурсах (например, GPU), и работе в ноутбуках\n",
    "  * Можно \"случайно\" потерять обученную модель, поэтому не стоит делать reset необдуманно\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "for node in tf.get_default_graph().as_graph_def().node:\n",
    "    print(node.name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Граф по умолчанию:\n",
      " <tensorflow.python.framework.ops.Graph object at 0x7f4ff86eabe0>\n",
      "Проверка типа: точно ли это tf.Graph?\n",
      " True\n"
     ]
    }
   ],
   "source": [
    "print(f\"Граф по умолчанию:\\n {tf.get_default_graph()}\")\n",
    "\n",
    "def_graph = tf.get_default_graph()\n",
    "print(f\"Проверка типа: точно ли это tf.Graph?\\n {isinstance(def_graph, tf.Graph)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "    \n",
    "Обратите внимание на то, что адреса графов в памяти до и после `tf.reset_default_graph()` различаются!\n",
    "\n",
    "### Применение tensorboard\n",
    "\n",
    "> Не забываем чистить граф при необходимости!\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "a = tf.constant(2, name='a')\n",
    "b = tf.constant(2, name='b')\n",
    "c = tf.add(a, b, name=\"a_plus_b\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Tensor(\"a:0\", shape=(), dtype=int32)\n",
      "Tensor(\"b:0\", shape=(), dtype=int32)\n",
      "Tensor(\"a_plus_b:0\", shape=(), dtype=int32)\n",
      "a + b = 4\n"
     ]
    }
   ],
   "source": [
    "print(a)\n",
    "print(b)\n",
    "print(c)\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    print(f\"a + b = {sess.run(c)}\")\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "### Граф вычисления a + b\n",
    "\n",
    "<img src=\"files/img/a_plus_b.png\" width=\"400\">\n",
    "\n",
    "### Как посмотреть граф вычислений?\n",
    "\n",
    "Для этого существует tensorboard (устанавливается вместе с tensorflow), который реализует веб-интерфейс для изучения результатов вычислений.\n",
    "\n",
    "* По умолчанию запускается на порте 6006\n",
    "* Результаты экспериментов пишет `tf.summary.FileWriter(dirname, sess.graph)` в директорию `<dirname>`\n",
    "* tensorboard запускается через командную строку. Нужно указывать директорию с логами `FileWriter`:\n",
    "> `tensorboard --logdir=<dirname>`\n",
    "* tensorboard работает с логами, и после `tf.reset_default_graph()` реузльтаты не пропадают\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "## Подробнее о переменных\n",
    "\n",
    "tensorflow - слишком сложный интерфейс для \"просто\" математических операций. Более интересная вещь - переменные, `tf.Variable`\n",
    "\n",
    "\n",
    "Предположим, что мы хотим найти минимум функции $f(x) = 2x^2 + 5x - 9$. Как записать эту функцию на tensorflow?\n",
    "\n",
    "\n",
    "</font>"
   ]
  },
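  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the graph, the optimizer's job can be checked by hand: for $f(x) = 2x^2 + 5x - 9$ the gradient is $f'(x) = 4x + 5$, so plain gradient descent (a pure-Python sketch, no tensorflow) converges to the analytic minimum $x^* = -5/4$:\n",
    "\n",
    "```python\n",
    "# Plain gradient descent on f(x) = 2x^2 + 5x - 9,\n",
    "# using the analytic gradient f'(x) = 4x + 5.\n",
    "def f(x):\n",
    "    return 2 * x**2 + 5 * x - 9\n",
    "\n",
    "def grad_f(x):\n",
    "    return 4 * x + 5\n",
    "\n",
    "x, lr = 0.0, 0.1\n",
    "x -= lr * grad_f(x)  # one step: x = -0.5, f(x) = -11.0\n",
    "print(x, f(x))\n",
    "\n",
    "for _ in range(99):  # 100 steps in total\n",
    "    x -= lr * grad_f(x)\n",
    "print(x, f(x))       # approaches x* = -1.25, f* = -12.125\n",
    "```\n",
    "\n",
    "The graph version below, run with `tf.train.GradientDescentOptimizer(learning_rate=0.1)`, reproduces these numbers.\n"
   ]
  },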
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()\n",
    "a = tf.constant(2., name='a')\n",
    "b = tf.constant(5., name='b')\n",
    "c = tf.constant(-9., name='c')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "x - ?  \n",
    "f(x) - ???\n",
    "\n",
    "> `x` удобно представить с помощью переменной: `x = tf.Variable(<initial-value>, name=<name>)`\n",
    "\n",
    "\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.Variable(0, name='x', dtype='float32')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "metadata": {},
   "outputs": [],
   "source": [
    "f_x = a*x**2 + b * x + c"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<tf.Tensor 'add_1:0' shape=() dtype=float32>"
      ]
     },
     "execution_count": 100,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "f_x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a\n",
      "b\n",
      "c\n",
      "x/initial_value\n",
      "x\n",
      "x/Assign\n",
      "x/read\n",
      "pow/y\n",
      "pow\n",
      "mul\n",
      "mul_1\n",
      "add\n",
      "add_1\n"
     ]
    }
   ],
   "source": [
    "for node in tf.get_default_graph().as_graph_def().node:\n",
    "    print(node.name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "metadata": {},
   "outputs": [],
   "source": [
    "opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n",
    "sgd_operation = opt.minimize(loss=f_x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a\n",
      "b\n",
      "c\n",
      "x/initial_value\n",
      "x\n",
      "x/Assign\n",
      "x/read\n",
      "pow/y\n",
      "pow\n",
      "mul\n",
      "mul_1\n",
      "add\n",
      "add_1\n",
      "gradients/Shape\n",
      "gradients/grad_ys_0\n",
      "gradients/Fill\n",
      "gradients/add_1_grad/tuple/group_deps\n",
      "gradients/add_1_grad/tuple/control_dependency\n",
      "gradients/add_1_grad/tuple/control_dependency_1\n",
      "gradients/add_grad/tuple/group_deps\n",
      "gradients/add_grad/tuple/control_dependency\n",
      "gradients/add_grad/tuple/control_dependency_1\n",
      "gradients/mul_grad/Mul\n",
      "gradients/mul_grad/Mul_1\n",
      "gradients/mul_grad/tuple/group_deps\n",
      "gradients/mul_grad/tuple/control_dependency\n",
      "gradients/mul_grad/tuple/control_dependency_1\n",
      "gradients/mul_1_grad/Mul\n",
      "gradients/mul_1_grad/Mul_1\n",
      "gradients/mul_1_grad/tuple/group_deps\n",
      "gradients/mul_1_grad/tuple/control_dependency\n",
      "gradients/mul_1_grad/tuple/control_dependency_1\n",
      "gradients/pow_grad/Shape\n",
      "gradients/pow_grad/Shape_1\n",
      "gradients/pow_grad/BroadcastGradientArgs\n",
      "gradients/pow_grad/sub/y\n",
      "gradients/pow_grad/sub\n",
      "gradients/pow_grad/Pow\n",
      "gradients/pow_grad/mul\n",
      "gradients/pow_grad/MulNoNan\n",
      "gradients/pow_grad/Sum\n",
      "gradients/pow_grad/Reshape\n",
      "gradients/pow_grad/Greater/y\n",
      "gradients/pow_grad/Greater\n",
      "gradients/pow_grad/ones_like/Shape\n",
      "gradients/pow_grad/ones_like/Const\n",
      "gradients/pow_grad/ones_like\n",
      "gradients/pow_grad/Select\n",
      "gradients/pow_grad/Log\n",
      "gradients/pow_grad/zeros_like\n",
      "gradients/pow_grad/Select_1\n",
      "gradients/pow_grad/mul_1\n",
      "gradients/pow_grad/MulNoNan_1\n",
      "gradients/pow_grad/Sum_1\n",
      "gradients/pow_grad/Reshape_1\n",
      "gradients/pow_grad/tuple/group_deps\n",
      "gradients/pow_grad/tuple/control_dependency\n",
      "gradients/pow_grad/tuple/control_dependency_1\n",
      "gradients/AddN\n",
      "GradientDescent/learning_rate\n",
      "GradientDescent/update_x/ApplyGradientDescent\n",
      "GradientDescent\n"
     ]
    }
   ],
   "source": [
    "for node in tf.get_default_graph().as_graph_def().node:\n",
    "    print(node.name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 167,
   "metadata": {},
   "outputs": [],
   "source": [
    "def reset_and_make_variables_part():\n",
    "    \"\"\"Для демонстрации одним блоком, без оптимизатора\"\"\"\n",
    "    tf.reset_default_graph()\n",
    "    \n",
    "    # with tf.name_scope(name='f_x'):\n",
    "    a = tf.constant(2., name='a')\n",
    "    b = tf.constant(5., name='b')\n",
    "    c = tf.constant(-9., name='c')\n",
    "    x = tf.Variable(0, name='x', dtype='float32')\n",
    "    f_x = a*x**2 + b * x + c\n",
    "    \n",
    "    return x, f_x\n",
    "\n",
    "\n",
    "def reset_and_make_variables_example():\n",
    "    \"\"\"Для демонстрации одним блоком\"\"\"\n",
    "    tf.reset_default_graph()\n",
    "\n",
    "    # with tf.name_scope(name='f_x'):\n",
    "    a = tf.constant(2., name='a')\n",
    "    b = tf.constant(5., name='b')\n",
    "    c = tf.constant(-9., name='c')\n",
    "    x = tf.Variable(0, name='x', dtype='float32')\n",
    "    f_x = a*x**2 + b * x + c\n",
    "    f_x_summary = tf.summary.scalar('f_x_value', f_x)  # Пояснения дальше в разделе про tensorboard\n",
    "    opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n",
    "    sgd_operation = opt.minimize(loss=f_x)\n",
    "    \n",
    "    return x, f_x, sgd_operation, f_x_summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Возможные проблемы и их решение"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Attempting to use uninitialized value x\n",
      "\t [[node x/read (defined at <ipython-input-98-d6c8f8a34f66>:1) ]]\n",
      "\n",
      "Original stack trace for 'x/read':\n",
      "  File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\n",
      "    \"__main__\", mod_spec)\n",
      "  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n",
      "    exec(code, run_globals)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py\", line 16, in <module>\n",
      "    app.launch_new_instance()\n",
      "  File \"/usr/lib/python3/dist-packages/traitlets/config/application.py\", line 658, in launch_instance\n",
      "    app.start()\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py\", line 505, in start\n",
      "    self.io_loop.start()\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/platform/asyncio.py\", line 132, in start\n",
      "    self.asyncio_loop.run_forever()\n",
      "  File \"/usr/lib/python3.6/asyncio/base_events.py\", line 427, in run_forever\n",
      "    self._run_once()\n",
      "  File \"/usr/lib/python3.6/asyncio/base_events.py\", line 1440, in _run_once\n",
      "    handle._run()\n",
      "  File \"/usr/lib/python3.6/asyncio/events.py\", line 145, in _run\n",
      "    self._callback(*self._args)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py\", line 758, in _run_callback\n",
      "    ret = callback()\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py\", line 300, in null_wrapper\n",
      "    return fn(*args, **kwargs)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/gen.py\", line 1233, in inner\n",
      "    self.run()\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/gen.py\", line 1147, in run\n",
      "    yielded = self.gen.send(value)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py\", line 357, in process_one\n",
      "    yield gen.maybe_future(dispatch(*args))\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/gen.py\", line 326, in wrapper\n",
      "    yielded = next(result)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py\", line 267, in dispatch_shell\n",
      "    yield gen.maybe_future(handler(stream, idents, msg))\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/gen.py\", line 326, in wrapper\n",
      "    yielded = next(result)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py\", line 534, in execute_request\n",
      "    user_expressions, allow_stdin,\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/tornado/gen.py\", line 326, in wrapper\n",
      "    yielded = next(result)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py\", line 294, in do_execute\n",
      "    res = shell.run_cell(code, store_history=store_history, silent=silent)\n",
      "  File \"/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py\", line 536, in run_cell\n",
      "    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)\n",
      "  File \"/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py\", line 2718, in run_cell\n",
      "    interactivity=interactivity, compiler=compiler, result=result)\n",
      "  File \"/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py\", line 2822, in run_ast_nodes\n",
      "    if self.run_code(code, result):\n",
      "  File \"/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py\", line 2882, in run_code\n",
      "    exec(code_obj, self.user_global_ns, self.user_ns)\n",
      "  File \"<ipython-input-98-d6c8f8a34f66>\", line 1, in <module>\n",
      "    x = tf.Variable(0, name='x', dtype='float32')\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 259, in __call__\n",
      "    return cls._variable_v1_call(*args, **kwargs)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 220, in _variable_v1_call\n",
      "    shape=shape)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 198, in <lambda>\n",
      "    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py\", line 2511, in default_variable_creator\n",
      "    shape=shape)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 263, in __call__\n",
      "    return super(VariableMetaclass, cls).__call__(*args, **kwargs)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 1568, in __init__\n",
      "    shape=shape)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 1755, in _init_from_args\n",
      "    self._snapshot = array_ops.identity(self._variable, name=\"read\")\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py\", line 180, in wrapper\n",
      "    return target(*args, **kwargs)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py\", line 86, in identity\n",
      "    ret = gen_array_ops.identity(input, name=name)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py\", line 4253, in identity\n",
      "    \"Identity\", input=input, name=name)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py\", line 788, in _apply_op_helper\n",
      "    op_def=op_def)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py\", line 507, in new_func\n",
      "    return func(*args, **kwargs)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py\", line 3616, in create_op\n",
      "    op_def=op_def)\n",
      "  File \"/home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py\", line 2005, in __init__\n",
      "    self._traceback = tf_stack.extract_stack()\n",
      "\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    with tf.Session() as sess:\n",
    "        print(f\"Show x: {sess.run(sgd_operation)}\")\n",
    "except tf.errors.FailedPreconditionError as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 120,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Attempting to use uninitialized value x\n",
      "\t [[{{node _retval_x_0_0}}]]\n"
     ]
    }
   ],
   "source": [
    "x, f_x, sgd_operation = reset_and_make_variables_example()\n",
    "try:\n",
    "    with tf.Session() as sess:\n",
    "        print(f\"Show x: {sess.run(x)}\")\n",
    "except tf.errors.FailedPreconditionError as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "\n",
    "#### Пояснение к ошибке - инициализация переменных\n",
    "В tensorflow переменные необходимо _инициализировать_. К сожалению, проставление _initial value_ не приводит к инициализации автоматически - и эту операцию необходимо проводить явно\n",
    "    \n",
    "</font>"
   ]
  },
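  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two-phase pattern (declare with an initial value, then run the initializer explicitly) can be mimicked in plain Python. `ToyVariable` below is a hypothetical illustration, not the real `tf.Variable`:\n",
    "\n",
    "```python\n",
    "# Toy model of deferred initialization: the initial value is only\n",
    "# recorded at construction time; reading before running the\n",
    "# initializer fails, mirroring FailedPreconditionError.\n",
    "class ToyVariable:\n",
    "    def __init__(self, initial_value):\n",
    "        self.initial_value = initial_value\n",
    "        self.value = None  # not initialized yet\n",
    "\n",
    "    def initializer(self):\n",
    "        self.value = self.initial_value\n",
    "\n",
    "    def read(self):\n",
    "        if self.value is None:\n",
    "            raise RuntimeError(\"Attempting to use uninitialized variable\")\n",
    "        return self.value\n",
    "\n",
    "var = ToyVariable(0.0)\n",
    "try:\n",
    "    var.read()\n",
    "except RuntimeError as e:\n",
    "    print(e)          # Attempting to use uninitialized variable\n",
    "\n",
    "var.initializer()     # the explicit initialization step\n",
    "print(var.read())     # 0.0\n",
    "```"
   ]
  },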
  {
   "cell_type": "code",
   "execution_count": 153,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = 0.0\n"
     ]
    }
   ],
   "source": [
    "x, f_x, sgd_operation, _ = reset_and_make_variables_example()\n",
    "\n",
    "try:\n",
    "    with tf.Session() as sess:\n",
    "        sess.run(x.initializer)  # Здесь x инициализируется\n",
    "        print(f\"Show x: = {sess.run(x)}\")\n",
    "except tf.errors.FailedPreconditionError as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "    \n",
    "### Альтернативная инициализация\n",
    "\n",
    "> Все переменные, требующие инициализации, можно заранее \"привязать\" к одной операции, которая будет вызвана внутри сессии. Это удобнее, если требуется инициализировать _много_ переменных:\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 154,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = 0.0\n"
     ]
    }
   ],
   "source": [
    "x, f_x, sgd_operation, _ = reset_and_make_variables_example()\n",
    "init_op = tf.initialize_all_variables()  # Здесь x инициализируется\n",
    "\n",
    "try:\n",
    "    with tf.Session() as sess:\n",
    "        sess.run(init_op)\n",
    "        print(f\"Show x: = {sess.run(x)}\")\n",
    "except tf.errors.FailedPreconditionError as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Сначала посмотрим на граф функции без градиентов"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 140,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = 0.0\n"
     ]
    }
   ],
   "source": [
    "x, f_x = reset_and_make_variables_part()\n",
    "init_op = tf.initialize_all_variables() \n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    print(f\"Show x: = {sess.run(x)}\")\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Граф\n",
    "<img src='files/img/f_x.png' width=400>\n",
    "\n",
    "### Инициализация x\n",
    "<img src='files/img/x_init.png' width=650>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Добавим градиент"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 162,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = 0.0\n",
      "Show f(x): = -9.0\n",
      "-------------------\n",
      "Show x: = -0.5\n",
      "Show f(x): = -11.0\n"
     ]
    }
   ],
   "source": [
    "x, f_x, sgd_operation, _ = reset_and_make_variables_example()\n",
    "init_op = tf.initialize_all_variables() \n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    print(f\"Show x: = {sess.run(x)}\")\n",
    "    print(f\"Show f(x): = {sess.run(f_x)}\")\n",
    "\n",
    "    sess.run(sgd_operation)\n",
    "    print('-------------------')\n",
    "    print(f\"Show x: = {sess.run(x)}\")\n",
    "    print(f\"Show f(x): = {sess.run(f_x)}\")\n",
    "\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Добавлен скоуп с градиентами\n",
    "<img src='files/img/f_x_with_grad.png' width=650>\n",
    "\n",
    "### Содержимое скоупа\n",
    "<img src='files/img/f_x_gradients.png' width=650>\n",
    "\n",
    "### Дополнительные графы\n",
    "<img src='files/img/f_x_with_grad_aux.png' width=650>\n",
    "\n",
    "### Граф SGD\n",
    "<img src='files/img/sgd.png' width=650>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 163,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = -1.2499998807907104\n",
      "Show f(x): = -12.125\n"
     ]
    }
   ],
   "source": [
    "iterations = 100\n",
    "x, f_x, sgd_operation, f_x_summary = reset_and_make_variables_example()\n",
    "init_op = tf.initialize_all_variables() \n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    for step in range(iterations): \n",
    "        sess.run(sgd_operation)\n",
    "        \n",
    "    print(f\"Show x: = {sess.run(x)}\")\n",
    "    print(f\"Show f(x): = {sess.run(f_x)}\")\n",
    "\n",
    "\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Добавим отслеживание значения функции"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 164,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show x: = -1.2499998807907104\n",
      "Show f(x): = -12.125\n"
     ]
    }
   ],
   "source": [
    "iterations = 100\n",
    "x, f_x, sgd_operation, f_x_summary = reset_and_make_variables_example()\n",
    "init_op = tf.initialize_all_variables() \n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    for step in range(iterations): \n",
    "        sgd, f_x_step =  sess.run([sgd_operation, f_x_summary])\n",
    "        writer.add_summary(f_x_step, step)\n",
    "        \n",
    "    print(f\"Show x: = {sess.run(x)}\")\n",
    "    print(f\"Show f(x): = {sess.run(f_x)}\")\n",
    "\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Градиент (the hard way)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 173,
   "metadata": {},
   "outputs": [],
   "source": [
    "def reset_and_make_gradients():\n",
    "    \"\"\"Для демонстрации одним блоком\"\"\"\n",
    "    tf.reset_default_graph()\n",
    "\n",
    "    # with tf.name_scope(name='f_x'):\n",
    "    a = tf.constant(2., name='a')\n",
    "    b = tf.constant(5., name='b')\n",
    "    c = tf.constant(-9., name='c')\n",
    "    x = tf.Variable(0, name='x', dtype='float32')\n",
    "    f_x = a*x**2 + b * x + c\n",
    "    f_x_summary = tf.summary.scalar('f_x_value', f_x)  # Пояснения дальше в разделе про tensorboard\n",
    "    grads = tf.gradients(f_x, [x])\n",
    "    \n",
    "    return x, f_x, grads, f_x_summary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 181,
   "metadata": {},
   "outputs": [],
   "source": [
    "iterations = 100\n",
    "x, f_x, grad, f_x_summary = reset_and_make_gradients()\n",
    "init_op = tf.initialize_all_variables() "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 182,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show grad f_x w.r.t. x at 0.0: = [5.0]\n"
     ]
    }
   ],
   "source": [
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    \n",
    "    grad_step, x_val, f_x_step =  sess.run([grad, x, f_x_summary])\n",
    "    writer.add_summary(f_x_step, step)\n",
    "        \n",
    "    print(f\"Show grad f_x w.r.t. x at {x_val}: = {grad_step}\")\n",
    "    writer.close()"
   ]
  },
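  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (plain Python, not part of the tensorflow graph): for $f(x) = 2x^2 + 5x - 9$ the analytic derivative is $f'(x) = 4x + 5$, so at $x = 0$ the gradient is $5$, matching the `[5.0]` printed above. A central finite difference gives the same value:\n",
    "\n",
    "```python\n",
    "def f(x):\n",
    "    return 2 * x**2 + 5 * x - 9\n",
    "\n",
    "def numeric_grad(f, x, eps=1e-5):\n",
    "    # central finite difference: (f(x + eps) - f(x - eps)) / (2 * eps)\n",
    "    return (f(x + eps) - f(x - eps)) / (2 * eps)\n",
    "\n",
    "print(numeric_grad(f, 0.0))  # close to 5.0\n",
    "```"
   ]
  },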
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"files/img/f_x_just_grads.png\" width=650>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## \"Свой\" градиентный спуск\n",
    "\n",
    "**Точнее, градиентный спуск без стандартного оптимизатора SGD**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 197,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Show grad f_x w.r.t. x at 0.0: = [5.0]\n",
      "Show grad f_x w.r.t. x at -0.5: = [3.0]\n",
      "Show grad f_x w.r.t. x at -0.800000011920929: = [1.8]\n",
      "Show grad f_x w.r.t. x at -0.9800000190734863: = [1.0799999]\n",
      "Show grad f_x w.r.t. x at -1.0880000591278076: = [0.64799976]\n",
      "Show grad f_x w.r.t. x at -1.1528000831604004: = [0.38879967]\n",
      "Show grad f_x w.r.t. x at -1.1916800737380981: = [0.2332797]\n",
      "Show grad f_x w.r.t. x at -1.215008020401001: = [0.13996792]\n",
      "Show grad f_x w.r.t. x at -1.2290048599243164: = [0.08398056]\n",
      "Show grad f_x w.r.t. x at -1.2374029159545898: = [0.050388336]\n",
      "Show grad f_x w.r.t. x at -1.2424417734146118: = [0.030232906]\n",
      "Show grad f_x w.r.t. x at -1.2454650402069092: = [0.01813984]\n",
      "Show grad f_x w.r.t. x at -1.2472790479660034: = [0.010883808]\n",
      "Show grad f_x w.r.t. x at -1.248367428779602: = [0.006530285]\n",
      "Show grad f_x w.r.t. x at -1.2490204572677612: = [0.003918171]\n",
      "Show grad f_x w.r.t. x at -1.2494122982025146: = [0.0023508072]\n",
      "Show grad f_x w.r.t. x at -1.2496473789215088: = [0.0014104843]\n",
      "Show grad f_x w.r.t. x at -1.2497884035110474: = [0.00084638596]\n",
      "Show grad f_x w.r.t. x at -1.2498730421066284: = [0.0005078316]\n",
      "Show grad f_x w.r.t. x at -1.249923825263977: = [0.00030469894]\n",
      "Show grad f_x w.r.t. x at -1.249954342842102: = [0.00018262863]\n",
      "Show grad f_x w.r.t. x at -1.2499725818634033: = [0.00010967255]\n",
      "Show grad f_x w.r.t. x at -1.249983549118042: = [6.580353e-05]\n",
      "Show grad f_x w.r.t. x at -1.2499901056289673: = [3.9577484e-05]\n",
      "Show grad f_x w.r.t. x at -1.2499940395355225: = [2.3841858e-05]\n",
      "Show grad f_x w.r.t. x at -1.2499964237213135: = [1.4305115e-05]\n",
      "Show grad f_x w.r.t. x at -1.249997854232788: = [8.583069e-06]\n",
      "Show grad f_x w.r.t. x at -1.249998688697815: = [5.2452087e-06]\n",
      "Show grad f_x w.r.t. x at -1.2499991655349731: = [3.33786e-06]\n",
      "Show grad f_x w.r.t. x at -1.2499995231628418: = [1.9073486e-06]\n",
      "Show grad f_x w.r.t. x at -1.249999761581421: = [9.536743e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n",
      "Show grad f_x w.r.t. x at -1.2499998807907104: = [4.7683716e-07]\n"
     ]
    }
   ],
   "source": [
    "x, f_x, grad, f_x_summary = reset_and_make_gradients()\n",
    "init_op = tf.initialize_all_variables() \n",
    "\n",
    "iterations = 50\n",
    "learning_rate = 0.1\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    for step in range(iterations):\n",
    "        grad_val, x_val, f_x_step =  sess.run([grad, x, f_x_summary])  # [1]\n",
    "        grad_step = learning_rate * grad_val[0]  # [2]\n",
    "        update = tf.assign(x, x_val - grad_step)  # [3]\n",
    "        sess.run(update)  # [4]\n",
    "        \n",
    "        print(f\"Show grad f_x w.r.t. x at {x_val}: = {grad_val}\")\n",
    "        \n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "Действия по пунктам:\n",
    "\n",
    "* [1] вычисление значений градиента, $x$ и значения $f(x)$\n",
    "* [2] расчет шага градиента с учетом learning rate'a\n",
    "* [3] создание операции обновления $x$\n",
    "* [4] обновление $x$ при помощи `sess.run()`\n",
    "    \n",
    "\n",
    "</font>"
   ]
  },
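  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, the same update rule sketched in plain Python (no graph, no session): since $f'(x) = 4x + 5$, repeatedly stepping against the gradient converges to the minimum at $x = -1.25$, just like the tensorflow loop above:\n",
    "\n",
    "```python\n",
    "x = 0.0\n",
    "learning_rate = 0.1\n",
    "\n",
    "for _ in range(50):\n",
    "    grad = 4 * x + 5              # analytic derivative of f(x) = 2x**2 + 5x - 9\n",
    "    x = x - learning_rate * grad  # the same update the tf.assign op performs\n",
    "\n",
    "print(x)  # approaches -1.25\n",
    "```"
   ]
  },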
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-- предполагаю, что примерно здесь закончится первый час --"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "# Логистическая регрессия в tensorflow\n",
    "\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 214,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()\n",
    "gl_norm_initializer = tf.glorot_normal_initializer()\n",
    "\n",
    "input_shape = 4  # [1]\n",
    "output_shape = 1  # [2]\n",
    "\n",
    "with tf.name_scope('model'):\n",
    "    weights = tf.Variable(gl_norm_initializer((input_shape, 1)), name='weights')\n",
    "    bias = tf.Variable(gl_norm_initializer((output_shape, 1)))    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "    \n",
    "### Как \"поместить\" данные внутрь модели?\n",
    "\n",
    "> Для \"размещения\" данных в tensorflow есть (фабрика?) `tf.placeholder`\n",
    "\n",
    "</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 216,
   "metadata": {},
   "outputs": [],
   "source": [
    "# tf.reset_default_graph()\n",
    "# gl_norm_initializer = tf.glorot_normal_initializer()\n",
    "\n",
    "# input_shape = 4  # [1]\n",
    "# output_shape = 1\n",
    "\n",
    "\n",
    "# with tf.name_scope('communications'):\n",
    "#     data = tf.placeholder(dtype=tf.float32, shape=[None, input_shape])  # [2]\n",
    "#     target = tf.placeholder(dtype=tf.float32, shape=[None, output_shape])\n",
    "    \n",
    "    \n",
    "with tf.name_scope('model'):\n",
    "    weights = tf.Variable(gl_norm_initializer((input_shape, 1)), name='weights')\n",
    "    bias = tf.Variable(gl_norm_initializer((output_shape, 1)))    \n",
    "    model = tf.matmul(data, weights) + bias  # [3]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<font size=\"4\">\n",
    "\n",
    "Действия по пунктам:\n",
    "\n",
    "* [1] переменные, в которых хранятся размерностит модели\n",
    "* [2] создание плейсхолдера; `None` - неопределенный (изменяемый) размер\n",
    "* [3] создание модели\n",
    "* Плейсхолдеры не нужно инициализировать; но в них нужно передавать данные (feed)\n",
    "\n",
    "\n",
    "#### Оптимизация модели\n",
    "\n",
    "Для оптимизации модели можно написать собственные формулы - взяв за основу \n",
    "\n",
    "$L(\\hat{y}, y) = - \\sum\\limits_{i=1}^{n} \\hat{y} \\cdot \\log{y} + (1 - \\hat{y}) \\log{(1 - y)}$\n",
    "\n",
    "проще воспользоваться набором готовых:\n",
    "   \n",
    "`loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model, labels=target))`\n",
    "\n",
    "* `tf.reduce_mean` - вычисление среднего значения тензора\n",
    "* `tf.nn.sigmoid_cross_entropy_with_logits` - бинарная кросс-энтропия, применяющая сигмоиду к логитам ($wx + b$)\n",
    "</font>"
   ]
  },
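  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small NumPy illustration of what `sigmoid_cross_entropy_with_logits` followed by `reduce_mean` computes (a naive reimplementation for clarity; the real tf op computes the same value in a numerically stable way):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "def mean_bce_with_logits(logits, labels):\n",
    "    # binary cross-entropy of sigmoid(logits), averaged over the batch\n",
    "    p = sigmoid(logits)\n",
    "    return np.mean(-(labels * np.log(p) + (1 - labels) * np.log(1 - p)))\n",
    "\n",
    "print(mean_bce_with_logits(np.array([0.0]), np.array([1.0])))  # log(2), about 0.693\n",
    "```"
   ]
  },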
  {
   "cell_type": "code",
   "execution_count": 275,
   "metadata": {},
   "outputs": [],
   "source": [
    "tf.reset_default_graph()\n",
    "gl_norm_initializer = tf.glorot_normal_initializer()\n",
    "\n",
    "input_shape = 784\n",
    "output_shape = 1\n",
    "learning_rate = 0.03  # Добавили learning rate\n",
    "\n",
    "\n",
    "with tf.name_scope('communications'):\n",
    "    data = tf.placeholder(dtype=tf.float32, shape=[None, input_shape])\n",
    "    target = tf.placeholder(dtype=tf.float32, shape=[None, output_shape])\n",
    "    \n",
    "    \n",
    "with tf.name_scope('model'):\n",
    "    weights = tf.Variable(gl_norm_initializer((input_shape, 1)), name='weights')\n",
    "    bias = tf.Variable(gl_norm_initializer((output_shape, 1)))    \n",
    "    model = tf.matmul(data, weights) + bias \n",
    "    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model, labels=target))\n",
    "    opt = tf.train.GradientDescentOptimizer(learning_rate)  # [1]\n",
    "    goal = opt.minimize(loss)  # [2]\n",
    "    \n",
    "with tf.name_scope('evaluate'):\n",
    "    prediction = tf.round(tf.sigmoid(model))  # [3]\n",
    "    accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, target),  # [4]\n",
    "                              dtype=tf.float32), \n",
    "                             )\n",
    "    \n",
    "init_op = tf.initialize_all_variables() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Действия по пунктам:\n",
    "\n",
    "* [1] создание оптимизатора\n",
    "* [2] добавление лосса в функции, которые необходимо минимизировать\n",
    "* [3] операция вычисления значений (т.е. вероятностей в данном случае)\n",
    "* [4] расчет точности модели: подсчет количества совпадений прогноза и целевого значения\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "## Подготовка данных и запуск\n",
    "\n",
    "> tensorflow не умеет \"удобно\" подготавливать данные. Есть следующие подходы:\n",
    "\n",
    "* Подготовка данных в pandas / numpy / другое\n",
    "* tensorflow_transform, apache beam\n",
    "\n"
   ]
  },
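  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `evaluate` scope can likewise be mirrored in NumPy (a sketch with made-up numbers): `round(sigmoid(logits))` yields hard 0/1 predictions, and the mean of the matches is the accuracy:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def accuracy(logits, targets):\n",
    "    # round(sigmoid(logits)) -> hard 0/1 predictions, then fraction of matches\n",
    "    preds = np.round(1.0 / (1.0 + np.exp(-logits)))\n",
    "    return np.mean(preds == targets)\n",
    "\n",
    "print(accuracy(np.array([2.0, -1.0, 0.5, -3.0]),\n",
    "               np.array([1.0, 0.0, 0.0, 0.0])))  # 3 of 4 match -> 0.75\n",
    "```"
   ]
  },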
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Минимальная подготовка данных"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tensorflow.keras.datasets import mnist\n",
    "\n",
    "\n",
    "def prepare_data(l1=1, l2=7):\n",
    "    (train_X, train_y), (test_X, test_y) = mnist.load_data()\n",
    "\n",
    "    train_X = train_X[(train_y == l1) | (train_y == l2)]\n",
    "    train_y = train_y[(train_y == l1) | (train_y == l2)]\n",
    "\n",
    "    test_X = test_X[(test_y == l1) | (test_y == l2)]\n",
    "    test_y = test_y[(test_y == l1) | (test_y == l2)]\n",
    "\n",
    "    train_X = (train_X / (train_X.max() / 2) - 1).reshape((-1, 784))\n",
    "    test_X = (test_X / (test_X.max() / 2) - 1).reshape((-1, 784))\n",
    "    \n",
    "    train_y = np.array([0 if i == l1 else 1 for i in train_y])\n",
    "    test_y = np.array([0 if i == l1 else 1 for i in test_y])\n",
    "    \n",
    "    return train_X, train_y, test_X, test_y\n",
    "\n",
    "\n",
    "def prepare_data_for_nn():\n",
    "    (train_X, train_y), (test_X, test_y) = mnist.load_data()\n",
    "\n",
    "    train_X = (train_X / (train_X.max() / 2) - 1).reshape((-1, 784))\n",
    "    test_X = (test_X / (test_X.max() / 2) - 1).reshape((-1, 784))\n",
    "\n",
    "    return train_X, train_y, test_X, test_y"
   ]
  },
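  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The scaling step `x / (x.max() / 2) - 1` maps pixel intensities from $[0, 255]$ to $[-1, 1]$, which can be verified on a toy array:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "pixels = np.array([0.0, 127.5, 255.0])\n",
    "scaled = pixels / (pixels.max() / 2) - 1  # [0, 255] -> [-1, 1]\n",
    "print(scaled)  # [-1.  0.  1.]\n",
    "```"
   ]
  },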
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Слои\n",
    "\n",
    "## Полносвязный слой\n",
    "\n",
    "> В tensorflow 1.13 было несколько вариантов API для работы со слоями. \n",
    "\n",
    "* `tf.layers`\n",
    "* `tf.keras.layers`\n",
    "\n",
    "В версии 2.0 осталось только keras API, но для знакомства и возможности работы с legacy-кодом посмотрим на модуль `tf.layers`. Однако новый код рекомендую писать на tf 2.0.\n",
    "\n",
    "## tf.layers API\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING: Logging before flag parsing goes to stderr.\n",
      "W0619 15:15:30.971963 139668503230272 deprecation.py:506] From /home/ml.stepanov/.local/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1288: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Call initializer instance with the dtype argument instead of passing it to the constructor\n",
      "W0619 15:15:30.985937 139668503230272 deprecation.py:323] From /home/ml.stepanov/.local/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
      "W0619 15:15:31.039169 139668503230272 deprecation.py:323] From /home/ml.stepanov/.local/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:193: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n",
      "Instructions for updating:\n",
      "Use `tf.global_variables_initializer` instead.\n"
     ]
    }
   ],
   "source": [
    "tf.reset_default_graph()\n",
    "gl_norm_initializer = tf.glorot_normal_initializer()\n",
    "\n",
    "input_shape = 784\n",
    "output_shape = 1\n",
    "learning_rate = 0.03  # Добавили learning rate\n",
    "\n",
    "\n",
    "with tf.name_scope('communications'):\n",
    "    data = tf.placeholder(dtype=tf.float32, shape=[None, input_shape])\n",
    "    target = tf.placeholder(dtype=tf.float32, shape=[None, output_shape])\n",
    "    \n",
    "    \n",
    "with tf.name_scope('model'):\n",
    "    weights = tf.Variable(gl_norm_initializer((input_shape, 1)), name='weights')\n",
    "    bias = tf.Variable(gl_norm_initializer((output_shape, 1)))    \n",
    "    model = tf.matmul(data, weights) + bias \n",
    "    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model, labels=target))\n",
    "    opt = tf.train.GradientDescentOptimizer(learning_rate)  # [1]\n",
    "    goal = opt.minimize(loss)  # [2]\n",
    "    \n",
    "with tf.name_scope('evaluate'):\n",
    "    prediction = tf.round(tf.sigmoid(model))  # [3]\n",
    "    accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, target),  # [4]\n",
    "                              dtype=tf.float32), \n",
    "                             )\n",
    "    \n",
    "init_op = tf.initialize_all_variables() "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch:   10 loss: 0.114859 train_acc: 0.962559 test_acc: 0.955617\n",
      "epoch:   20 loss: 0.125860 train_acc: 0.974860 test_acc: 0.969024\n",
      "epoch:   30 loss: 0.073102 train_acc: 0.981010 test_acc: 0.976422\n",
      "epoch:   40 loss: 0.044092 train_acc: 0.976090 test_acc: 0.969949\n",
      "epoch:   50 loss: 0.034323 train_acc: 0.977935 test_acc: 0.973185\n",
      "epoch:   60 loss: 0.060688 train_acc: 0.983394 test_acc: 0.978271\n",
      "epoch:   70 loss: 0.027011 train_acc: 0.980856 test_acc: 0.975497\n",
      "epoch:   80 loss: 0.040022 train_acc: 0.983086 test_acc: 0.977809\n",
      "epoch:   90 loss: 0.065121 train_acc: 0.982702 test_acc: 0.976884\n",
      "epoch:  100 loss: 0.070032 train_acc: 0.987007 test_acc: 0.984281\n",
      "epoch:  110 loss: 0.018496 train_acc: 0.983855 test_acc: 0.978271\n",
      "epoch:  120 loss: 0.030391 train_acc: 0.986776 test_acc: 0.982894\n",
      "epoch:  130 loss: 0.016966 train_acc: 0.986930 test_acc: 0.983356\n",
      "epoch:  140 loss: 0.019036 train_acc: 0.986392 test_acc: 0.982894\n",
      "epoch:  150 loss: 0.040891 train_acc: 0.984239 test_acc: 0.978733\n",
      "epoch:  160 loss: 0.074102 train_acc: 0.987468 test_acc: 0.984743\n",
      "epoch:  170 loss: 0.026878 train_acc: 0.986161 test_acc: 0.981045\n",
      "epoch:  180 loss: 0.093025 train_acc: 0.986238 test_acc: 0.980120\n",
      "epoch:  190 loss: 0.014626 train_acc: 0.987084 test_acc: 0.982894\n",
      "epoch:  200 loss: 0.016862 train_acc: 0.986238 test_acc: 0.980120\n",
      "epoch:  210 loss: 0.145745 train_acc: 0.989160 test_acc: 0.989367\n",
      "epoch:  220 loss: 0.027593 train_acc: 0.986699 test_acc: 0.982432\n",
      "epoch:  230 loss: 0.016292 train_acc: 0.987545 test_acc: 0.984743\n",
      "epoch:  240 loss: 0.084869 train_acc: 0.990313 test_acc: 0.988442\n",
      "epoch:  250 loss: 0.126686 train_acc: 0.990467 test_acc: 0.988442\n",
      "epoch:  260 loss: 0.010623 train_acc: 0.988775 test_acc: 0.986130\n",
      "epoch:  270 loss: 0.064016 train_acc: 0.986084 test_acc: 0.980120\n",
      "epoch:  280 loss: 0.035971 train_acc: 0.991159 test_acc: 0.989829\n",
      "epoch:  290 loss: 0.017589 train_acc: 0.989621 test_acc: 0.987517\n",
      "epoch:  300 loss: 0.024834 train_acc: 0.988545 test_acc: 0.985206\n"
     ]
    }
   ],
   "source": [
    "train_X, train_y, test_X, test_y = prepare_data()\n",
    "\n",
    "batch_size = 32\n",
    "iter_num = 300\n",
    "\n",
    "loss_trace, train_acc, test_acc = [], [], []\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    writer = tf.summary.FileWriter('logs', sess.graph)\n",
    "    sess.run(init_op)\n",
    "    \n",
    "    for epoch in range(iter_num):\n",
    "\n",
    "        batch_index = np.random.choice(len(train_X), size=batch_size)\n",
    "        batch_train_X = train_X[batch_index]\n",
    "        batch_train_y = np.matrix(train_y[batch_index]).T\n",
    "        \n",
    "        sess.run(goal, \n",
    "                 feed_dict={data: batch_train_X,\n",
    "                            target: batch_train_y,\n",
    "                           },\n",
    "                )\n",
    "        temp_loss = sess.run(loss, \n",
    "                             feed_dict={data: batch_train_X, \n",
    "                                        target: batch_train_y,\n",
    "                                       },\n",
    "                            )\n",
    "        \n",
    "\n",
    "        \n",
    "        loss_trace.append(temp_loss)\n",
    "\n",
    "\n",
    "        if (epoch + 1) % 10 == 0:\n",
    "           \n",
    "            temp_train_acc = sess.run(accuracy,\n",
    "                                      feed_dict={data: train_X, \n",
    "                                                 target: np.matrix(train_y).T,\n",
    "                                                },\n",
    "                                     )\n",
    "            temp_test_acc = sess.run(accuracy, \n",
    "                                     feed_dict={data: test_X, \n",
    "                                                target: np.matrix(test_y).T,\n",
    "                                               },\n",
    "                                    )\n",
    "            \n",
    "            train_acc.append(temp_train_acc)\n",
    "            test_acc.append(temp_test_acc)\n",
    "            \n",
    "            print('epoch: {:4d} loss: {:5f} train_acc: {:5f} test_acc: {:5f}'\n",
    "                  .format(epoch + 1, temp_loss, temp_train_acc, temp_test_acc))\n",
    "            \n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Keras API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow.keras as keras\n",
    "from tensorflow.keras import layers\n",
    "from tqdm import tqdm\n",
    "\n",
    "\n",
    "# Get the model.\n",
    "\n",
    "tf.reset_default_graph()\n",
    "\n",
    "\n",
    "train_X, train_y, test_X, test_y = prepare_data_for_nn()\n",
    "\n",
    "\n",
    "input_shape = 784\n",
    "hidden_shape = 128\n",
    "\n",
    "inputs = keras.Input(shape=(input_shape,), name='digits')\n",
    "x = layers.Dense(hidden_shape, activation='relu', name='dense_1')(inputs)\n",
    "outputs = layers.Dense(10, activation='softmax', name='predictions')(x)\n",
    "model = keras.Model(inputs=inputs, outputs=outputs)\n",
    "\n",
    "# Instantiate an optimizer.\n",
    "optimizer = keras.optimizers.SGD(learning_rate=1e-3)\n",
    "# Instantiate a loss function.\n",
    "loss_fn = keras.losses.SparseCategoricalCrossentropy()\n",
    "val_acc_metric = keras.metrics.SparseCategoricalAccuracy()\n",
    "\n",
    "\n",
    "batch_size = 256\n",
    "train_dataset = tf.data.Dataset.from_tensor_slices((train_X, train_y))\n",
    "train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)\n",
    "\n",
    "val_dataset = tf.data.Dataset.from_tensor_slices((test_X, test_y))\n",
    "val_dataset = val_dataset.batch(batch_size)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.compile(optimizer=keras.optimizers.SGD(),\n",
    "              loss=keras.losses.SparseCategoricalCrossentropy(),\n",
    "              metrics=[keras.metrics.SparseCategoricalAccuracy()]\n",
    "             )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train on 60000 samples, validate on 10000 samples\n",
      "Epoch 1/10\n",
      "60000/60000 [==============================] - 2s 31us/sample - loss: 0.2770 - sparse_categorical_accuracy: 0.9190 - val_loss: 0.2337 - val_sparse_categorical_accuracy: 0.9313\n",
      "Epoch 2/10\n",
      "60000/60000 [==============================] - 2s 28us/sample - loss: 0.2291 - sparse_categorical_accuracy: 0.9330 - val_loss: 0.2024 - val_sparse_categorical_accuracy: 0.9406\n",
      "Epoch 3/10\n",
      "60000/60000 [==============================] - 2s 29us/sample - loss: 0.1955 - sparse_categorical_accuracy: 0.9435 - val_loss: 0.1800 - val_sparse_categorical_accuracy: 0.9480\n",
      "Epoch 4/10\n",
      "60000/60000 [==============================] - 2s 29us/sample - loss: 0.1710 - sparse_categorical_accuracy: 0.9510 - val_loss: 0.1648 - val_sparse_categorical_accuracy: 0.9512\n",
      "Epoch 5/10\n",
      "60000/60000 [==============================] - 2s 29us/sample - loss: 0.1522 - sparse_categorical_accuracy: 0.9564 - val_loss: 0.1507 - val_sparse_categorical_accuracy: 0.9552\n",
      "Epoch 6/10\n",
      "60000/60000 [==============================] - 2s 27us/sample - loss: 0.1380 - sparse_categorical_accuracy: 0.9606 - val_loss: 0.1391 - val_sparse_categorical_accuracy: 0.9608\n",
      "Epoch 7/10\n",
      "60000/60000 [==============================] - 2s 28us/sample - loss: 0.1260 - sparse_categorical_accuracy: 0.9643 - val_loss: 0.1276 - val_sparse_categorical_accuracy: 0.9622\n",
      "Epoch 8/10\n",
      "60000/60000 [==============================] - 2s 28us/sample - loss: 0.1163 - sparse_categorical_accuracy: 0.9676 - val_loss: 0.1229 - val_sparse_categorical_accuracy: 0.9637\n",
      "Epoch 9/10\n",
      "60000/60000 [==============================] - 2s 28us/sample - loss: 0.1080 - sparse_categorical_accuracy: 0.9696 - val_loss: 0.1146 - val_sparse_categorical_accuracy: 0.9669\n",
      "Epoch 10/10\n",
      "60000/60000 [==============================] - 2s 28us/sample - loss: 0.1006 - sparse_categorical_accuracy: 0.9715 - val_loss: 0.1084 - val_sparse_categorical_accuracy: 0.9673\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.keras.callbacks.History at 0x7f022d2205f8>"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.fit(train_X, train_y, validation_data=(test_X, test_y), epochs=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Создание новых слоев для keras при помощи tensorflow\n",
    "\n",
    "> Наверное, одно из самых важных применений tensorflow\n",
    "\n",
    "### Простой пример\n",
    "\n",
    "([И ссылка на туториал](https://keras.io/layers/writing-your-own-keras-layers/))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Linear(layers.Layer):\n",
    "\n",
    "    def __init__(self, units=32, input_dim=32):\n",
    "        super(Linear, self).__init__()\n",
    "        self.w = self.add_weight(shape=(input_dim, units),\n",
    "                                 initializer='random_normal',\n",
    "                                 trainable=True)\n",
    "        self.b = self.add_weight(shape=(units,),\n",
    "                                 initializer='zeros',\n",
    "                                 trainable=True)\n",
    "\n",
    "    def call(self, inputs):\n",
    "        return tf.matmul(inputs, self.w) + self.b"
   ]
  },
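  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What `call()` computes is an affine map; here is the same computation in NumPy (with arbitrary shapes chosen for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "input_dim, units = 3, 4\n",
    "w = rng.normal(size=(input_dim, units))   # plays the role of self.w\n",
    "b = np.zeros(units)                       # plays the role of self.b\n",
    "\n",
    "inputs = rng.normal(size=(2, input_dim))  # a batch of 2 examples\n",
    "outputs = inputs @ w + b                  # what call() returns\n",
    "print(outputs.shape)  # (2, 4)\n",
    "```"
   ]
  },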
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Eager mode\n",
    "\n",
    "> Что, если работатьс графом и сессией неудобно и не нужно? Есть *eager mode*!\n",
    "\n",
    "[Официальный гайд](https://www.tensorflow.org/guide/eager)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "tf.enable_eager_execution()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Проверка статуса:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tf.executing_eagerly()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Computations\n",
    "\n",
    "> Now there is no need to create a session to evaluate values: \"all\" you have to do is call the `numpy()` method"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4\n"
     ]
    }
   ],
   "source": [
    "a = tf.constant(2)\n",
    "b = tf.constant(2)\n",
    "\n",
    "c = (a * b).numpy()\n",
    "print(c)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Computing gradients\n",
    "\n",
    "> In this mode, gradients are computed with the `tf.GradientTape` class\n",
    "\n",
    "**Key facts about gradient tape**\n",
    "\n",
    "* It records the history of operations during the forward pass and reuses it to compute gradients\n",
    "* A new gradient tape must be created for each computation"
   ]
  },
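  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A short sketch of the single-use behavior (assuming eager execution is enabled as above): by default the tape releases its resources after one `gradient()` call, but passing `persistent=True` lets you query it several times:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = tf.Variable(3.0)\n",
    "with tf.GradientTape(persistent=True) as tape:\n",
    "    y = x * x  # dy/dx = 2x\n",
    "    z = y * y  # dz/dx = 4x^3\n",
    "\n",
    "print(tape.gradient(y, x).numpy())  # 6.0\n",
    "print(tape.gradient(z, x).numpy())  # 108.0\n",
    "del tape  # release the tape's resources explicitly"
   ]
  },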
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tf.Tensor([[2.]], shape=(1, 1), dtype=float32)\n"
     ]
    }
   ],
   "source": [
    "w = tf.Variable([[1.0]])\n",
    "with tf.GradientTape() as tape:\n",
    "    loss = w * w\n",
    "\n",
    "grad = tape.gradient(loss, w)\n",
    "print(grad) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Optimization with GradientTape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_iter = 1000\n",
    "learning_rate = 0.1\n",
    "\n",
    "w = tf.Variable([[2.0]])\n",
    "optimizer = tf.train.AdamOptimizer(learning_rate)  # [1]\n",
    "\n",
    "for i in range(n_iter):\n",
    "    with tf.GradientTape() as tape:  # [2]\n",
    "        loss = 2 * w * w + w\n",
    "\n",
    "    grad = tape.gradient(loss, w)  # [3]\n",
    "\n",
    "    optimizer.apply_gradients([(grad, w)])  # [4]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[-0.25]], dtype=float32)"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "w.numpy()"
   ]
  },
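  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sanity check: the loss $f(w) = 2w^2 + w$ has derivative $f'(w) = 4w + 1$, which vanishes at $w = -1/4$, so the value $-0.25$ above is exactly the analytic minimum."
   ]
  },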
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* [1] create the optimizer\n",
    "* [2] create the tape\n",
    "* [3] compute the gradients (this consumes the tape; a new one must be created on the next iteration)\n",
    "* [4] the argument must be an iterable of (gradient, tensor) pairs"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
