{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5d0sNwvpBMJx"
      },
      "source": [
        "# <center>  🦙 Fine-Tuning Large Language Models 🎛</center>\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aFOU3SKaBMJ2"
      },
      "source": [
        "## 🧑‍🎓 In this lesson:\n",
        "<img src='https://github.com/a-milenkin/LLM_practical_course/blob/main/images/finetune_vs_rag.webp?raw=1' align=\"right\" width=\"450\" height=\"400\">\n",
        "\n",
        "* 🎲 Figure out why and when to use fine-tuning\n",
        "* [🤹‍♀️ Learn to prepare datasets for fine-tuning](#part2)\n",
        "* 🚀 [See how to run fine-tuning](#part3) with the [unsloth](https://github.com/unslothai/unsloth) framework\n",
        "* [📦👩‍💻 Master efficient fine-tuning techniques (**PEFT**, **LoRA**, **QLoRA** and other indecent abbreviations)](#part4)\n",
        "* [📦 Run inference and save the fine-tuned model properly](#part4)\n",
        "* 🥊 [Walk through fine-tuning using two datasets:](#part6)\n",
        "    * collect your own,\n",
        "    * or take a ready-made dataset from HF ([article on Habr](https://habr.com/ru/articles/832984/))\n",
        "* [🧸 Conclusions and takeaways ✅](#part6)\n",
        "\n",
        "    "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rQE6WppUBMJ3"
      },
      "source": [
        "<div class=\"alert alert-info\">\n",
        "\n",
        "**🚀 Fine-tuning is easier than it looks!**\n",
        "    \n",
        "<img src='https://github.com/a-milenkin/LLM_practical_course/blob/main/images/lamauns.png?raw=1' align=\"right\" width=\"350\" height=\"478\" >\n",
        "\n",
        "* To run fine-tuning on Google Colab resources without waiting forever, we'll use the dedicated [Unsloth](https://github.com/unslothai/unsloth) framework, which speeds up fine-tuning 2-5x on consumer GPUs and uses less memory.\n",
        "* The project page has convenient ready-made notebooks walking through fine-tuning of the most popular models - this notebook is based on one of them.\n",
        "* The framework also lets you run a fine-tune without deep knowledge of neural networks and ML."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fRKmSJrEBMJ4"
      },
      "source": [
        "# Install unsloth and dependencies"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2eSvM9zX_2d3"
      },
      "outputs": [],
      "source": [
        "%%capture\n",
        "# Install Unsloth, Xformers (Flash Attention) and all the other packages!\n",
        "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
        "\n",
        "# Check which Torch version we have to pick the matching Xformers build (2.3 -> 0.0.27)\n",
        "from torch import __version__; from packaging.version import Version as V\n",
        "xformers = \"xformers==0.0.27\" if V(__version__) < V(\"2.4.0\") else \"xformers\"\n",
        "!pip install --no-deps {xformers} trl peft accelerate bitsandbytes triton"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MamiQGRUBMJ5"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "Import the libraries and the model we'll fine-tune - `Llama-3.1-8B`. <br>\n",
        "In the logs we'll see that **Unsloth** has patched our system for a 2x speed-up."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QmUBVEnvCDJv",
        "outputId": "9d79f0f1-c3c3-44f2-df15-2f15a5e77216"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n",
            "==((====))==  Unsloth 2024.8: Fast Llama patching. Transformers = 4.44.2.\n",
            "   \\\\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.\n",
            "O^O/ \\_/ \\    Pytorch: 2.4.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.\n",
            "\\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.27.post2. FA2 = False]\n",
            " \"-____-\"     Free Apache license: http://github.com/unslothai/unsloth\n",
            "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
          ]
        }
      ],
      "source": [
        "from unsloth import FastLanguageModel\n",
        "import torch\n",
        "\n",
        "max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!\n",
        "dtype = None          # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n",
        "load_in_4bit = True   # Use 4bit quantization to reduce memory usage. Can be False.\n",
        "\n",
        "model, tokenizer = FastLanguageModel.from_pretrained(\n",
        "    model_name = \"unsloth/Meta-Llama-3.1-8B\",\n",
        "    max_seq_length = max_seq_length,\n",
        "    dtype = dtype,\n",
        "    load_in_4bit = load_in_4bit, # Apply QLoRA\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P5qwHxeRBMJ7"
      },
      "source": [
        "<div class=\"alert alert-info\">\n",
        "    \n",
        "Parameters used:\n",
        "* `model_name` - the model to fine-tune (see the available models [here](https://github.com/unslothai/unsloth))\n",
        "* `max_seq_length` - maximum sequence length in tokens.\n",
        "* `dtype` - data type for storing the weights (depends on your GPU model). Set `None` for automatic detection.\n",
        "* `load_in_4bit` - the key parameter for loading the 4-bit quantized version of the model."
      ]
    },
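    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<div class=\"alert alert-info\">\n",
        "\n",
        "Why 4-bit loading matters - a back-of-the-envelope estimate (rough numbers, not exact measurements): an 8B-parameter model stored in 16-bit floats needs about\n",
        "\n",
        "$$8 \\cdot 10^9 \\ \\text{params} \\times 2 \\ \\text{bytes} \\approx 16 \\ \\text{GB}$$\n",
        "\n",
        "just for the weights, while a 4-bit quantized version needs about $8 \\cdot 10^9 \\times 0.5 \\approx 4$ GB. That difference is what makes the free Colab T4 (~15 GB of VRAM) workable at all."
      ]
    },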
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SXd9bTZd1aaL"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "🔥 To avoid retraining the entire model, we'll use **PEFT** with **LoRA**. This way we update only about 1-10% of all parameters!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "6bZsfBuZDeCL",
        "outputId": "2872e6db-5237-4658-9e3f-4b991a13a9d9"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Unsloth 2024.8 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.\n"
          ]
        }
      ],
      "source": [
        "model = FastLanguageModel.get_peft_model(\n",
        "    model,\n",
        "    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128\n",
        "    target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n",
        "                      \"gate_proj\", \"up_proj\", \"down_proj\",],\n",
        "    lora_alpha = 16,\n",
        "    lora_dropout = 0, # Supports any, but = 0 is optimized\n",
        "    bias = \"none\",    # Supports any, but = \"none\" is optimized\n",
        "    # [NEW] \"unsloth\" uses 30% less VRAM, fits 2x larger batch sizes!\n",
        "    use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for very long context\n",
        "    random_state = 3407,\n",
        "    use_rslora = False,  # We support rank stabilized LoRA\n",
        "    loftq_config = None, # And LoftQ\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "voqriRFpBMJ8"
      },
      "source": [
        "<div class=\"alert alert-info\">\n",
        "\n",
        "In 90% of cases the default values will suit you; it's worth experimenting only with the `r` parameter. A full description of all parameters can be found [here](https://docs.unsloth.ai/basics/lora-parameters-encyclopedia).\n",
        "\n",
        "Key LoRA parameters:\n",
        "* `r` (matrix rank) - the most important parameter: it sets the size of the additional adapter matrices trained during fine-tuning (powers of two are recommended - 8, 16, 32, etc.). Fine-tuning time also depends on this parameter: the larger `r`, the longer the run.\n",
        "* `target_modules` - the modules whose weights will be updated during training\n"
      ]
    },
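    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<div class=\"alert alert-info\">\n",
        "\n",
        "A quick sketch of what LoRA actually does (our notation, not something you pass to unsloth): instead of updating a frozen weight matrix $W \\in \\mathbb{R}^{d \\times k}$ directly, LoRA trains two small matrices $B \\in \\mathbb{R}^{d \\times r}$ and $A \\in \\mathbb{R}^{r \\times k}$ and replaces the weight with\n",
        "\n",
        "$$W' = W + \\frac{\\alpha}{r} B A,$$\n",
        "\n",
        "so only $r(d + k)$ parameters are trained instead of $dk$. For example, for a $4096 \\times 4096$ projection with $r = 16$ that is $16 \\cdot 8192 \\approx 131\\text{K}$ trainable parameters instead of ${\\sim}16.8\\text{M}$ - which is why a larger `r` gives the adapter more capacity but also costs more time and memory."
      ]
    },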
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vITh0KVJ10qX"
      },
      "source": [
        "# <center id=\"part2\">  🤹‍♀️ Dataset preparation\n",
        "\n",
        "To fine-tune Llama, every record in our dataset has to be converted into the `Alpaca prompt` format.\n",
        "\n",
        "And be sure to append the end-of-generation token (`EOS token`) at the end of the prompt to avoid endless generation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LjY75GoYUCB8"
      },
      "outputs": [],
      "source": [
        "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context.\n",
        "Write a response that appropriately completes the request.\n",
        "\n",
        "### Instruction:\n",
        "{}\n",
        "\n",
        "### Input:\n",
        "{}\n",
        "\n",
        "### Response:\n",
        "{}\"\"\"\n",
        "\n",
        "EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN\n",
        "\n",
        "# Convert the dataset fields into the alpaca prompt format\n",
        "def formatting_prompts_func(examples):\n",
        "    instructions = examples[\"Instruction\"]\n",
        "    inputs       = examples[\"Input\"]\n",
        "    outputs      = examples[\"Response\"]\n",
        "    texts = []\n",
        "    for instruction, input, output in zip(instructions, inputs, outputs):\n",
        "        # Must add EOS_TOKEN, otherwise your generation will go on forever!\n",
        "        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n",
        "        texts.append(text)\n",
        "    return { \"text\" : texts, }\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "1Q_jm9q_TYGT",
        "outputId": "8108be31-7102-4cf1-85b8-2c1d60c126fd"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{'Input': 'Запуск канала о Data Science',\n",
            " 'Instruction': 'Write a post on the following topic',\n",
            " 'Response': 'Всем привет!\\n'\n",
            "             '\\n'\n",
            "             'Решил запустить свой канал. Буду рассказывать здесь про свой '\n",
            "             'опыт в Data Science и лайфхаки',\n",
            " 'text': 'Below is an instruction that describes a task, paired with an input '\n",
            "         'that provides further context. Write a response that appropriately '\n",
            "         'completes the request.\\n'\n",
            "         '\\n'\n",
            "         '### Instruction:\\n'\n",
            "         'Write a post on the following topic\\n'\n",
            "         '\\n'\n",
            "         '### Input:\\n'\n",
            "         'Запуск канала о Data Science\\n'\n",
            "         '\\n'\n",
            "         '### Response:\\n'\n",
            "         'Всем привет!\\n'\n",
            "         '\\n'\n",
            "         'Решил запустить свой канал. Буду рассказывать здесь про свой опыт в '\n",
            "         'Data Science и лайфхаки<|end_of_text|>'}\n"
          ]
        }
      ],
      "source": [
        "from pprint import pprint\n",
        "from datasets import load_dataset\n",
        "\n",
        "# Download the prepared dataset from HuggingFace\n",
        "dataset = load_dataset(\"Ivanich/datafeeling_posts\", split = \"train\")\n",
        "\n",
        "# Convert to alpaca_prompt with our function and the map method.\n",
        "dataset = dataset.map(formatting_prompts_func, batched = True,)\n",
        "\n",
        "pprint(dataset[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "44ODCGcJBMJ9"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "We can see that the assembled prompt has been added to the `text` column."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mDu8fiZZ63Sb"
      },
      "source": [
        "## Before diving into fine-tuning\n",
        "Let's see how the model handles the task before any fine-tuning."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "9gqTLCuk5pmp",
        "outputId": "ba37d0b0-0a5c-4e53-c5d5-52cfd3b461f5"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "['<|begin_of_text|>Below is an instruction that describes a task, paired with '\n",
            " 'an input that provides further context. Write a response that appropriately '\n",
            " 'completes the request.\\n'\n",
            " '\\n'\n",
            " '### Instruction:\\n'\n",
            " 'Write post about the following topic: \\n'\n",
            " '\\n'\n",
            " '### Input:\\n'\n",
            " 'Как запустить gen ai стартап\\n'\n",
            " '\\n'\n",
            " '### Response:\\n'\n",
            " '1. Запустить генетический алгоритм, который генерирует идеи стартапов.\\n'\n",
            " '2. Выбрать идеи, которые кажутся вам наиболее перспективными.\\n'\n",
            " '3. Выполнить анализ рынка и конкурентов для этих идей.\\n'\n",
            " '4. Выполнить маркетинговый анализ и анализ потребительского поведения.\\n'\n",
            " '5. Выполнить анализ бизнес-моделей и стратегий.\\n'\n",
            " '6. Выполнить анализ конкурентоспособности и рисков.\\n'\n",
            " '7. Выполнить анализ финансового состояния и возможностей.\\n'\n",
            " '8. Вып']\n"
          ]
        }
      ],
      "source": [
        "FastLanguageModel.for_inference(model)\n",
        "inputs = tokenizer(\n",
        "[\n",
        "    alpaca_prompt.format(\n",
        "        \"Write post about the following topic: \", # instruction\n",
        "        \"Как запустить gen ai стартап\", # input\n",
        "        \"\", # output - leave this blank for generation!\n",
        "    )\n",
        "], return_tensors = \"pt\").to(\"cuda\")\n",
        "\n",
        "outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)\n",
        "pprint(tokenizer.batch_decode(outputs))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cmC7CnLDBMJ-"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "The model produces some kind of itemized plan, which doesn't look much like a post for a channel."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "idAEIeSQ3xdS"
      },
      "source": [
        "# <center id=\"part3\">   🚀 Launching the fine-tune / Training the model further\n",
        "Now let's use `SFTTrainer` - supervised fine-tuning from `Huggingface TRL`. This way we won't have to write our own reward model to score Llama's responses! <br>\n",
        "Additional documentation is available here: [TRL SFT documentation](https://huggingface.co/docs/trl/sft_trainer). <br>\n",
        "`unsloth` also supports TRL's [`DPOTrainer`](https://huggingface.co/docs/trl/dpo_trainer)!\n",
        "\n",
        "We run 60 steps to speed things up, but you can set `num_train_epochs = 1` for a full run and set `max_steps = None`.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YIoQjsCzBMJ-"
      },
      "source": [
        "## <center> A cool setup for the most advanced | Wandb\n",
        "\n",
        "If you like, you can hook up [Weights&Biases](https://wandb.ai/site) - a free experiment tracker - and fine-tune like a pro."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "L8ygwbu5BMJ-"
      },
      "outputs": [],
      "source": [
        "# import wandb\n",
        "\n",
        "# wb_token = 'WANDB_API_KEY' # your key from the Weights & Biases site https://wandb.ai/site\n",
        "# wandb.login(key=wb_token)\n",
        "# then uncomment the report_to parameter in the trainer arguments below"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "95_Nn-89DhsL",
        "outputId": "476f105b-b374-4861-eec5-88b0f3c859a6"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "max_steps is given, it will override any value given in num_train_epochs\n"
          ]
        }
      ],
      "source": [
        "from trl import SFTTrainer\n",
        "from transformers import TrainingArguments\n",
        "from unsloth import is_bfloat16_supported\n",
        "\n",
        "trainer = SFTTrainer(\n",
        "    model = model,\n",
        "    tokenizer = tokenizer,\n",
        "    train_dataset = dataset,\n",
        "    dataset_text_field = \"text\",\n",
        "    max_seq_length = max_seq_length,\n",
        "    dataset_num_proc = 2,\n",
        "    packing = False, # Can make training 5x faster for short sequences.\n",
        "    args = TrainingArguments(\n",
        "        per_device_train_batch_size = 4,\n",
        "        gradient_accumulation_steps = 2,\n",
        "        warmup_steps = 10,\n",
        "        #num_train_epochs = 1, # Set this for 1 full training run.\n",
        "        max_steps = 60,\n",
        "        learning_rate = 2e-4,\n",
        "        fp16 = not is_bfloat16_supported(),\n",
        "        bf16 = is_bfloat16_supported(),\n",
        "        logging_steps = 1,\n",
        "        optim = \"adamw_8bit\",\n",
        "        weight_decay = 0.01,\n",
        "        lr_scheduler_type = \"linear\",\n",
        "        seed = 3407,\n",
        "        output_dir = \"outputs\",\n",
        "        # report_to=\"wandb\", # If using Weights & Biases\n",
        "    ),\n",
        ")"
      ]
    },
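    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<div class=\"alert alert-info\">\n",
        "\n",
        "A note on the effective batch size (simple arithmetic based on the values set in the cell above): each optimizer step processes\n",
        "\n",
        "$$\\text{per\\_device\\_train\\_batch\\_size} \\times \\text{gradient\\_accumulation\\_steps} = 4 \\times 2 = 8$$\n",
        "\n",
        "examples, so `max_steps = 60` covers $60 \\times 8 = 480$ training examples. Increasing `gradient_accumulation_steps` gives a larger effective batch without extra VRAM, at the cost of fewer weight updates per example seen."
      ]
    },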
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2ejIt2xSNKKp",
        "outputId": "c4b07c62-3201-4a90-f024-560a86785d23"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "GPU = Tesla T4. Max memory = 14.748 GB.\n",
            "12.742 GB of memory reserved.\n"
          ]
        }
      ],
      "source": [
        "#@title Show current memory stats\n",
        "gpu_stats = torch.cuda.get_device_properties(0)\n",
        "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
        "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n",
        "\n",
        "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n",
        "print(f\"{start_gpu_memory} GB of memory reserved.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "yqxqAZ7KJ4oL",
        "outputId": "bbe9d498-00da-414d-99ad-14388acf1014"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1\n",
            "   \\\\   /|    Num examples = 587 | Num Epochs = 4\n",
            "O^O/ \\_/ \\    Batch size per device = 8 | Gradient Accumulation steps = 4\n",
            "\\        /    Total batch size = 32 | Total steps = 60\n",
            " \"-____-\"     Number of trainable parameters = 41,943,040\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "\n",
              "    <div>\n",
              "      \n",
              "      <progress value='60' max='60' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
              "      [60/60 43:35, Epoch 3/4]\n",
              "    </div>\n",
              "    <table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              " <tr style=\"text-align: left;\">\n",
              "      <th>Step</th>\n",
              "      <th>Training Loss</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <td>1</td>\n",
              "      <td>2.236400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>2</td>\n",
              "      <td>2.247700</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>3</td>\n",
              "      <td>2.328900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>4</td>\n",
              "      <td>2.165200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>5</td>\n",
              "      <td>2.294000</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>6</td>\n",
              "      <td>2.186900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>7</td>\n",
              "      <td>2.269100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>8</td>\n",
              "      <td>2.214700</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>9</td>\n",
              "      <td>2.289900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>10</td>\n",
              "      <td>2.298600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>11</td>\n",
              "      <td>2.244600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>12</td>\n",
              "      <td>2.201800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>13</td>\n",
              "      <td>2.153100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>14</td>\n",
              "      <td>2.188300</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>15</td>\n",
              "      <td>2.166900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>16</td>\n",
              "      <td>2.182000</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>17</td>\n",
              "      <td>2.188200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>18</td>\n",
              "      <td>2.098000</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>19</td>\n",
              "      <td>2.084500</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>20</td>\n",
              "      <td>2.096200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>21</td>\n",
              "      <td>2.201400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>22</td>\n",
              "      <td>2.091800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>23</td>\n",
              "      <td>2.053300</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>24</td>\n",
              "      <td>2.117500</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>25</td>\n",
              "      <td>2.018800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>26</td>\n",
              "      <td>2.011700</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>27</td>\n",
              "      <td>1.953300</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>28</td>\n",
              "      <td>1.964100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>29</td>\n",
              "      <td>2.048200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>30</td>\n",
              "      <td>1.907600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>31</td>\n",
              "      <td>2.030600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>32</td>\n",
              "      <td>2.014700</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>33</td>\n",
              "      <td>1.982400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>34</td>\n",
              "      <td>1.982400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>35</td>\n",
              "      <td>1.898200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>36</td>\n",
              "      <td>2.025500</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>37</td>\n",
              "      <td>1.779900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>38</td>\n",
              "      <td>2.007600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>39</td>\n",
              "      <td>1.968800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>40</td>\n",
              "      <td>1.961200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>41</td>\n",
              "      <td>1.911900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>42</td>\n",
              "      <td>1.888400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>43</td>\n",
              "      <td>1.826900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>44</td>\n",
              "      <td>1.803100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>45</td>\n",
              "      <td>1.816400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>46</td>\n",
              "      <td>1.819600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>47</td>\n",
              "      <td>1.691800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>48</td>\n",
              "      <td>1.928200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>49</td>\n",
              "      <td>1.881500</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>50</td>\n",
              "      <td>1.897900</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>51</td>\n",
              "      <td>1.940600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>52</td>\n",
              "      <td>1.860200</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>53</td>\n",
              "      <td>1.859400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>54</td>\n",
              "      <td>1.853800</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>55</td>\n",
              "      <td>1.901000</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>56</td>\n",
              "      <td>1.895100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>57</td>\n",
              "      <td>1.795600</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>58</td>\n",
              "      <td>1.882400</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>59</td>\n",
              "      <td>1.795100</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <td>60</td>\n",
              "      <td>1.654000</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table><p>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "# Start the training!\n",
        "trainer_stats = trainer.train()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "pCqnaKmlO1U9",
        "outputId": "7e351717-6f83-4794-dff1-633468bc3b3f"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "2669.216 seconds used for training.\n",
            "44.49 minutes used for training.\n",
            "Peak reserved memory = 12.742 GB.\n",
            "Peak reserved memory for training = 0.0 GB.\n",
            "Peak reserved memory % of max memory = 86.398 %.\n",
            "Peak reserved memory for training % of max memory = 0.0 %.\n"
          ]
        }
      ],
      "source": [
        "#@title Show final memory and time stats\n",
        "used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
        "used_memory_for_lora = round(used_memory - start_gpu_memory, 3)\n",
        "used_percentage = round(used_memory         /max_memory*100, 3)\n",
        "lora_percentage = round(used_memory_for_lora/max_memory*100, 3)\n",
        "print(f\"{trainer_stats.metrics['train_runtime']} seconds used for training.\")\n",
        "print(f\"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.\")\n",
        "print(f\"Peak reserved memory = {used_memory} GB.\")\n",
        "print(f\"Peak reserved memory for training = {used_memory_for_lora} GB.\")\n",
        "print(f\"Peak reserved memory % of max memory = {used_percentage} %.\")\n",
        "print(f\"Peak reserved memory for training % of max memory = {lora_percentage} %.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ekOmTR1hSNcr"
      },
      "source": [
        "# <center id=\"part4\">  Inference 💬\n",
        "Let's run the model! You can change the instruction and the input; we'll leave Response blank!\n",
        "\n",
        "You can try 2x faster inference for **Llama-3.1 8b Instruct** in Colab [here](https://colab.research.google.com/drive/1T-YBVfnphoVc8E2E854qF3jdia2Ll2W2?usp=sharing)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "kR3gIAX-SM2q",
        "outputId": "ec9e915d-ddd7-4249-d135-1676b26f6c69",
        "scrolled": true
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "['<|begin_of_text|>Below is an instruction that describes a task, paired with '\n",
            " 'an input that provides further context. Write a response that appropriately '\n",
            " 'completes the request.\\n'\n",
            " '\\n'\n",
            " '### Instruction:\\n'\n",
            " 'Write post about the following topic: \\n'\n",
            " '\\n'\n",
            " '### Input:\\n'\n",
            " 'как запустить gen ai стартап\\n'\n",
            " '\\n'\n",
            " '### Response:\\n'\n",
            " '🎤 Сегодня у меня был интересный опыт. \\n'\n",
            " '\\n'\n",
            " '🔝 Встретился с 2-мя ребятами, которые хотят запустить свой gen ai стартап. \\n'\n",
            " '\\n'\n",
            " '🏆 В целом, все было классно. У них крутой опыт, хороший тимбилдинг и '\n",
            " 'интересная задумка. \\n'\n",
            " '\\n'\n",
            " '🤔 Однако, я все же посоветовал им не запускать стартап, а просто скинуть '\n",
            " 'свою идею в нашу ML лотерею. \\n'\n",
            " '\\n'\n",
            " '🤔 Понятно, что это не совсем']\n"
          ]
        }
      ],
      "source": [
        "# alpaca_prompt = Copied from above\n",
        "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
        "inputs = tokenizer(\n",
        "[\n",
        "    alpaca_prompt.format(\n",
        "        \"Write post about the following topic: \", # instruction\n",
        "        \"как запустить gen ai стартап\", # input\n",
        "        \"\", # output - leave this blank for generation!\n",
        "    )\n",
        "], return_tensors = \"pt\").to(\"cuda\")\n",
        "\n",
        "outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)\n",
        "pprint(tokenizer.batch_decode(outputs))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YLP44KqJBMKA"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "✅ After fine-tuning, the model writes a post for the same prompt, and the style of the [Datafeeling](https://t.me/datafeeling) channel's posts is clearly recognizable."
      ]
    },
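    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, the `alpaca_prompt` template filled in above is defined earlier in the notebook; judging by the decoded output, it is the standard Alpaca template. Reconstructed here as a sketch, so double-check it against the original cell:\n",
        "\n",
        "```python\n",
        "# Reconstructed from the decoded output above; the canonical definition lives earlier in the notebook.\n",
        "alpaca_prompt = '''Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n",
        "\n",
        "### Instruction:\n",
        "{}\n",
        "\n",
        "### Input:\n",
        "{}\n",
        "\n",
        "### Response:\n",
        "{}'''\n",
        "```\n",
        "\n",
        "The three `{}` placeholders are filled by `alpaca_prompt.format(instruction, input, output)`; the output slot is left empty at inference time so the model generates it."
      ]
    },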
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CrSvZObor0lY"
      },
      "source": [
        " **Running generation in streaming mode:** You can also use `TextStreamer` for continuous inference, so you can watch the output being generated token by token!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "e2pEuRb1r2Vg",
        "outputId": "b289b319-ddff-430b-ff0e-d29a403ae59e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n",
            "\n",
            "### Instruction:\n",
            "Write post about the following topic: \n",
            "\n",
            "### Input:\n",
            "Как стать kaggle master\n",
            "\n",
            "### Response:\n",
            "🏆 Как стать kaggle master\n",
            "\n",
            "🤔 На Kaggle сейчас много мэтчей, в которых можно скинуть лидерборд. И это прекрасно, но вот только не все так просто. \n",
            "\n",
            "🤫 Если ты хочешь добиться успеха, то придется много потрудиться. В противном случае ты просто не добьешься успеха. И это нормально, не судите. \n",
            "\n",
            "🤔 Поэтому, если ты хочешь добиться успеха, то придется много потрудиться. И это нормально, не судите. \n",
            "\n",
            "�\n"
          ]
        }
      ],
      "source": [
        "from transformers import TextStreamer\n",
        "\n",
        "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
        "inputs = tokenizer(\n",
        "[\n",
        "    alpaca_prompt.format(\n",
        "        \"Write post about the following topic: \", # instruction\n",
        "        \"Как стать kaggle master\", # input\n",
        "        \"\", # output - leave this blank for generation!\n",
        "    )\n",
        "], return_tensors = \"pt\").to(\"cuda\")\n",
        "\n",
        "\n",
        "text_streamer = TextStreamer(tokenizer)\n",
        "_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uMuVrWbjAzhc"
      },
      "source": [
        "### <center> Saving and loading fine-tuned models\n",
        "To save the final model as `LoRA` adapters, use:\n",
        "* either `push_to_hub` from **Huggingface** for online saving\n",
        "* or `save_pretrained` for local saving\n",
        "\n",
        "**Important:** ONLY the `LoRA` adapter is saved, not the full model. See [this notebook](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) for how to save in 16-bit or `GGUF` format!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3Pt-g2YquOa8"
      },
      "outputs": [],
      "source": [
        "from getpass import getpass\n",
        "\n",
        "hf_token = getpass(prompt=\"Enter your HuggingFaceHub API key\")\n",
        "hf_username = 'Ivanich' # Your HuggingFace username"
      ]
    },
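    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of a less interactive alternative: read the token from an environment variable and fall back to the prompt. `HF_TOKEN` here is just a conventional variable name, not something this notebook sets for you:\n",
        "\n",
        "```python\n",
        "import os\n",
        "from getpass import getpass\n",
        "\n",
        "def get_hf_token() -> str:\n",
        "    # Prefer the HF_TOKEN environment variable; ask interactively only if it is unset.\n",
        "    return os.environ.get('HF_TOKEN') or getpass(prompt='Enter your HuggingFaceHub API key')\n",
        "```\n",
        "\n",
        "This keeps the token out of the notebook's saved outputs while still working in Colab, where you can set the variable before running the cell."
      ]
    },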
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 130,
          "referenced_widgets": [
            "497302a6acc84773a84ff23cd36da214",
            "b62cf033432f40e0a9ee259515167b51",
            "22eed99a28c74a80a42f95cc2529913b",
            "1f07aa6d2b744092ac1b2f13352b3909",
            "7a7468b6571c4b818f706b13e37345b5",
            "d2117400d4744ac498d1582a3336f905",
            "36e5a40703714c76ba41451af838a1fe",
            "0374d3f515e640ea80e11cb8cffdbf7d",
            "e08504a1906e41bf8cfe2e7d231e0f2f",
            "1417bf853c6241038d6d01fae7183091",
            "9d78a73117ff4b59b6f6499b7f978d14",
            "297389dcbf3040f19fb6773f776f3f2e",
            "1a8353acfc6e4a77b5e754ec3a9d164e",
            "a6784a8092dd4652bb19a142b05986b1",
            "74d5c798f06346e9bd1cf96bf72669ff",
            "d937f93cbe12423d9198d740769eaaad",
            "3d7de1da6dc5424bb29a8510b1cfc549",
            "d9f595f07ea14377ad4f2af81b3dc236",
            "3267638f39f34833955764925bc59ad9",
            "7a23197e84e449a6b77b893f4bcc81fb",
            "6747454af34a467690d783bae764fbfa",
            "d6ef70581b8841d4b7455f1074373478",
            "177ec807de33414281f03b5790021dc8",
            "a1242cc96bf9450abbbc00593bc7c8a8",
            "906abe2734de4a7bbceb71fefb265ae8",
            "de7e5ca6727c4c0d99522d35981a4ad2",
            "7ca294a7b4aa40668f8ea154f744a4b4",
            "d27389522e9c437695f0e93a0ecf1f09",
            "16b0f13d883d48809d4703704f43204c",
            "2c90127799134934b6eb68325838a8a8",
            "c0a6e820a2eb41dc9c6244f614272ef5",
            "a4a6ed211ca44d509cb82311c27caba3",
            "c7a2e8fad27e4fa2b9c80a970002d6bd"
          ]
        },
        "id": "upcOlWe7A1vc",
        "outputId": "64f23760-f101-4ef6-e7f8-384eec50db9d"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "497302a6acc84773a84ff23cd36da214",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "README.md:   0%|          | 0.00/588 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "297389dcbf3040f19fb6773f776f3f2e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/1 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "177ec807de33414281f03b5790021dc8",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "adapter_model.safetensors:   0%|          | 0.00/168M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Saved model to https://huggingface.co/Ivanich/datafeeling_model\n"
          ]
        }
      ],
      "source": [
        "model.save_pretrained(\"datafeeling_model\") # Local saving\n",
        "tokenizer.save_pretrained(\"datafeeling_model\")\n",
        "\n",
        "model.push_to_hub(f\"{hf_username}/datafeeling_model\", token = hf_token) # Online saving\n",
        "tokenizer.push_to_hub(f\"{hf_username}/datafeeling_model\", token = hf_token) # Online saving"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GctFPeY3BMKC"
      },
      "source": [
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "If you follow the link that HF returns after the upload, you will see that not the whole model was saved, only the LoRA adapter weights (168 MB)."
      ]
    },
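    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The adapter size is easy to sanity-check: LoRA with rank `r` on a `d x k` weight matrix adds `r * (d + k)` parameters. Here is a rough estimate for Llama-3-8B-like dimensions with `r = 16` on the seven usual target modules (the rank and module list are assumptions about the training config above, not read from it):\n",
        "\n",
        "```python\n",
        "# Rough LoRA adapter size estimate for Llama-3-8B-like dimensions.\n",
        "# Rank and target modules are assumptions (typical unsloth defaults), not read from the config.\n",
        "r = 16\n",
        "hidden, layers, kv, mlp = 4096, 32, 1024, 14336\n",
        "\n",
        "# (d, k) shapes of the seven commonly adapted projections per layer\n",
        "shapes = [\n",
        "    (hidden, hidden),  # q_proj\n",
        "    (hidden, kv),      # k_proj (GQA: narrower than q)\n",
        "    (hidden, kv),      # v_proj\n",
        "    (hidden, hidden),  # o_proj\n",
        "    (hidden, mlp),     # gate_proj\n",
        "    (hidden, mlp),     # up_proj\n",
        "    (mlp, hidden),     # down_proj\n",
        "]\n",
        "\n",
        "# Each adapted d x k matrix gains r * (d + k) parameters from its two low-rank factors\n",
        "params = layers * sum(r * (d + k) for d, k in shapes)\n",
        "print(f'{params / 1e6:.1f}M params, ~{params * 4 / 1e6:.0f} MB in fp32')\n",
        "```\n",
        "\n",
        "Under these assumptions this comes out to about 41.9M parameters, roughly 168 MB when stored in fp32, which lines up with the `adapter_model.safetensors` upload above."
      ]
    },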
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AEEcJ4qfC7Lp"
      },
      "source": [
        "Now, if you want to load the LoRA adapters we just saved for inference, change `False` to `True` in the cell below:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MKX_XKs_BNZR",
        "outputId": "7431de93-bc80-48b7-9491-1f3ce12615fd"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\\n\\n### Instruction:\\nWrite post about the following topic\\n\\n### Input:\\nРаспределение долей в стартапе\\n\\n### Response:\\nЯ думаю, что это очень важный вопрос: как распределять доли в стартапе? И, что еще важнее, как распределять доли в стартапе, когда вы еще не знаете, каким будет стартап?  Очень много людей в стартапах, которые делают это по-старому, то есть, распределяют доли по статусу: founder, cofounder, employee, contractor, investor, etc. Это не так плохо, но, с другой стороны, я не знаю ни одного стартапа, который бы был успеш']"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "if False:\n",
        "    from unsloth import FastLanguageModel\n",
        "    model, tokenizer = FastLanguageModel.from_pretrained(\n",
        "        model_name = \"datafeeling_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
        "        max_seq_length = max_seq_length,\n",
        "        dtype = dtype,\n",
        "        load_in_4bit = load_in_4bit,\n",
        "    )\n",
        "    FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
        "\n",
        "# alpaca_prompt = You MUST copy from above!\n",
        "\n",
        "inputs = tokenizer(\n",
        "[\n",
        "    alpaca_prompt.format(\n",
        "        \"Write post about the following topic\", # instruction\n",
        "        \"Распределение долей в стартапе\", # input\n",
        "        \"\", # output - leave this blank for generation!\n",
        "    )\n",
        "], return_tensors = \"pt\").to(\"cuda\")\n",
        "\n",
        "outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)\n",
        "tokenizer.batch_decode(outputs)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jwlSxwzIBMKD"
      },
      "source": [
        "✅ Great, we can see that the result looks plausible"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zm7gsf4aBMKD"
      },
      "source": [
        "# <center id=\"part6\"> What else is useful?\n",
        "\n",
        "<div class=\"alert alert-success\">\n",
        "    \n",
        "* [RAFT](https://arxiv.org/abs/2403.10131) (RAG + Fine-Tuning): the model is fine-tuned to select documents for RAG\n",
        "* [Finetuning ChatGPT](https://platform.openai.com/docs/guides/fine-tuning): OpenAI and Anthropic have announced that you can fine-tune their models on your own data and then use those model versions via the API (paid).\n",
        "* A [dataset](https://huggingface.co/datasets/Ivanich/datafeeling_posts) of posts from the Datafeeling channel, collected in this [notebook](https://github.com/a-milenkin/LLM_practical_course/blob/main/notebooks/M5_2_Dataset_prepare.ipynb).\n",
        "* An example of fine-tuning on a ready-made HuggingFace dataset with medical data, written up as an [article on Habr](https://habr.com/ru/articles/832984/)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MfCayKefBMKD"
      },
      "source": [
        "# <center id=\"part7\"> 🧸 Conclusions ✅\n",
        "\n",
        "**Pros of fine-tuning:**\n",
        "* More consistent generation results (for example, you can transfer an author's style)\n",
        "* Fewer hallucinations (the model follows the new domain knowledge)\n",
        "* Can be trained for a specific use case\n",
        "* Does not forget the data (it is stored in the model weights)\n",
        "* Can absorb more data than RAG (no context-window limit)\n",
        "* Smaller models can be used, which lowers the cost of further use\n",
        "* Privacy: the data is not passed openly in the prompt but is \"baked\" into the weights (less room for prompt hacking)\n",
        "* Faster inference than RAG (no semantic search over a database and a shorter prompt, so generation starts sooner)\n",
        "\n",
        "**Cons of fine-tuning:**\n",
        "* Much more data has to be collected (the most labor-intensive and important part)\n",
        "* Higher barrier to entry: deeper technical knowledge is required\n",
        "* More resources and time are needed up front\n",
        "* The model has to be retrained when new base-model versions come out\n",
        "* Sometimes you still need RAG for access to fresh, frequently changing information"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "B6E5TGLqBMKH"
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "cv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.12"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "0374d3f515e640ea80e11cb8cffdbf7d": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "1417bf853c6241038d6d01fae7183091": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "16b0f13d883d48809d4703704f43204c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "177ec807de33414281f03b5790021dc8": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_a1242cc96bf9450abbbc00593bc7c8a8",
              "IPY_MODEL_906abe2734de4a7bbceb71fefb265ae8",
              "IPY_MODEL_de7e5ca6727c4c0d99522d35981a4ad2"
            ],
            "layout": "IPY_MODEL_7ca294a7b4aa40668f8ea154f744a4b4"
          }
        },
        "1a8353acfc6e4a77b5e754ec3a9d164e": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_3d7de1da6dc5424bb29a8510b1cfc549",
            "placeholder": "​",
            "style": "IPY_MODEL_d9f595f07ea14377ad4f2af81b3dc236",
            "value": "100%"
          }
        },
        "1f07aa6d2b744092ac1b2f13352b3909": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_1417bf853c6241038d6d01fae7183091",
            "placeholder": "​",
            "style": "IPY_MODEL_9d78a73117ff4b59b6f6499b7f978d14",
            "value": " 588/588 [00:00&lt;00:00, 33.5kB/s]"
          }
        },
        "22eed99a28c74a80a42f95cc2529913b": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_0374d3f515e640ea80e11cb8cffdbf7d",
            "max": 588,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_e08504a1906e41bf8cfe2e7d231e0f2f",
            "value": 588
          }
        },
        "297389dcbf3040f19fb6773f776f3f2e": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_1a8353acfc6e4a77b5e754ec3a9d164e",
              "IPY_MODEL_a6784a8092dd4652bb19a142b05986b1",
              "IPY_MODEL_74d5c798f06346e9bd1cf96bf72669ff"
            ],
            "layout": "IPY_MODEL_d937f93cbe12423d9198d740769eaaad"
          }
        },
        "2c90127799134934b6eb68325838a8a8": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "3267638f39f34833955764925bc59ad9": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "36e5a40703714c76ba41451af838a1fe": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "3d7de1da6dc5424bb29a8510b1cfc549": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "497302a6acc84773a84ff23cd36da214": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_b62cf033432f40e0a9ee259515167b51",
              "IPY_MODEL_22eed99a28c74a80a42f95cc2529913b",
              "IPY_MODEL_1f07aa6d2b744092ac1b2f13352b3909"
            ],
            "layout": "IPY_MODEL_7a7468b6571c4b818f706b13e37345b5"
          }
        },
        "6747454af34a467690d783bae764fbfa": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "74d5c798f06346e9bd1cf96bf72669ff": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_6747454af34a467690d783bae764fbfa",
            "placeholder": "​",
            "style": "IPY_MODEL_d6ef70581b8841d4b7455f1074373478",
            "value": " 1/1 [00:02&lt;00:00,  2.84s/it]"
          }
        },
        "7a23197e84e449a6b77b893f4bcc81fb": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "7a7468b6571c4b818f706b13e37345b5": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "7ca294a7b4aa40668f8ea154f744a4b4": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "906abe2734de4a7bbceb71fefb265ae8": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_2c90127799134934b6eb68325838a8a8",
            "max": 167832240,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_c0a6e820a2eb41dc9c6244f614272ef5",
            "value": 167832240
          }
        },
        "9d78a73117ff4b59b6f6499b7f978d14": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "a1242cc96bf9450abbbc00593bc7c8a8": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_d27389522e9c437695f0e93a0ecf1f09",
            "placeholder": "​",
            "style": "IPY_MODEL_16b0f13d883d48809d4703704f43204c",
            "value": "adapter_model.safetensors: "
          }
        },
        "a4a6ed211ca44d509cb82311c27caba3": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "a6784a8092dd4652bb19a142b05986b1": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_3267638f39f34833955764925bc59ad9",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_7a23197e84e449a6b77b893f4bcc81fb",
            "value": 1
          }
        },
        "b62cf033432f40e0a9ee259515167b51": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_d2117400d4744ac498d1582a3336f905",
            "placeholder": "​",
            "style": "IPY_MODEL_36e5a40703714c76ba41451af838a1fe",
            "value": "README.md: 100%"
          }
        },
        "c0a6e820a2eb41dc9c6244f614272ef5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "c7a2e8fad27e4fa2b9c80a970002d6bd": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "d2117400d4744ac498d1582a3336f905": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d27389522e9c437695f0e93a0ecf1f09": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d6ef70581b8841d4b7455f1074373478": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "d937f93cbe12423d9198d740769eaaad": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d9f595f07ea14377ad4f2af81b3dc236": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "de7e5ca6727c4c0d99522d35981a4ad2": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_a4a6ed211ca44d509cb82311c27caba3",
            "placeholder": "​",
            "style": "IPY_MODEL_c7a2e8fad27e4fa2b9c80a970002d6bd",
            "value": " 176M/? [00:02&lt;00:00, 82.4MB/s]"
          }
        },
        "e08504a1906e41bf8cfe2e7d231e0f2f": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        }
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}