LLM Course | Chat
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "IAM token or API key has to be passed in request"
llm_yandex_gpt = YandexGPT(
    api_key=os.getenv("YC_api_key"),
    folder_id=os.getenv("YC_folder_id"),
    model_uri=f"gpt://{os.getenv('YC_folder_id')}/yandexgpt/latest",
    temperature=0.0,
)
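The UNAUTHENTICATED error above usually means `os.getenv` returned `None` because the `.env` file was never loaded. A minimal fail-fast sketch (assuming python-dotenv; the `require_env` helper is hypothetical, not part of any SDK):

```python
import os

try:
    from dotenv import load_dotenv  # optional dependency: python-dotenv
    load_dotenv()  # pulls YC_api_key / YC_folder_id from a local .env file
except ImportError:
    pass

def require_env(name: str) -> str:
    """Return the environment variable's value or fail with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; YandexGPT would later fail with UNAUTHENTICATED"
        )
    return value

# Usage: api_key = require_env("YC_api_key")
```

Checking the variables before constructing the client turns a cryptic gRPC error into an immediate, readable one.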
template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template)
country = "Russia"
input_data = prompt.format(country=country)
response = llm_yandex_gpt.invoke(input=input_data)
print(response)
from yandex_gpt import YandexGPT, YandexGPTConfigManagerForAPIKey
# Setup configuration (input fields may be empty if they are set in environment variables)
config = YandexGPTConfigManagerForAPIKey(model_type="yandexgpt", catalog_id="your_catalog_id", api_key="your_api_key")
# Instantiate YandexGPT
yandex_gpt = YandexGPT(config_manager=config)
# Async function to get completion
async def get_completion():
    messages = [{"role": "user", "text": "Hello, world!"}]
    completion = await yandex_gpt.get_async_completion(messages=messages)
    print(completion)
# Run the async function
import asyncio
asyncio.run(get_completion())
Which one works depends on the chain type: a chat model's invoke returns an AIMessage (read it via .content), while a legacy LLMChain returns a dict (read it via ['text']):
print(llm_chain.invoke(question).content)
print(llm_chain.invoke(question)['text'])
job_title_schema = ResponseSchema(
    name="job_title",
    description="The job title as given in the vacancy description, in the same language. If a grade is included, remove it (e.g., Senior Python developer -> Python developer, C++ developer (middle, senior) -> C++ developer)."
)
prompt_template = """You will be given the text of a job vacancy; extract the following information from it:
job_title: ...
company: ...
salary: ...
tg: ...
grade: ...
Text: {text_input}
{format_instructions}
Answer: If the information for any of these fields is not explicitly stated in the vacancy description, set the value to "None".
"""
How the course works
- We give each student API keys for ChatGPT and explain how to use them
https://stepik.org/lesson/1028705/step/5?unit=1036976
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./model-q4_K.gguf",
    temperature=0.75,
    max_tokens=500,
)
Better cost efficiency: 50% cost discount compared to synchronous APIs
Answer the following questions as best you can. You have access to the
following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
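To see what the model actually receives on the first turn, the ReAct template above can be filled with plain `str.format` (the tool name and question here are illustrative, not from the course):

```python
# The standard ReAct prompt, reproduced from above as a format string.
REACT_TEMPLATE = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}"""

prompt = REACT_TEMPLATE.format(
    tools="search: look up facts on the web",
    tool_names="search",
    input="What is the capital of France?",
    agent_scratchpad="",  # grows with each Thought/Action/Observation turn
)
print(prompt)
```

On later turns the agent executor appends the accumulated Thought/Action/Observation transcript into `agent_scratchpad`, so the model always sees its own previous steps.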
!pip install -U --quiet langchain-nvidia-ai-endpoints
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from getpass import getpass
ChatNVIDIA.get_available_models()
NVIDIAEmbeddings.get_available_models()
api_key = getpass(prompt='Enter API key')
llm = ChatNVIDIA(
    model="meta/llama-3.1-405b-instruct",
    nvidia_api_key=api_key,
)
embedder = NVIDIAEmbeddings(
    model="nvidia/nv-embed-v1",
    api_key=api_key,
)
!pip install langchain tiktoken -q
Did you install this library? Speed depends on it significantly. And yes, the model handles things worse, but I gave an instruct-tuned rather than chat-tuned model as the example; I think you can find a better one.
Pass handle_parsing_errors=True to the AgentExecutor. This is the error: Could not parse LLM output: I don't know
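That parse failure happens because the agent's output parser expects either an `Action:` line or a `Final Answer:` marker, and a bare "I don't know" has neither. A toy check (not LangChain's actual parser) illustrating the rule:

```python
def looks_parseable(llm_output: str) -> bool:
    """Toy approximation of the ReAct output contract: the text must contain
    either a Final Answer marker or an Action line to be parseable."""
    return "Final Answer:" in llm_output or "Action:" in llm_output

print(looks_parseable("Thought: done\nFinal Answer: Paris"))  # True
print(looks_parseable("I don't know"))  # False
```

With handle_parsing_errors=True the executor feeds the parse error back to the model as an observation and retries, instead of raising.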
API key?
We couldn't create your account due to activity on your network. Please contact our support.
from dotenv import load_dotenv
import os
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings, ChatNVIDIA
def initialize_llm(model_name='google/gemma-2-27b-it'):
    load_dotenv()
    api_key = os.getenv('NVIDIA_API_KEY')
    if not api_key:
        raise ValueError("API key not found! Check your .env file")
    llm = ChatNVIDIA(model=model_name, nvidia_api_key=api_key)
    embedder = NVIDIAEmbeddings(model='nvidia/nv-embedqa-mistral-7b-v2', api_key=api_key)
    return llm, embedder
HuggingFaceEndpoint was deprecated in LangChain 0.0.37 and will be removed in 1.0. An updated version of the class exists in the langchain-huggingface package and should be used instead. To use it, run `pip install -U langchain-huggingface` and import as `from langchain_huggingface import HuggingFaceEndpoint`.