Functions with Llama #10
Conversation
… for the recognition of citations
llama3/function_llama.py
Outdated
    }
}

output = llm.create_chat_completion(messages=messages, response_format=response_format, max_tokens=20000, temperature=0.5, top_p=0.5)
@eduranm the values should be parameters of the function or configuration settings
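A minimal sketch of that suggestion; the llm parameter and the simplified messages list are illustrative additions, not part of the PR, and the defaults are copied from the hard-coded values in the diff above:

def getReference(llm, text_reference, max_tokens=20000, temperature=0.5, top_p=0.5):
    # defaults mirror the previously hard-coded values; callers can now
    # override them per call or feed them in from project settings
    messages = [{'role': 'user', 'content': text_reference}]  # simplified prompt
    return llm.create_chat_completion(
        messages=messages,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
    )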
llama3/function_llama.py
Outdated
class functionsLlama:

    def getReference(text_reference):
        llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf")
@eduranm the model llama-3.2-3b-instruct-q4_k_m.gguf should be an environment variable
@eduranm remove llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf") from getReference
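A sketch combining both comments, assuming an environment variable named LLAMA_MODEL_FILE (the variable name is an assumption) and loading the model once at module level rather than inside getReference:

import os

from config.settings.base import LLAMA_MODEL_DIR
from llama_cpp import Llama

# model file name from the environment; LLAMA_MODEL_FILE is a hypothetical
# variable name, with the current file kept as a fallback
MODEL_FILE = os.environ.get("LLAMA_MODEL_FILE", "llama-3.2-3b-instruct-q4_k_m.gguf")

# loaded once at import time, so getReference no longer creates it per call
llm = Llama(model_path=os.path.join(LLAMA_MODEL_DIR, MODEL_FILE))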
{
    'role': 'assistant',
    'content': {
        'reftype': 'software',
@eduranm add the content provided by the user as mixed-citation
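A sketch of how the assistant example could echo the user's text under a 'mixed-citation' key; the sample citation string is made up for illustration:

text_reference = "Smith, J. (2020). ExampleTool (v1.0) [Software]."  # hypothetical sample

assistant_example = {
    'role': 'assistant',
    'content': {
        'mixed-citation': text_reference,  # the user-provided reference text
        'reftype': 'software',
    },
}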
llama3/function_llama.py
Outdated
from llama_cpp import Llama
import os


class functionsLlama:
- create a module generic_llama.py with the content:

from config.settings.base import LLAMA_MODEL_DIR
from llama_cpp import Llama


class GenericLlama:

    def __init__(self, messages, response_format, max_tokens=20000, temperature=0.5, top_p=0.5):
        self.llm = Llama(model_path=LLAMA_MODEL_DIR + "/llama-3.2-3b-instruct-q4_k_m.gguf")
        self.messages = messages
        self.response_format = response_format
        self.max_tokens = max_tokens
        self.temperature = temperature
        self.top_p = top_p

    def run(self, user_input):
        # copy the base prompt so repeated calls do not accumulate user messages
        input = self.messages.copy()
        input.append({
            'role': 'user',
            'content': user_input
        })
        return self.llm.create_chat_completion(
            messages=input,
            response_format=self.response_format,
            max_tokens=self.max_tokens,
            temperature=self.temperature,
            top_p=self.top_p,
        )
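A hypothetical usage sketch; MESSAGES and RESPONSE_FORMAT refer to the reference.config module proposed further down in this review, and the sample input string is made up:

from llama3.generic_llama import GenericLlama
from reference.config import MESSAGES, RESPONSE_FORMAT

marker = GenericLlama(MESSAGES, RESPONSE_FORMAT)
output = marker.run("Smith, J. (2020). ExampleTool (v1.0) [Software].")
print(output["choices"][0]["message"]["content"])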
llama3/function_llama.py
Outdated
def getReference(text_reference):
    llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf")

    messages = [
- create a new folder outside llama3, named reference
- inside reference, create config.py
- inside config.py:

MESSAGES = ....  # value of messages
RESPONSE_FORMAT = ....  # value of response_format
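An illustrative sketch of reference/config.py; the real MESSAGES and RESPONSE_FORMAT values should be copied from function_llama.py, so the prompt and format below are placeholders only:

MESSAGES = [
    {
        'role': 'system',
        'content': 'You tag bibliographic references and answer in JSON.',  # placeholder prompt
    },
]

# llama-cpp-python's JSON mode; a "schema" key can constrain the output further
RESPONSE_FORMAT = {'type': 'json_object'}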
- inside reference, create marker.py
- inside marker.py:

from llama3.generic_llama import GenericLlama
from reference.config import MESSAGES, RESPONSE_FORMAT

reference_marker = GenericLlama(MESSAGES, RESPONSE_FORMAT)


def mark_reference(reference_text):
    output = reference_marker.run(reference_text)
    # each choice's text lives at output['choices'][i]['message']['content']
    for item in output["choices"]:
        yield item["message"]["content"]


def mark_references(reference_block):
    for ref_row in reference_block.split("\n"):
        ref_row = ref_row.strip()
        if ref_row:
            choices = mark_reference(ref_row)
            yield {
                "reference": ref_row,
                "choices": list(choices)
            }
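A short usage sketch for mark_references, with a made-up two-line reference block:

block = """Smith, J. (2020). ExampleTool (v1.0) [Software].
Doe, A. (2019). AnotherTool (v2.3) [Software]."""

for result in mark_references(block):
    print(result["reference"])
    print(result["choices"])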
What does this PR do?
Adds initial files and configuration for using Llama in the project
Where could the review start?
By commit
How could this be tested manually?
Start the project
Any context scenario you want to give?
N/A
Screenshots
Which tickets are relevant?
#9
References
N/A