
Conversation

@eduranm (Contributor) commented Jan 23, 2025

What does this PR do?
Adds initial files and configuration for using llama in the project

Where could the review start?
By commit

How could this be tested manually?
Start the project

Any context scenario you want to give?
N/A

Screenshots

What are the relevant tickets?
#9

References
N/A

    }
}

output = llm.create_chat_completion(messages=messages, response_format=response_format, max_tokens=20000, temperature=0.5, top_p=0.5)

@eduranm the values should be function parameters or configuration settings
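A minimal sketch of that suggestion, assuming Django-style settings as used elsewhere in the PR (the setting names LLAMA_MAX_TOKENS, LLAMA_TEMPERATURE and LLAMA_TOP_P are hypothetical):

# Hypothetical sketch: generation values come from function parameters,
# with defaults pulled from configuration instead of being hard-coded.
from config.settings.base import LLAMA_MAX_TOKENS, LLAMA_TEMPERATURE, LLAMA_TOP_P


def create_completion(llm, messages, response_format,
                      max_tokens=LLAMA_MAX_TOKENS,
                      temperature=LLAMA_TEMPERATURE,
                      top_p=LLAMA_TOP_P):
    # Callers can override any value per call; defaults come from settings.
    return llm.create_chat_completion(
        messages=messages,
        response_format=response_format,
        max_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
    )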

class functionsLlama:

    def getReference(text_reference):
        llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf")
@robertatakenaka (Member) commented Jan 24, 2025

@eduranm the model llama-3.2-3b-instruct-q4_k_m.gguf should be an environment variable
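One way to read it from the environment, sketched here (the variable name LLAMA_MODEL_FILE is hypothetical):

import os

from config.settings.base import LLAMA_MODEL_DIR
from llama_cpp import Llama

# Hypothetical: take the model filename from an environment variable,
# falling back to the current default so existing setups keep working.
LLAMA_MODEL_FILE = os.environ.get("LLAMA_MODEL_FILE", "llama-3.2-3b-instruct-q4_k_m.gguf")

llm = Llama(model_path=os.path.join(LLAMA_MODEL_DIR, LLAMA_MODEL_FILE))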


@eduranm remove llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf") from getReference

{
    'role': 'assistant',
    'content': {
        'reftype': 'software',

@eduranm add the content provided by the user as a mixed-citation
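In other words, the raw text sent by the user would be echoed back inside the structured output. A hypothetical sketch of the expected shape (the field values are illustrative):

# Hypothetical sketch: the user's raw reference text is echoed back
# under a 'mixed-citation' key (JATS terminology), next to the parsed fields.
text_reference = "Smith J. Example software, v1.0. 2024."  # illustrative input

example = {
    'role': 'assistant',
    'content': {
        'reftype': 'software',
        'mixed-citation': text_reference,
        # ... remaining parsed fields
    },
}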

from llama_cpp import Llama
import os

class functionsLlama:
@robertatakenaka (Member) commented Jan 24, 2025

@eduranm

  1. create a module generic_llama.py with the content:
from config.settings.base import LLAMA_MODEL_DIR
from llama_cpp import Llama
import os

class GenericLlama:

    def __init__(self, messages, response_format, max_tokens=20000, temperature=0.5, top_p=0.5):
        self.llm = Llama(model_path=LLAMA_MODEL_DIR + "/llama-3.2-3b-instruct-q4_k_m.gguf")
        self.messages = messages
        self.response_format = response_format
        self.max_tokens = max_tokens
        self.temperature = temperature
        self.top_p = top_p

    def run(self, user_input):
        input = self.messages.copy()
        input.append({
            'role': 'user',
            'content': user_input
        })
        return self.llm.create_chat_completion(messages=input, response_format=self.response_format, max_tokens=self.max_tokens, temperature=self.temperature, top_p=self.top_p)
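A usage sketch for the class above (the prompt and response format are illustrative; llama-cpp-python accepts response_format={'type': 'json_object'}):

# Hypothetical usage: the prompt and response format are fixed at
# construction time, so the model is loaded once and reused per call.
marker = GenericLlama(
    messages=[{'role': 'system', 'content': 'You mark bibliographic references.'}],
    response_format={'type': 'json_object'},
)
output = marker.run("Smith J. Example software, v1.0. 2024.")
print(output['choices'][0]['message']['content'])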

def getReference(text_reference):
    llm = Llama(model_path = LLAMA_MODEL_DIR+"/llama-3.2-3b-instruct-q4_k_m.gguf")

messages = [

@eduranm

  1. create a new folder outside llama3, named reference
  2. inside reference, create config.py
  3. inside config.py:
MESSAGES = .... # value of messages

RESPONSE_FORMAT = .... # value of response_format
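For illustration only, a hypothetical shape for those two values (the real ones are the messages and response_format already present in the diff):

# Hypothetical illustration of the expected shapes; the actual values are
# the existing messages / response_format from the PR.
MESSAGES = [
    {'role': 'system', 'content': 'Extract structured data from bibliographic references.'},
]

RESPONSE_FORMAT = {
    'type': 'json_object',
    # llama-cpp-python also accepts an optional JSON 'schema' here
}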


@eduranm

  1. inside reference, create marker.py
  2. inside marker.py:
from llama3.generic_llama import GenericLlama

from reference.config import MESSAGES, RESPONSE_FORMAT


reference_marker = GenericLlama(MESSAGES, RESPONSE_FORMAT)


def mark_reference(reference_text):
    output = reference_marker.run(reference_text)
    # output['choices'][0]['message']['content']
    for item in output["choices"]:
        yield item["message"]["content"]


def mark_references(reference_block):
    for ref_row in reference_block.split("\n"):
        ref_row = ref_row.strip()
        if ref_row:
            choices = mark_reference(ref_row)
            yield {
                "reference": ref_row,
                "choices": list(choices)
            }
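A usage sketch with a hypothetical two-line reference block:

# Hypothetical usage: one model call per non-empty line of the block.
block = "Smith J. Example software, v1.0. 2024.\nDoe A. Another reference. Journal X; 2023."

for marked in mark_references(block):
    print(marked["reference"])
    for choice in marked["choices"]:
        print(choice)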

@robertatakenaka merged commit d18ae2c into scieloorg:main Jan 27, 2025
1 of 3 checks passed