
Conversation

@better629

to add gradio support

@csris
Contributor

csris commented Mar 15, 2023

Thanks for the PR! I'll be taking a look at this soon.

Contributor

@csris csris left a comment

I like the direction this patch is going with providing a nice Gradio interface. But, unless I'm missing something, it needs to format the input_text from the text box into a prompt suitable for the language model. Please see inference/conversation.py for a helper class that constructs prompts.
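For readers following along, here is a minimal, self-contained sketch of the kind of prompt construction the reviewer is asking for. The class name, method names, and the `<human>:`/`<bot>:` turn format are assumptions for illustration; the real helper lives in inference/conversation.py.

```python
class Conversation:
    """Toy stand-in for the prompt helper in inference/conversation.py.

    Accumulates chat turns and renders them as a single prompt string
    for the language model. The <human>:/<bot>: markers are assumed
    here, not copied from the repository.
    """

    def __init__(self):
        self._turns = []  # list of (speaker, text) pairs

    def push_human_turn(self, text):
        self._turns.append(("human", text))

    def push_model_response(self, text):
        self._turns.append(("bot", text))

    def get_raw_prompt(self):
        # Render every turn, then leave a trailing "<bot>:" so the
        # model continues the conversation as the assistant.
        lines = [f"<{speaker}>: {text}" for speaker, text in self._turns]
        return "\n".join(lines) + "\n<bot>:"

    def get_last_turn(self):
        # Text of the most recent turn, whoever spoke it.
        return self._turns[-1][1]
```

The point of the review comment is that the raw text-box input should pass through a helper like this before reaching the model, rather than being sent verbatim.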

import os
import sys

CUR_DIR = os.path.abspath(os.path.dirname(__file__))
MODEL_PATH = os.path.join(CUR_DIR, "../../GPT-NeoXT-Chat-Base-20B/")
Contributor

I believe this should be

Suggested change:
- MODEL_PATH = os.path.join(CUR_DIR, "../../GPT-NeoXT-Chat-Base-20B/")
+ MODEL_PATH = os.path.join(CUR_DIR, "../huggingface_models/GPT-NeoXT-Chat-Base-20B/")
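To see why the relative path matters, here is a quick check of how the two candidates resolve. The directory names come from the diff; the script location (`/repo/inference`) is an assumption made only for this example.

```python
import os.path

# Assume the Gradio script lives under /repo/inference (this layout is
# an assumption for illustration; only the relative paths are from the diff).
cur_dir = "/repo/inference"

old = os.path.normpath(os.path.join(cur_dir, "../../GPT-NeoXT-Chat-Base-20B/"))
new = os.path.normpath(os.path.join(cur_dir, "../huggingface_models/GPT-NeoXT-Chat-Base-20B/"))

print(old)  # /GPT-NeoXT-Chat-Base-20B -- two levels up climbs out of the repo
print(new)  # /repo/huggingface_models/GPT-NeoXT-Chat-Base-20B -- stays inside it
```

With `../../` the path escapes the repository entirely, which is presumably why the reviewer suggests the single `../` into huggingface_models/.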

@better629
Author

@csris ok, I have updated the code. Thank you!

@csris
Contributor

csris commented Mar 31, 2023

Thanks! Will look.

response = self.conv.get_last_turn()

state = state + [(input_text, response)]
return state, state
Contributor

Why is the same state object being returned twice from this function?
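For context (this is not the author's reply): in a typical Gradio chat app the same history list is wired to two outputs, the visible Chatbot component and a hidden State that is fed back in on the next call, so the callback returns it twice. A minimal, dependency-free sketch of that shape, with all names assumed for illustration:

```python
def chat_fn(input_text, state):
    """Sketch of a Gradio-style chat callback (no real model behind it).

    `state` is the accumulated list of (user, bot) pairs. The function
    returns it twice because Gradio maps one return value to each output
    component: here, a visible chat display plus a hidden state store.
    """
    response = f"echo: {input_text}"  # stand-in for the model call
    state = state + [(input_text, response)]
    return state, state  # (value for the Chatbot, value for gr.State)
```

Both returned values are the same list object, which is exactly what the reviewer is asking about; whether the duplication is intentional depends on how the outputs are wired up in the Gradio interface.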

3 participants