Introduction
When calling large OpenAI models through third-party tools such as LangChain, the tools assemble prompts internally, which obscures exactly what information the application is sending to the model. This tool provides a proxy service that captures requests to GPT models and prints both the outgoing messages and the responses to the console, helping developers identify and debug issues.
Compile this project using Go:
go build
Run the compiled executable:
./go-llm-proxy
Use curl to test if the proxy service can correctly forward requests and display return data:
curl --location --request POST 'http://127.0.0.1:8888/openai/v1/chat/completions' \
--header 'Authorization: Bearer <token>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "messages": [
        {
            "role": "system",
            "content": "Just say hi"
        }
    ],
    "model": "gpt-3.5-turbo",
    "stream": true
}'
Source: https://github.com/litongjava/go-llm-proxy