Minyang Chen
Apr 8, 2024

Hi, that's a very good question about having control over the result. I think it needs further optimization on the prompt engineering side to steer the LLM's output generation.

Here are a couple of options I explored recently but haven't had a chance to write up yet.

#1 The first is called DSPy, a programming framework meant to replace prompt engineering (that's how they describe it).

https://github.com/stanfordnlp/dspy

How it works: you create a training dataset of input/output pairs, and the framework optimizes the prompt so the model generates the output you want. See the sketch below.
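
For example, here's a minimal DSPy sketch (assuming the dspy-ai package with an OpenAI-backed model; the signature, training examples, and metric below are illustrative placeholders, not a definitive recipe):

```
import dspy
from dspy.teleprompt import BootstrapFewShot

# configure the underlying LM (model name is illustrative)
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# declare the input/output behavior instead of hand-writing a prompt
class QA(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

program = dspy.Predict(QA)

# small training dataset of input/output pairs
trainset = [
    dspy.Example(question="Who wrote Hamlet?", answer="William Shakespeare").with_inputs("question"),
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

# simple metric: exact match on the answer field
def exact_match(example, pred, trace=None):
    return example.answer.lower() == pred.answer.lower()

# the optimizer "compiles" the program, bootstrapping demonstrations so the
# generated prompt produces outputs that match the training data
optimizer = BootstrapFewShot(metric=exact_match)
compiled = optimizer.compile(program, trainset=trainset)

print(compiled(question="Who wrote Macbeth?").answer)
```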

#2 Another option is to specify JSON as the response format; see the sample below.

```
from llama_cpp import Llama

# assumes llama-cpp-python with a local GGUF model; the path is illustrative
llm = Llama(model_path="model.gguf")

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that outputs in JSON.",
        },
        {"role": "user", "content": "Who is the prime minister of Canada in 2022?"},
    ],
    response_format={
        "type": "json_object",
    },
    temperature=0.7,
)
```
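
To consume the result, parse the message content of the first choice as JSON (a minimal sketch; the field layout is the OpenAI-style dict that create_chat_completion returns):

```
import json

# the JSON text lives in the first choice's message content
content = response["choices"][0]["message"]["content"]
data = json.loads(content)  # fails loudly if the model drifted from valid JSON
print(data)
```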

Hope this helps. I will share more once I get a chance to put together a sample project.
