Understanding ChatGPT Prompt Engineering - Part IV - Using ChatGPT for Inference Tasks
Basic Usage of LLMs

Loading the boilerplate code:

```python
import openai
import os
from dotenv import load_dotenv, find_dotenv  # library to load the local environment variables in Jupyter

_ = load_dotenv(find_dotenv())
api_key = os.getenv("OPENAI_API_KEY")

# creating the basic prompting function
client = openai.OpenAI()

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,  # deterministic output, useful for repeatable inference results
    )
    return response.choices[0].message.content
```

LLM as an Inference Device

Inference is essentially the extraction of specific properties from the text fed to the model. These could be sentiments, specific key-value pairs, tone, labels, names, and so on. The applications of this capability are very wide; I hope to illustrate a few examples here. ...
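As a first illustration, a sentiment-classification call might look like the minimal sketch below. It simply reuses the get_completion() helper defined above; the review text and the exact prompt wording are my own placeholders for illustration, not taken from a real dataset.

```python
# A minimal sketch of using get_completion() for an inference task:
# classifying the sentiment of a product review. The review below
# is a made-up example.
review = """
I bought this lamp last week. It arrived quickly, but the shade
was scratched. Customer support replaced it within two days, so
overall I am quite happy with the purchase.
"""

prompt = f"""
What is the sentiment of the following product review,
which is delimited by triple quotes?
Give your answer as a single word: "positive" or "negative".

Review: '''{review}'''
"""

print(get_completion(prompt))  # expected output: positive
```

Because temperature is set to 0 inside get_completion(), repeated calls with the same prompt should return the same label, which is exactly what you want for classification-style inference.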