Prompt Response Generation
This script utilizes a Large Language Model (LLM) to generate a factual response based on a user-provided prompt. The response is structured as a simple list of facts, one per line.
Purpose
The script takes as input a JSON file containing a prompt, an optional model specification, and optional parameters. It calls an LLM via the `chat_with_llm` utility function to generate a response that follows specific instructions (factual, concise, varied facts, one per line). The resulting response, along with metadata, is saved to an output JSON file.
Usage
To run the script, use the following command format:
```
python prompt.py <input_json> [output_json]
```
- `<input_json>`: Path to the required input JSON file.
- `[output_json]`: Optional path for the output JSON file. If not provided, a path is generated automatically in the `output/` directory based on the task name and a UUID.
The script uses the `handle_command_args` function from `utils.py` to parse these command-line arguments.
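For illustration, a minimal sketch of the argument handling this implies is shown below; the real `handle_command_args` in `utils.py` may have a different signature and behaviour.

```python
# Hypothetical stand-in for utils.handle_command_args, shown only to
# illustrate the expected command-line interface; the real helper may differ.
import sys

def handle_command_args():
    """Return (input_json, output_json); output_json is None when omitted."""
    if len(sys.argv) < 2:
        print("Usage: python prompt.py <input_json> [output_json]")
        sys.exit(1)
    input_json = sys.argv[1]
    output_json = sys.argv[2] if len(sys.argv) > 2 else None
    return input_json, output_json
```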
Input Files
The input must be a JSON file containing the following keys:
- `prompt` (string): The user's prompt or question for the LLM. (Required)
- `model` (string): The identifier for the LLM to use. Defaults to `"gemma3"` if not provided. (Optional)
- `parameters` (object): Additional parameters to pass to the LLM API. (Optional)
Example (`examples/prompt-in.json`):

```json
{
  "prompt": "List key facts about the planet Mars.",
  "model": "gemma3",
  "parameters": {}
}
```
Key Functions
- `generate_prompt_response(input_data)` (see the sketch after this list):
    - Extracts the `prompt`, `model`, and `parameters` from the input dictionary.
    - Constructs a detailed system message instructing the LLM on the desired output format and style (factual, concise, varied, one fact per line, no extra formatting).
    - Calls the `chat_with_llm` function (from `utils.py`) with the model, system message, user prompt, and parameters.
    - Returns the content of the LLM's response.
- `main()`:
    - Parses command-line arguments using `handle_command_args`.
    - Loads the input data from the specified JSON file using `load_json`.
    - Calls `generate_prompt_response` to get the LLM response.
    - Determines the output file path using `get_output_filepath`, generating one if not specified.
    - Creates metadata (process name, start time, UUID) using `create_output_metadata`.
    - Formats the final output data, including metadata, the original prompt, and the response content (split into a list of strings by newline).
    - Saves the output data to the determined JSON file path using `save_output`.
- Imported `utils` functions:
    - `load_json`: Loads data from a JSON file.
    - `save_output`: Saves data to a JSON file.
    - `chat_with_llm`: Handles interaction with the LLM API.
    - `create_output_metadata`: Generates standard metadata for output files.
    - `get_output_filepath`: Determines the appropriate output file path.
    - `handle_command_args`: Parses command-line arguments for input/output files.
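The following is a minimal sketch of what `generate_prompt_response` might look like, assuming `chat_with_llm` takes the model, system message, user prompt, and parameters in that order; the exact argument names and order in `utils.py` may differ.

```python
# Illustrative sketch only; the argument order/names of chat_with_llm are assumptions.
from utils import chat_with_llm

def generate_prompt_response(input_data):
    prompt = input_data["prompt"]                  # required
    model = input_data.get("model", "gemma3")      # optional, defaults to "gemma3"
    parameters = input_data.get("parameters", {})  # optional extra API parameters

    # System message instructing the LLM to return concise, varied facts,
    # one per line, with no extra formatting (abbreviated here; the full
    # text is quoted in the LLM Interaction section below).
    system_message = (
        "You are a knowledgeable assistant specialized in providing accurate, "
        "concise, and informative facts about various topics. "
        "Present your information in a clear, structured format with one fact per line. "
        "Output the instructions in a simple list format with no numbers, "
        "symbols, markdown, or extra formatting."
    )

    # Send the system message, user prompt, and parameters to the LLM
    # and return the raw text of its reply.
    return chat_with_llm(model, system_message, prompt, parameters)
```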
LLM Interaction
The script interacts with the LLM specified by the `model` parameter (defaulting to `"gemma3"`).

- System Prompt Construction: A specific system message is sent to the LLM:

    > You are a knowledgeable assistant specialized in providing accurate, concise, and informative facts about various topics. Your responses should be factual, specific, and organized. When asked about a subject, provide clear, detailed information based on your knowledge, focusing on relevant details. Present your information in a clear, structured format with one fact per line. Avoid unnecessary commentary, opinions, or irrelevant details. Focus on providing factual, educational content about the requested topic. Focus on having a wide variety of facts. Output the instructions in a simple list format with no numbers, symbols, markdown, or extra formatting.

    This guides the LLM to generate a list of distinct, factual statements about the topic in the user prompt.
- User Prompt: The `prompt` value from the input JSON is used as the user prompt.
- API Call: The `chat_with_llm` function sends the system message, user prompt, model identifier, and any specified `parameters` to the LLM API (see the example after this list).
- Response Processing: The raw text response from the LLM is returned by `generate_prompt_response`.
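Because the `parameters` object is forwarded to `chat_with_llm` unchanged, the keys it accepts depend on the backing LLM API rather than on `prompt.py` itself. The option shown below (`temperature`) is only an assumed example.

```python
# Hypothetical example of passing extra API options through the input data;
# "temperature" is an assumed key and may not be supported by every backend.
from prompt import generate_prompt_response

input_data = {
    "prompt": "List key facts about the planet Mars.",
    "model": "gemma3",
    "parameters": {"temperature": 0.3},
}

raw_text = generate_prompt_response(input_data)
print(raw_text)
```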
Output
The script generates a JSON file containing:
- Metadata generated by `create_output_metadata` (including `process_name`, `start_time`, `uuid`).
- The original `prompt` provided in the input file.
- The `response` from the LLM, formatted as a list of strings. Each string corresponds to a line (intended to be a single fact) from the LLM's raw output, split on the newline character (`\n`); see the sketch after this list.
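A sketch of how `main()` might assemble this output is shown below, reusing `generate_prompt_response` from the earlier sketch; the exact signatures of the `utils.py` helpers (`create_output_metadata`, `get_output_filepath`, `save_output`) are assumptions based on the descriptions above.

```python
# Illustrative sketch; helper signatures are assumed from the descriptions
# above, not taken from the actual utils.py API. generate_prompt_response
# is the function sketched earlier in this same module.
from utils import (
    load_json, save_output, create_output_metadata,
    get_output_filepath, handle_command_args,
)

def main():
    input_path, output_path = handle_command_args()
    input_data = load_json(input_path)

    response_text = generate_prompt_response(input_data)

    # Generate a path under output/ when no output file was specified.
    output_path = output_path or get_output_filepath("prompt")

    # Metadata: process_name, start_time, uuid.
    output_data = create_output_metadata("Prompt Response")
    output_data["prompt"] = input_data["prompt"]
    # One fact per list entry, split on newlines.
    output_data["response"] = response_text.split("\n")

    save_output(output_data, output_path)

if __name__ == "__main__":
    main()
```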
Example (`examples/prompt-out.json`):

```json
{
  "process_name": "Prompt Response",
  "start_time": "2024-07-15T10:30:00.123456",
  "uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
  "prompt": "List key facts about the planet Mars.",
  "response": [
    "Mars is the fourth planet from the Sun.",
    "It is often called the 'Red Planet' due to its reddish appearance.",
    "Mars has two small moons, Phobos and Deimos.",
    "A Martian day (sol) is slightly longer than an Earth day: 24 hours and 37 minutes.",
    "The planet has polar ice caps composed of water ice and frozen carbon dioxide.",
    "Olympus Mons on Mars is the largest volcano in the solar system.",
    "Valles Marineris is one of the largest canyons in the solar system, located on Mars."
  ]
}
```