Automation Challenges Generation
This script utilizes a Large Language Model (LLM) to generate a list of specific technical, practical, and conceptual challenges related to automating tasks within a given topic or field.
Purpose
The primary goal of this script is to identify and articulate the difficulties encountered when attempting to automate processes in a specific domain. It prompts an LLM to list challenges, focusing on technical limitations, practical constraints, and the nuances of human expertise that are hard to replicate.
Usage
The script is executed from the command line:
python generate-automation-challenges.py <input_json> [output_json] [-saveInputs] [-uuid="UUID"] [-flow_uuid="FLOW-UUID"]
- `<input_json>`: (Required) Path to the input JSON file containing the topic and LLM parameters.
- `[output_json]`: (Optional) Path where the output JSON file should be saved. If omitted, a default path is generated based on the script name and a UUID.
- `-saveInputs`: (Optional) Flag to save the system and user messages sent to the LLM into the `flow/<flowUUID>/inputs/` directory. Requires `-flow_uuid` to be set.
- `-uuid="UUID"`: (Optional) Specify a custom UUID for the output file generation.
- `-flow_uuid="FLOW-UUID"`: (Optional) Specify a UUID for the flow, used for saving inputs when `-saveInputs` is active.
Input Files
The script expects an input JSON file (<input_json>) with the following structure:
```json
{
  "topic": "The specific field or topic for challenge analysis",
  "model": "gemma3",   // Optional: specify the LLM model (a default is used if omitted)
  "parameters": {}     // Optional: dictionary of parameters for the LLM call
}
```
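As a concrete example, a minimal input file can be produced with a few lines of Python. The field names follow the structure above; the topic and parameter values are purely illustrative:

```python
import json

# Minimal input document for the script; "Medical diagnosis" and the
# temperature parameter are illustrative assumptions, not required values.
input_data = {
    "topic": "Medical diagnosis",
    "model": "gemma3",                    # optional: LLM model name
    "parameters": {"temperature": 0.7},   # optional: parameters for the LLM call
}

with open("input.json", "w", encoding="utf-8") as f:
    json.dump(input_data, f, indent=2)
```

The resulting `input.json` can then be passed as the `<input_json>` argument on the command line.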
Key Functions
`generate_automation_challenges(input_data, save_inputs=False)`:

- Extracts `topic`, `model`, and `parameters` from the input data.
- Constructs system and user prompts tailored to identifying automation challenges.
- Optionally saves the prompts using `utils.saveToFile` if `save_inputs` is True and `flowUUID` is set.
- Calls `utils.chat_with_llm` to interact with the specified LLM.
- Parses the expected JSON response from the LLM using `utils.parse_llm_json_response`.
- Handles a potential `json.JSONDecodeError` if the LLM response is not valid JSON.
- Returns the parsed list of challenges, or `None` on failure.
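The prompt-construction step might look roughly like the sketch below. `build_prompts` is a hypothetical helper shown only for illustration; the actual wording lives inside `generate_automation_challenges` and may differ:

```python
def build_prompts(topic: str) -> tuple[str, str]:
    """Hypothetical sketch of the prompt construction step.

    The real system/user message wording in the script may differ;
    this only mirrors the documented intent.
    """
    system_message = (
        "You are an expert in automation and its limitations. "
        "You identify the technical, practical, and conceptual challenges "
        "of automating work in a given field."
    )
    user_message = (
        f"List 4-8 specific challenges to automating tasks in the field of {topic}. "
        "For each challenge, provide a short title and a detailed explanation "
        "covering technical limitations, practical constraints, and human factors. "
        "Respond only with JSON."
    )
    return system_message, user_message

system_msg, user_msg = build_prompts("industrial welding")
```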
`main()`:

- Parses command-line arguments using `utils.handle_command_args`.
- Sets the global `flowUUID` if provided.
- Loads the input JSON data using `utils.load_json`.
- Calls `generate_automation_challenges` to get the list of challenges.
- Determines the output file path using `utils.get_output_filepath`.
- Creates process metadata (script name, start time, UUID) using `utils.create_output_metadata`.
- Combines the metadata and the generated challenges into a final dictionary.
- Saves the output data to the determined path using `utils.save_output`.
`utils` Module Functions: The script relies on helper functions from `utils.py` for common tasks such as loading/saving JSON (`load_json`, `save_output`, `saveToFile`), interacting with the LLM (`chat_with_llm`), parsing LLM responses (`parse_llm_json_response`), handling arguments (`handle_command_args`), managing output files (`get_output_filepath`), and creating metadata (`create_output_metadata`).
LLM Interaction

- Prompt Construction:
    - A system message defines the AI's role as an expert in automation challenges.
    - A user message requests 4-8 specific challenges for the provided `topic`, asking for a title and a detailed explanation for each, focusing on technical, practical, and human factors. It explicitly requests the output in JSON format.
- API Call: The `chat_with_llm` function sends these prompts to the specified LLM.
- Response Parsing: The script expects the LLM to return a JSON object containing a list of challenges, each with a `title` and an `explanation`. The `parse_llm_json_response` function handles extracting this data. If the response is not valid JSON, an error is printed.
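Because models often wrap their JSON in a markdown code fence, the parsing step has to tolerate that. Here is a self-contained sketch of what such a parser might do; the real `parse_llm_json_response` in `utils.py` may be implemented differently:

```python
import json
import re

def parse_llm_json_response(text: str):
    """Extract and decode a JSON payload from an LLM reply.

    Tolerates an optional markdown code fence around the payload and
    returns None on failure, mirroring the documented behaviour.
    Sketch only; the implementation in utils.py may differ.
    """
    # Strip an optional fenced block (with or without a "json" language tag).
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        print("Error: LLM response was not valid JSON")
        return None

reply = '```json\n{"challenges": [{"title": "Edge cases", "explanation": "..."}]}\n```'
challenges = parse_llm_json_response(reply)
```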
Output
The script generates a JSON output file containing:
- `process_metadata`: Information about the script execution (name, start time, UUID).
- `challenges`: The list of automation challenges (each with a `title` and an `explanation`) as generated by the LLM.
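Assembling this document is straightforward; the sketch below builds it with standard-library calls. The field names follow the documented output structure, but the construction itself is an illustration, not the actual `utils.create_output_metadata` implementation:

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative assembly of the output document; not the actual utils helpers.
start_time = datetime.now(timezone.utc)
challenges = [
    {"title": "Challenge Title 1", "explanation": "Detailed explanation..."},
]
end_time = datetime.now(timezone.utc)

output = {
    "process_metadata": {
        "script_name": "Automation Challenges Generation",
        "start_time": start_time.isoformat(),
        "end_time": end_time.isoformat(),
        "duration_seconds": (end_time - start_time).total_seconds(),
        "uuid": str(uuid.uuid4()),
    },
    "challenges": challenges,
}

print(json.dumps(output, indent=2))
```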
Example Output Structure:
```json
{
  "process_metadata": {
    "script_name": "Automation Challenges Generation",
    "start_time": "YYYY-MM-DDTHH:MM:SS.ffffff",
    "end_time": "YYYY-MM-DDTHH:MM:SS.ffffff",
    "duration_seconds": 10.5,
    "uuid": "generated-or-provided-uuid"
  },
  "challenges": [
    {
      "title": "Challenge Title 1",
      "explanation": "Detailed explanation of the first challenge..."
    },
    {
      "title": "Challenge Title 2",
      "explanation": "Detailed explanation of the second challenge..."
    }
    // ... more challenges
  ]
}
```