Future Technology Analysis

This script uses a Large Language Model (LLM) to generate a comprehensive overview of the technologies required for the full automation of a specified topic. It focuses on forecasting realistic technological advancements within a 5-15 year timeframe.

Purpose

The primary goal of future-technology.py is to analyze a given topic and produce a structured JSON output detailing the sensory, control, mechanical, and software systems needed for its complete automation. It also estimates the timeline and identifies key research areas.

Usage

Run the script from the command line:

python future-technology.py <input_json> [output_json] [-saveInputs] [-uuid="UUID"] [-flow_uuid="FLOW-UUID"]
  • <input_json>: (Required) Path to the input JSON file containing the analysis parameters.
  • [output_json]: (Optional) Path to save the output JSON file. If omitted, a default path is generated.
  • -saveInputs: (Optional) Flag to save the system and user prompts sent to the LLM.
  • -uuid="UUID": (Optional) Specify a custom UUID for the output metadata.
  • -flow_uuid="FLOW-UUID": (Optional) Specify a UUID for the flow, used for organizing saved inputs if -saveInputs is used.
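
For example, a typical invocation (file paths are illustrative) might be:

python future-technology.py examples/robotic-farming.json output/analysis.json -saveInputs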

Input Files

The script expects an input JSON file (<input_json>) with the following structure (the // comments are explanatory only; an actual input file must be valid JSON without comments):

{
  "topic": "The subject area for automation analysis",
  "model": "gemma3", // Or another LLM model identifier
  "parameters": { // Optional LLM parameters
    // e.g., "temperature": 0.7
  }
}
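
As a rough sketch, loading and validating such a file could look like this (the real script delegates to utils.load_json; the defaults below are assumptions):

import json

def load_input(path):
    # Sketch only: the real script uses utils.load_json.
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if "topic" not in data:
        raise ValueError("input JSON must contain a 'topic' field")
    data.setdefault("model", "gemma3")  # assumed default model
    data.setdefault("parameters", {})   # optional LLM parameters
    return data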

Key Functions

  • sanitize_json_string(json_str): Removes invalid control characters from a string to prevent JSON parsing errors.
  • extract_json_from_response(response): Attempts to extract and parse a valid JSON object from the LLM’s response, handling variations such as direct JSON, JSON within code fences (```json ... ``` or ``` ... ```), or JSON enclosed in curly braces (see the sketch after this list).
  • generate_future_technology(input_data, save_inputs=False): Constructs the system and user prompts based on the input topic, interacts with the specified LLM using utils.chat_with_llm, and processes the response using extract_json_from_response.
  • main(): Handles command-line arguments using utils.handle_command_args, loads input data using utils.load_json, orchestrates the analysis by calling generate_future_technology, prepares metadata using utils.create_output_metadata and utils.get_output_filepath, and saves the final output using utils.save_output.
  • Utility Functions (from utils.py): The script relies on several functions imported from utils.py for common tasks like loading JSON (load_json), saving output (save_output, saveToFile), interacting with the LLM (chat_with_llm), creating metadata (create_output_metadata), determining output paths (get_output_filepath), and handling arguments (handle_command_args).
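
A plausible sketch of the two parsing helpers follows (the exact character ranges, regexes, and fallback order in the real script may differ):

import json
import re

def sanitize_json_string(json_str):
    # Strip ASCII control characters (except tab, newline, and carriage
    # return) that would make json.loads fail.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", json_str)

def extract_json_from_response(response):
    text = sanitize_json_string(response)
    # Strategy 1: the whole response is already valid JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Strategy 2: JSON inside a code fence (```json ... ``` or ``` ... ```).
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    # Strategy 3: the outermost curly-brace span.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(text[start:end + 1])
        except json.JSONDecodeError:
            pass
    return None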

LLM Interaction

  1. Prompt Construction: A system prompt instructs the LLM to act as a future technology forecaster. A user prompt provides the specific topic and requests the output in a predefined JSON format.
  2. LLM Call: The chat_with_llm function sends these prompts to the specified LLM (the model named in the input file); a sketch of this step follows the list.
  3. Expected Response: The script anticipates a JSON response containing arrays for sensory_systems, control_systems, mechanical_systems, software_integration, key_research_areas, and a timeline_estimate.
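
Putting these steps together, the interaction might look like the following sketch (the prompt wording and the chat_with_llm signature are assumptions, not the script’s actual code):

from utils import chat_with_llm  # project helper; signature assumed

def generate_future_technology(input_data, save_inputs=False):
    topic = input_data["topic"]
    system_prompt = (
        "You are a future technology forecaster. Respond with a single "
        "JSON object and no other text."
    )
    user_prompt = (
        f"Describe the technologies required to fully automate: {topic}. "
        "Return JSON with the keys sensory_systems, control_systems, "
        "mechanical_systems, software_integration, key_research_areas, "
        "and timeline_estimate."
    )
    if save_inputs:
        pass  # the real script persists both prompts here (utils.saveToFile)
    response = chat_with_llm(
        model=input_data.get("model", "gemma3"),
        system_prompt=system_prompt,
        user_prompt=user_prompt,
        parameters=input_data.get("parameters", {}),
    )
    return extract_json_from_response(response)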

JSON Handling

LLM responses can sometimes include extra text or formatting inconsistencies. The script handles this in two stages:

  • sanitize_json_string: Cleans the raw response string by removing characters that would break standard JSON parsers.
  • extract_json_from_response: Employs multiple strategies (direct parsing, code fence extraction, curly brace extraction) to robustly locate and parse the intended JSON payload within potentially messy LLM output. This ensures the script can reliably retrieve the structured data even if the LLM doesn’t strictly adhere to the “JSON only” request.
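
For instance, with the helper sketched earlier, a response wrapped in chatter and a code fence would still yield the payload:

messy = (
    "Sure! Here is the analysis:\n"
    "```json\n"
    '{"timeline_estimate": "5-15 years"}\n'
    "```\n"
    "Let me know if you need anything else."
)
print(extract_json_from_response(messy))  # {'timeline_estimate': '5-15 years'}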

Output

The script generates a JSON file containing:

  • Process metadata (script name, start time, UUID) generated by create_output_metadata.
  • The core future_technology data, structured as a JSON object extracted from the LLM response.

{
  "process_metadata": {
    "script_name": "Future Technology Analysis",
    "start_time": "...",
    "uuid": "..."
  },
  "future_technology": {
    "sensory_systems": [...],
    "control_systems": [...],
    "mechanical_systems": [...],
    "software_integration": [...],
    "timeline_estimate": "...",
    "key_research_areas": [...]
  }
}

The output is saved to the specified [output_json] path or to a generated default path.