# Execution Lifecycle
This page documents the complete flow from voice input to command response, including every hook and validation step.
## Full Lifecycle Diagram

```mermaid
flowchart TD
    A[Voice Input] --> B{pre_route?}
    B -->|PreRouteResult| F[execute]
    B -->|None| C[LLM Inference]
    C --> D[Tool Call Selected]
    D --> E[post_process_tool_call]
    E --> F
    F --> G[_validate_secrets]
    G -->|MissingSecretsError| ERR[Error Response]
    G -->|OK| H[_validate_params]
    H -->|Missing required| ERR2[ValueError]
    H -->|OK| I[validate_call]
    I -->|Errors| M[validation_error Response]
    I -->|All pass| J{Has suggestions?}
    J -->|Yes| K[Auto-correct kwargs]
    K --> L[run]
    J -->|No| L
    L --> N[CommandResponse]
    style A fill:#e1f5fe
    style N fill:#e8f5e9
    style ERR fill:#ffebee
    style ERR2 fill:#ffebee
    style M fill:#fff3e0
```
## Step-by-Step Walkthrough

### 1. Pre-Route Check
Before involving the LLM at all, the command center calls `pre_route()` on every registered command with the raw voice text.
```python
def pre_route(self, voice_command: str) -> PreRouteResult | None:
    text = voice_command.lower().strip()
    if text in ("pause", "pause the music"):
        return PreRouteResult(arguments={"action": "pause"})
    return None
```
If any command returns a `PreRouteResult`:

- The LLM is bypassed entirely
- The result's `arguments` dict is passed directly to `execute()`
- If `spoken_response` is set, it overrides the normal TTS generation
- This is significantly faster (no LLM latency)
If all commands return `None`:

- Normal LLM inference proceeds

When to use `pre_route`:

- Short, unambiguous commands ("pause", "skip", "resume", "volume 50")
- Deterministic patterns that never need LLM interpretation
- Keep the word-count check tight (the music command caps at 5 words)
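The routing loop described above can be sketched as follows. This is a minimal illustration, not the command center's real internals: the `PreRouteResult` stand-in, the `route()` helper, and the `(name, pre_route)` registry shape are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class PreRouteResult:
    # Hypothetical minimal shape; the framework's real class has more fields.
    arguments: dict = field(default_factory=dict)
    spoken_response: Optional[str] = None

def music_pre_route(voice_command: str) -> Optional[PreRouteResult]:
    text = voice_command.lower().strip()
    # Tight word-count cap keeps the deterministic path unambiguous
    if len(text.split()) <= 5 and text in ("pause", "pause the music"):
        return PreRouteResult(arguments={"action": "pause"})
    return None

def route(
    voice_text: str,
    commands: list[tuple[str, Callable[[str], Optional[PreRouteResult]]]],
) -> Optional[tuple[str, dict]]:
    # First command to claim the utterance wins; the LLM is bypassed entirely
    for name, pre_route in commands:
        result = pre_route(voice_text)
        if result is not None:
            return name, result.arguments
    return None  # no match: fall through to LLM inference
```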
### 2. LLM Inference
The command center sends the voice command to the LLM along with all registered command schemas (generated by `get_command_schema()` or `to_openai_tool_schema()`).
The LLM:

- Reads all command descriptions, parameters, examples, rules, and antipatterns
- Selects the best-matching command (or decides to answer directly if `allow_direct_answer=True`)
- Extracts parameter values from the voice command
- Returns a tool call with the command name and arguments
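For reference, the standard OpenAI function-calling format that `to_openai_tool_schema()` presumably targets looks like this. The command name, description, and parameters below are illustrative, not taken from the framework:

```python
# Illustrative tool schema in the OpenAI function-calling format;
# the exact output of to_openai_tool_schema() may differ in detail.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City to look up"},
            },
            "required": ["city"],
        },
    },
}

# What a resulting tool call from the LLM might look like:
tool_call = {"name": "get_weather", "arguments": {"city": "Berlin"}}
```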
### 3. Post-Process Tool Call

After the LLM produces a tool call, `post_process_tool_call()` gets a chance to fix common LLM mistakes before execution.
```python
def post_process_tool_call(self, args: dict, voice_command: str) -> dict:
    # Fix: LLM sometimes passes action="delete" instead of "trash"
    if args.get("action") == "delete":
        args["action"] = "trash"
    # Fix: LLM sometimes forgets the query for play action
    if args.get("action") == "play" and not args.get("query"):
        args["query"] = self._extract_query_from_utterance(voice_command)
    return args
```
This method receives the raw voice command as a second argument, which is useful for extracting data the LLM missed.
### 4. Execute (Orchestration)

`execute()` on `JarvisCommandBase` orchestrates the validation pipeline. You do not override this method.
```python
def execute(self, request_info: RequestInformation, **kwargs) -> CommandResponse:
    self._validate_secrets()
    self._validate_params(kwargs)
    results = self.validate_call(**kwargs)
    errors = [r for r in results if not r.success]
    if errors:
        return CommandResponse.validation_error(errors)
    # Apply auto-corrections
    for r in results:
        if r.suggested_value is not None:
            kwargs[r.param_name] = r.suggested_value
    return self.run(request_info, **kwargs)
```
### 5. Secret Validation

`_validate_secrets()` checks that every secret in `required_secrets` with `required=True` has a non-empty value in the database.
```python
def _validate_secrets(self):
    missing = []
    for secret in self.required_secrets:
        if secret.required and not get_secret_value(secret.key, secret.scope):
            missing.append(secret.key)
    if missing:
        raise MissingSecretsError(missing)
```
If secrets are missing, a `MissingSecretsError` is raised. The command center catches this and tells the user to configure the missing settings.
### 6. Parameter Presence Validation

`_validate_params()` checks that every parameter with `required=True` is present in `kwargs`.
```python
def _validate_params(self, kwargs):
    missing = [
        p.name for p in self.parameters if p.required and kwargs.get(p.name) is None
    ]
    if missing:
        raise ValueError(f"Missing required params: {', '.join(missing)}")
```
### 7. Value Validation

`validate_call()` runs three checks on each parameter value:

- Type validation -- is the value the right Python type?
- Enum validation -- if `enum_values` is set, is the value in the list?
- Custom validation -- if a `validation_function` is defined, does it pass?
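The three checks can be sketched as a single `validate()` method on the parameter object. This is an assumed shape: the field names (`enum_values`, `validation_function`) mirror the docs, but the real parameter class may differ.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Parameter:
    # Hypothetical stand-in for the framework's parameter class.
    name: str
    param_type: type
    enum_values: Optional[list] = None
    validation_function: Optional[Callable[[Any], bool]] = None

    def validate(self, value: Any) -> tuple[bool, Optional[str]]:
        # 1. Type validation
        if not isinstance(value, self.param_type):
            return False, f"{self.name} must be {self.param_type.__name__}"
        # 2. Enum validation
        if self.enum_values is not None and value not in self.enum_values:
            return False, f"{self.name} must be one of {self.enum_values}"
        # 3. Custom validation
        if self.validation_function and not self.validation_function(value):
            return False, f"{self.name} failed custom validation"
        return True, None
```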
The default implementation loops over all parameters:
```python
def validate_call(self, **kwargs) -> list[ValidationResult]:
    results = []
    for param in self.parameters:
        value = kwargs.get(param.name)
        if value is None:
            continue
        is_valid, error_msg = param.validate(value)
        if not is_valid:
            results.append(ValidationResult(
                success=False,
                param_name=param.name,
                command_name=self.command_name,
                message=error_msg,
                valid_values=param.enum_values,
            ))
    return results
```
Override for cross-parameter or context-dependent validation:
```python
def validate_call(self, **kwargs) -> list[ValidationResult]:
    results = super().validate_call(**kwargs)
    # Custom: verify entity_id exists in Home Assistant
    entity_id = kwargs.get("entity_id")
    if entity_id and not self._entity_exists(entity_id):
        results.append(ValidationResult(
            success=False,
            param_name="entity_id",
            command_name=self.command_name,
            message=f"Device '{entity_id}' not found",
            valid_values=self._get_known_entities(),
        ))
    return results
```
### 8. Auto-Correction

If any `ValidationResult` has a `suggested_value`, the value is automatically corrected in `kwargs` before `run()` is called.
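Extracted from the `execute()` snippet in step 4, the correction step is just an overwrite loop. The `Suggestion` dataclass below is a minimal stub standing in for `ValidationResult`:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Suggestion:
    # Minimal stand-in for ValidationResult; only the two fields
    # the correction loop reads.
    param_name: str
    suggested_value: Any = None

def apply_corrections(kwargs: dict, results: list[Suggestion]) -> dict:
    # Mirrors the loop in execute(): suggested values overwrite kwargs
    # before run() is called.
    for r in results:
        if r.suggested_value is not None:
            kwargs[r.param_name] = r.suggested_value
    return kwargs
```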
This is useful for fuzzy matching. For example, if the user says "turn on the living room light" and the entity is `light.living_room_main`, your `validate_call()` can return a suggestion:
```python
results.append(ValidationResult(
    success=True,
    param_name="entity_id",
    command_name=self.command_name,
    suggested_value="light.living_room_main",
))
```
### 9. Run

Finally, your `run()` method executes with validated, potentially auto-corrected parameters:
```python
def run(self, request_info: RequestInformation, **kwargs) -> CommandResponse:
    # Parameters are validated and corrected at this point
    city = kwargs.get("city", "default")
    return CommandResponse.success_response(context_data={"city": city})
```
### 10. Response Back to CC

The `CommandResponse` flows back to the command center, which:
- Reads `context_data` and generates a spoken response via the LLM
- Sends TTS audio to the node
- Sends structured data to the mobile app
- If `wait_for_input=True`, keeps the conversation open
- If `actions` are present, renders buttons in the mobile UI
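As an illustration, such a response might carry a payload like the following. The field names (`context_data`, `wait_for_input`, `actions`) mirror this page, but the real `CommandResponse` constructor and the exact shape of an action entry are assumptions:

```python
# Hypothetical response payload: keeps the conversation open and offers
# a follow-up button in the mobile UI. Shown as a plain dict because the
# real CommandResponse API is not reproduced here.
response = {
    "context_data": {"result": 8},  # the LLM turns this into speech
    "wait_for_input": True,         # keep the conversation open
    "actions": [
        {"label": "Multiply by 2", "command": "calculate"},
    ],
}
```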
## Installation Lifecycle
When a command is first installed on a node, a separate lifecycle runs.
### `install_command.py` Flow
```mermaid
flowchart TD
    A[install_command.py] --> B[Discover command classes]
    B --> C[Run DB migrations]
    C --> D[For each command:]
    D --> E[Read all_possible_secrets]
    E --> F[Seed empty rows in secrets DB]
    F --> G{Has required_packages?}
    G -->|Yes| H[pip install packages]
    G -->|No| I[Done]
    H --> I
```
```bash
# Install all commands
python scripts/install_command.py --all

# Install a specific command
python scripts/install_command.py get_weather

# List commands and their secrets
python scripts/install_command.py --list
```
### `init_data.py` Flow

For commands that need first-install setup (like fetching device lists or registering with external services), the `init_data.py` script calls the command's `init_data()` method, which can run interactive setup:
```python
def init_data(self) -> dict:
    # Interactive setup: prompt for URL, authenticate, list devices
    url = input("Service URL: ")
    # ... setup logic ...
    return {"status": "success", "devices_found": 5}
```
### `required_packages` Auto-Install

When `required_packages` returns packages, the install script installs them automatically:
```python
@property
def required_packages(self) -> List[JarvisPackage]:
    return [
        JarvisPackage("music-assistant-client", ">=1.3.0"),
    ]
```
The install script runs `pip install "music-assistant-client>=1.3.0"` and records the pin in `custom-requirements.txt` for reproducibility.
## Validation Flow (CC Side)
When the command center receives a validation_error response, it follows this flow:
```mermaid
flowchart TD
    A[validation_error Response] --> B{Has valid_values?}
    B -->|Yes| C[LLM retries with valid_values hint]
    C --> D{LLM picks valid value?}
    D -->|Yes| E[Re-execute with corrected args]
    D -->|No| F[Ask user for clarification]
    B -->|No| F
    F --> G[User responds]
    G --> H[Re-execute with user's input]
```
The `valid_values` list in `ValidationResult` is critical -- it tells the LLM what the correct options are, allowing automatic retry without bothering the user.
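The retry path above can be sketched with an injected `llm_pick` callable standing in for the real inference call. All names here are illustrative, not the command center's actual API:

```python
from typing import Callable, Optional

def retry_with_hint(
    llm_pick: Callable[[str], str],
    param_name: str,
    bad_value: str,
    valid_values: list,
) -> Optional[str]:
    # Re-prompt the LLM with the valid_values hint from the ValidationResult
    hint = (
        f"{param_name}='{bad_value}' is invalid; "
        f"choose one of {valid_values}"
    )
    choice = llm_pick(hint)
    # Accept only a listed value; otherwise fall back to asking the user
    return choice if choice in valid_values else None
```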
## Multi-Turn Conversation Flow

When a command returns `wait_for_input=True`, the conversation stays open:
```mermaid
sequenceDiagram
    participant User
    participant CC as Command Center
    participant LLM
    participant Node
    User->>CC: "What's 5 plus 3?"
    CC->>LLM: Tool inference
    LLM->>CC: calculate(num1=5, num2=3, operation=add)
    CC->>Node: execute(...)
    Node->>CC: follow_up_response(result=8)
    CC->>User: "5 plus 3 equals 8"
    Note over CC: wait_for_input=True, conversation stays open
    User->>CC: "Now multiply that by 2"
    CC->>LLM: Tool inference (with conversation history)
    LLM->>CC: calculate(num1=8, num2=2, operation=multiply)
    CC->>Node: execute(...)
    Node->>CC: follow_up_response(result=16)
    CC->>User: "8 times 2 equals 16"
```
The LLM has access to the conversation history, so it can resolve references like "that" to the previous result.