# Monologue Skill
## What It Does
The Monologue Skill makes Bea speak spontaneously when no one is talking to her. After a configurable idle timeout, she picks a topic and starts telling an unprompted story, anecdote, or opinion — just like a real streamer filling dead air. The story unfolds in chunks over time, with natural pauses between sections.
## File

`src/modules/skills/implementations/monologue.py`
## State Machine

The skill operates as a two-state machine inside its `update()` loop:
```
┌─────────────────────────────┐
│          IDLE STATE         │
│    (waiting for timeout)    │
│                             │
│      global_idle_time       │
│      > interval_seconds?    │
└────────────┬────────────────┘
             │ YES
             ▼
┌─────────────────────────────┐
│     STORYTELLING STATE      │
│   (telling current story)   │◄──────┐
│                             │       │ continue
│     time_since_speech       │       │ chunk
│     > chunk_pause_seconds?  ├───────┘
└────────────┬────────────────┘
             │ story ends / closed
             ▼
        back to IDLE
```
### IDLE mode

Reads the timestamp of the last message in `HistoryManager`. If `now - last_message_time > interval_seconds`, it triggers `_start_new_story()`.
### STORYTELLING mode

After each spoken chunk, the skill waits for `chunk_pause_seconds` of silence (i.e. `time_since_speech > chunk_pause_seconds`), then calls `_continue_story()`. This simulates natural stream-of-consciousness speech with pauses.
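The two states can be sketched as a minimal loop. This is an illustrative reconstruction, not the actual implementation: the method and config names (`update()`, `_start_new_story()`, `_continue_story()`, `interval_seconds`, `chunk_pause_seconds`) come from this document, while the stubbed bodies and the `events` list are placeholders for the real logic.

```python
import time


class MonologueSkillSketch:
    """Illustrative two-state loop; the real skill lives in monologue.py."""

    def __init__(self, interval_seconds=30, chunk_pause_seconds=4.0):
        self.interval_seconds = interval_seconds
        self.chunk_pause_seconds = chunk_pause_seconds
        self.is_telling_story = False
        self.last_message_time = time.monotonic()  # stand-in for HistoryManager
        self.last_speech_time = time.monotonic()
        self.events = []  # records transitions, for illustration only

    def update(self):
        now = time.monotonic()
        if not self.is_telling_story:
            # IDLE: has global silence exceeded interval_seconds?
            if now - self.last_message_time > self.interval_seconds:
                self._start_new_story()
        else:
            # STORYTELLING: has the pause after the last chunk been long enough?
            if now - self.last_speech_time > self.chunk_pause_seconds:
                self._continue_story()

    def _start_new_story(self):
        self.is_telling_story = True
        self.events.append("start")

    def _continue_story(self):
        self.last_speech_time = time.monotonic()
        self.events.append("chunk")
```

Calling `update()` on every tick of the main loop is all it takes: the state flips to storytelling once the idle timeout passes, and each subsequent quiet stretch yields another chunk.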
## Topic Generation
When starting a new story, the skill:

- Calls `LLM.chat()` with the monologue prompt and a list of `recent_topics` to avoid repetition.
- The LLM returns a new topic.
- The topic is stored in a `deque(maxlen=20)` to avoid immediate re-use.
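A minimal sketch of the bounded topic memory described above. `pick_topic` and the `llm_chat` callable are hypothetical stand-ins for the skill's internals and `LLM.chat()`; only the `deque(maxlen=20)` pattern is taken from the document.

```python
from collections import deque

# With maxlen=20, appending a 21st topic silently evicts the oldest one.
recent_topics = deque(maxlen=20)


def pick_topic(llm_chat, prompt="Pick a fresh monologue topic."):
    # Pass the recent topics along so the model can steer away from repeats.
    topic = llm_chat(prompt, avoid=list(recent_topics))
    recent_topics.append(topic)
    return topic
```

The deque gives recency-bounded deduplication for free: no manual trimming, and the avoidance list handed to the LLM never grows past 20 entries.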
The monologue system prompt (`data/prompts/monologue.txt`) instructs the AI to:
- Never ask questions to the audience
- Tell a detailed anecdote or opinion
- Use stream-of-consciousness style
- Not force-close — conclude naturally
## Chunk Continuation
Each chunk is generated by calling `brain.generate_response()` with a system trigger:
```
_continue_story()
├─ system_prompt = brain.system_prompt + monologue_rules
├─ user_trigger = "(System: Continue the story about '<topic>'. Remember: NO questions. If finished, add [END].)"
└─ brain.generate_response(user_trigger, system_prompt=combined_prompt)
   └─ uses the global HistoryManager history for context continuity
```
The story builds on the brain's main conversation history; there is no separate per-story history buffer. The `story_conversation_history` attribute exists on the class but is reset between chunks and otherwise unused.
The skill detects a story conclusion by checking for the `[END]` token in the LLM's response. When found, the token is stripped before calling `perform_output_task()`, and `is_telling_story` is set to `False`.
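The `[END]` detection described above amounts to a small string check. `finalize_chunk` is a hypothetical helper name; the sentinel token and strip-before-output behaviour are from the document.

```python
END_TOKEN = "[END]"


def finalize_chunk(response: str):
    """Strip the [END] sentinel and report whether the story concluded.

    Illustrative sketch: the real skill strips the token before calling
    perform_output_task() and then clears is_telling_story.
    """
    finished = END_TOKEN in response
    spoken_text = response.replace(END_TOKEN, "").strip()
    return spoken_text, finished
```

Keeping the sentinel out of the spoken text matters because anything left in the string would be sent straight to TTS.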
## Configuration
"monologue": {
"enabled": false,
"interval_seconds": 30,
"chunk_pause_seconds": 4.0,
"prompt_path": "data/prompts/monologue.txt"
}
| Key | Default | Description |
|---|---|---|
| `enabled` | `false` | Toggle the skill |
| `interval_seconds` | `30` | Silence duration before Bea starts monologuing |
| `chunk_pause_seconds` | `4.0` | Seconds of silence between story chunks before the next chunk is generated |
| `prompt_path` | `data/prompts/monologue.txt` | Path to the monologue rules prompt |
Setting `interval_seconds: 1` makes Bea start almost immediately, which is useful for testing.
## Interaction with Brain State
The `update()` method always checks `self.context.is_speaking` first. If the brain is already outputting audio, the skill resets `last_speech_time` and does nothing. This prevents the monologue from starting while Bea is mid-sentence.
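The guard can be sketched as follows. `SpeakingGuardDemo` and its `updates_run` counter are hypothetical scaffolding for illustration; the `context.is_speaking` check and the `last_speech_time` reset are the behaviour described above.

```python
import time
from types import SimpleNamespace


class SpeakingGuardDemo:
    """Illustrative guard mirroring the is_speaking check described above."""

    def __init__(self, context):
        self.context = context
        self.last_speech_time = time.monotonic()
        self.updates_run = 0  # counts updates that got past the guard

    def update(self):
        if self.context.is_speaking:
            # Bea is mid-sentence: push the silence clock forward and bail out.
            self.last_speech_time = time.monotonic()
            return
        self.updates_run += 1  # idle/storytelling logic would run here


ctx = SimpleNamespace(is_speaking=True)
guard = SpeakingGuardDemo(ctx)
guard.update()            # blocked: clock reset, nothing else happens
ctx.is_speaking = False
guard.update()            # proceeds into the state machine
```

Resetting the clock (rather than merely returning) is the key detail: it guarantees the silence timer only starts counting once Bea has actually stopped speaking.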