Wire everything up.

This commit is contained in:
Drew 2026-02-23 23:58:02 -08:00
parent c564f197b5
commit 05176c7742
15 changed files with 726 additions and 49 deletions

9
.skate/skate.log Normal file
View file

@ -0,0 +1,9 @@
2026-02-24T08:16:23.967303Z INFO skate::app: skate starting project_dir=. log=./.skate/skate.log
2026-02-24T08:16:32.836825Z INFO skate::app: skate starting project_dir=. log=./.skate/skate.log
2026-02-24T08:20:04.150234Z INFO skate::app: skate exiting cleanly
2026-02-24T08:20:58.589061Z INFO skate::app: skate starting project_dir=. log=./.skate/skate.log
2026-02-24T08:28:09.982253Z INFO skate::app: skate exiting cleanly
2026-02-24T09:31:08.532095Z INFO skate::app: skate starting project_dir=. log=./.skate/skate.log
2026-02-24T09:31:14.261232Z INFO skate::app: skate exiting cleanly
2026-02-24T09:47:04.720495Z INFO skate::app: skate starting project_dir=. log=./.skate/skate.log
2026-02-24T09:47:13.632927Z INFO skate::app: skate exiting cleanly

View file

@ -16,22 +16,22 @@ Rust TUI coding agent. Ratatui + Crossterm + Tokio. See DESIGN.md for architectu
Six modules with strict boundaries:
- `src/app/` Wiring, lifecycle, tokio runtime setup
- `src/tui/` Ratatui rendering, input handling, vim modes. Communicates with core ONLY via channels (`UserAction` → core, `UIEvent` ← core). Never touches conversation state directly.
- `src/core/` Conversation tree, orchestrator loop, sub-agent lifecycle
- `src/provider/` `ModelProvider` trait + Claude implementation. Leaf module, no internal dependencies.
- `src/tools/` `Tool` trait, registry, built-in tools. Depends only on `sandbox`.
- `src/sandbox/` Landlock policy, path validation, command execution. Leaf module.
- `src/session/` JSONL logging, session read/write. Leaf module.
- `src/app/` -- Wiring, lifecycle, tokio runtime setup
- `src/tui/` -- Ratatui rendering, input handling, vim modes. Communicates with core ONLY via channels (`UserAction` -> core, `UIEvent` <- core). Never touches conversation state directly.
- `src/core/` -- Conversation tree, orchestrator loop, sub-agent lifecycle
- `src/provider/` -- `ModelProvider` trait + Claude implementation. Leaf module, no internal dependencies.
- `src/tools/` -- `Tool` trait, registry, built-in tools. Depends only on `sandbox`.
- `src/sandbox/` -- Landlock policy, path validation, command execution. Leaf module.
- `src/session/` -- JSONL logging, session read/write. Leaf module.
The channel boundary between `tui` and `core` is critical — never bypass it. The TUI is a frontend; core is the engine. This separation enables headless mode for benchmarking.
The channel boundary between `tui` and `core` is critical -- never bypass it. The TUI is a frontend; core is the engine. This separation enables headless mode for benchmarking.
## Code Style
- Use `thiserror` for error types, not `anyhow` in library code (`anyhow` only in `main.rs`/`app`)
- Prefer `impl Trait` return types over boxing when possible
- All public types need doc comments
- No `unwrap()` in non-test code — use `?` or explicit error handling
- No `unwrap()` in non-test code -- use `?` or explicit error handling
- Async functions should be cancel-safe where possible
- Use `tracing` for structured logging, not `println!` or `log`
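The first bullet above can be made concrete: library code exposes a typed error enum, and `anyhow` only wraps it at the app boundary. A hand-rolled sketch of roughly what `#[derive(thiserror::Error)]` would generate -- the `ProviderError` name and its variants are hypothetical, not types from this crate:

```rust
use std::fmt;

// Hypothetical library-side error type. With thiserror this would be
// `#[derive(Debug, thiserror::Error)]` plus `#[error("...")]` attributes;
// the hand-written impls below are the derived equivalent.
#[derive(Debug)]
pub enum ProviderError {
    MissingApiKey,
    Http(String),
}

impl fmt::Display for ProviderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProviderError::MissingApiKey => write!(f, "ANTHROPIC_API_KEY is not set"),
            ProviderError::Http(msg) => write!(f, "http error: {msg}"),
        }
    }
}

impl std::error::Error for ProviderError {}

fn main() {
    // In `main.rs`/`app`, anyhow would wrap this typed error via `?`.
    let err = ProviderError::Http("429 rate limited".to_string());
    assert_eq!(err.to_string(), "http error: 429 rate limited");
    println!("{err}");
}
```

With `thiserror` the two impls collapse into `#[error("...")]` attributes on the variants; callers still get a concrete, matchable error type instead of an opaque `anyhow::Error`.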
@ -39,16 +39,23 @@ The channel boundary between `tui` and `core` is critical — never bypass it. T
Prefer a literate style: doc comments should explain *why* and *how*, not just restate the signature.
Use only characters available on a standard US QWERTY keyboard in all doc comments and inline comments. Specifically:
- Use `->` and `<-` instead of Unicode arrow glyphs
- Use `--` instead of em dashes or en dashes
- Use `+`, `-`, `|` for ASCII box diagrams instead of Unicode box-drawing characters
- Use `...` instead of the ellipsis character
- Spell out "Section N.N" instead of the section-sign glyph
When a function or type implements an external protocol or spec:
- Document the relevant portion of the protocol inline (packet shapes, event sequences, state machines)
- Link to the authoritative external source — API reference, RFC, WHATWG spec, etc.
- Link to the authoritative external source -- API reference, RFC, WHATWG spec, etc.
- Include a mapping table or lifecycle diagram when there are multiple cases to distinguish
For example, `run_stream` in `src/provider/claude.rs` documents the full SSE event sequence in a text diagram and links to both the Anthropic streaming reference and the WHATWG SSE spec. Aim for that level of context in any code that speaks a wire format or external API.
## Conversation Data Model
Events use parent IDs forming a tree (not a flat list). This enables future branching. Every event has: id, parent_id, timestamp, event_type, token_usage. A "turn" is all events between two user messages — this is the unit for token tracking.
Events use parent IDs forming a tree (not a flat list). This enables future branching. Every event has: id, parent_id, timestamp, event_type, token_usage. A "turn" is all events between two user messages -- this is the unit for token tracking.
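A minimal sketch of that data model -- the `Event` struct, its field types, and the `tokens_per_turn` helper are illustrative, not the actual `src/core` types, and the `timestamp` field is omitted for brevity:

```rust
// Illustrative event record: parent_id forms a tree (None only at the root).
#[derive(Debug)]
struct Event {
    id: u64,
    parent_id: Option<u64>,
    event_type: EventType,
    token_usage: u64,
}

#[derive(Debug, PartialEq)]
enum EventType {
    UserMessage,
    AssistantDelta,
}

/// Sum token usage per turn: a turn runs from one user message up to
/// (but not including) the next user message.
fn tokens_per_turn(events: &[Event]) -> Vec<u64> {
    let mut turns = Vec::new();
    for ev in events {
        if ev.event_type == EventType::UserMessage {
            turns.push(0); // a new user message opens a new turn
        }
        if let Some(total) = turns.last_mut() {
            *total += ev.token_usage;
        }
    }
    turns
}

fn main() {
    let events = vec![
        Event { id: 1, parent_id: None, event_type: EventType::UserMessage, token_usage: 10 },
        Event { id: 2, parent_id: Some(1), event_type: EventType::AssistantDelta, token_usage: 40 },
        Event { id: 3, parent_id: Some(2), event_type: EventType::UserMessage, token_usage: 12 },
        Event { id: 4, parent_id: Some(3), event_type: EventType::AssistantDelta, token_usage: 30 },
    ];
    assert_eq!(tokens_per_turn(&events), vec![50, 42]);
}
```

Once branching exists, `parent_id` stops being a simple linear chain and the same per-turn walk becomes a traversal of one path through the tree.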
## Testing
@ -61,8 +68,8 @@ Events use parent IDs forming a tree (not a flat list). This enables future bran
## Key Constraints
- All file I/O and process spawning in tools MUST go through `Sandbox` — never use `std::fs` or `std::process::Command` directly in tool implementations
- The `ModelProvider` trait must remain provider-agnostic — no Claude-specific types in the trait interface
- All file I/O and process spawning in tools MUST go through `Sandbox` -- never use `std::fs` or `std::process::Command` directly in tool implementations
- The `ModelProvider` trait must remain provider-agnostic -- no Claude-specific types in the trait interface
- Session JSONL is append-only. Never rewrite history. Branching works by writing new events with different parent IDs.
- Token usage must be tracked per-event and aggregatable per-turn

1
Cargo.lock generated

@ -2022,6 +2022,7 @@ checksum = "b2aa850e253778c88a04c3d7323b043aeda9d3e30d5971937c1855769763678e"
name = "skate"
version = "0.1.0"
dependencies = [
"anyhow",
"crossterm",
"futures",
"ratatui",

View file

@ -4,6 +4,7 @@ version = "0.1.0"
edition = "2024"
[dependencies]
anyhow = "1"
ratatui = "0.30"
crossterm = "0.29"
tokio = { version = "1", features = ["full"] }

4
TODO.md Normal file

@ -0,0 +1,4 @@
# Cleanups
- Move keyboard/event reads in the TUI to a separate thread or async I/O loop
- Keep UI and orchestrator in sync (e.g. messages can display out of order if you queue up many)

View file

@ -1 +1,85 @@
//! Application wiring: tracing initialisation, channel setup, and task
//! orchestration.
//!
//! This module is the only place that knows about all subsystems simultaneously.
//! It creates the two channels that connect the TUI to the core orchestrator,
//! spawns the orchestrator as a background tokio task, and then hands control to
//! the TUI event loop on the calling task.
//!
//! # Shutdown sequence
//!
//! ```text
//! User presses Ctrl-C / Ctrl-D
//! -> tui::run sends UserAction::Quit, breaks loop, drops action_tx
//! -> restore_terminal(), tui::run returns Ok(())
//! -> app::run returns Ok(())
//! -> tokio runtime drops the spawned orchestrator task
//! (action_rx channel closed -> orchestrator recv() returns None -> run() returns)
//! ```
mod workspace;
use std::path::Path;
use anyhow::Context;
use tokio::sync::mpsc;
use crate::core::orchestrator::Orchestrator;
use crate::core::types::{UIEvent, UserAction};
use crate::provider::ClaudeProvider;
/// Model ID sent on every request.
///
/// See the [models overview] for the full list of available model IDs.
///
/// [models overview]: https://docs.anthropic.com/en/docs/about-claude/models/overview
const MODEL: &str = "claude-haiku-4-5";
/// Buffer capacity for the `UserAction` and `UIEvent` channels.
///
/// 64 is large enough to absorb bursts of streaming deltas without blocking the
/// orchestrator, while remaining small enough to avoid any memory pressure.
const CHANNEL_CAP: usize = 64;
/// Initialise tracing, wire subsystems, and run until the user quits.
///
/// Steps:
/// 1. Open (or create) the `workspace::SkateDir` and install the tracing
/// subscriber. All structured log output goes to `.skate/skate.log` --
/// writing to stdout would corrupt the TUI.
/// 2. Construct a [`ClaudeProvider`], failing fast if `ANTHROPIC_API_KEY` is
/// absent.
/// 3. Create the `UserAction` (TUI -> core) and `UIEvent` (core -> TUI) channel
/// pair.
/// 4. Spawn the [`Orchestrator`] event loop on a tokio worker task.
/// 5. Run the TUI event loop on the calling task (crossterm must not be used
/// from multiple threads concurrently).
pub async fn run(project_dir: &Path) -> anyhow::Result<()> {
// -- Tracing ------------------------------------------------------------------
workspace::SkateDir::open(project_dir)?.init_tracing()?;
tracing::info!(project_dir = %project_dir.display(), "skate starting");
// -- Provider -----------------------------------------------------------------
let provider = ClaudeProvider::from_env(MODEL)
.context("failed to construct Claude provider (is ANTHROPIC_API_KEY set?)")?;
// -- Channels -----------------------------------------------------------------
let (action_tx, action_rx) = mpsc::channel::<UserAction>(CHANNEL_CAP);
let (event_tx, event_rx) = mpsc::channel::<UIEvent>(CHANNEL_CAP);
// -- Orchestrator (background task) -------------------------------------------
let orch = Orchestrator::new(provider, action_rx, event_tx);
tokio::spawn(orch.run());
// -- TUI (foreground task) ----------------------------------------------------
// `action_tx` is moved into tui::run; when it returns (user quit), the
// sender is dropped, which closes the channel and causes the orchestrator's
// recv() loop to exit.
crate::tui::run(action_tx, event_rx)
.await
.context("TUI error")?;
tracing::info!("skate exiting cleanly");
Ok(())
}
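The shutdown sequence documented in this module relies on close-on-drop channel semantics: once every sender is dropped, the receive side observes end-of-stream and the loop exits on its own. A std-library sketch of the same mechanism (tokio's `Receiver::recv().await` returning `None` is the async analogue of the `RecvError` below; all names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (action_tx, action_rx) = mpsc::channel::<String>();

    // Stand-in for the orchestrator task: loop until the channel closes.
    let orchestrator = thread::spawn(move || {
        let mut seen = 0;
        while let Ok(action) = action_rx.recv() {
            println!("orchestrator got: {action}");
            seen += 1;
        }
        // recv() returned Err(RecvError): every sender has been dropped.
        seen
    });

    action_tx.send("hello".to_string()).unwrap();
    drop(action_tx); // the TUI returning drops its sender -> channel closes

    // The loop exits on its own; no explicit shutdown message is needed.
    assert_eq!(orchestrator.join().unwrap(), 1);
    println!("orchestrator exited cleanly");
}
```

This is why `UserAction::Quit` is a convenience rather than a requirement: even if the quit message were lost, dropping `action_tx` alone is enough to unwind the orchestrator.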

104
src/app/workspace.rs Normal file

@ -0,0 +1,104 @@
//! `.skate/` runtime directory management.
//!
//! The `.skate/` directory lives inside the user's project and holds all
//! runtime artefacts produced by a skate session: structured logs, session
//! indices, and (in future) per-run snapshots. None of these should ever be
//! committed to the project's VCS, so the first time the directory is created
//! we drop a `.gitignore` containing `*` -- ignoring everything, including the
//! `.gitignore` itself.
//!
//! # Lifecycle
//!
//! ```text
//! app::run
//! -> SkateDir::open(project_dir) -- creates dir + .gitignore if needed
//! -> skate_dir.init_tracing() -- opens skate.log, installs subscriber
//! ```
use std::path::{Path, PathBuf};
use std::sync::Mutex;
use anyhow::Context;
use tracing_subscriber::EnvFilter;
/// The `.skate/` runtime directory inside a project.
///
/// Created on first use; subsequent calls are no-ops. All knowledge of
/// well-known child paths stays inside this module so callers never
/// construct them by hand.
pub struct SkateDir {
path: PathBuf,
}
impl SkateDir {
/// Open (or create) the `.skate/` directory inside `project_dir`.
///
/// On first call this also writes a `.gitignore` containing `*` so that
/// none of the runtime files are accidentally committed. Concretely:
///
/// 1. `create_dir_all` -- idempotent, works whether the dir already exists
/// or is being created for the first time.
/// 2. `OpenOptions::create_new` on `.gitignore` -- atomic write-once; the
/// `AlreadyExists` error is silently ignored so repeated calls are safe.
///
/// Returns `Err` on any I/O failure other than `AlreadyExists`.
pub fn open(project_dir: &Path) -> anyhow::Result<Self> {
let path = project_dir.join(".skate");
std::fs::create_dir_all(&path)
.with_context(|| format!("cannot create .skate directory {}", path.display()))?;
// Write .gitignore on first creation; no-op if it already exists.
// Content is "*": ignore everything in this directory including the
// .gitignore itself -- none of the skate runtime files should be committed.
let gitignore_path = path.join(".gitignore");
match std::fs::OpenOptions::new()
.write(true)
.create_new(true)
.open(&gitignore_path)
{
Ok(mut f) => {
use std::io::Write;
f.write_all(b"*\n")
.with_context(|| format!("cannot write {}", gitignore_path.display()))?;
}
Err(e) if e.kind() == std::io::ErrorKind::AlreadyExists => {}
Err(e) => {
return Err(e)
.with_context(|| format!("cannot create {}", gitignore_path.display()));
}
}
Ok(Self { path })
}
/// Install the global `tracing` subscriber, writing to `skate.log`.
///
/// Opens (or creates) `skate.log` in append mode, then registers a
/// `tracing_subscriber::fmt` subscriber that writes structured JSON-ish
/// text to that file. Writing to stdout is not possible because the TUI
/// owns the terminal.
///
/// `RUST_LOG` controls verbosity; falls back to `info` if absent or
/// unparseable. Must be called at most once per process -- the underlying
/// `tracing` registry panics on a second `init()` call.
pub fn init_tracing(&self) -> anyhow::Result<()> {
let log_path = self.path.join("skate.log");
let log_file = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open(&log_path)
.with_context(|| format!("cannot open log file {}", log_path.display()))?;
let filter =
EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
tracing_subscriber::fmt()
.with_writer(Mutex::new(log_file))
.with_ansi(false)
.with_env_filter(filter)
.init();
Ok(())
}
}
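The `create_new` write-once pattern in `open` can be exercised in isolation. A sketch under stated assumptions -- `write_once` and the temp-dir path are illustrative helpers, not part of the crate:

```rust
use std::fs;
use std::io::{ErrorKind, Write};
use std::path::Path;

// Write-once marker file: create_new fails with AlreadyExists on repeat
// calls, which we treat as success -- the same idempotence the .gitignore
// logic above depends on.
fn write_once(path: &Path, contents: &[u8]) -> std::io::Result<bool> {
    match fs::OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(mut f) => {
            f.write_all(contents)?;
            Ok(true) // newly created
        }
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("skate-write-once-demo");
    fs::create_dir_all(&dir)?;
    let path = dir.join(".gitignore");
    let _ = fs::remove_file(&path); // clean slate for the demo

    assert!(write_once(&path, b"*\n")?); // first call creates the file
    assert!(!write_once(&path, b"*\n")?); // second call is a no-op
    assert_eq!(fs::read_to_string(&path)?, "*\n");
    Ok(())
}
```

`create_new` maps to `O_CREAT | O_EXCL` on POSIX, so the create-and-claim step is atomic even if two processes race to initialise the same directory.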

90
src/core/history.rs Normal file

@ -0,0 +1,90 @@
use crate::core::types::ConversationMessage;
/// The in-memory conversation history for the current session.
///
/// Stores messages as a flat ordered list. Each [`push`][`Self::push`] appends
/// one message; [`messages`][`Self::messages`] returns a slice over all of them.
///
/// This is a flat list for Phase 1. Phases 3+ will introduce a tree structure
/// (each event carrying a `parent_id`) to support conversation branching and
/// sub-agent threads. The flat model is upward-compatible: a tree is just a
/// linear chain of parent IDs when there is no branching.
pub struct ConversationHistory {
messages: Vec<ConversationMessage>,
}
impl ConversationHistory {
/// Create an empty history.
pub fn new() -> Self {
Self {
messages: Vec::new(),
}
}
/// Append one message to the end of the history.
pub fn push(&mut self, message: ConversationMessage) {
self.messages.push(message);
}
/// Return the full ordered message list, oldest-first.
///
/// This slice is what gets serialised and sent to the provider on each
/// turn -- the provider needs the full prior context to generate a coherent
/// continuation.
pub fn messages(&self) -> &[ConversationMessage] {
&self.messages
}
}
impl Default for ConversationHistory {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::types::Role;
#[test]
fn new_history_is_empty() {
let history = ConversationHistory::new();
assert!(history.messages().is_empty());
}
#[test]
fn push_and_read_roundtrip() {
let mut history = ConversationHistory::new();
history.push(ConversationMessage {
role: Role::User,
content: "hello".to_string(),
});
history.push(ConversationMessage {
role: Role::Assistant,
content: "hi there".to_string(),
});
let msgs = history.messages();
assert_eq!(msgs.len(), 2);
assert_eq!(msgs[0].role, Role::User);
assert_eq!(msgs[0].content, "hello");
assert_eq!(msgs[1].role, Role::Assistant);
assert_eq!(msgs[1].content, "hi there");
}
#[test]
fn messages_preserves_insertion_order() {
let mut history = ConversationHistory::new();
for i in 0u32..5 {
history.push(ConversationMessage {
role: Role::User,
content: format!("msg {i}"),
});
}
for (i, msg) in history.messages().iter().enumerate() {
assert_eq!(msg.content, format!("msg {i}"));
}
}
}

View file

@ -1 +1,3 @@
pub mod history;
pub mod orchestrator;
pub mod types;

351
src/core/orchestrator.rs Normal file

@ -0,0 +1,351 @@
use futures::StreamExt;
use tokio::sync::mpsc;
use tracing::debug;
use crate::core::history::ConversationHistory;
use crate::core::types::{ConversationMessage, Role, StreamEvent, UIEvent, UserAction};
use crate::provider::ModelProvider;
/// Drives the conversation loop between the TUI frontend and the model provider.
///
/// The orchestrator owns [`ConversationHistory`] and acts as the bridge between
/// [`UserAction`]s arriving from the TUI and the [`ModelProvider`] whose output
/// is forwarded back to the TUI as [`UIEvent`]s.
///
/// # Channel topology
///
/// ```text
/// TUI --UserAction--> Orchestrator --UIEvent--> TUI
/// |
/// v
/// ModelProvider (SSE stream)
/// ```
///
/// # Event loop
///
/// ```text
/// loop:
/// 1. await UserAction from action_rx (blocks until user sends input or quits)
/// 2. SendMessage:
/// a. Append user message to history
/// b. Call provider.stream(history) -- starts an SSE request
/// c. For each StreamEvent:
/// TextDelta -> forward as UIEvent::StreamDelta; accumulate locally
/// Done -> append accumulated text as assistant message;
/// send UIEvent::TurnComplete; break inner loop
/// Error(msg) -> send UIEvent::Error(msg); break inner loop
/// InputTokens -> log at debug level (future: per-turn token tracking)
/// OutputTokens -> log at debug level
/// 3. Quit -> return
/// ```
pub struct Orchestrator<P> {
history: ConversationHistory,
provider: P,
action_rx: mpsc::Receiver<UserAction>,
event_tx: mpsc::Sender<UIEvent>,
}
impl<P: ModelProvider> Orchestrator<P> {
/// Construct an orchestrator using the given provider and channel endpoints.
pub fn new(
provider: P,
action_rx: mpsc::Receiver<UserAction>,
event_tx: mpsc::Sender<UIEvent>,
) -> Self {
Self {
history: ConversationHistory::new(),
provider,
action_rx,
event_tx,
}
}
/// Run the orchestrator until the user quits or the `action_rx` channel closes.
pub async fn run(mut self) {
while let Some(action) = self.action_rx.recv().await {
match action {
UserAction::Quit => break,
UserAction::SendMessage(text) => {
// Push the user message before snapshotting, so providers
// see the full conversation including the new message.
self.history.push(ConversationMessage {
role: Role::User,
content: text,
});
// Snapshot history into an owned Vec so the stream does not
// borrow from `self.history` -- this lets us mutably update
// `self.history` once the stream loop finishes.
let messages: Vec<ConversationMessage> = self.history.messages().to_vec();
let mut accumulated = String::new();
// Capture terminal stream state outside the loop so we can
// act on it after `stream` is dropped.
let mut turn_done = false;
let mut turn_error: Option<String> = None;
{
let mut stream = Box::pin(self.provider.stream(&messages));
while let Some(event) = stream.next().await {
match event {
StreamEvent::TextDelta(chunk) => {
accumulated.push_str(&chunk);
let _ = self.event_tx.send(UIEvent::StreamDelta(chunk)).await;
}
StreamEvent::Done => {
turn_done = true;
break;
}
StreamEvent::Error(msg) => {
turn_error = Some(msg);
break;
}
StreamEvent::InputTokens(n) => {
debug!(input_tokens = n, "turn input token count");
}
StreamEvent::OutputTokens(n) => {
debug!(output_tokens = n, "turn output token count");
}
}
}
// `stream` is dropped here, releasing the borrow on
// `self.provider` and `messages`.
}
if turn_done {
self.history.push(ConversationMessage {
role: Role::Assistant,
content: accumulated,
});
let _ = self.event_tx.send(UIEvent::TurnComplete).await;
} else if let Some(msg) = turn_error {
let _ = self.event_tx.send(UIEvent::Error(msg)).await;
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use futures::Stream;
use tokio::sync::mpsc;
/// A provider that replays a fixed sequence of [`StreamEvent`]s.
///
/// Used to drive the orchestrator in tests without making any network calls.
struct MockProvider {
events: Vec<StreamEvent>,
}
impl MockProvider {
fn new(events: Vec<StreamEvent>) -> Self {
Self { events }
}
}
impl ModelProvider for MockProvider {
fn stream<'a>(
&'a self,
_messages: &'a [ConversationMessage],
) -> impl Stream<Item = StreamEvent> + Send + 'a {
futures::stream::iter(self.events.clone())
}
}
/// Collect all UIEvents that arrive within one orchestrator turn, stopping
/// after a `TurnComplete` or `Error`, or once the channel is drained.
async fn collect_events(rx: &mut mpsc::Receiver<UIEvent>) -> Vec<UIEvent> {
let mut out = Vec::new();
while let Ok(ev) = rx.try_recv() {
let done = matches!(ev, UIEvent::TurnComplete | UIEvent::Error(_));
out.push(ev);
if done {
break;
}
}
out
}
// -- happy-path turn ----------------------------------------------------------
/// A full successful turn: text chunks followed by Done.
///
/// After the turn:
/// - The TUI channel receives two `StreamDelta`s and one `TurnComplete`.
/// - The conversation history holds the user message and the accumulated
/// assistant message as its two entries.
#[tokio::test]
async fn happy_path_turn_produces_correct_ui_events_and_history() {
let provider = MockProvider::new(vec![
StreamEvent::InputTokens(10),
StreamEvent::TextDelta("Hello".to_string()),
StreamEvent::TextDelta(", world!".to_string()),
StreamEvent::OutputTokens(5),
StreamEvent::Done,
]);
let (action_tx, action_rx) = mpsc::channel::<UserAction>(8);
let (event_tx, mut event_rx) = mpsc::channel::<UIEvent>(16);
let orch = Orchestrator::new(provider, action_rx, event_tx);
let handle = tokio::spawn(orch.run());
action_tx
.send(UserAction::SendMessage("hi".to_string()))
.await
.unwrap();
// Give the orchestrator time to process the stream.
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
let events = collect_events(&mut event_rx).await;
// Verify the UIEvent sequence.
assert_eq!(events.len(), 3);
assert!(matches!(&events[0], UIEvent::StreamDelta(s) if s == "Hello"));
assert!(matches!(&events[1], UIEvent::StreamDelta(s) if s == ", world!"));
assert!(matches!(events[2], UIEvent::TurnComplete));
// Shut down the orchestrator cleanly.
action_tx.send(UserAction::Quit).await.unwrap();
handle.await.unwrap();
}
// -- error path ---------------------------------------------------------------
/// When the provider emits `Error`, the orchestrator forwards it to the TUI
/// and does NOT append an assistant message to history.
#[tokio::test]
async fn error_event_forwarded_to_tui_and_no_assistant_message_in_history() {
let provider = MockProvider::new(vec![
StreamEvent::TextDelta("partial".to_string()),
StreamEvent::Error("network timeout".to_string()),
]);
let (action_tx, action_rx) = mpsc::channel::<UserAction>(8);
let (event_tx, mut event_rx) = mpsc::channel::<UIEvent>(16);
let orch = Orchestrator::new(provider, action_rx, event_tx);
let handle = tokio::spawn(orch.run());
action_tx
.send(UserAction::SendMessage("hello".to_string()))
.await
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
let events = collect_events(&mut event_rx).await;
assert_eq!(events.len(), 2);
assert!(matches!(&events[0], UIEvent::StreamDelta(s) if s == "partial"));
assert!(matches!(&events[1], UIEvent::Error(msg) if msg == "network timeout"));
action_tx.send(UserAction::Quit).await.unwrap();
handle.await.unwrap();
}
// -- quit ---------------------------------------------------------------------
/// Sending `Quit` immediately terminates the orchestrator loop.
#[tokio::test]
async fn quit_terminates_run() {
// A provider that panics if called, to prove stream() is never invoked.
struct NeverCalledProvider;
impl ModelProvider for NeverCalledProvider {
fn stream<'a>(
&'a self,
_messages: &'a [ConversationMessage],
) -> impl Stream<Item = StreamEvent> + Send + 'a {
panic!("stream() must not be called after Quit");
#[allow(unreachable_code)]
futures::stream::empty()
}
}
let (action_tx, action_rx) = mpsc::channel::<UserAction>(8);
let (event_tx, _event_rx) = mpsc::channel::<UIEvent>(8);
let orch = Orchestrator::new(NeverCalledProvider, action_rx, event_tx);
let handle = tokio::spawn(orch.run());
action_tx.send(UserAction::Quit).await.unwrap();
handle.await.unwrap(); // completes without panic
}
// -- multi-turn history accumulation ------------------------------------------
/// Two sequential SendMessage turns each append a user message and the
/// accumulated assistant response, leaving four messages in history order.
///
/// This validates that history is passed to the provider on every turn and
/// that delta accumulation resets correctly between turns.
#[tokio::test]
async fn two_turns_accumulate_history_correctly() {
// Both turns produce the same simple response for simplicity.
let make_turn_events = || {
vec![
StreamEvent::TextDelta("reply".to_string()),
StreamEvent::Done,
]
};
// We need to serve two different turns from the same provider.
// Use an `Arc<Mutex<VecDeque>>` so the provider can pop event sets.
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
struct MultiTurnMock {
turns: Arc<Mutex<VecDeque<Vec<StreamEvent>>>>,
}
impl ModelProvider for MultiTurnMock {
fn stream<'a>(
&'a self,
_messages: &'a [ConversationMessage],
) -> impl Stream<Item = StreamEvent> + Send + 'a {
let events = self.turns.lock().unwrap().pop_front().unwrap_or_default();
futures::stream::iter(events)
}
}
let turns = Arc::new(Mutex::new(VecDeque::from([
make_turn_events(),
make_turn_events(),
])));
let provider = MultiTurnMock { turns };
let (action_tx, action_rx) = mpsc::channel::<UserAction>(8);
let (event_tx, mut event_rx) = mpsc::channel::<UIEvent>(32);
let orch = Orchestrator::new(provider, action_rx, event_tx);
let handle = tokio::spawn(orch.run());
// First turn.
action_tx
.send(UserAction::SendMessage("turn one".to_string()))
.await
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
let ev1 = collect_events(&mut event_rx).await;
assert!(matches!(ev1.last(), Some(UIEvent::TurnComplete)));
// Second turn.
action_tx
.send(UserAction::SendMessage("turn two".to_string()))
.await
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(10)).await;
let ev2 = collect_events(&mut event_rx).await;
assert!(matches!(ev2.last(), Some(UIEvent::TurnComplete)));
action_tx.send(UserAction::Quit).await.unwrap();
handle.await.unwrap();
}
}

View file

@ -1,8 +1,6 @@
// Types are scaffolding — used in later phases.
#![allow(dead_code)]
/// A streaming event emitted by the model provider.
#[derive(Debug)]
#[derive(Debug, Clone)]
pub enum StreamEvent {
/// A text chunk from the assistant's response.
TextDelta(String),

View file

@ -3,4 +3,37 @@ mod core;
mod provider;
mod tui;
fn main() {}
use std::path::PathBuf;
use anyhow::Context;
/// Run skate against a project directory.
///
/// ```text
/// Usage: skate --project-dir <path>
/// ```
///
/// `ANTHROPIC_API_KEY` must be set in the environment.
/// `RUST_LOG` controls log verbosity (default: `info`); logs go to
/// `<project-dir>/.skate/skate.log`.
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let project_dir = parse_project_dir()?;
app::run(&project_dir).await
}
/// Extract the value of `--project-dir` from `argv`.
///
/// Returns an error if the flag is absent or is not followed by a value.
fn parse_project_dir() -> anyhow::Result<PathBuf> {
let mut args = std::env::args().skip(1); // skip the binary name
while let Some(arg) = args.next() {
if arg == "--project-dir" {
let value = args
.next()
.context("--project-dir requires a path argument")?;
return Ok(PathBuf::from(value));
}
}
anyhow::bail!("Usage: skate --project-dir <path>")
}

View file

@ -1,5 +1,3 @@
// Items used only in later phases or tests.
#![allow(dead_code)]
use futures::{SinkExt, Stream, StreamExt};
use reqwest::Client;
@ -92,7 +90,7 @@ impl ModelProvider for ClaudeProvider {
/// "model": "<model-id>",
/// "max_tokens": 8192,
/// "stream": true,
/// "messages": [{ "role": "user"|"assistant", "content": "<text>" }, ]
/// "messages": [{ "role": "user"|"assistant", "content": "<text>" }, ...]
/// }
/// ```
///
@ -106,13 +104,13 @@ impl ModelProvider for ClaudeProvider {
/// object. The full event sequence for a successful turn is:
///
/// ```text
/// event: message_start → InputTokens(n)
/// event: content_block_start → (ignored — signals a new content block)
/// event: ping → (ignored — keepalive)
/// event: content_block_delta → TextDelta(chunk) (repeated)
/// event: content_block_stop → (ignored — signals end of content block)
/// event: message_delta → OutputTokens(n)
/// event: message_stop → Done
/// event: message_start -> InputTokens(n)
/// event: content_block_start -> (ignored -- signals a new content block)
/// event: ping -> (ignored -- keepalive)
/// event: content_block_delta -> TextDelta(chunk) (repeated)
/// event: content_block_stop -> (ignored -- signals end of content block)
/// event: message_delta -> OutputTokens(n)
/// event: message_stop -> Done
/// ```
///
/// We stop reading as soon as `Done` is emitted; any bytes arriving after
@ -201,14 +199,14 @@ async fn run_stream(
/// Return the byte offset of the first `\n\n` in `buf`, or `None`.
///
/// SSE uses a blank line (two consecutive newlines) as the event boundary.
/// See [§9.2.6 of the SSE spec][sse-dispatch].
/// See [Section 9.2.6 of the SSE spec][sse-dispatch].
///
/// [sse-dispatch]: https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation
fn find_double_newline(buf: &[u8]) -> Option<usize> {
buf.windows(2).position(|w| w == b"\n\n")
}
// ── SSE JSON types ────────────────────────────────────────────────────────────
// -- SSE JSON types -----------------------------------------------------------
//
// These structs mirror the subset of the Anthropic SSE payload we actually
// consume. Unknown fields are silently ignored by serde. Full schemas are
@ -278,8 +276,8 @@ struct SseUsage {
/// | `message_start` | `.message.usage.input_tokens` | `InputTokens(n)` |
/// | `content_block_delta`| `.delta.type == "text_delta"` | `TextDelta(chunk)` |
/// | `message_delta` | `.usage.output_tokens` | `OutputTokens(n)` |
/// | `message_stop` | | `Done` |
/// | everything else | | `None` (caller skips) |
/// | `message_stop` | n/a | `Done` |
/// | everything else | n/a | `None` (caller skips) |
///
/// [sse-fields]: https://html.spec.whatwg.org/multipage/server-sent-events.html#parsing-an-event-stream
fn parse_sse_event(event_str: &str) -> Option<StreamEvent> {
@ -314,13 +312,13 @@ fn parse_sse_event(event_str: &str) -> Option<StreamEvent> {
"message_stop" => Some(StreamEvent::Done),
// error, ping, content_block_start, content_block_stop — ignored or
// error, ping, content_block_start, content_block_stop -- ignored or
// handled by the caller.
_ => None,
}
}
// ── Tests ─────────────────────────────────────────────────────────────────────
// -- Tests --------------------------------------------------------------------
#[cfg(test)]
mod tests {
@ -371,7 +369,7 @@ mod tests {
.filter_map(parse_sse_event)
.collect();
// content_block_start, ping, content_block_stop → None (filtered out)
// content_block_start, ping, content_block_stop -> None (filtered out)
assert_eq!(events.len(), 5);
assert!(matches!(events[0], StreamEvent::InputTokens(10)));
assert!(matches!(&events[1], StreamEvent::TextDelta(s) if s == "Hello"));

View file

@ -11,7 +11,7 @@ use crate::core::types::{ConversationMessage, StreamEvent};
/// Trait for model providers that can stream conversation responses.
///
/// Implementors take a conversation history and return a stream of [`StreamEvent`]s.
/// The trait is provider-agnostic — no Claude-specific types appear here.
/// The trait is provider-agnostic -- no Claude-specific types appear here.
pub trait ModelProvider: Send + Sync {
/// Stream a response from the model given the conversation history.
fn stream<'a>(

View file

@ -1,5 +1,3 @@
// Types and functions are scaffolding — wired into main.rs in Stage 1.6.
#![allow(dead_code)]
//! TUI frontend: terminal lifecycle, rendering, and input handling.
//!
@ -37,9 +35,6 @@ pub enum TuiError {
/// An underlying terminal I/O error.
#[error("terminal IO error: {0}")]
Io(#[from] std::io::Error),
/// The action channel was closed before the event loop exited cleanly.
#[error("action channel closed")]
ChannelClosed,
}
/// The UI-layer view of a conversation: rendered messages and the current input buffer.
@ -209,13 +204,13 @@ fn update_scroll(state: &mut AppState, area: Rect) {
///
/// Layout (top to bottom):
/// ```text
/// ┌──────────────────────────────┐
/// │ conversation history │ Fill(1)
/// │ │
/// ├──────────────────────────────┤
/// │ Input │ Length(3)
/// │ > _ │
/// └──────────────────────────────┘
/// +--------------------------------+
/// | conversation history | Fill(1)
/// | |
/// +--------------------------------+
/// | Input | Length(3)
/// | > _ |
/// +--------------------------------+
/// ```
///
/// Role headers are coloured: `"You:"` in cyan, `"Assistant:"` in green.
@ -255,8 +250,8 @@ fn render(frame: &mut Frame, state: &AppState) {
/// ```text
/// loop:
/// 1. drain UIEvents (non-blocking try_recv)
/// 2. poll keyboard for up to 16 ms ← spawn_blocking keeps async runtime free
/// 3. handle key event → Option<LoopControl>
/// 2. poll keyboard for up to 16 ms (<- spawn_blocking keeps async runtime free)
/// 3. handle key event -> Option<LoopControl>
/// 4. render frame (scroll updated inside draw closure)
/// 5. act on LoopControl: send message or break
/// ```