Output Mode (Append)

What is Output Mode?

By default, each workflow step's output replaces everything before it — only the last step's output becomes the published article. Output Mode lets you change this behavior so multiple LLM steps can each contribute a section to the same article.

Each LLM step has two output modes:

  • Replace (default) — This step's output becomes the entire article, replacing any previous content
  • Append — This step's output is added to the end of the running article from previous steps

Key concept: A “running article” accumulates across LLM steps. When a step uses Replace, it resets the running article. When a step uses Append, it adds to the running article. The running article is what gets published.
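The accumulation rule can be sketched in a few lines of Python. This is a conceptual illustration of the behavior described above, not the product's actual implementation:

```python
def run_workflow(steps):
    """Simulate how the running article accumulates across workflow steps.

    Each step is a (kind, output_mode, output) tuple. Data source steps
    never touch the running article; LLM steps either replace or append.
    """
    running_article = ""
    for kind, output_mode, output in steps:
        if kind != "llm":
            continue  # data sources provide context only, not article content
        if output_mode == "replace":
            running_article = output           # reset the running article
        elif output_mode == "append":
            running_article += "\n" + output   # add to the end
    return running_article  # this is what gets published

article = run_workflow([
    ("scraper", None, "[scraped data]"),
    ("llm", "replace", "Sections 1-3"),
    ("llm", "append", "Sections 4-6"),
    ("llm", "append", "Conclusion + FAQ"),
])
```

Here the scraper step leaves the running article untouched, the first LLM step seeds it, and the two append steps extend it.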

How It Works

Step 1: Web Scraper
  Output: [scraped data]
  Running article: (empty — data sources don't affect it)

Step 2: LLM (Replace)
  Prompt: "Write sections 1-3 about {{keyword}}..."
  Running article: <h2>Section 1</h2>... <h2>Section 2</h2>... <h2>Section 3</h2>...

Step 3: LLM (Append)
  Prompt: "Write sections 4-6 about {{keyword}}..."
  Running article: Sections 1-3 + Sections 4-6

Step 4: LLM (Append)
  Prompt: "Write a conclusion and FAQ section..."
  Running article: Sections 1-6 + Conclusion + FAQ

Published article = Full running article (all sections combined)

When a step is in Append mode, it receives the current running article as context so the LLM knows what it's continuing from. This prevents repetition and ensures smooth transitions between sections.
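Conceptually, an append step's prompt is assembled from the running article plus the step's own instructions. The exact context format is internal to the product; this sketch only illustrates the idea:

```python
def build_append_prompt(running_article, step_prompt):
    # Hypothetical assembly: the running article is supplied as context
    # so the model can continue without repeating earlier sections.
    return (
        "The article so far:\n"
        f"{running_article}\n\n"
        "Continue the article. Do not repeat existing sections.\n"
        f"Task: {step_prompt}"
    )

prompt = build_append_prompt("<h2>Section 1</h2>...", "Write sections 4-6.")
```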

Data Source Steps

Data source steps (Web Scraper, YouTube, DataForSEO, Search Console, News, Humanizer, Anchor Browser) do not participate in article accumulation. Their output is context/data for subsequent LLM steps, not article content. The output mode setting is not available on data source steps.

Configuration

The Output Mode setting is found under Advanced on each LLM workflow step, alongside Max Output Tokens and Temperature.

Example Workflows

Multi-Part Article

Split a long article across multiple LLM steps, each handling a specific section:

Step 1: LLM (Replace)
  "Write an introduction and sections 1-3 about {{keyword}}. Cover the fundamentals and key concepts."

Step 2: LLM (Append)
  "Write sections 4-6 about {{keyword}}. Cover advanced topics and best practices."

Step 3: LLM (Append)
  "Write a conclusion, FAQ section, and key takeaways for the article about {{keyword}}."
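Expressed in the External API's workflowAgents schema (shown under API Usage below), this workflow might be configured as follows. The provider and model values are illustrative placeholders:

```python
# Illustrative External API payload for the multi-part article workflow.
# Field names follow the workflowAgents schema; provider/model are examples.
workflow_agents = [
    {
        "id": "step-1",
        "order": 0,
        "provider": "anthropic",
        "model": "claude-sonnet-4-5-20250514",
        "prompt": (
            "Write an introduction and sections 1-3 about {{keyword}}. "
            "Cover the fundamentals and key concepts."
        ),
        "enabled": True,
        # No outputMode: the first step defaults to "replace".
    },
    {
        "id": "step-2",
        "order": 1,
        "provider": "anthropic",
        "model": "claude-sonnet-4-5-20250514",
        "prompt": (
            "Write sections 4-6 about {{keyword}}. "
            "Cover advanced topics and best practices."
        ),
        "enabled": True,
        "outputMode": "append",
    },
    {
        "id": "step-3",
        "order": 2,
        "provider": "anthropic",
        "model": "claude-sonnet-4-5-20250514",
        "prompt": (
            "Write a conclusion, FAQ section, and key takeaways "
            "for the article about {{keyword}}."
        ),
        "enabled": True,
        "outputMode": "append",
    },
]
```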

Research + Multi-Part Writing

Scrape data first, then write the article in parts:

Step 1: Web Scraper
  Scrapes competitor article for research data

Step 2: DataForSEO SERP
  Fetches current Google results and PAA questions

Step 3: LLM (Replace)
  "Using the scraped content and SERP data, write the first half of an article about {{keyword}}."

Step 4: LLM (Append)
  "Write the second half, including a FAQ section based on the People Also Ask questions."

Write + Edit

Write the article, then have an editor step polish it. The editor uses Replace since it outputs the full revised article:

Step 1: LLM (Replace)
  "Write a comprehensive article about {{keyword}}."

Step 2: LLM (Append)
  "Write a FAQ section with 5 questions."

Step 3: LLM (Replace) — Editor step
  "Review the article and improve clarity, fix any grammar issues, and ensure smooth transitions between sections. Output the full revised article."

Note: The editor step uses Replace because it outputs the entire article with edits incorporated. There is no “surgical edit” mode — the LLM must output the full text.

API Usage

Set outputMode on any LLM workflow step via the External API:

{ "workflowAgents": [ { "id": "step-1", "order": 0, "provider": "anthropic", "model": "claude-sonnet-4-5-20250514", "prompt": "Write sections 1-3...", "enabled": true }, { "id": "step-2", "order": 1, "provider": "anthropic", "model": "claude-sonnet-4-5-20250514", "prompt": "Write sections 4-6...", "enabled": true, "outputMode": "append" }, { "id": "step-3", "order": 2, "provider": "openai", "model": "gpt-5-mini", "prompt": "Edit the full article for clarity...", "enabled": true } ] }

Steps without outputMode default to “replace”. Only LLM steps support this setting.
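The defaulting rule can be captured in a small client-side helper. This sketch is illustrative only; effective_output_mode is not part of the API:

```python
def effective_output_mode(step: dict) -> str:
    """Resolve a step's output mode: a missing outputMode means "replace"."""
    return step.get("outputMode", "replace")

steps = [
    {"id": "step-1", "prompt": "Write sections 1-3..."},
    {"id": "step-2", "prompt": "Write sections 4-6...", "outputMode": "append"},
    {"id": "step-3", "prompt": "Edit the full article for clarity..."},
]
modes = [effective_output_mode(s) for s in steps]
```

For the three-step payload above, the resolved modes are replace, append, replace: the final editor step overwrites the running article with its full revision.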

Tips

  • The first LLM step should always use Replace (there's nothing to append to yet)
  • Each append-mode step receives the current running article as context, so it can continue naturally without repeating content
  • Use an editor/review step at the end with Replace to polish the combined article
  • Consider model context window limits — each append step sees the full running article plus all previous step outputs
  • For very long content, use models with large context windows (Claude, Gemini) to avoid truncation