What is Output Mode?
By default, each workflow step's output replaces everything before it — only the last step's output becomes the published article. Output Mode lets you change this behavior so multiple LLM steps can each contribute a section to the same article.
Each LLM step can be set to one of two output modes:
- Replace (default) — This step's output becomes the entire article, replacing any previous content
- Append — This step's output is added to the end of the running article from previous steps
How It Works
When a step is in Append mode, it receives the current running article as context so the LLM knows what it's continuing from. This prevents repetition and ensures smooth transitions between sections.
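The accumulation rule can be sketched in a few lines. This is an illustrative model, not the platform's actual implementation, and it only covers how the running article is built up (in the real workflow, append-mode steps also receive the running article as LLM context):

```python
def run_workflow(steps):
    """Each step is a (mode, output) pair; returns the final article.

    'replace' discards everything accumulated so far;
    'append' adds the step's output to the end of the running article.
    """
    article = ""
    for mode, output in steps:
        if mode == "replace":
            article = output
        else:  # "append"
            article = (article + "\n\n" + output) if article else output
    return article

steps = [
    ("replace", "## Intro\nOpening section."),
    ("append", "## Body\nMiddle section."),
    ("append", "## Conclusion\nClosing section."),
]
print(run_workflow(steps))
```

Note that a Replace step anywhere in the chain wipes out all earlier appended sections, which is why Replace is normally used only for the first writing step or a final full-rewrite editor step.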
Data Source Steps
Data source steps (Web Scraper, YouTube, DataForSEO, Search Console, News, Humanizer, Anchor Browser) do not participate in article accumulation. Their output is context/data for subsequent LLM steps, not article content. The output mode setting is not available on data source steps.
Configuration
The Output Mode setting is found under Advanced on each LLM workflow step, alongside Max Output Tokens and Temperature.
Example Workflows
Multi-Part Article
Split a long article across multiple LLM steps, each handling a specific section:
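A three-step setup might look like the following. Only `outputMode` and its values are confirmed by this article; the `type` and `name` fields are illustrative:

```json
{
  "steps": [
    { "type": "llm", "name": "Introduction", "outputMode": "replace" },
    { "type": "llm", "name": "Main Body",    "outputMode": "append" },
    { "type": "llm", "name": "Conclusion",   "outputMode": "append" }
  ]
}
```

The first step uses Replace since there is nothing to append to yet; each later step appends its section and sees the running article as context.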
Research + Multi-Part Writing
Scrape data first, then write the article in parts:
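For example (illustrative step schema; only `outputMode` is confirmed). The scraper step has no output mode because data source steps do not participate in article accumulation:

```json
{
  "steps": [
    { "type": "webScraper", "name": "Research" },
    { "type": "llm", "name": "Overview",  "outputMode": "replace" },
    { "type": "llm", "name": "Deep Dive", "outputMode": "append" },
    { "type": "llm", "name": "Summary",   "outputMode": "append" }
  ]
}
```

The scraped data is available as context to all three writing steps, while the article itself is built up across them.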
Write + Edit
Write the article, then have an editor step polish the full draft:
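A minimal two-step version (illustrative schema; only `outputMode` is confirmed):

```json
{
  "steps": [
    { "type": "llm", "name": "Draft",  "outputMode": "replace" },
    { "type": "llm", "name": "Editor", "outputMode": "replace" }
  ]
}
```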
Note: The editor step uses Replace because it outputs the entire article with edits incorporated. There is no “surgical edit” mode — the LLM must output the full text.
API Usage
Set outputMode on any LLM workflow step via the External API:
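A step definition might include the field like this. Only `outputMode` and its two values are confirmed here; the surrounding fields are an assumed shape for illustration:

```json
{
  "type": "llm",
  "prompt": "Write the conclusion section.",
  "outputMode": "append"
}
```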
Steps without outputMode default to “replace”. Only LLM steps support this setting.
Tips
- The first LLM step should always use Replace (there's nothing to append to yet)
- Each append-mode step receives the current running article as context, so it can continue naturally without repeating content
- Use an editor/review step at the end with Replace to polish the combined article
- Consider model context window limits — each append step sees the full running article plus all previous step outputs
- For very long content, use models with large context windows (Claude, Gemini) to avoid truncation