Pipeline Debugging

The Pipeline Debugger provides an interactive testing environment for validating pipeline processing logic before deployment. It enables you to execute pipelines with test data and observe how each processor transforms the data through the processing chain.

Accessing the Debugger

Navigate to Configuration > Pipelines, select a pipeline, and click Debug in the pipeline detail view. The debugger opens in a new view with the pipeline visualization and testing controls.

Dataset Providers

The debugger supports three methods for providing test data:

Provider            Description
Existing Dataset    Select from previously saved datasets in your organization
Upload File         Upload a log file containing test data
Manual Logs         Enter log data directly in the input editor

Existing Dataset

Use saved datasets for consistent, repeatable testing across pipeline iterations. After selecting a dataset, choose a specific log line to use as test input.

Note: Datasets are organization-level resources that can be shared across team members.

Upload File

Upload log files for one-time testing. After upload, select a specific log line from the file content to use as test input.

Manual Logs

Enter log data directly in the input editor, then select a specific log line to use as test input. This method is useful for:

  • Testing specific edge cases
  • Debugging individual log entries
  • Quick validation of processor behavior
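For instance, a single syslog-style line such as the following (illustrative sample data, not from any real system) makes a convenient manual test input for an authentication-failure edge case:

```
<34>Oct 11 22:14:15 web01 sshd[4721]: Failed password for invalid user admin from 203.0.113.7 port 2244 ssh2
```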

Environment Variables

The debugger allows configuration of environment variables that affect pipeline execution context:

Variable        Input Type    Description
devicetype      Dropdown      Simulates the source device type (Windows, Linux, Syslog, HTTP, TCP, UDP, Kafka, Azure Event Hubs, Azure Blob Storage)
definitionid    Dropdown      Sets the log definition identifier from available Windows and Linux log types
request         Text          Configures request-level context variables
Info: Configure these variables to match production conditions when testing pipelines that contain conditional logic based on device type or other context values.
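To illustrate why the devicetype variable matters, here is a minimal sketch of a processor that branches on it. The function name, field names, and parser labels are hypothetical; they are not the product's real API, only a model of conditional logic the debugger would exercise.

```python
# Hypothetical sketch of a processor that branches on the simulated
# devicetype environment variable (names are illustrative only).
def route_by_devicetype(event, env):
    """Pick a parser label based on the simulated source device type."""
    devicetype = env.get("devicetype", "Syslog")
    if devicetype == "Windows":
        event["parser"] = "windows_eventlog"
    elif devicetype == "Linux":
        event["parser"] = "linux_syslog"
    else:
        event["parser"] = "generic"  # fallback for Syslog, HTTP, TCP, ...
    return event

event = route_by_devicetype({"msg": "login failed"}, {"devicetype": "Windows"})
```

With devicetype set to Windows in the debugger, this branch would take the first path; changing the dropdown to Linux or Syslog would exercise the other branches without touching the input data.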

Pipeline Visualization

The debugger displays the pipeline structure as an interactive node graph:

  • Pipeline nodes: Represent the main pipeline and any referenced child pipelines
  • Processor nodes: Show individual processors in the execution chain
  • Connection lines: Indicate data flow between processors

When a pipeline references child pipelines, click on a pipeline node to navigate into its structure. A breadcrumb trail at the top of the view tracks your navigation path and allows you to return to parent pipelines.

Execution and Results

Running the Pipeline

Click Run to execute the pipeline with the current input data. The debugger processes the data through all configured processors and displays results for each step.

Node Status Indicators

After execution, each node displays a status indicator:

Status       Description
Completed    Processor executed successfully
Failed       Processor encountered an error
Skipped      Processor was skipped due to conditional logic
Continue     Processing continued to the next processor
Dropped      Event was dropped by the processor
Return       Processing returned early from the pipeline
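The table above can be modeled as a simple execution loop. The sketch below is an illustrative approximation of how per-node statuses arise as an event moves through the chain, not the product's internals: every processor reports a status, and Dropped, Return, or Failed ends the run early.

```python
# Illustrative model of a processor chain producing per-node statuses.
CONTINUE, DROPPED, RETURN, SKIPPED, FAILED = (
    "Continue", "Dropped", "Return", "Skipped", "Failed")

def run_chain(event, processors):
    """Run (condition, processor) pairs in order; record each node's status."""
    statuses = []
    for condition, processor in processors:
        if condition is not None and not condition(event):
            statuses.append(SKIPPED)       # conditional logic bypassed this node
            continue
        try:
            event, status = processor(event)
        except Exception:
            statuses.append(FAILED)        # processor encountered an error
            break
        statuses.append(status)
        if status in (DROPPED, RETURN):    # event dropped or early return
            break
    return event, statuses

# Example: normalize the message, then drop heartbeat events.
processors = [
    (lambda e: "msg" in e,
     lambda e: ({**e, "norm": e["msg"].lower()}, CONTINUE)),
    (lambda e: e.get("norm") == "heartbeat",
     lambda e: (e, DROPPED)),
]
event, statuses = run_chain({"msg": "HEARTBEAT"}, processors)
```

Here the first node reports Continue and the second reports Dropped, which is exactly the pattern of indicators you would see on the node graph after a run.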

Viewing Results

The output panel displays the transformed data after pipeline execution. Click a processor node to view its specific output.

Tip: Enable diff mode to highlight changes between input and output, or inspect the active configuration for any processor in the chain.
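Conceptually, diff mode compares a processor's input event with its output event field by field. A minimal sketch of that comparison (illustrative only, not the product's implementation or UI) might look like this:

```python
# Sketch of a field-level diff between a processor's input and output events.
def event_diff(before, after):
    """Return added, removed, and changed fields between two events."""
    added   = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

added, removed, changed = event_diff(
    {"msg": "login failed", "severity": "info"},
    {"msg": "login failed", "severity": "high", "user": "alice"},
)
```

In this example the processor added a user field and raised the severity; those are the kinds of changes diff mode highlights for the selected node.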

Workflow

  1. Select or enter test data using one of the dataset providers
  2. Configure environment variables if needed for conditional processing
  3. Click Run to execute the pipeline
  4. Review node status indicators to identify processing results
  5. Click individual nodes to inspect their output
  6. Use diff mode to compare transformations
  7. Iterate on pipeline configuration based on test results