Directors: Deployment

VirtualMetric DataStream Directors support flexible deployment options to match your infrastructure requirements and operational preferences. Whether you're running on physical hardware, virtual machines, or containerized environments, Directors can be deployed to optimize performance while maintaining data sovereignty.

Definitions

Directors are lightweight, containerized services that process security telemetry data locally while connecting to the DataStream cloud platform for configuration management. This architecture ensures your sensitive data remains within your controlled environment while providing centralized management capabilities.

Supported Models

A Standalone Director uses a single instance to handle all data processing. This model is recommended for most production deployments due to its simple configuration and management. It is suitable for small to medium-scale environments.

A Clustered Director deployment uses multiple instances with load balancing and high availability. Clusters provide automatic failover and redundancy, horizontal scaling capabilities, and are ideal for mission-critical, high-volume environments. To create a cluster, first deploy Directors as standalone instances, then group them via the Clusters tab in the Directors management interface. See Directors: Management for cluster configuration details.

Options

As Physical Server

Deploy Directors directly on dedicated physical hardware for maximum performance and complete infrastructure control. With full control over hardware specifications and resource allocation, and no virtualization overhead, physical deployments are ideal for high-throughput environments.

Considerations

Physical server deployments involve higher infrastructure costs and maintenance overhead. They offer limited flexibility for resource scaling and require longer deployment and provisioning times.

As Virtual Machine

Deploy Directors on virtual machines across various hypervisors and cloud platforms for balanced performance and flexibility. Virtual machine deployments offer flexible resource allocation and scaling with cost-effective resource utilization. They simplify backup and disaster recovery procedures and are platform agnostic, supporting VMware, Hyper-V, KVM, AWS, Azure, and GCP.

Considerations

Virtual machine deployments introduce slight performance overhead from virtualization and create dependency on hypervisor platform stability.

Recommended VM Specifications:

Workload Size   CPU Cores   Memory     Storage   Notes
Small           2-4 cores   8GB RAM    50GB      Development/testing, < 10K EPS
Medium          4-8 cores   16GB RAM   100GB     Standard production, 10K-50K EPS
Large           8+ cores    32GB RAM   200GB+    High-volume production, > 50K EPS
Disclaimer

Actual requirements may vary depending on the workload.

As Container

Deploy Directors in containerized environments for modern infrastructure management and orchestration capabilities.

Docker deployments run as single-host containers with simplified dependency management. They offer easy scaling and updates, making them ideal for development and small production environments.
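
A minimal single-host Docker sketch is shown below; the image name, environment variable, port mapping, and volume are hypothetical placeholders, not confirmed by this document, so substitute the values generated by the DataStream installation workflow.

    # Illustrative single-host deployment; image name, env var, ports, and volume are
    # hypothetical placeholders. Use the values provided by the DataStream interface.
    docker run -d \
      --name datastream-director \
      --restart unless-stopped \
      -e API_KEY="<apiKey>" \
      -p 514:514/udp \
      -v director-data:/data \
      example/datastream-director:latest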

Kubernetes deployments provide multi-node orchestration with automatic scaling, built-in service discovery and load balancing. They support rolling updates with zero downtime and deliver enterprise-grade high availability and resilience.
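
Standard kubectl operations illustrate the scaling and rolling-update behavior described above; the Deployment and container names are hypothetical placeholders.

    # Scale a hypothetical "director" Deployment horizontally
    kubectl scale deployment/director --replicas=3

    # Roll out a new image version with zero downtime, then watch its progress
    kubectl set image deployment/director director=example/datastream-director:1.2.0
    kubectl rollout status deployment/director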

Containerized deployments provide consistent deployment across environments with rapid scaling and resource optimization. They include integrated monitoring and logging capabilities and support DevOps-friendly CI/CD integration.

Platform-Specific Considerations

On Linux

Linux deployments offer performance advantages: network-based collectors (syslog, TCP, SNMP) operate more efficiently with lower resource overhead for network processing. Linux provides superior performance for high-volume data ingestion and native support for Unix/Linux system integration.
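
As a quick way to exercise a network-based collector, a test message can be sent to a Director's syslog listener. The hostname and port below are assumptions and must match the ports defined in your device configuration.

    # Send a test syslog message over UDP (util-linux logger; host and port are illustrative)
    logger --server director.example.internal --port 514 --udp "DataStream connectivity test"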

Agent Connectivity:

  • Windows Agent: Full support for Windows systems via VirtualMetric Agent with optional pre-processing
  • Linux Agentless: Complete support via SSH-based connections
  • Windows Agentless: Full support via WinRM protocols
note

Microsoft has announced plans to deprecate WinRM in future Windows releases. VirtualMetric continues to support WinRM-based agentless monitoring. Consider using VirtualMetric Agent for long-term Windows endpoint monitoring strategies.

Recommended For:

  • High-volume network data collection environments
  • Mixed Windows/Linux infrastructure monitoring
  • Cost-sensitive deployments requiring maximum efficiency

On Windows

Agent Connectivity:

  • Windows Agentless: Full support via WinRM protocols
  • Linux Agentless: Complete support via SSH connections
  • Universal Agent: Support for both Windows and Linux systems with optional pre-processing
note

Microsoft has announced plans to deprecate WinRM in future Windows releases. VirtualMetric continues to support WinRM-based agentless monitoring. Consider using VirtualMetric Agent for long-term Windows endpoint monitoring strategies.

Windows deployments provide native Windows service integration with Active Directory authentication support. They offer PowerShell-based management capabilities and seamless Windows ecosystem integration.

Recommended For:

  • Windows-centric environments
  • Organizations requiring agentless Windows monitoring
  • Environments with existing Windows management infrastructure

Agent Pre-Processing Architecture

VirtualMetric Agents support optional pipeline-based pre-processing before sending data to Directors. This distributed processing model reduces Director workload and enables edge-based data transformation.

Processing Models

In the Traditional Model, the Agent collects logs locally at the endpoint and sends raw data to the Director. The Director then processes data through pipelines and forwards the processed data to targets.

In the Pre-Processing Model, the Agent collects logs locally at the endpoint and processes data through configured pipelines before sending pre-processed data to the Director. The Director forwards data to targets, with optional additional processing if needed.

Pre-Processing Benefits

Pre-processing reduces Director processing load through distributed computation and lowers network bandwidth consumption via edge-based filtering and transformation. This approach improves scalability for large-scale deployments with multiple Agents and enables faster data delivery through parallel processing at collection points.

From an architectural perspective, edge-based filtering reduces unnecessary data transmission while local transformation enables compliance requirements at the data source. The distributed processing model supports horizontal scaling and reduces central processing bottlenecks in high-volume environments.

Pre-Processing Configuration

Agent pre-processing is configured through the Director's device configuration for that Agent. Pipelines assigned to Agent devices execute locally on the Agent, using the same pipeline syntax and processors as Director pipelines. Configuration is managed centrally through the Director for consistency.

tip

Agent pipelines support hot configuration reload. Changes made in the Director interface are synchronized to Agents automatically without requiring an Agent restart.

Use Cases for Agent Pre-Processing

In high-volume environments, you can filter non-essential logs at the collection point before transmission, reduce network bandwidth for high-volume log sources, and distribute processing load across multiple Agent endpoints.

For compliance and privacy, mask sensitive data (PII, credentials) at the source before transmission. Apply regulatory transformations at the data collection point to ensure data compliance before leaving the endpoint network.

In edge computing scenarios, process data locally in remote or branch offices to minimize data transmission to the central Director. This approach supports disconnected or intermittent connectivity scenarios.

For cost optimization, reduce Director infrastructure requirements through distributed processing. Lower network bandwidth costs via edge-based filtering and optimize central processing capacity allocation.

Configuration Considerations

When implementing Agent pre-processing, balance processing load between Agents and Directors based on infrastructure capacity. Consider network latency and bandwidth when deciding what to process at the edge. Use Agent pre-processing for filtering and basic transformations, reserving complex processing (enrichment, external lookups) for the Director when possible. Monitor Agent resource utilization to prevent endpoint performance impact.

Installation Process

Standalone Director Installation

The standard installation process follows a guided setup through the DataStream web interface:

  1. Access Director Creation

    • Navigate to Home > Fleet Management > Directors
    • Click "Create director" to begin the setup process
  2. Configure Director Properties

    • Assign a unique Director name for identification

    • Select "Standalone" installation type

    • Choose appropriate platform

      A self-managed director is indicated under the Mode column as Self-managed, with a warning icon to its right. Hovering over the icon displays a tooltip informing the user that the configuration has changed and that the current one has to be deployed.

      info

      The actions menu of a self-managed director contains a Download config option. Clicking it downloads the vmetric.vmf file to the Windows Downloads directory. This file should be placed under the <vm_root>\Director\config directory.

      This option removes the access verification step. The user can monitor errors through the CLI or the files under the <vm_root>\Director\storage\logs directory.

  3. Generate Installation Scripts

    • System generates platform-specific installation scripts
    • Unique API key created for secure cloud connectivity
    • Scripts provided for both PowerShell (Windows) and Bash (Linux)
  4. Execute Installation

    • Run provided script with administrative privileges on target system
    • Installation downloads and configures Director service
    • Automatic service registration and startup configuration
  5. Verify Connectivity

    • Use built-in connection verification tool
    • Confirm Director successfully connects to DataStream platform
    • Complete setup process once connectivity is established
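
On a Linux host, a sketch of post-installation checks follows. The service unit name is not specified in this document, so the commands first locate it rather than assuming one.

    # Locate the Director service unit registered by the installer (name not assumed)
    systemctl list-units --type=service | grep -iE 'vmetric|director'

    # Inspect the unit found above (replace <unit> with the actual name)
    systemctl status <unit>
    journalctl -u <unit> --since "15 min ago"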

Network Requirements

Critical: Pre-Installation Network Access

Before running installation commands, ensure the target system has outbound HTTPS access to dl.vget.me. The installation scripts are served from this URL and will fail silently or with connection errors if blocked by firewall rules.

Outbound Connectivity:

  • Port 443 (HTTPS) to dl.vget.me for installation script and binary downloads
  • Port 443 (HTTPS) for DataStream cloud platform communication
  • DNS resolution for dl.vget.me and *.virtualmetric.com domains
warning

TLS certificate validation requires accurate system time. Ensure NTP is configured and the system clock is synchronized before installation.
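
Before running the installer, the outbound requirements above can be sanity-checked from the target system. The commands below are illustrative and use only the endpoints named in this section.

    # Confirm DNS resolution and outbound HTTPS reachability for the script endpoint
    nslookup dl.vget.me
    curl -sS -o /dev/null -w 'HTTP %{http_code}\n' https://dl.vget.me

    # Confirm the system clock is NTP-synchronized (required for TLS validation)
    timedatectl status | grep -i 'synchronized'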

Installation Script Delivery

The installation commands use dl.vget.me as the script delivery endpoint. This URL serves platform-appropriate installation scripts based on the requesting client:

Client       Script Served                    Command Example
PowerShell   install.ps1                      $h="<apiKey>"; iwr dl.vget.me | iex
curl/wget    install.sh                       curl -sL dl.vget.me | h="<apiKey>" bash
Browser      Redirects to virtualmetric.com   N/A

The scripts automatically:

  1. Detect the operating system (Windows, Linux, macOS, FreeBSD, etc.)
  2. Detect the CPU architecture (amd64, arm64)
  3. Download the appropriate Agent binary from dl.vget.me/agent/{os}/{arch}
  4. Configure the Agent with the provided API key
  5. Install and start the service
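
The download pattern in step 3 can also be used directly, for example to pre-stage a binary for an offline host. The exact artifact name served per platform is not specified here, so the output filename below is a local choice.

    # Fetch the Linux amd64 Agent binary using the pattern above (output filename is a local choice)
    curl -sL -o vmetric-agent https://dl.vget.me/agent/linux/amd64
    chmod +x vmetric-agent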

Supported Platforms:

Operating System   Architectures
Windows            amd64, arm64
Linux              amd64, arm64, 386, ppc64, ppc64le
macOS (Darwin)     amd64, arm64
FreeBSD            amd64, arm64
Other Unix         amd64, arm64
Customizable Download URL

For environments where dl.vget.me is not accessible, the download URL can be configured via the VITE_INSTALLATION_DOWNLOAD_URL environment variable in self-hosted deployments.
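
A sketch of how the override might look in a self-hosted deployment's environment configuration; the exact file and mechanism depend on how the platform is hosted and are assumptions here.

    # Illustrative .env-style entry pointing installations at an internal mirror
    VITE_INSTALLATION_DOWNLOAD_URL=https://downloads.example.internal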

Inbound Connectivity:

  • Configure based on data source requirements
  • Ports as defined in device configurations

Firewall Configuration

Outbound Rules (Required):

  • Allow HTTPS (443) to dl.vget.me (installation scripts and binaries)
  • Allow HTTPS (443) to *.virtualmetric.com (platform communication)
  • Allow DNS queries for name resolution
  • Allow NTP for time synchronization
Firewall Rule Order

If your firewall processes rules in order, ensure the dl.vget.me rule is evaluated before any blanket deny rules. Installation will fail if the target system cannot reach this endpoint.
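
On a Linux host using ufw, the outbound rules above might be expressed as follows. The use of ufw is an assumption, so adapt the syntax to your firewall tooling, and keep these rules ahead of any blanket deny rules.

    # Illustrative outbound rules (ufw assumed; order them before any default-deny policy)
    sudo ufw allow out 443/tcp comment 'HTTPS to dl.vget.me and *.virtualmetric.com'
    sudo ufw allow out 53 comment 'DNS resolution'
    sudo ufw allow out 123/udp comment 'NTP time synchronization'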

Inbound Rules (As Needed):

  • Open ports for configured data collection protocols
  • Allow management access (SSH for Linux, RDP for Windows)
  • Configure source restrictions based on security policies
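
Inbound rules follow the same pattern, opened only for the ports your device configurations actually use and restricted by source. The port numbers and subnets below are placeholders.

    # Illustrative inbound rules (ufw assumed; ports and source ranges are placeholders)
    sudo ufw allow proto udp from 10.0.0.0/8 to any port 514 comment 'syslog data sources'
    sudo ufw allow proto tcp from 10.0.10.0/24 to any port 22 comment 'SSH management access'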

Security and Performance

Security Considerations

Deploy Directors in appropriate network segments and implement network access controls and monitoring. Use dedicated service accounts with minimal privileges and enable logging and audit trails for security monitoring.

All sensitive data processing occurs locally on the Director, with only configuration metadata transmitted to the cloud platform. Implement encryption for data at rest and in transit, and maintain regular security updates and patch management.

Performance Optimization

Monitor CPU and memory utilization patterns and allocate sufficient disk space for logging and buffering. Configure appropriate network interface capacity and plan for peak load scenarios and growth.
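
A few standard Linux commands cover the baseline checks described above; the path shown is illustrative, since the Director's data and log locations depend on the installation.

    # Sample CPU and memory utilization every 5 seconds, three times
    vmstat 5 3

    # Check free disk space for logging and buffering (path is illustrative)
    df -h /var

    # Check network interface throughput counters
    ip -s link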

For data processing efficiency, optimize YAML pipeline configurations for performance and implement efficient parsing and transformation rules. Use appropriate batch sizes for different data sources and monitor processing latency and throughput metrics. Consider Agent pre-processing for high-volume deployments to distribute processing load.

High Availability Planning

Maintain regular configuration backups and version control. Document recovery procedures and test them regularly. Implement monitoring and alerting for service health and plan for disaster recovery scenarios.

For redundancy, group Directors into clusters for automatic failover and load balancing. Clusters require a minimum of 3 Directors (odd number for quorum) and provide continuous operation when individual Directors fail. See Clusters for configuration details. Consider geographic distribution for disaster recovery and plan for seamless failover procedures.

Troubleshooting

For deployment issues including script execution failures, service startup problems, and connectivity issues, refer to the Directors Troubleshooting documentation.

Advanced Deployment Scenarios

Multi-Site Deployments

For organizations with multiple locations or data centers, deploy Directors at each site for local data processing. Implement centralized configuration management and coordinate routing and aggregation strategies. Plan for inter-site connectivity and failover.

Compliance and Regulatory Requirements

For regulated industries requiring specific compliance, implement appropriate data retention and disposal policies. Configure audit logging and compliance reporting, ensure data sovereignty and jurisdictional requirements are met, and plan for regulatory audit and inspection procedures.

Scalability Planning

As your environment grows, monitor resource utilization and performance trends. Plan for vertical scaling (more resources per Director) or horizontal scaling (clustering multiple Directors). Implement capacity planning and forecasting procedures to determine when to transition from standalone to clustered deployments.

Next Steps

Once you've selected your deployment approach:

  1. Prepare Infrastructure - Set up target systems with required specifications
  2. Configure Networking - Implement firewall rules and connectivity requirements
  3. Install Director - Follow the guided installation process in the DataStream interface
  4. Configure Data Sources - Set up devices and data collection points
  5. Test and Validate - Verify data flow and processing functionality
  6. Monitor Operations - Implement ongoing monitoring and maintenance procedures

For specific installation guidance, access the Director Configuration interface through Home > Fleet Management > Directors and follow the step-by-step setup wizard.