They are implemented in the webscraping-node repository and are designed to run outside the user’s local browser as managed execution nodes.
What browser nodes do
A browser node:
- Boots with node-specific configuration
- Authenticates and registers itself with the platform
- Advertises browser capabilities
- Polls for work from the scheduler path
- Executes browser-driven scraping logic
- Returns structured data and media through ingestion flows
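The registration and capability-advertisement steps above can be sketched as a payload the node sends to the platform. This is a minimal sketch with hypothetical field names (`nodeId`, `authToken`, `engines`, and so on), not the platform's actual API:

```typescript
// Hypothetical shapes for a node's registration payload and advertised
// capabilities -- field names are illustrative, not the real schema.
interface BrowserCapabilities {
  engines: ("playwright" | "puppeteer")[]; // supported execution engines
  headless: boolean;                       // whether the node runs headless
  maxConcurrentSessions: number;           // parallel browser contexts
}

interface NodeRegistration {
  nodeId: string;    // identity established during bootstrap
  authToken: string; // credential used to authenticate with the platform
  capabilities: BrowserCapabilities;
}

// Build the payload a node might send when registering itself.
function buildRegistration(nodeId: string, authToken: string): NodeRegistration {
  return {
    nodeId,
    authToken,
    capabilities: {
      engines: ["playwright", "puppeteer"],
      headless: true,
      maxConcurrentSessions: 4,
    },
  };
}
```

The shape matters more than the values: the platform needs identity, credentials, and capabilities in one message so the scheduler can route browser work only to nodes that can run it.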
Runtime model
Browser nodes are built around headless browser execution using Playwright and Puppeteer. The node selects an execution engine, prepares the browser context, runs template-provided logic, and then hands results back to the platform through ingestion APIs.
High-level lifecycle
The browser-node lifecycle is:
- Bootstrap identity
- Register with the control plane
- Connect to the scheduler path
- Request work
- Execute browser automation
- Upload media and finalize structured output
- Continue polling and sending lifecycle signals
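The steps above form a loop rather than a straight line: after finalizing a task, the node returns to polling. A minimal sketch of that loop as an ordered state machine, with state names mirroring the documented steps (the transition logic is illustrative, not the actual node implementation):

```typescript
// Lifecycle states mirroring the documented steps. After "finalize"
// the node goes back to "poll" instead of exiting.
type LifecycleState =
  | "bootstrap" // establish identity
  | "register"  // register with the control plane
  | "connect"   // connect to the scheduler path
  | "poll"      // request work
  | "execute"   // run browser automation
  | "finalize"; // upload media, finalize structured output

// Compute the next state; hasWork only matters while polling.
function nextState(state: LifecycleState, hasWork: boolean): LifecycleState {
  switch (state) {
    case "bootstrap": return "register";
    case "register":  return "connect";
    case "connect":   return "poll";
    case "poll":      return hasWork ? "execute" : "poll";
    case "execute":   return "finalize";
    case "finalize":  return "poll"; // continue polling, send lifecycle signals
  }
}
```

Modeling the lifecycle this way makes the "continue polling" step explicit: `poll` is the steady state, and every completed task routes back to it.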
Why this worker type matters
Browser nodes represent the platform’s modern dedicated worker model. They are intentionally separated from the user dashboard and from local browser execution so they can run as independent infrastructure. This makes them a good fit for managed capacity, automation-oriented execution, and controlled operational environments.
Internal building blocks at a high level
Without going into implementation detail, browser nodes are organized around:
- Control-plane connectivity
- Task orchestration
- Runtime execution
- Template loading and verification
- Proxy and session handling
- Ingestion and completion routing
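As one concrete illustration of the runtime-execution building block: since the runtime model mentions both Playwright and Puppeteer, the node must pick an engine per task. The platform's real selection criteria are not documented here; this sketch simply honors a task's requested engine when the node advertises it, falling back otherwise:

```typescript
type Engine = "playwright" | "puppeteer";

// Pick an execution engine for a task. "requested" is what the task
// asks for (if anything); "available" is what this node advertises.
// Purely illustrative -- not the platform's actual selection logic.
function selectEngine(requested: Engine | undefined, available: Engine[]): Engine {
  if (requested && available.includes(requested)) return requested;
  if (available.length === 0) throw new Error("no browser engine available");
  return available[0]; // fall back to the first advertised engine
}
```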