mcp_server_webcrawl.crawlers.siteone package

Submodules

mcp_server_webcrawl.crawlers.siteone.adapter module

class SiteOneManager[source]

Bases: IndexedManager

Manages SiteOne directory data in in-memory SQLite databases. Wraps the wget archive format (shared by SiteOne and wget) and provides connection pooling and caching for efficient access.

Initialize the SiteOne manager with empty cache and statistics.

__init__()[source]

Initialize the SiteOne manager with empty cache and statistics.

Return type:

None

get_sites(datasrc, ids=None, fields=None)[source]

List site directories in the datasrc directory as sites.

Parameters:
  • datasrc (Path) – path to the directory containing site subdirectories

  • ids (list[int] | None) – optional list of site IDs to filter by

  • fields (list[str] | None) – optional list of fields to include in the response

Returns:

List of SiteResult objects, one for each site directory

Return type:

list[SiteResult]

Notes

Returns an empty list if the datasrc directory doesn’t exist.

get_resources(datasrc, sites=None, query='', fields=None, sort=None, limit=20, offset=0)[source]

Get resources from wget directories using in-memory SQLite.

Parameters:
  • datasrc (Path) – path to the directory containing wget captures

  • sites (list[int] | None) – optional list of site IDs to filter by

  • query (str) – search query string

  • fields (list[str] | None) – optional list of fields to include in response

  • sort (str | None) – sort order for results

  • limit (int) – maximum number of results to return

  • offset (int) – number of results to skip for pagination

Returns:

Tuple of (list of ResourceResult objects, total count, index state)

Return type:

tuple[list[ResourceResult], int, IndexState]
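The paged-query shape, a page of results alongside the total match count, can be sketched against an in-memory SQLite table like the one the manager maintains. The table schema, column names, and LIKE-based matching below are assumptions for illustration, not the library's actual query logic:

```python
import sqlite3

def query_resources(conn: sqlite3.Connection, query: str,
                    limit: int = 20, offset: int = 0) -> tuple[list[str], int]:
    """Sketch: return one page of matching rows plus the full match count."""
    like = f"%{query}%"
    # total count is computed without LIMIT/OFFSET so pagination stays correct
    total = conn.execute(
        "SELECT COUNT(*) FROM resources WHERE url LIKE ?", (like,)
    ).fetchone()[0]
    rows = conn.execute(
        "SELECT url FROM resources WHERE url LIKE ? ORDER BY url LIMIT ? OFFSET ?",
        (like, limit, offset),
    ).fetchall()
    return [r[0] for r in rows], total
```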

mcp_server_webcrawl.crawlers.siteone.crawler module

class SiteOneCrawler[source]

Bases: IndexedCrawler

A crawler implementation for SiteOne captured sites. Provides functionality for accessing and searching web content from SiteOne captures. SiteOne merges a wget archive with a custom SiteOne-generated log to acquire more fields than wget can alone.

Initialize the SiteOne crawler with a data source directory.

Parameters:

datasrc – Path to a directory containing SiteOne captures organized as subdirectories

Raises:

AssertionError – If datasrc is None or not a directory

__init__(datasrc)[source]

Initialize the SiteOne crawler with a data source directory.

Parameters:

datasrc (Path) – Path to a directory containing SiteOne captures organized as subdirectories

Raises:

AssertionError – If datasrc is None or not a directory

mcp_server_webcrawl.crawlers.siteone.tests module

class SiteOneTests[source]

Bases: BaseCrawlerTests

Test suite for the SiteOne crawler implementation. Uses all wrapped test methods from BaseCrawlerTests plus SiteOne-specific features.

Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.

setUp()[source]

Set up the test environment with fixture data.

test_siteone_pulse()[source]

Test basic crawler initialization.

test_siteone_sites()[source]

Test site retrieval API functionality and boolean search functionality.

test_siteone_resources()[source]

Test resource retrieval API functionality with various parameters.

test_interrobot_images()[source]

Test InterroBot-specific image handling and thumbnails.

test_siteone_random_sort()[source]

Test random sort functionality using the ‘?’ sort parameter.

test_siteone_content_parsing()[source]

Test content type detection and parsing.

test_siteone_advanced_features()[source]

Test SiteOne-specific advanced features not covered by base tests.

test_report()[source]

Test report generation functionality.

Module contents