mcp_server_webcrawl.crawlers.wget package

Submodules

mcp_server_webcrawl.crawlers.wget.adapter module

class WgetManager[source]

Bases: IndexedManager

Manages wget directory data using in-memory SQLite databases. Provides connection pooling and caching for efficient access.

__init__()[source]

Initialize the wget manager with empty cache and statistics.

Return type:

None

get_sites(datasrc, ids=None, fields=None)[source]

List site directories in the datasrc directory as sites.

Parameters:
  • datasrc (Path) – path to the directory containing site subdirectories

  • ids (list[int] | None) – optional list of site IDs to filter by

  • fields (list[str] | None) – optional list of fields to include in the response

Returns:

List of SiteResult objects, one for each site directory.

Return type:

list[SiteResult]

Notes

Returns an empty list if the datasrc directory doesn’t exist.
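
A minimal usage sketch. The /data/wget-captures path is a placeholder, and calling get_sites() on a WgetManager instance follows the listing above; the adapter may also expose an equivalent module-level helper, in which case the call pattern differs slightly:

   from pathlib import Path

   from mcp_server_webcrawl.crawlers.wget.adapter import WgetManager

   # Hypothetical capture directory; each subdirectory is treated as one site.
   datasrc = Path("/data/wget-captures")

   manager = WgetManager()
   sites = manager.get_sites(datasrc)            # all site directories
   subset = manager.get_sites(datasrc, ids=[1])  # filter by site ID
   for site in sites:
       print(site)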

get_resources(datasrc, sites=None, query='', fields=None, sort=None, limit=20, offset=0)[source]

Get resources from wget directories using in-memory SQLite.

Parameters:
  • datasrc (Path) – path to the directory containing wget captures

  • sites (list[int] | None) – optional list of site IDs to filter by

  • query (str) – search query string

  • fields (list[str] | None) – optional list of fields to include in response

  • sort (str | None) – sort order for results

  • limit (int) – maximum number of results to return

  • offset (int) – number of results to skip for pagination

Returns:

Tuple of (list of ResourceResult objects, total count, index state).

Return type:

tuple[list[ResourceResult], int, IndexState]
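
A minimal search sketch under the same assumptions as above (placeholder path, method-style call). The three-element unpacking matches the documented return type:

   from pathlib import Path

   from mcp_server_webcrawl.crawlers.wget.adapter import WgetManager

   manager = WgetManager()
   # Returns (results, total match count, index state).
   results, total, index_state = manager.get_resources(
       Path("/data/wget-captures"),  # placeholder path
       query="privacy",
       limit=10,
       offset=0,
   )
   print(f"{total} total matches; showing {len(results)}")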

mcp_server_webcrawl.crawlers.wget.crawler module

class WgetCrawler[source]

Bases: IndexedCrawler

A crawler implementation for wget captured sites. Provides functionality for accessing and searching web content from wget captures.

__init__(datasrc)[source]

Initialize the wget crawler with a data source directory.

Parameters:

datasrc (Path) – path to a directory containing wget captures, organized as one subdirectory per site

Raises:

AssertionError – If datasrc is None or not a directory
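
A construction sketch; the path is a placeholder and must point at an existing directory of wget captures, otherwise the documented AssertionError is raised:

   from pathlib import Path

   from mcp_server_webcrawl.crawlers.wget.crawler import WgetCrawler

   # Fails with AssertionError if the path is None or not a directory.
   crawler = WgetCrawler(Path("/data/wget-captures"))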

mcp_server_webcrawl.crawlers.wget.tests module

class WgetTests[source]

Bases: BaseCrawlerTests

Test suite for the wget crawler implementation. Uses all wrapped test methods from BaseCrawlerTests.

setUp()[source]

Set up the test environment with fixture data.

test_wget_pulse()[source]

Test basic crawler initialization.

test_wget_sites()[source]

Test site retrieval API functionality and boolean search.

test_wget_resources()[source]

Test resource retrieval API functionality with various parameters.

test_wget_random_sort()[source]

Test random sort functionality using the ‘?’ sort parameter.

test_wget_content_parsing()[source]

Test content type detection and parsing.

test_report()[source]

Test report generation functionality.
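
The suite can be run programmatically with the standard unittest runner; setUp() expects the fixture data to be available:

   import unittest

   from mcp_server_webcrawl.crawlers.wget.tests import WgetTests

   # Load and run the wget test suite with verbose output.
   suite = unittest.TestLoader().loadTestsFromTestCase(WgetTests)
   unittest.TextTestRunner(verbosity=2).run(suite)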

Module contents