mcp_server_webcrawl.crawlers.wget package
Submodules
mcp_server_webcrawl.crawlers.wget.adapter module
- class WgetManager[source]
Bases:
BaseManager
Manages wget directory data in in-memory SQLite databases. Provides connection pooling and caching for efficient access.
Initialize the wget manager with empty cache and statistics.
- get_sites(datasrc, ids=None, fields=None)[source]
List wget directories in the datasrc directory as sites.
- Parameters:
datasrc (Path) – Path to the directory containing wget captures
ids (list[int] | None) – Optional list of site IDs to filter by
fields (list[str] | None) – Optional list of fields to include in response
- Returns:
List of SiteResult objects, one for each wget directory
- Return type:
list[SiteResult]
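A minimal usage sketch, assuming get_sites is importable from this adapter module as the listing above suggests; the capture directory is a placeholder:

    from pathlib import Path

    from mcp_server_webcrawl.crawlers.wget.adapter import get_sites

    # each wget subdirectory under datasrc is reported as one site
    sites = get_sites(Path("/data/wget-captures"))
    for site in sites:
        print(site)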
- get_resources(datasrc, ids=None, sites=None, query='', types=None, fields=None, statuses=None, sort=None, limit=20, offset=0)[source]
Get resources from wget directories using in-memory SQLite.
- Parameters:
datasrc (Path) – Path to the directory containing wget captures
ids (list[int] | None) – Optional list of resource IDs to filter by
sites (list[int] | None) – Optional list of site IDs to filter by
query (str) – Search query string
types (list[ResourceResultType] | None) – Optional list of resource types to filter by
fields (list[str] | None) – Optional list of fields to include in response
statuses (list[int] | None) – Optional list of HTTP status codes to filter by
sort (str | None) – Sort order for results
limit (int) – Maximum number of results to return
offset (int) – Number of results to skip for pagination
- Returns:
Tuple of (list of ResourceResult objects, total count)
- Return type:
tuple[list[ResourceResult], int]
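A hedged sketch of a filtered query under the same import assumption; the site ID, status code, and pagination values are illustrative only:

    from pathlib import Path

    from mcp_server_webcrawl.crawlers.wget.adapter import get_resources

    # search successful (HTTP 200) resources for "privacy",
    # returning the second page of 20 results
    results, total = get_resources(
        Path("/data/wget-captures"),
        sites=[1],
        query="privacy",
        statuses=[200],
        limit=20,
        offset=20,
    )
    print(f"{len(results)} of {total} matching resources")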
- get_resources_with_manager(crawl_manager, datasrc, ids=None, sites=None, query='', types=None, fields=None, statuses=None, sort=None, limit=20, offset=0)[source]
Get resources from directories using in-memory SQLite with the specified manager.
- Parameters:
crawl_manager (BaseManager) – BaseManager instance used for file indexing and database access
datasrc (Path) – Path to the directory containing site captures
ids (list[int] | None) – Optional list of resource IDs to filter by
sites (list[int] | None) – Optional list of site IDs to filter by
query (str) – Search query string
types (list[ResourceResultType] | None) – Optional list of resource types to filter by
fields (list[str] | None) – Optional list of fields to include in response
statuses (list[int] | None) – Optional list of HTTP status codes to filter by
sort (str | None) – Sort order for results
limit (int) – Maximum number of results to return
offset (int) – Number of results to skip for pagination
- Returns:
Tuple of (list of ResourceResult objects, total count)
- Return type:
tuple[list[ResourceResult], int]
Notes
Returns empty results if sites is empty or not provided. If the database is still being built, a message is logged and empty results are returned.
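A sketch of the manager-driven variant, assuming WgetManager takes no constructor arguments (its documented initializer starts with an empty cache and statistics):

    from pathlib import Path

    from mcp_server_webcrawl.crawlers.wget.adapter import (
        WgetManager,
        get_resources_with_manager,
    )

    manager = WgetManager()
    results, total = get_resources_with_manager(
        manager,
        Path("/data/wget-captures"),
        sites=[1],
        query="robots",
    )
    # per the notes above, an empty sites list or an in-progress
    # database build both yield empty results
    print(total)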
mcp_server_webcrawl.crawlers.wget.crawler module
- class WgetCrawler[source]
Bases:
IndexedCrawler
A crawler implementation for wget-captured sites. Provides functionality for accessing and searching web content from wget captures.
- __init__(datasrc)[source]
Initialize the wget crawler with a data source directory.
- Parameters:
datasrc (Path) – Path to a directory containing wget captures organized as subdirectories
- Raises:
AssertionError – If datasrc is None or not a directory
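A construction sketch; the module import path is assumed from this page's layout, and the directory is a placeholder:

    from pathlib import Path

    from mcp_server_webcrawl.crawlers.wget.crawler import WgetCrawler

    # datasrc must be an existing directory of wget captures,
    # one subdirectory per crawled site; otherwise an
    # AssertionError is raised, as documented above
    crawler = WgetCrawler(Path("/data/wget-captures"))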
mcp_server_webcrawl.crawlers.wget.tests module
- class WgetTests[source]
Bases:
BaseCrawlerTests
Test suite for the wget crawler implementation.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
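Since WgetTests follows the standard unittest constructor contract described above, a single test method can be run by name; the method name test_wget below is hypothetical:

    import unittest

    from mcp_server_webcrawl.crawlers.wget.tests import WgetTests

    # "test_wget" is a hypothetical method name; per the docstring
    # above, a name the class does not define raises ValueError
    suite = unittest.TestSuite([WgetTests("test_wget")])
    unittest.TextTestRunner().run(suite)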