feat(job_crawler): initialize job crawler service with kafka integration
- Add technical documentation (技术方案.md) with system architecture and design details
- Create FastAPI application structure with modular organization (api, core, models, services, utils)
- Implement job data crawler service with incremental collection from a third-party API
- Add Kafka service integration with Docker Compose configuration for the message queue
- Create data models for job listings, progress tracking, and API responses
- Implement REST API endpoints for data consumption (/consume, /status) and task management
- Add progress persistence layer using SQLite for tracking collection offsets
- Implement date filtering logic to extract data published within the last 7 days (see the sketch after this list)
- Create API client service for third-party data source integration
- Add configuration management with environment-based settings
- Include Docker support with Dockerfile and docker-compose.yml for containerized deployment
- Add logging configuration and utility functions for date parsing
- Include requirements.txt with all Python dependencies and README documentation
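A minimal sketch of what the 7-day cutoff check could look like, assuming ISO-8601 publish timestamps; the function and field names here are hypothetical and stand in for the actual utilities in app/utils:

from datetime import datetime, timedelta, timezone

# Hypothetical helper names; the real date-parsing utilities live in app/utils.
def is_within_last_week(published_at: str, now: datetime | None = None) -> bool:
    """Return True if the ISO-8601 timestamp falls within the past 7 days."""
    now = now or datetime.now(timezone.utc)
    published = datetime.fromisoformat(published_at)
    if published.tzinfo is None:
        # Assume UTC when the source omits a timezone offset.
        published = published.replace(tzinfo=timezone.utc)
    return now - published <= timedelta(days=7)

def filter_recent(jobs: list[dict]) -> list[dict]:
    """Keep only records published within the last 7 days."""
    return [job for job in jobs if is_within_last_week(job["published_at"])]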
job_crawler/app/models/__init__.py (new file, 13 additions)
@@ -0,0 +1,13 @@
"""数据模型"""
from .job import JobData
from .progress import CrawlProgress, CrawlStatus
from .response import ApiResponse, ConsumeResponse, StatusResponse

__all__ = [
    "JobData",
    "CrawlProgress",
    "CrawlStatus",
    "ApiResponse",
    "ConsumeResponse",
    "StatusResponse"
]
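The diff above only shows the package exports. A hypothetical sketch of what the imported models might look like, assuming Pydantic (the usual choice with FastAPI); the field names are invented for illustration and are not the actual contents of app/models:

# Field names and enum values below are assumptions, not the committed code.
from datetime import datetime
from enum import Enum
from pydantic import BaseModel

class CrawlStatus(str, Enum):
    IDLE = "idle"
    RUNNING = "running"
    FAILED = "failed"

class JobData(BaseModel):
    job_id: str
    title: str
    published_at: datetime

class CrawlProgress(BaseModel):
    offset: int = 0
    status: CrawlStatus = CrawlStatus.IDLE
    updated_at: datetime | None = None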