feat(job_crawler): initialize job crawler service with kafka integration
- Add technical documentation (技术方案.md) with system architecture and design details
- Create FastAPI application structure with modular organization (api, core, models, services, utils)
- Implement job data crawler service with incremental collection from a third-party API
- Add Kafka service integration with Docker Compose configuration for the message queue
- Create data models for job listings, progress tracking, and API responses
- Implement REST API endpoints for data consumption (/consume, /status) and task management
- Add a progress persistence layer using SQLite for tracking collection offsets
- Implement date filtering logic to keep only data published within the last 7 days (see the sketch after this list)
- Create an API client service for third-party data source integration
- Add configuration management with environment-based settings
- Include Docker support with Dockerfile and docker-compose.yml for containerized deployment
- Add logging configuration and utility functions for date parsing
- Include requirements.txt with all Python dependencies and README documentation
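The 7-day filter above is described but its implementation is not part of the diff shown below. A minimal sketch of what such a filter might look like, assuming a hypothetical parse_publish_date helper (the commit only mentions "utility functions for date parsing"; the real names and signatures are not shown):

from datetime import datetime, timedelta, timezone


def parse_publish_date(raw: str) -> datetime:
    # Hypothetical helper: the commit mentions date-parsing utilities but
    # does not show them; assume ISO-8601 strings interpreted as UTC.
    return datetime.fromisoformat(raw).replace(tzinfo=timezone.utc)


def is_within_last_7_days(raw: str, now: datetime | None = None) -> bool:
    # Keep only records published within the trailing 7-day window.
    now = now or datetime.now(timezone.utc)
    return now - parse_publish_date(raw) <= timedelta(days=7)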
job_crawler/app/core/logging.py (new file, 22 lines)
@@ -0,0 +1,22 @@
"""Logging configuration."""
import logging
import sys

from .config import settings


def setup_logging():
    """Configure application logging."""
    level = logging.DEBUG if settings.app.debug else logging.INFO

    logging.basicConfig(
        level=level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=[
            logging.StreamHandler(sys.stdout)
        ]
    )

    # Lower the log level of noisy third-party libraries
    logging.getLogger("httpx").setLevel(logging.WARNING)
    logging.getLogger("kafka").setLevel(logging.WARNING)
    logging.getLogger("uvicorn").setLevel(logging.INFO)
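Note that setup_logging() reads settings.app.debug via the relative import from .config, which implies a nested settings object defined in app/core/config.py; that file is part of the commit but not shown here. A minimal sketch of how the service entrypoint might call it, assuming a conventional app/main.py (module name and FastAPI title are assumptions, not confirmed by this diff):

# Hypothetical app/main.py wiring.
from fastapi import FastAPI

from app.core.logging import setup_logging

setup_logging()  # configure handlers before the app begins serving requests

app = FastAPI(title="job_crawler")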