feat(job_crawler): implement reverse-order incremental crawling with real-time Kafka publishing

- Add comprehensive sequence diagrams documenting container startup, task initialization, and incremental crawling flow
- Implement reverse-order crawling logic (from latest to oldest) to optimize performance by processing new data first
- Add real-time Kafka message publishing after each batch filtering instead of waiting for task completion
- Update progress tracking to store last_start_offset for accurate incremental crawling across sessions
- Enhance crawler service with improved offset calculation and batch processing logic
- Update configuration files to support new crawling parameters and Kafka integration
- Add progress model enhancements to track crawling state and handle edge cases
- Improve main application initialization to properly handle lifespan events and task auto-start

This change enables efficient incremental data collection where new data is prioritized and published immediately, reducing latency and improving system responsiveness.
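
Taken together, the bullets above describe a crawl loop that walks each listing from newest to oldest, stops once it reaches data already seen in a previous session, and publishes every filtered batch to Kafka as soon as it is ready. The sketch below is illustrative only: `fetch_total_count`, `fetch_page`, `filter_batch`, and the exact semantics of `progress.last_start_offset` are placeholders, not the real job_crawler API.

# Illustrative sketch only; helper names and the meaning of
# last_start_offset are assumptions, not the actual job_crawler code.
async def crawl_incremental(task, progress, page_size=50):
    total = await fetch_total_count(task)  # items currently listed upstream
    # Items added since the previous run occupy offsets
    # [progress.last_start_offset, total); crawl them newest-first.
    offset = total
    while offset > progress.last_start_offset:
        offset = max(offset - page_size, progress.last_start_offset)
        batch = await fetch_page(task, offset=offset, limit=page_size)
        fresh = filter_batch(batch)  # dedupe / validate the raw records
        if fresh:
            # Publish right after each batch is filtered instead of
            # buffering everything until the whole task completes.
            await kafka_service.publish(task.topic, fresh)
    # Persist how far this session reached so the next run only
    # crawls items that appear beyond this offset.
    progress.last_start_offset = total
    await progress.save()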
2026-01-15 17:46:55 +08:00
parent 63cd432a0c
commit 3acc0a9221
8 changed files with 402 additions and 60 deletions


@@ -1,4 +1,5 @@
"""FastAPI应用入口"""
import asyncio
import logging
from contextlib import asynccontextmanager
from fastapi import FastAPI
@@ -15,8 +16,18 @@ logger = logging.getLogger(__name__)
async def lifespan(app: FastAPI):
"""应用生命周期管理"""
logger.info("服务启动中...")
# 自动启动所有采集任务
if settings.crawler.auto_start:
from app.services import crawler_manager
logger.info("自动启动采集任务...")
asyncio.create_task(crawler_manager.start_all())
yield
logger.info("服务关闭中...")
from app.services import crawler_manager
crawler_manager.stop_all()
kafka_service.close()
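
Wrapping crawler_manager.start_all() in asyncio.create_task keeps the lifespan startup non-blocking, so FastAPI begins serving requests while the crawlers run in the background. The diff does not include app.services; the following is only a hypothetical shape of a manager object that would satisfy the calls above, with load_task_configs and run_crawler as made-up placeholders.

# Hypothetical sketch of a crawler manager matching the lifespan hook;
# the real app.services implementation is not part of this diff.
import asyncio

class CrawlerManager:
    def __init__(self):
        self._tasks: list[asyncio.Task] = []

    async def start_all(self):
        # Launch one background crawl task per configured target
        # (load_task_configs and run_crawler are placeholders).
        for cfg in load_task_configs():
            self._tasks.append(asyncio.create_task(run_crawler(cfg)))
        await asyncio.gather(*self._tasks, return_exceptions=True)

    def stop_all(self):
        # Called synchronously on shutdown; cancel any still-running crawls.
        for t in self._tasks:
            t.cancel()

crawler_manager = CrawlerManager()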