feat(job_crawler): implement reverse-order incremental crawling with real-time Kafka publishing

- Add comprehensive sequence diagrams documenting container startup, task initialization, and incremental crawling flow
- Implement reverse-order crawling logic (from latest to oldest) to optimize performance by processing new data first
- Add real-time Kafka message publishing after each batch is filtered, instead of waiting for task completion
- Update progress tracking to store last_start_offset for accurate incremental crawling across sessions
- Enhance crawler service with improved offset calculation and batch processing logic
- Update configuration files to support new crawling parameters and Kafka integration
- Add progress model enhancements to track crawling state and handle edge cases
- Improve main application initialization to properly handle lifespan events and task auto-start

This change enables efficient incremental data collection: new data is prioritized and published immediately, reducing latency and improving system responsiveness. A rough sketch of the crawl loop follows below.
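
A minimal sketch of the loop described above, under assumed names (fetch_batch, filter_batch, publish, Progress) rather than the crawler's actual API; publish stands in for a thin wrapper around the Kafka producer, and the meaning of last_start_offset is inferred from the message, not from the diff:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Progress:
    # Hypothetical progress record; the real model lives in the crawler's progress module.
    last_start_offset: int = 0  # offset the previous session started from (its newest data)

def crawl_incremental(
    total_count: int,                                  # current number of records upstream
    fetch_batch: Callable[[int, int], list[dict]],     # (offset, limit) -> records, oldest-first listing assumed
    filter_batch: Callable[[list[dict]], list[dict]],  # drops expired / already-seen records
    publish: Callable[[list[dict]], None],             # stand-in for the Kafka producer wrapper
    progress: Progress,
    batch_size: int = 50,
    max_expired_batches: int = 3,
) -> Progress:
    """Walk offsets from the newest page down toward the previous session's start
    offset, publishing each filtered batch immediately instead of at task completion."""
    start_offset = max(total_count - batch_size, 0)   # reverse order: begin at the newest page
    floor = progress.last_start_offset                # data below this was covered last time
    expired_streak = 0
    offset = start_offset

    while expired_streak < max_expired_batches:
        kept = filter_batch(fetch_batch(offset, batch_size))
        if kept:
            publish(kept)            # real-time publishing right after filtering
            expired_streak = 0
        else:
            expired_streak += 1      # consecutive batches with nothing new: stop early
        if offset <= floor:
            break                    # reached data already handled by the previous session
        offset = max(offset - batch_size, floor)

    # Persist where this run started so the next session knows what it has already seen.
    progress.last_start_offset = start_offset
    return progress

Because publishing happens per batch, downstream consumers see new postings as soon as their batch is filtered, rather than only after the whole task finishes.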
2026-01-15 17:46:55 +08:00
parent 63cd432a0c
commit 3acc0a9221
8 changed files with 402 additions and 60 deletions

@@ -37,6 +37,8 @@ class CrawlerConfig(BaseModel):
     interval: int = 300
     filter_days: int = 7
     max_workers: int = 5
+    max_expired_batches: int = 3  # threshold of consecutive expired batches before stopping early
+    auto_start: bool = True  # automatically start crawling when the container starts
 class DatabaseConfig(BaseModel):
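
For context, a minimal sketch of how the two new flags might be consumed at startup, assuming a FastAPI app (the commit mentions lifespan events and task auto-start); crawl_loop and the trimmed CrawlerConfig below are illustrative, not the project's actual modules:

import asyncio
from contextlib import asynccontextmanager

from fastapi import FastAPI
from pydantic import BaseModel


class CrawlerConfig(BaseModel):
    # Mirrors the fields shown in the diff; other fields omitted for brevity.
    interval: int = 300
    max_expired_batches: int = 3
    auto_start: bool = True


config = CrawlerConfig()


async def crawl_loop() -> None:
    # Placeholder for the incremental crawl sketched earlier; max_expired_batches
    # would be threaded into that call. Sleeps `interval` seconds between runs.
    while True:
        await asyncio.sleep(config.interval)


@asynccontextmanager
async def lifespan(app: FastAPI):
    task = None
    if config.auto_start:                 # container startup triggers the first crawl run
        task = asyncio.create_task(crawl_loop())
    yield
    if task is not None:                  # cancel the background loop on shutdown
        task.cancel()


app = FastAPI(lifespan=lifespan)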