- Extract database models from monolithic main.py (2,373 lines) into organized modules
- Implement service layer pattern with dedicated business-logic classes
- Split API endpoints into modular FastAPI routers by functionality
- Add centralized configuration management with environment variable handling
- Create proper separation of concerns across data, service, and presentation layers

**Architecture Changes:**
- models/: SQLAlchemy database models (CVE, SigmaRule, RuleTemplate, BulkProcessingJob)
- config/: Centralized settings and database configuration
- services/: Business logic (CVEService, SigmaRuleService, GitHubExploitAnalyzer)
- routers/: Modular API endpoints (cves, sigma_rules, bulk_operations, llm_operations)
- schemas/: Pydantic request/response models

**Key Improvements:**
- 95% reduction in main.py size (2,373 → 120 lines)
- Updated 15+ backend files with proper import structure
- Eliminated circular dependencies and tight coupling
- Enhanced testability with isolated service components
- Better code organization for team collaboration

**Backward Compatibility:**
- All API endpoints keep the same URLs and behavior
- Zero breaking changes to existing functionality
- Database schema unchanged
- Environment variables preserved

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
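The centralized configuration described above can be sketched as a single settings object read once from the environment. This is a minimal illustration only: the `Settings` class and the specific variable names (`DATABASE_URL`, `NVD_API_KEY`, `BULK_BATCH_SIZE`) are assumptions, not the actual contents of config/.

```python
import os
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Settings:
    """Centralized application settings, read from environment variables.

    All names below are illustrative assumptions for this sketch.
    """
    database_url: str = field(
        default_factory=lambda: os.environ.get(
            "DATABASE_URL", "postgresql://localhost/cve_db"
        )
    )
    nvd_api_key: str = field(
        default_factory=lambda: os.environ.get("NVD_API_KEY", "")
    )
    bulk_batch_size: int = field(
        default_factory=lambda: int(os.environ.get("BULK_BATCH_SIZE", "100"))
    )


# Routers and services import this one instance instead of reading
# os.environ ad hoc, which keeps configuration handling in one place.
settings = Settings()
```

Because the dataclass is frozen and built from `default_factory` lambdas, each `Settings()` instantiation snapshots the environment at construction time, which also makes the object easy to swap out in tests.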
```python
from sqlalchemy import Column, String, Text, TIMESTAMP, Integer, JSON
from sqlalchemy.dialects.postgresql import UUID
import uuid
from datetime import datetime

from .base import Base


class BulkProcessingJob(Base):
    __tablename__ = "bulk_processing_jobs"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    job_type = Column(String(50), nullable=False)  # 'nvd_bulk_seed', 'nomi_sec_sync', 'incremental_update'
    status = Column(String(20), default='pending')  # 'pending', 'running', 'completed', 'failed', 'cancelled'
    year = Column(Integer)  # For year-based processing
    total_items = Column(Integer, default=0)
    processed_items = Column(Integer, default=0)
    failed_items = Column(Integer, default=0)
    error_message = Column(Text)
    job_metadata = Column(JSON)  # Additional job-specific data
    started_at = Column(TIMESTAMP)
    completed_at = Column(TIMESTAMP)
    cancelled_at = Column(TIMESTAMP)
    created_at = Column(TIMESTAMP, default=datetime.utcnow)
```
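The `status` column above implies a small state machine over the commented values. A hedged sketch of how a service-layer helper might guard transitions follows; the `VALID_TRANSITIONS` table and `can_transition` function are hypothetical illustrations, not part of the actual services/ code:

```python
# Allowed successor states for BulkProcessingJob.status.
# Terminal states ('completed', 'failed', 'cancelled') have no successors.
VALID_TRANSITIONS = {
    'pending': {'running', 'cancelled'},
    'running': {'completed', 'failed', 'cancelled'},
    'completed': set(),
    'failed': set(),
    'cancelled': set(),
}


def can_transition(current: str, target: str) -> bool:
    """Return True if a job may legally move from `current` to `target`."""
    return target in VALID_TRANSITIONS.get(current, set())
```

Centralizing the transition rules like this keeps individual endpoints (e.g. a cancel route in bulk_operations) from silently reviving a job that has already completed or failed.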