2026-02-27 Session Notes
DataFetcher SQLite Migration Plan
The session began with a detailed plan to refactor the data fetcher in the pm-kalshi crate to write trades directly to SQLite (historical.db) instead of CSV. The backtester now loads from SQLite, but the database has 3.5M markets and zero trades because the fetcher never populated it.
Key Implementation Strategy
The plan involves six coordinated changes:
- Add an `Arc<SqliteStore>` field to the DataFetcher struct
- Replace the `append_trades_csv()` method with `insert_trades_batch()`, which calls the store's batch insert method
- Remove all CSV file operations from `fetch_range()`
- Update `get_available_data()` to query the `historical_trades` table instead of scanning files
- Update DataFetcher construction in AppState and main.rs to create and share a SqliteStore for historical.db
- Update the backtest handler to use the shared historical_store instead of opening a new connection per request
Technical Details Captured
- Trade tuple format: `(timestamp, ticker, price, volume, side)` as strings/i64
- Price conversion: yes_price is in cents (integer); divide by 100.0 for decimal format
- Batch size: 10,000 trades per batch (reusing the existing batch size constant)
- SqliteStore API methods: `insert_historical_trades_batch()` and `count_historical_trades()`
- Files to modify: fetcher.rs, web/mod.rs, main.rs, web/handlers.rs
Session Status
Plan documented and ready for implementation. No code changes have been made yet.