# 2026-02-27 Session Notes
## DataFetcher SQLite Migration Plan
Session began with a detailed plan to refactor the data fetcher in the pm-kalshi crate to write trades directly to SQLite (`historical.db`) instead of CSV. The backtester now loads from SQLite, but the database's 3.5M markets have zero trades because the fetcher never populated it.
## Key Implementation Strategy
The plan involves six coordinated changes:

1. Add an `Arc<SqliteStore>` field to the DataFetcher struct
2. Replace the `append_trades_csv()` method with `insert_trades_batch()`, which calls the store's batch insert method
3. Remove all CSV file operations from `fetch_range()`
4. Update `get_available_data()` to query the `historical_trades` table instead of scanning files
5. Update DataFetcher construction in AppState and main.rs to create and share a single SqliteStore for `historical.db`
6. Update the backtest handler to use the shared `historical_store` instead of opening a new connection per request
## Technical Details Captured
- Trade tuple format: `(timestamp, ticker, price, volume, side)` as strings/i64
- Price conversion: `yes_price` is an integer number of cents; divide by 100.0 for the decimal format
- Batch size: 10,000 trades per batch (reusing the existing batch-size constant)
- SqliteStore API methods: `insert_historical_trades_batch()` and `count_historical_trades()`
- Files to modify: fetcher.rs, web/mod.rs, main.rs, web/handlers.rs
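The conversion and batching details above can be exercised in isolation. This is a hedged sketch: `cents_to_decimal` is a hypothetical helper name, and only the divide-by-100.0 rule and the 10,000-row batch size come from the notes.

```rust
// Batch size from the notes: 10,000 trades per insert call.
const BATCH_SIZE: usize = 10_000;

// Convert an integer cents price (e.g. yes_price = 57) to a decimal price.
fn cents_to_decimal(yes_price_cents: i64) -> f64 {
    yes_price_cents as f64 / 100.0
}

fn main() {
    assert_eq!(cents_to_decimal(57), 0.57);

    // Chunk a trade list so each batch insert receives at most BATCH_SIZE rows;
    // plain integers stand in for trade tuples here.
    let trades: Vec<i64> = (0..25_000).collect();
    let batches: Vec<&[i64]> = trades.chunks(BATCH_SIZE).collect();
    println!("{} batches, last has {} trades", batches.len(), batches.last().unwrap().len());
    // → 3 batches, last has 5000 trades
}
```

`slice::chunks` yields full batches plus one final partial batch, which matches the usual pattern for bounding SQLite transaction size.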
## Session Status
Plan documented and ready for implementation. No code changes have been made yet.