The Index Optimization Trap
The Problem
A product team requested an API returning the top 20 trending products by category. The initial solution involved adding a composite index on the products table:
CREATE INDEX idx_products_category_status_score
ON products (category_id, status, interest_score DESC);
The index dramatically improved query performance, making the solution appear successful.
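The read path this index serves can be sketched with SQLite standing in for the production database. The exact query shape (the status filter value, the LIMIT) is an assumption based on the requirement described above, not the team's actual SQL:

```python
import sqlite3

# In-memory sketch: SQLite stands in for the production database.
# Table and index names follow the article; everything else is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        category_id INTEGER,
        status TEXT,
        interest_score INTEGER
    )
""")
conn.execute("""
    CREATE INDEX idx_products_category_status_score
    ON products (category_id, status, interest_score DESC)
""")

# The read path the index was built for: top 20 trending products in a category.
query = """
    SELECT id FROM products
    WHERE category_id = ? AND status = 'active'
    ORDER BY interest_score DESC
    LIMIT 20
"""

# The query plan should show an index search rather than a full table scan.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()
print(plan)
```

Because the index's leading columns match the equality filters and its trailing column matches the ORDER BY, the database can read the top 20 rows straight off the index with no sort step, which is exactly why the read looked so fast.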
The Hidden Cost
The critical oversight: interest_score was constantly updated through user interactions (views, likes, cart additions):
UPDATE products
SET interest_score = interest_score + 1
WHERE id = ?;
While the SELECT query became faster, write operations became prohibitively expensive. A B-tree index keeps its entries in sorted order, so when an indexed column changes, the entry cannot be updated in place: each score increment forced the database to delete the old index entry and reinsert it at its new sorted position.
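The delete-and-reinsert cost can be modeled with a toy sorted structure. This is an assumption-level sketch of B-tree index maintenance, not any real engine's internals:

```python
import bisect

# Toy model of an index on interest_score: entries are kept in sorted order,
# just as a B-tree keeps its keys ordered.
index = []  # sorted list of (interest_score, product_id) entries

def index_insert(score, product_id):
    # Place the entry at its sorted position.
    bisect.insort(index, (score, product_id))

def increment_score(old_score, product_id):
    # The old entry is no longer in sorted position after the change,
    # so it cannot be updated in place: it is removed and reinserted.
    index.remove((old_score, product_id))
    index_insert(old_score + 1, product_id)
    return old_score + 1

index_insert(10, 101)
index_insert(12, 102)
new_score = increment_score(10, 101)  # one UPDATE = one delete + one insert
```

Every user interaction triggers this delete-plus-insert cycle, so a column that changes thousands of times per second turns the index into a constant source of write amplification.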
Production Impact
The system gradually degraded without crashing:
- API latency increased progressively
- Database CPU spiked
- Disk I/O skyrocketed
- Update statements filled slow query logs
Popular products suffered “hotspot” contention: thousands of concurrent updates hitting the same rows serialized behind row locks, compounding the degradation.
The Solution
Rather than forcing the transactional database to handle real-time ranking, responsibilities were separated:
New Architecture:
- User actions increment scores in Redis Sorted Sets
- Ranking queries fetch top product IDs from Redis
- Product details retrieved from the database via id IN (...) queries
This approach accepted eventual consistency trade-offs but eliminated write contention and stabilized the system.
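The Redis side of this architecture rests on two sorted-set commands: ZINCRBY to bump a score and ZREVRANGE to read the top N. The sketch below simulates their semantics in plain Python (the function names and the sample product IDs are illustrative, not the team's actual code):

```python
import heapq

# Stand-in for one Redis sorted set (one per category in the real design).
scores = {}  # product_id -> interest_score

def zincrby(product_id, amount=1):
    # Mirrors Redis ZINCRBY: create-or-increment a member's score.
    scores[product_id] = scores.get(product_id, 0) + amount

def zrevrange_top(n):
    # Mirrors Redis ZREVRANGE 0 n-1: top-n member IDs, highest score first.
    return [pid for pid, _ in heapq.nlargest(n, scores.items(), key=lambda kv: kv[1])]

# Simulated user interactions (views, likes, cart additions).
for pid in [7, 7, 7, 3, 3, 9]:
    zincrby(pid)

top = zrevrange_top(2)
```

With a real Redis client such as redis-py, the equivalent calls would be along the lines of `r.zincrby(key, 1, product_id)` and `r.zrevrange(key, 0, 19)`; the ranking query then feeds the returned IDs into the database's id IN (...) lookup for product details.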
Key Insight
“An index that speeds up a query does not necessarily make the system healthier.”
The lesson: optimize entire workloads, not individual queries. Before indexing a frequently updated column, weigh the read speedup against the write amplification and contention the index adds.