Frontend Performance Optimization — Lessons from Enterprise Healthcare
The Challenge
At Carelon (part of Elevance Health), I work on pharmacy products that serve healthcare providers and patients across the United States. When you're building tools that healthcare professionals rely on daily, performance isn't optional — it's critical.
A slow dashboard can mean delayed prescription processing, frustrated providers, and ultimately, patients waiting longer for their medications.
Measuring the Problem
Before optimizing anything, we needed to understand the baseline. Here's what we measured:
- Initial page load — Time to first meaningful paint
- API response handling — How quickly data appeared after fetching
- Re-render frequency — Unnecessary component re-renders eating up CPU
- Bundle size — JavaScript shipped to the browser
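To make those baseline numbers concrete, here is a minimal sketch of timing a fetch path with the User Timing API (`performance.mark`/`measure`, available in browsers and in Node's `perf_hooks`). The `fetchPrescriptions` name is a hypothetical stand-in, not our actual API client:

```typescript
// Minimal sketch: instrument a data-fetch path with the User Timing API.
// fetchPrescriptions is a hypothetical stand-in for a real API call.
import { performance } from "node:perf_hooks";

async function fetchPrescriptions(patientId: string): Promise<string[]> {
  // Stand-in for a real network request
  return [`rx-for-${patientId}`];
}

async function timedFetch(
  patientId: string
): Promise<{ data: string[]; ms: number }> {
  performance.mark("fetch-start");
  const data = await fetchPrescriptions(patientId);
  performance.mark("fetch-end");
  // measure() returns an entry whose duration is the elapsed time
  const { duration } = performance.measure("fetch", "fetch-start", "fetch-end");
  return { data, ms: duration };
}
```

The same mark/measure pattern works in the browser, where the entries also show up in DevTools performance traces.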
Technique 1: RTK Query Caching
One of the biggest wins came from implementing proper caching with RTK Query. Instead of refetching data on every page navigation, we combined tag-based cache invalidation with a time-based refetch window:

```typescript
// Before: every navigation triggers a fresh API call
const { data } = useGetPrescriptionsQuery(patientId);

// After: cached results are reused; a remount only refetches
// if the cached data is older than the given number of seconds
const { data } = useGetPrescriptionsQuery(patientId, {
  refetchOnMountOrArgChange: 300, // seconds (5 minutes)
});
```
Impact: Reduced redundant API calls by roughly 60%, making page transitions near-instant for previously viewed data.
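The tag mechanics themselves can be illustrated without the library. This is a dependency-free sketch of the idea behind RTK Query's tag-based invalidation, not its actual internals: cached queries declare the tags they provide, mutations invalidate tags, and matching entries are evicted so the next read triggers a fresh fetch.

```typescript
// Dependency-free sketch of tag-based cache invalidation.
// All names are illustrative, not RTK Query's real internals.
type Tag = string;

class TagCache {
  private entries = new Map<string, { data: unknown; tags: Tag[] }>();

  set(key: string, data: unknown, tags: Tag[]): void {
    this.entries.set(key, { data, tags });
  }

  get(key: string): unknown | undefined {
    return this.entries.get(key)?.data;
  }

  // A mutation invalidates tags; every entry providing one of them
  // is evicted, so the next read misses and refetches.
  invalidate(tags: Tag[]): void {
    for (const [key, entry] of this.entries) {
      if (entry.tags.some((t) => tags.includes(t))) {
        this.entries.delete(key);
      }
    }
  }
}
```

In RTK Query the equivalent wiring lives in each endpoint's `providesTags` and `invalidatesTags` options.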
Technique 2: Server-Side Rendering Where It Matters
Not everything needs SSR, but for critical paths — like the provider dashboard landing page — pre-rendering on the server made a huge difference:
- Eliminated the loading spinner on initial visit
- Improved Largest Contentful Paint (LCP) by 40%
- Better SEO for any provider-facing pages that needed indexing
The key insight was being selective — we only SSR'd the pages where first-load performance directly impacted user experience.
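The reason SSR removes the initial spinner can be shown with a framework-free sketch: the server fetches data and responds with fully populated HTML, so the first paint already contains content. All names below are illustrative; in our stack Next.js handles this at the page level.

```typescript
// Framework-free sketch of server-side rendering to an HTML string.
// Names are illustrative stand-ins, not our production code.
interface Prescription {
  id: string;
  drug: string;
}

async function loadDashboardData(): Promise<Prescription[]> {
  // Stand-in for a server-side API or database call
  return [{ id: "rx-1", drug: "Atorvastatin" }];
}

function renderDashboard(rows: Prescription[]): string {
  const items = rows.map((r) => `<li>${r.drug} (${r.id})</li>`).join("");
  // The client receives meaningful markup, not an empty shell
  return `<ul id="rx-list">${items}</ul>`;
}

async function handleRequest(): Promise<string> {
  const rows = await loadDashboardData();
  return renderDashboard(rows);
}
```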
Technique 3: Bundle Optimization
We audited our bundle using @next/bundle-analyzer and found several quick wins:
- Dynamic imports for heavy components (charts, PDF viewers)
- Tree-shaking by switching from barrel exports to direct imports
- Removing duplicate dependencies that had crept in over time
```typescript
// Named imports from lucide-react already tree-shake cleanly,
// so this line was fine as-is:
import { Search } from "lucide-react";

// Other libraries weren't as well-structured: their barrel exports
// pulled in the whole package, so we switched those to direct imports.
```
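For the heavy components, the dynamic-import pattern looked roughly like this sketch, where `node:zlib` stands in for a heavy dependency (in the app itself this was `next/dynamic` around chart and PDF-viewer components):

```typescript
// Sketch of lazy-loading a heavy module on first use so it stays
// out of the initial bundle. node:zlib is a stand-in dependency.
let heavyModule: Promise<typeof import("node:zlib")> | null = null;

function loadHeavy() {
  // Only the first call pays the import cost; later calls reuse it
  heavyModule ??= import("node:zlib");
  return heavyModule;
}

async function compress(text: string): Promise<number> {
  const zlib = await loadHeavy();
  return zlib.gzipSync(Buffer.from(text)).length;
}
```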
Technique 4: Virtualized Lists
For pages that displayed hundreds of prescription records, we implemented list virtualization to only render items currently visible in the viewport. This prevented the browser from creating thousands of DOM nodes for a single table.
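The core windowing calculation behind virtualization is small enough to sketch directly (libraries like react-window do this bookkeeping for you); the `overscan` parameter here is an illustrative convention, not a specific library's API:

```typescript
// Dependency-free sketch of the windowing math behind list
// virtualization: only rows intersecting the viewport are rendered.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3 // render a few extra rows to avoid flicker while scrolling
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows, last + overscan),
  };
}
```

With 1,000 rows at 40px in a 600px viewport, only about 20 rows exist in the DOM at any moment instead of 1,000.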
Results
After implementing these optimizations across the platform:
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Initial Load | 3.2s | 2.2s | 31% faster |
| Page Transitions | 800ms | 200ms | 75% faster |
| Bundle Size | 1.4MB | 980KB | 30% smaller |
| API Calls (avg session) | 47 | 19 | 60% fewer |
Key Takeaways
- Measure first, optimize second — Don't guess where the bottlenecks are
- Caching is your best friend — Proper cache invalidation eliminates most unnecessary network requests
- Be selective with SSR — Use it where it matters, not everywhere
- Bundle analysis is non-negotiable — Run it regularly, especially in large teams where dependencies accumulate
Performance optimization is an ongoing process, not a one-time task. Setting up monitoring with OpenTelemetry helped us catch regressions early and maintain the gains we'd achieved.