Discover a battle-tested Laravel Next.js architecture blueprint designed to solve cache invalidation and authentication challenges in production environments.

As a tech lead building AI agents and distributed systems, I live by the proverb: "The cobbler's children have no shoes." We ship bulletproof APIs for clients, then watch our own portfolio sites crumble because of cross-domain auth bugs and cache invalidation prayers. When Yabasha.dev started serving stale blog content during every deployment—and users hit CSRF errors only in production—I stopped treating my platform like a side project and built a real protocol. This is that protocol.
The protocol in one line: `ContentInvalidated` → Horizon queues idempotent jobs → Redis tracks state → Next.js revalidates via retry-able API calls. No deploy-window misses, no silent failures. A single `revalidation_id` is traced through logs, queues, and Redis, so debugging becomes grep, not guesswork.

Yabasha.dev is my living portfolio: a playground for AI agents, RAG pipelines, and full-stack architecture experiments. But for months, it was also a source of 3 a.m. alerts. Every deployment followed the same ritual of stale content and manual cache purges.
Authentication was worse. I'd "fix" CSRF mismatches locally, push to production, and watch Safari users get logged out because of cross-domain cookie quirks. I was babysitting cache state more than building features.
Success looked like this: I update a post; within seconds, the change is live; if revalidation fails, it retries automatically; if it keeps failing, I get one alert with a clear trace—not user complaints. I wanted to apply the same rigor I demand in client systems to my own platform. The result is a Laravel Next.js architecture blueprint that's survived 40+ deployments without a single stale-page incident.
Every invalidation's status lives in a single Redis `revalidation_state` hash. One source of truth.

What I rejected: direct webhook calls from model observers to Next.js (fragile, untraceable, and they fail exactly during deploy windows), and token-based auth for the frontend (it adds complexity and doesn't solve CSRF).
The friction wasn't in writing code; it was in coordinating state changes across deploy boundaries. Breaking that down:
- **Environment asymmetry**: `localhost:8000 → localhost:3000` behaves nothing like `api.yabasha.dev → yabasha.dev`. Cookies without `SameSite=None; Secure` work locally, then fail in prod. CSRF tokens expire mid-session if session-lifetime config drifts.
- **Fire-and-forget invalidation**: `revalidateTag()` called synchronously from a Laravel controller assumes the frontend is always up. During a 30-second deployment window, that call vanishes into the void. No retry. No log.
I opted for a monorepo at https://github.com/yabasha/monolith.
Why: Atomic commits across API and frontend, shared TypeScript interfaces for API payloads, and one CI pipeline. When I change a validation rule in Laravel, the Next.js form types update in the same PR.
When to split: If a separate mobile team needed independent release cycles, or if frontend bundle size grew enough to warrant isolated CI caching. For now, the cohesion wins.
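As a concrete illustration of those shared interfaces, a small types package both apps import keeps the Laravel validation rules and the Next.js forms in lockstep. A minimal sketch; the `PostPayload` name and fields are illustrative, not the actual Yabasha.dev schema:

```typescript
// packages/types/src/post.ts (hypothetical shared package)
// Both the Next.js form and the API client import these, so a change
// to the Laravel validation rules forces a matching type update here,
// in the same PR.
export interface PostPayload {
  title: string;
  slug: string;
  body: string;
  tags: string[];
}

// A type guard keeps runtime checks and the compile-time shape in sync.
export function isPostPayload(value: unknown): value is PostPayload {
  const v = value as PostPayload;
  return (
    typeof v === 'object' && v !== null &&
    typeof v.title === 'string' &&
    typeof v.slug === 'string' &&
    typeof v.body === 'string' &&
    Array.isArray(v.tags) && v.tags.every((t) => typeof t === 'string')
  );
}
```

The guard doubles as cheap request validation on the Next.js side before a payload ever reaches the API.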
Folder structure:
```
.
├── apps/
│   ├── backend/                 # Laravel 12 API + Filament Admin
│   │   ├── app/                 # Core application code
│   │   ├── bootstrap/
│   │   ├── config/
│   │   ├── database/
│   │   ├── public/
│   │   ├── resources/
│   │   ├── routes/
│   │   ├── storage/
│   │   ├── tests/
│   │   ├── composer.json
│   │   ├── artisan
│   │   └── ...                  # Standard Laravel structure
│   │
│   └── web/                     # Next.js 16 frontend application
│       ├── src/
│       │   ├── app/             # App Router pages/layouts
│       │   ├── components/      # App-specific components
│       │   ├── lib/             # Utilities/helpers
│       │   └── styles/
│       ├── public/
│       ├── next.config.ts       # Includes transpilePackages config
│       ├── package.json
│       ├── tailwind.config.ts
│       └── tsconfig.json
│
├── packages/
│   └── ui/                      # Shared UI library (@yabasha/ui)
│       ├── src/
│       │   ├── components/      # Shared components (shadcn/ui style)
│       │   ├── lib/
│       │   │   └── utils.ts     # cn(), helpers, etc.
│       │   └── index.ts         # Export surface
│       ├── package.json
│       ├── tsconfig.json
│       └── tailwind.config.ts   # (optional) if needed for building
│
├── docker/
│   └── nginx/
│       └── default.conf         # Nginx config for Laravel
│
├── docker-compose.yml           # MySQL + Redis + Horizon + Nginx + Backend
├── package.json                 # Root Bun workspaces + proxy scripts
├── bun.lock
├── tsconfig.json                # Base TS config shared by web + ui
└── README.md
```
Authentication is Laravel Sanctum in SPA mode: session cookies over CORS. No tokens in localStorage. No manual Authorization headers.
Sanctum config (`apps/backend/config/sanctum.php`):
- `'stateful' => ['localhost:3000', 'yabasha.dev', '*.yabasha.dev']`
- `'expiration' => 720` (12 hours), matching the session lifetime.

CORS config (`config/cors.php`):
- `'supports_credentials' => true`
- `'paths' => ['api/*', 'sanctum/csrf-cookie']`

API client (`apps/web/lib/api.ts`):
```typescript
const API_URL = process.env.NEXT_PUBLIC_API_URL!;

// CSRF cookie must be fetched first; Laravel sets the XSRF-TOKEN cookie
export async function getCsrfCookie() {
  await fetch(`${API_URL}/sanctum/csrf-cookie`, {
    credentials: 'include',
    mode: 'cors',
  });
}

// Subsequent requests include cookies automatically
export async function apiClient(
  endpoint: string,
  options: RequestInit = {},
  retried = false,
): Promise<Response> {
  const res = await fetch(`${API_URL}${endpoint}`, {
    ...options,
    credentials: 'include',
    mode: 'cors',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      ...options.headers,
    },
  });
  if (res.status === 419 && !retried) {
    // CSRF mismatch: re-fetch the token and retry exactly once
    await getCsrfCookie();
    return apiClient(endpoint, options, true);
  }
  return res;
}
```
Tradeoff: Requires CORS config discipline. Benefit: XSS can't steal httpOnly cookies; no token refresh dance.
Instead of calling revalidateTag() directly, I emit a domain event and let queues handle reliability.
The flow: a model observer fires `ContentInvalidated`, Horizon queues a job, and the job calls the Next.js revalidation route.

The event (`app/Events/ContentInvalidated.php`):
```php
<?php

namespace App\Events;

use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class ContentInvalidated
{
    use Dispatchable, SerializesModels;

    public function __construct(
        public string $type,            // 'post', 'tag', 'author'
        public string $id,
        public array $tags,             // e.g., ['posts', 'post-123', 'author-456']
        public string $revalidation_id,
    ) {}
}
```
The job (`app/Jobs/RevalidateNextJsCache.php`):
```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Redis;

class RevalidateNextJsCache implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $tries = 5;
    public $backoff = [10, 30, 60, 120, 300]; // seconds

    public function __construct(
        public array $tags,
        public string $revalidation_id,
    ) {}

    public function uniqueId(): string
    {
        return $this->revalidation_id; // idempotent per invalidation
    }

    public function handle(): void
    {
        // Mark job as inflight in Redis
        Redis::hset('revalidation_state', $this->revalidation_id, json_encode([
            'status' => 'inflight',
            'attempt' => $this->attempts(),
            'tags' => $this->tags,
            'started_at' => now()->toISOString(),
        ]));

        $response = Http::withHeaders([
            'X-Revalidation-Id' => $this->revalidation_id,
        ])->post(config('services.nextjs.url') . '/api/revalidate', [
            'tags' => $this->tags,
        ]);

        if ($response->failed()) {
            // Update state before retry
            Redis::hset('revalidation_state', $this->revalidation_id, json_encode([
                'status' => 'retrying',
                'attempt' => $this->attempts(),
                'error' => $response->body(),
            ]));
            $response->throw();
        }

        // Success: clean up the state entry
        Redis::hdel('revalidation_state', $this->revalidation_id);
    }

    public function failed(\Throwable $e): void
    {
        Redis::hset('revalidation_state', $this->revalidation_id, json_encode([
            'status' => 'failed',
            'error' => $e->getMessage(),
            'final_attempt' => $this->attempts(),
        ]));
    }
}
```
The Next.js route (`apps/web/app/api/revalidate/route.ts`); the `redis` import assumes a client module exporting an Upstash-style API, so wire in whichever Redis client you use:
```typescript
import { revalidateTag } from 'next/cache';
import { NextRequest, NextResponse } from 'next/server';
import { redis } from '@/lib/redis'; // assumed client with get/set + TTL options

export async function POST(request: NextRequest) {
  const revalidation_id = request.headers.get('x-revalidation-id');
  const { tags } = await request.json();

  // Idempotency: if we've seen this ID, skip
  const seen = await redis.get(`revalidations:processed:${revalidation_id}`);
  if (seen) {
    return NextResponse.json({ status: 'already_processed' });
  }

  try {
    for (const tag of tags) {
      revalidateTag(tag);
    }
    // Mark as processed (24h TTL)
    await redis.set(`revalidations:processed:${revalidation_id}`, '1', { ex: 86400 });
    return NextResponse.json({ status: 'success', revalidated: tags });
  } catch (error) {
    // Log with context for debugging
    console.error('Revalidation failed', { revalidation_id, tags, error });
    return NextResponse.json({ status: 'error' }, { status: 500 });
  }
}
```
Why this survives edge cases: If the Next.js API is down, Horizon retries with exponential backoff. If the job exhausts retries, state in Redis shows failure. If revalidation succeeds but the response is lost, idempotency prevents double-work.
Horizon's `config/horizon.php`:
```php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['revalidation', 'default'],
            'balance' => 'auto',
            'maxProcesses' => 10,
            'maxJobs' => 50,   // recycle each worker after 50 jobs (crude backpressure)
            'retry_after' => 120,
            'timeout' => 60,
        ],
    ],
],
```
Backpressure: when queue depth exceeds 100, Laravel rejects new `ContentInvalidated` events with a 503. The Filament panel shows a toast: "Changes saved; sync may be delayed." I accept the delay; the system survives.
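The gate itself lives on the Laravel side, but the decision logic is small enough to sketch. A TypeScript version, assuming the caller supplies the current queue depth; the warn threshold and messages are illustrative:

```typescript
// Decide whether to accept a new invalidation, given current queue depth.
// Accept normally, surface "sync may be delayed" when the queue is busy,
// and shed load with a 503 past the hard limit.
const HARD_LIMIT = 100; // reject above this depth (from the article)
const WARN_LIMIT = 50;  // assumed threshold for the delayed-sync toast

interface GateDecision {
  status: 202 | 503;
  message: string;
}

export function gateInvalidation(queueDepth: number): GateDecision {
  if (queueDepth > HARD_LIMIT) {
    return { status: 503, message: 'Invalidation rejected: queue saturated' };
  }
  if (queueDepth > WARN_LIMIT) {
    return { status: 202, message: 'Changes saved; sync may be delayed' };
  }
  return { status: 202, message: 'Changes saved' };
}
```

The admin panel only needs the `message`; the 503 is what stops the queue from growing without bound.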
Deploy the API first (Laravel Cloud):
- Run migrations in isolation mode; fail the deploy if they're not zero-downtime safe.
- Health-check gate:
```shell
# In CI, after the API deploy
until curl -f https://api.yabasha.dev/health; do sleep 5; done
```
Deploy the frontend (Vercel):
- Set the `MAX_REVALIDATION_RETRY=0` env var to pause new invalidations during the deploy.
- Once the frontend is healthy, resume via `POST /api/revalidate/resume`.

Resume invalidations:
```php
// Artisan command run via a Laravel Cloud deploy hook
public function handle()
{
    $failed = Redis::hgetall('revalidation_state');
    foreach ($failed as $id => $payload) {
        $data = json_decode($payload);
        if ($data->status === 'failed') {
            RevalidateNextJsCache::dispatch($data->tags, $id)->onQueue('revalidation');
        }
    }
}
```
This creates a **deployment window** where invalidations queue but don't execute, eliminating the race condition.
## Advanced Insight: The Invalidation State Machine
Most guides treat revalidation as fire-and-forget. I model it as a state machine:
```
pending ──job dispatched──► inflight ──success──► completed
                               │
             fail (retried with backoff while attempt < max)
                               │ retries exhausted
                               ▼
                            failed ──manual review──► dead-letter
```
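To keep illegal transitions out of the Redis hash, the diagram can be encoded as a lookup table. A minimal TypeScript sketch; the `State` union combines the statuses the jobs actually write (`inflight`, `retrying`, `failed`) with the terminal states above:

```typescript
type State = 'pending' | 'inflight' | 'retrying' | 'completed' | 'failed' | 'dead-letter';

// Legal transitions from the state machine above. Anything else is a bug.
const TRANSITIONS: Record<State, State[]> = {
  pending: ['inflight'],
  inflight: ['completed', 'retrying', 'failed'],
  retrying: ['inflight', 'failed'],
  completed: [],            // terminal
  failed: ['dead-letter'],  // manual review only
  'dead-letter': [],        // terminal
};

export function transition(from: State, to: State): State {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`Illegal revalidation transition: ${from} -> ${to}`);
  }
  return to;
}
```

Calling `transition` before every Redis write turns a silently corrupt state entry into a loud error in the logs.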
**Decision matrix**: when to use sync vs. async invalidation?
| Use Sync (direct webhook) | Use Async (queue + state) |
|---------------------------|---------------------------|
| < 10 pages to purge | > 10 tags or wildcard |
| Dev/staging environment | Production |
| Zero infra cost | Budget for Redis + Horizon|
| Can tolerate silent failure| Must audit every change |
## Failure Modes & Mitigations
**1. CSRF mismatch after deployment**
- **Symptom**: 419 errors spike post-deploy.
- **Root cause**: Session encryption key rotated, invalidating existing sessions.
- **Mitigation**: Keep `APP_KEY` stable across deploys; if you must rotate it, list the old key in `previous_keys` (`config/app.php`, via `APP_PREVIOUS_KEYS`) so existing sessions still decrypt during the rollover.
**2. Revalidation endpoint down during deploy**
- **Symptom**: Failed jobs in Horizon; pages stay stale.
- **Root cause**: Vercel deploy takes 30–60s; endpoint returns 404.
- **Mitigation**: Pause-invalidations gate in CI; resume with backlog processing after health check passes.
**3. Queue job fails mid-invalidation**
- **Symptom**: Redis state stuck in `inflight` for hours.
- **Root cause**: Worker OOM or timeout before completion.
- **Mitigation**: Set job `timeout` < Horizon `retry_after`; use `failed()` hook to mark state as `failed` for manual replay.
**4. Redis connection pool exhaustion**
- **Symptom**: Horizon can't spawn workers; queues back up.
- **Root cause**: Too many concurrent revalidations; each job holds a Redis connection.
- **Mitigation**: Cap `maxJobs` per supervisor; implement semaphore in job `handle()` to limit concurrent HTTP calls to Next.js.
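The semaphore in mitigation 4 lives in the PHP job in practice, but the mechanism is language-agnostic. A small TypeScript sketch of a counting semaphore that caps concurrent calls; the concurrency limit is whatever your connection budget allows:

```typescript
// A minimal counting semaphore: at most `limit` tasks run concurrently;
// the rest wait in FIFO order until a running task releases its slot.
class Semaphore {
  private waiting: Array<() => void> = [];
  private active = 0;

  constructor(private limit: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.limit) {
      // Queue a resolver; it fires when a running task finishes.
      await new Promise<void>((resolve) => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiting.shift()?.(); // wake the next waiter, if any
    }
  }
}
```

Wrapping each HTTP call to `/api/revalidate` in `semaphore.run(...)` bounds connection usage no matter how many jobs the queue spawns at once.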
**5. Cache stampede after bulk invalidation**
- **Symptom**: Next.js origin gets hammered with 1000+ requests.
- **Root cause**: Revalidating a popular tag triggers parallel rebuilds.
- **Mitigation**: Add `stale-while-revalidate` headers; use Next.js `experimental.isrMemoryCacheSize` to buffer requests; rate-limit at Cloudflare edge.
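For the `stale-while-revalidate` part of that mitigation, the header can be attached wherever the page is served. A hedged sketch of building the directive; the sixty-second/five-minute values are illustrative, not measured:

```typescript
// Build a Cache-Control value that lets the CDN serve a stale copy while
// it refetches in the background, absorbing the post-invalidation burst.
export function swrCacheControl(maxAgeSec: number, staleWindowSec: number): string {
  return `public, s-maxage=${maxAgeSec}, stale-while-revalidate=${staleWindowSec}`;
}
```

In a Next.js route handler this becomes, for example, `new Response(body, { headers: { 'Cache-Control': swrCacheControl(60, 300) } })`.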
## Results & Workflow Impact
**Before**: 2–3 stale-content incidents per week; manual Redis purges; debugging required grep across two log groups.
**After**: Zero stale-page incidents in 40+ deployments over 3 months. Invalidation is auditable (`revalidation_id` appears in Laravel logs, Horizon payloads, and Next.js access logs).
**Workflow today**:
1. I edit a post in Laravel Filament admin.
2. Observer fires `ContentInvalidated` with tags `['posts', 'post-123']`.
3. Horizon queues the job; I see it in the dashboard instantly.
4. Job posts to `/api/revalidate`; if it fails, Horizon retries with backoff.
5. Success or final failure is recorded in Redis; I have a simple Artisan command to list failed invalidations.
**Measurable outcomes**:
- **Queue failure rate**: ~0.3% (mostly network blips); auto-retries resolve 95% without intervention.
- **Time-to-visibility for changes**: p95 < 15 seconds (async job processing + Next.js rebuild).
- **Deployment incident rate**: Down from 30% to 0% of deploys causing visible stale content.
*Unknown*: exact cost per revalidation. I measure overhead in Horizon job count and Redis memory, not dollars. If I needed cost attribution, I'd add a `cost_usd` field to the job payload using Vercel's API usage headers.
## Tested With / Versions
As of **December 2025**:
- Laravel 12.7.0 (PHP 8.3.6)
- Laravel Sanctum 4.0.2
- Laravel Horizon 5.25.0
- Next.js 16.2.1 (React 19)
- Redis 7.2.4 (phpredis 6.0.2)
- PostgreSQL 15.5
- Docker Compose 2.24
- Deployed on Laravel Cloud (backend) and Vercel Pro (frontend)
## Key Takeaways
- `maxJobs` backpressure is simpler and more effective than rate-limiting in application code.
- Keep the API and frontend in one monorepo with Bun workspaces.
- Authenticate with Sanctum `stateful` domains and httpOnly cookies, not tokens.
- Enable CORS credentials with `supports_credentials => true`.
- Decouple cache invalidation behind a `ContentInvalidated` event and a `RevalidateNextJsCache` job.
- Make jobs idempotent with `ShouldBeUnique` and a `revalidation_id`.
- Cap Horizon supervisors with `maxJobs` for backpressure.
- Guard the `/api/revalidate` route with an idempotency check.
- Record state transitions in Redis from both `handle()` and `failed()`.
- Propagate a `trace_id` header through every hop.
- Watch `invalidations:failed` to surface dead letters.

The proverb stung because it was true: I wasn't shoeing my own children. Treating Yabasha.dev like a "real" system, complete with state machines, backpressure, and deployment gates, didn't slow me down; it freed me from babysitting. The Cache Handshake pattern is now my default for any API + ISR architecture. If you're a solo dev or small team, resist the temptation to "just call the webhook." Build the protocol. Your future self, debugging at 3 a.m., will thank you.
The broader lesson?
Operational excellence is a habit, not a budget.
You don't need a platform team to implement idempotency keys or backpressure. You need to decide your own platform is worth the effort.
If you're wrestling with the same auth or cache ghosts, I've open-sourced the core pieces of this blueprint on Yabasha.dev. For teams that need a faster ramp, I offer architecture reviews and pair programming sessions to adapt these patterns to your constraints—whether you're in Amman, Amsterdam, or anywhere between.

AI Engineer & Full-Stack Tech Lead
Expertise: 20+ years full-stack development. Specializing in architecting cognitive systems, RAG architectures, and scalable web platforms for the MENA region.