Volkanic · Backend

Queues and background jobs: a practical guide for Laravel

When to move work off the request cycle, how to structure jobs reliably, and the operational patterns that keep queues healthy in production.

9 min read
Laravel · PHP · Queues · Architecture · Backend

The decision of whether to process something synchronously or push it to a queue is one you’ll face constantly in backend work. Get it wrong and you’ll either block users unnecessarily or lose data silently.

Here’s the mental model and the implementation patterns I use.

When to use a queue

A job belongs in a queue when:

  • It touches an external service. Email delivery, webhook calls, SMS, storage uploads. External services are slow and unreliable. Don’t make users wait for them.
  • It’s expensive and the user doesn’t need the result immediately. Generating a PDF, running a report, resizing images.
  • It needs retry semantics. HTTP failures, transient database errors, rate-limited APIs. Queues give you automatic retry with backoff.
  • It needs to be decoupled from peak traffic. Spreading notification sends over time instead of hammering the provider on every batch action.

Do not use a queue when the result is needed immediately in the same request (use synchronous code), or when the operation is cheap and reliable.
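As a quick sketch of that dividing line in code (the class and method names here are illustrative, not from a real codebase):

```php
// Queued: email delivery touches an external service — don't block the request.
SendReservationConfirmation::dispatch($reservation->id);

// Synchronous: the user needs this value in the current response.
$total = $invoice->calculateTotal();

// Laravel also lets you force inline execution when you need the work done now:
SendReservationConfirmation::dispatchSync($reservation->id);
```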

Structuring jobs for reliability

A job class should do one thing. The temptation is to put business logic directly in the job — resist it.

class SendReservationConfirmation implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        private readonly int $reservationId,
    ) {}

    public function handle(ReservationService $service, MailerService $mailer): void
    {
        $reservation = $service->findOrFail($this->reservationId);

        if ($reservation->confirmationAlreadySent()) {
            return; // idempotent
        }

        $mailer->sendConfirmation($reservation);
        $service->markConfirmationSent($reservation);
    }
}

A few things to notice:

Pass IDs, not models. The model gets serialized to the queue payload. If the model changes between dispatch and execution, you get stale data. Fetch it fresh inside handle().

Idempotency. Jobs can run more than once (retries, accidental duplicate dispatch). Make them safe to re-execute. Check if the work was already done; if yes, bail out cleanly.

Inject dependencies through handle(), not the constructor. The constructor only receives the job’s input data.
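On the dispatch side, passing the ID looks like this (a sketch, assuming the job class above):

```php
// Pass the primary key, not the Eloquent model — the job fetches fresh data in handle().
SendReservationConfirmation::dispatch($reservation->id);

// You can also delay execution or target a specific queue at the call site:
SendReservationConfirmation::dispatch($reservation->id)
    ->delay(now()->addSeconds(30))
    ->onQueue('notifications');
```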

Retry strategy and failure handling

class SendReservationConfirmation implements ShouldQueue
{
    public int $tries = 5;
    public int $backoff = 60; // seconds between retries

    public function retryUntil(): DateTime
    {
        return now()->addHours(24);
    }

    public function failed(Throwable $e): void
    {
        // Alert, log, notify — do not re-throw
        Log::error('Confirmation send failed permanently', [
            'reservation_id' => $this->reservationId,
            'error' => $e->getMessage(),
        ]);
    }
}

retryUntil is often better than $tries for time-sensitive jobs. Use $backoff with exponential values for jobs calling rate-limited APIs.

The failed() method is your last chance — this runs after all retries are exhausted. Alert here, don’t re-throw.
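For the rate-limited-API case, a backoff() method returning an array of increasing delays replaces the flat $backoff property — a sketch (the job name is hypothetical):

```php
class CallRateLimitedApi implements ShouldQueue
{
    public int $tries = 4;

    // Wait 30s after the first failure, 2 min after the second, 10 min after the third.
    public function backoff(): array
    {
        return [30, 120, 600];
    }
}
```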

Queue configuration for production

Split jobs into named queues by priority and type:

QUEUE_CONNECTION=redis

// config/queue.php
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
    'after_commit' => true, // dispatch after DB transaction commits
],

Run workers per queue with supervisor:

[program:worker-critical]
command=php artisan queue:work redis --queue=critical --tries=3 --timeout=60

[program:worker-default]
command=php artisan queue:work redis --queue=default --tries=5 --timeout=120

[program:worker-reports]
command=php artisan queue:work redis --queue=reports --tries=2 --timeout=300

Setting 'after_commit' => true is essential. Without it, a job dispatched inside a database transaction can execute before the transaction commits — and fail to find the records it needs.
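Routing jobs onto those named queues happens at dispatch time (the job names below are illustrative):

```php
// Critical path: retried quickly by the worker with the short timeout.
CapturePayment::dispatch($orderId)->onQueue('critical');

// Heavy, non-urgent work goes to the dedicated workers with the long timeout.
GenerateMonthlyReport::dispatch($accountId)->onQueue('reports');
```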

Monitoring

A queue without visibility is a black box that fails silently. You want:

  1. Queue depth monitoring. Alert when a queue depth exceeds a threshold and jobs aren’t being processed.
  2. Failed job alerting. Forward failed jobs to a Slack channel or PagerDuty.
  3. Job execution time. Long-running jobs indicate a problem.

Laravel Horizon gives you all of this for Redis queues. If you’re not using Redis, poll php artisan queue:failed (or the failed_jobs table) from a scheduled health check instead.
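Without Horizon, a minimal scheduled depth check might look like this (the threshold and alerting target are placeholders — wire in your own notification channel):

```php
// routes/console.php — assumes the Redis queue driver
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;
use Illuminate\Support\Facades\Schedule;

Schedule::call(function () {
    $depth = Queue::size('default');

    if ($depth > 1000) { // example threshold — tune per queue
        Log::critical("Queue depth alarm: {$depth} jobs pending on 'default'");
        // forward to Slack / PagerDuty here
    }
})->everyFiveMinutes();
```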

The mistake I see most often

Dispatching jobs inside a loop:

// don't do this
foreach ($reservations as $reservation) {
    SendConfirmation::dispatch($reservation->id); // N round-trips to Redis
}

Use Bus::batch() for related jobs, or collect IDs and dispatch a single job that processes the batch internally. Your Redis connection will thank you.
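The batched version of the loop above might look like this (a sketch; the batch name is arbitrary):

```php
use Illuminate\Support\Facades\Bus;

$jobs = $reservations->map(
    fn ($reservation) => new SendConfirmation($reservation->id)
);

Bus::batch($jobs)
    ->name('confirmation-blast')
    ->allowFailures() // one failed email shouldn't cancel the rest of the batch
    ->dispatch();
```

allowFailures() matters here: by default a single failed job cancels the whole batch, which is rarely what you want for independent notifications.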