# 5 Laravel Scheduler Failures That Only Show Up in Production

**Author:** Mozex | **Published:** 2026-04-15 | **Tags:** Laravel, PHP, DevOps | **URL:** https://mozex.dev/blog/17-5-laravel-scheduler-failures-that-only-show-up-in-production

---


"But it worked locally." The Laravel scheduler is the duct tape of every production app I've worked on. It triggers the reports, cleans up expired data, pings the health checks, rotates the tokens, sends the digests. It works quietly for weeks, then one morning the team realises nothing has run for six days and nobody got an alert.

If you've already read my [queue worker version of this list](https://mozex.dev/blog/5-laravel-queue-failures-that-only-show-up-in-production), you'll recognise the shape of these bugs; the scheduler has its own set of silent failure modes. Every one below cost me or a client real money or real trust. None of them show up in `php artisan schedule:list`. All of them are avoidable once you know where to look.

Examples target Laravel 11 and later (so `routes/console.php` and `bootstrap/app.php`). On Laravel 10 and earlier, the same calls live in the `schedule()` method of `App\Console\Kernel`.

<!--more-->

## 1. The cron entry silently stops firing

The scheduler runs if and only if `schedule:run` runs every minute. Most deployment guides tell you to add this line to crontab and never mention it again:

```bash
* * * * * cd /home/forge/app && php artisan schedule:run >> /dev/null 2>&1
```

That redirect to `/dev/null` is the problem. When cron stops running, when the PHP binary moves after a server update, when the user running the crontab loses permissions on your app directory, you won't hear a sound. Your scheduled tasks just stop.

The first time this hit me, a client's daily sales report hadn't been sent for eleven days. The server had rebooted for a kernel patch and the deploy user's crontab was intact, but the system cron service had come back in a masked state. Eleven days of silence.

A few things fix this.

**Log the cron output somewhere you can read it.**

```bash
* * * * * cd /home/forge/app && php artisan schedule:run >> storage/logs/scheduler.log 2>&1
```

**Add a heartbeat that pings a dead-man's-switch service.** [Oh Dear](https://ohdear.app/) and [Healthchecks.io](https://healthchecks.io/) do this well. A one-liner that runs every minute and hits an HTTPS endpoint. If the ping stops coming, you get an email.

```php
Schedule::call(fn () => Http::get(config('services.healthcheck.url')))
    ->everyMinute()
    ->withoutOverlapping(2)
    ->runInBackground();
```

`withoutOverlapping()` with no argument sits on a **24-hour lock** (more on that in a minute). If the HTTP call ever hangs and the process dies with the lock held, your heartbeat goes silent for a full day. Pass a short expiration so a crashed heartbeat recovers within a couple of cycles. `runInBackground()` keeps the ping from blocking the scheduler while the request is in flight.

**Verify `schedule:list` against reality.** The list command tells you what Laravel thinks is scheduled. It says nothing about whether the OS is actually running the command.
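If you'd rather not depend on a third-party service, the same dead-man's-switch idea works with nothing but a heartbeat file. This is a sketch under one assumption: your cron line also runs `touch` on a heartbeat file after `schedule:run`. A check run from a *separate* monitor (another host, or at least a different cron daemon) then looks at the file's age:

```shell
# Sketch: simulate the heartbeat the cron line would leave behind,
# then run the staleness check a monitor would perform.
touch /tmp/scheduler-heartbeat

# find -mmin -2 prints the path only if the file was modified within
# the last 2 minutes; an empty result means the heartbeat went stale.
if [ -n "$(find /tmp/scheduler-heartbeat -mmin -2 2>/dev/null)" ]; then
  echo "scheduler OK"
else
  echo "scheduler STALE"   # page someone here
fi
```

The crucial property is that the check lives outside the thing it monitors. A staleness check scheduled by the same scheduler tells you nothing.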

## 2. `withoutOverlapping()` locks expire mid-task

`withoutOverlapping()` uses the cache to hold a lock while the task runs. If the lock is still there when the scheduler starts the next tick, the new run skips. Simple idea, two traps.

The first trap: the default lock expires after **24 hours**. If your task takes longer than that, a second instance starts while the first is still running. I've seen this corrupt a reporting database. Two queries competing for the same write, both finishing, both trying to insert the final row.

The second trap: if your task dies without releasing the lock (a fatal error, a hard kill, a server reboot mid-run), the lock stays until it expires. Your scheduled task looks healthy in logs. It just never runs again until the default 24 hours tick over.
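To make the expiry behaviour concrete, here is a deliberately simplified model of an expiring lock. This is illustrative only, not Laravel's implementation (the real `withoutOverlapping()` takes an atomic cache lock via the cache driver):

```php
<?php
// Toy model of an expiring mutex - illustrative only.
final class ExpiringLock
{
    /** @var array<string, int> lock name => expiry (unix seconds) */
    private array $locks = [];

    public function acquire(string $name, int $ttlSeconds, int $now): bool
    {
        if (isset($this->locks[$name]) && $this->locks[$name] > $now) {
            return false; // a previous holder's lock is still live
        }
        $this->locks[$name] = $now + $ttlSeconds;
        return true;
    }
}

$lock = new ExpiringLock();
$day  = 24 * 60 * 60; // the 24-hour default expiry

// t=0: the task acquires the lock, then crashes without releasing it.
echo $lock->acquire('reports:generate', $day, 0) ? "run\n" : "skip\n";
// t=1h: the next hourly tick is skipped - the stale lock still holds.
echo $lock->acquire('reports:generate', $day, 3600) ? "run\n" : "skip\n";
// t=25h: only now has the lock expired, so the task finally runs again.
echo $lock->acquire('reports:generate', $day, 90000) ? "run\n" : "skip\n";
```

Both traps fall out of that one `$now + $ttlSeconds` line: a TTL that's too short lets a second instance in mid-run, and a TTL that's too long turns every crash into a day of silence.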

Both traps have the same fix. Pass the expected maximum runtime to `withoutOverlapping()`:

```php
Schedule::command('reports:generate')
    ->hourly()
    ->withoutOverlapping(30); // minutes
```

Set it longer than your worst-case runtime and short enough that a crashed task recovers within one scheduling cycle. For a task that runs hourly and usually finishes in five minutes, a 30-minute lock is a reasonable compromise. If the task runs past 30 minutes, something is wrong and you want the next run to pick up.

## 3. `onOneServer()` silently runs on every server

This one has cost people thousands of dollars in duplicate API charges, and I've never seen a tutorial warn about it.

`onOneServer()` prevents a task from running on more than one server in a multi-server setup. It works by grabbing a lock in the cache before the task runs. Every other server checks the same cache, sees the lock, and skips. That's the whole mechanism.

Which means the cache has to be the *same cache* on every server.

The [Laravel docs put this plainly](https://laravel.com/docs/scheduling#running-tasks-on-one-server): "Using this feature requires that your application's default cache driver is set to the `database`, `memcached`, `dynamodb`, or `redis` cache driver. In addition, all servers must be communicating with the same central cache server." The default `file` driver writes locks to the local filesystem, so server A's lock is invisible to server B. Both servers "acquire" the lock against their own disk and both run the task. The `array` driver is even worse: it doesn't persist between PHP processes at all. If that task calls a paid API or writes to the database, you're charged twice and you've broken data integrity.

Check `config/cache.php`. If the default store is `file` (or `array`) and you're calling `onOneServer()` anywhere, this affects you right now.
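A quick first check without opening the config file, assuming a standard `.env` layout (the variable is `CACHE_STORE` on Laravel 11+, `CACHE_DRIVER` on 10 and earlier; a cached config can override `.env`, so treat this as a smoke test). To keep the sketch self-contained it first writes the kind of `.env` a fresh install ships with:

```shell
# Simulated fresh-install .env. In a real project, skip this line and
# run the grep from your project root.
printf 'APP_NAME=Demo\nCACHE_STORE=file\n' > /tmp/demo.env

grep -E '^CACHE_(STORE|DRIVER)=' /tmp/demo.env
# CACHE_STORE=file means onOneServer() locks are per-machine:
# every server "wins" the lock against its own disk.
```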

The fix is either switching the default cache to a shared driver (database, Redis, Memcached, DynamoDB), or telling the scheduler explicitly which cache to use for its locks while leaving the app's default alone:

```php
use Illuminate\Support\Facades\Schedule;

Schedule::useCache('redis');

Schedule::command('reports:aggregate')
    ->daily()
    ->onOneServer();
```

`Schedule::useCache()` lives in `bootstrap/app.php` (inside the `withSchedule` closure) or at the top of `routes/console.php`. Set it once and every scheduled task uses that cache for locking, regardless of the application's default cache driver.

One more subtlety that catches people: `onOneServer()` only coordinates across servers. It does nothing about overlapping runs on the *same* server. If you care about both, chain it with `withoutOverlapping()`.
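The chained form, with a bounded lock as in the previous section, looks like this:

```php
Schedule::command('reports:aggregate')
    ->daily()
    ->onOneServer()
    ->withoutOverlapping(30); // minutes
```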

## 4. Silent failures with no email and no alert

The scheduler catches exceptions. It writes them to the log. That's it. No retries, no alerts, no dashboard. If you're not actively watching `laravel.log`, a failing scheduled task is indistinguishable from a running one.

The behaviour depends on how you registered the task:

- `Schedule::command()` and its shell-command sibling `Schedule::exec()` fail when the process exits with a non-zero code. `onFailure()` fires. `emailOutputOnFailure()` sends the captured output.
- `Schedule::call()` catches exceptions inside the closure. The exception is reported through your application's exception handler (so Sentry and friends still see it), but the scheduler itself counts the task as having run. `emailOutputOnFailure()` doesn't apply here because there's no process output to capture, and in my experience `onFailure()` on a closure is unreliable enough that I don't trust it alone.
- `Schedule::job()` dispatches the job to the queue and returns immediately. If the queue is down or the job throws, the scheduler never knows. `onFailure()` will not fire.

That last one bites people. You schedule a job, see it in `schedule:list`, and assume it ran. The scheduler did its part: it pushed the job onto Redis. Whether the job actually ran is a separate concern handled by your queue worker.
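For scheduled jobs, the failure path therefore has to live on the job class itself. Laravel invokes a `failed()` method on the job once its retries are exhausted, which is where the alert belongs (the `SendDigest` class here is illustrative):

```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

class SendDigest implements ShouldQueue
{
    use Queueable;

    public function handle(): void
    {
        // ... build and send the digest ...
    }

    // Called by the queue worker after all retries fail. The scheduler
    // never sees this; the job has to raise its own alarm.
    public function failed(\Throwable $e): void
    {
        report($e); // reaches Sentry/Flare, not just laravel.log
    }
}
```

That still leaves "the queue itself is down" undetected, which only an external heartbeat monitor catches.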

Every scheduled command I write now has an explicit failure path:

```php
Schedule::command('reports:generate')
    ->daily()
    ->emailOutputOnFailure('ops@company.com')
    ->onFailure(function () {
        report(new ScheduledTaskFailed('reports:generate'));
    });
```

For closures and jobs, `emailOutputOnFailure()` doesn't apply. Wrap the work in a try/catch and report it yourself:

```php
Schedule::call(function () {
    try {
        app(ReconcilePayments::class)->run();
    } catch (Throwable $e) {
        report($e);
        throw $e;
    }
})->hourly();
```

Better yet, lean on external heartbeat monitoring. [Oh Dear](https://ohdear.app/), Healthchecks.io, and [Sentry cron monitoring](https://sentry.io/for/cron-monitoring/) all have Laravel scheduler integrations. They expect a heartbeat and alert you when one goes missing, which catches both "the task failed" and "the scheduler itself stopped running."

## 5. Timezone drift and the DST edge case

`Schedule::command(...)->dailyAt('03:00')` runs at 3am in your *application's* timezone: whatever `config('app.timezone')` resolves to, which defaults to UTC. Not the server's OS timezone. Not your customer's timezone. Laravel calls `date_default_timezone_set()` during bootstrap, so the OS setting doesn't come into it.

If your `app.timezone` is UTC (the default) and your business runs somewhere else, the task fires at a time nobody expects. I've debugged a "reports are wrong" ticket that turned out to be a batch job running three hours earlier than the user thought, because `app.timezone` was UTC and the user was in a UTC+3 country.

Fix it per task with `timezone()`:

```php
Schedule::command('reports:daily')
    ->dailyAt('03:00')
    ->timezone('Europe/Istanbul');
```

Or set the default for every scheduled task in `config/app.php`:

```php
'timezone' => 'UTC',
'schedule_timezone' => 'Europe/Istanbul',
```

`schedule_timezone` lets you keep the app in UTC (which is what you want for stored timestamps) while interpreting every scheduled time declaration in your business timezone. Fewer places to get it wrong.

The harder edge case is daylight saving time. Twice a year, an hour either doesn't exist or happens twice. A task scheduled at `dailyAt('02:30')` in a DST timezone will either skip or run twice on those two days. Laravel's own docs [recommend avoiding timezone scheduling when possible](https://laravel.com/docs/scheduling#timezones) for exactly this reason. The safer pattern is to keep `schedule_timezone` on UTC and do the human-friendly conversion at the edges of your app (in views, in notifications) rather than baking it into scheduled times.
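You can see the gap with plain PHP, no Laravel required. (Europe/Istanbul, used in the examples above, has been fixed at UTC+3 since Turkey dropped DST in 2016, so Berlin makes a better demonstration.) EU timezones spring forward on the last Sunday of March, which in 2026 is March 29, when 02:00 jumps straight to 03:00:

```php
<?php
// Either side of the spring-forward gap in Europe/Berlin, 2026-03-29:
// any wall-clock time between 02:00 and 02:59 simply never occurs.
$tz = new DateTimeZone('Europe/Berlin');

echo (new DateTimeImmutable('2026-03-29 01:59', $tz))->format('P'), "\n"; // +01:00
echo (new DateTimeImmutable('2026-03-29 03:00', $tz))->format('P'), "\n"; // +02:00
```

A `dailyAt('02:30')` in that zone has no corresponding instant on that date, and on the last Sunday of October the reverse happens: 02:30 exists twice.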

The other timezone trap, which bit me during a Laravel 11 upgrade: `routes/console.php` is evaluated on *every* artisan invocation, not just `schedule:run`. If you put time-sensitive logic inside a scheduling closure that reads the current time, that logic runs during deploy scripts, queue work commands, and tinker sessions too. Keep the scheduling declaration pure. Read time inside the task body, not outside it.
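A concrete version of that rule, with the fragile pattern shown in comments (`sync:run` and the log line are illustrative, not from the original post):

```php
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Schedule;

// Fragile: top-level code in routes/console.php runs on EVERY artisan
// invocation - deploys, tinker, queue:work - not just schedule:run.
//
// Log::info('scheduler booted at '.now()); // fires far too often
// Schedule::command('sync:run')->hourly();

// Safe: keep the file declarative. Clock reads and side effects live
// inside the scheduled closure, which only runs when the task is due.
Schedule::call(function () {
    Log::info('sync started at '.now()); // runs once per scheduled run
    // ... the actual work ...
})->hourly();
```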

## The common thread

Every failure above has the same shape. The scheduler looks healthy. `schedule:list` is correct. The code is fine. But something between the code and reality (the OS, the cache driver, the timezone, the queue worker, the cron service) has quietly broken the contract.

The defence is external observability. Stop trusting the scheduler to tell you when it breaks. It has no way to know that it broke. Every scheduled task worth running deserves either a heartbeat ping, an `onFailure` handler, or ideally both. If the next heartbeat doesn't arrive within the expected window, you hear about it before the customer does.

If any of this was news to you, go check your crontab, your cache driver, and your scheduled task timezones right now. The bug is probably already there. You just haven't found it yet.