# How I Audit a New Laravel Codebase in 30 Minutes

**Author:** Mozex | **Published:** 2026-04-13 | **Tags:** Laravel, PHP, Architecture, Code Review | **URL:** https://mozex.dev/blog/15-how-i-audit-a-new-laravel-codebase-in-30-minutes

---


When a client asks me to look at their Laravel application, I don't start by reading code. I run a specific sequence of checks that tells me more in 30 minutes than reading source files for a full day would.

This isn't a full audit. It's triage. After working with Laravel for over a decade, I've found that a handful of structural checks reveal 80% of the problems. The patterns are surprisingly consistent.

Here's the exact process I follow.

<!--more-->

## The .env and Configuration Check

First thing I do after cloning the repo. Always.

I open `.env.example` and compare it to whatever the production environment looks like (I ask the client for a sanitized copy of their production `.env`). The specifics I'm after:

- Is `APP_DEBUG` set to `true` in production? More common than you'd think.
- Is `APP_KEY` actually set? I've seen production apps running without one.
- How many third-party services are configured? This maps the integration surface.
- Are there values that should change between environments but don't?

Then I run one command:

```bash
grep -rn --include="*.php" -E "\benv\(" app/ routes/ database/
```

If that returns results, the application will break when someone runs `php artisan config:cache`. This is one of the most common production bugs in Laravel. The `env()` helper returns `null` after config caching unless the call is inside a config file. Finding `env()` scattered through application code tells me the original developer either never deployed to a real server or never cached config.
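The fix is mechanical: route the value through a config file and read it with `config()` in application code. A minimal sketch (the `payment` service and `PAYMENT_API_KEY` are illustrative names):

```php
// config/services.php -- env() belongs here, and only here
return [
    'payment' => [
        'key' => env('PAYMENT_API_KEY'),
    ],
];
```

```php
// Anywhere in app/ -- read the (cacheable) config value instead
$apiKey = config('services.payment.key');
```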

It's a two-minute check that reveals a lot about the team's deployment maturity.

## Composer Dependencies

```bash
composer outdated --direct
```

I don't care about transitive dependencies yet. I want to know:

- How many major versions behind is the project?
- Are there abandoned packages in `composer.json`?
- Is the Laravel version itself current, or are we talking about a Laravel 9 app in 2026?

The version gap tells me how much maintenance debt has piled up. A project two minor versions behind is normal. A project two major versions behind needs a conversation about upgrade strategy before anything else.

I also check for packages that belong in `require-dev` but ended up in `require`: testing frameworks, debug bars, IDE helpers. I've seen `barryvdh/laravel-debugbar` loaded in production more than once. The performance cost is real, and it exposes internal application data to anyone who knows where to look.
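A quick way to surface misplaced dev tooling, assuming `jq` is available (the grep patterns are common offenders, not an exhaustive list):

```bash
# List production requirements and flag dev-only tooling that
# should live in require-dev instead.
jq -r '.require | keys[]' composer.json \
  | grep -iE 'debugbar|ide-helper|phpunit|pest|faker'
```

Any hit here is worth a `composer remove` followed by a `composer require --dev`.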

## The Database Layer

This is where most of the serious problems hide.

```bash
php artisan migrate:status
```

The migration count gives me a rough sense of the project's history. More than 100 migrations on a mid-sized app usually means nobody ever squashed them. Not a crisis on its own, but it correlates with other maintenance patterns.

What matters more: I open the migration files and scan for missing indexes. The pattern I'm looking for:

```php
// Foreign key with auto-index on MySQL - fine there
// PostgreSQL does NOT auto-create this index, so add one explicitly
$table->foreignId('user_id')->constrained();

// Integer column, no index, no constraint - this is a problem
$table->unsignedBigInteger('user_id');
```

The second version works until you have 50,000 rows and a query that joins on `user_id`. Then it becomes a production incident at 2 AM. And if you're on PostgreSQL, even the first version needs an explicit `->index()` call, because Postgres doesn't auto-index foreign key columns the way MySQL does.
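The pattern I want to see on a Postgres project pairs the constraint with an explicit index. A sketch, using a hypothetical `orders` table:

```php
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::table('orders', function (Blueprint $table) {
    // The constraint alone is enough on MySQL (InnoDB auto-indexes
    // foreign key columns); PostgreSQL needs the index added explicitly.
    $table->foreignId('user_id')->constrained();
    $table->index('user_id');
});
```

The extra `index()` call is a harmless no-op conceptually on MySQL and the difference between milliseconds and a table scan on Postgres.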

Next, I check for N+1 query protection:

```php
// In AppServiceProvider::boot()
Model::preventLazyLoading(!app()->isProduction());
```

If this line exists, good. The team cares about query performance. If it doesn't, I add it temporarily and load a few pages. The exception count tells me everything about the relationship loading discipline in the codebase.
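When a team is nervous about hard exceptions, a middle ground is to keep the guard on everywhere but downgrade violations to log entries in production. A sketch of that setup:

```php
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Log;

// In AppServiceProvider::boot(): keep strict mode on everywhere...
Model::preventLazyLoading();

if (app()->isProduction()) {
    // ...but in production, report violations instead of throwing,
    // so real traffic surfaces N+1s without breaking pages.
    Model::handleLazyLoadingViolationUsing(function ($model, $relation) {
        Log::warning('Lazy loading violation', [
            'model' => get_class($model),
            'relation' => $relation,
        ]);
    });
}
```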

While I'm looking at models, I search for `$guarded = []`. Disabling mass assignment protection is a security hole that takes five minutes to fix but can take months to discover through a data breach.
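The fix is an explicit allow-list. A sketch with a hypothetical `Order` model:

```php
use Illuminate\Database\Eloquent\Model;

class Order extends Model
{
    // Only these columns accept mass assignment. Everything else
    // (user_id, total, ...) must be set deliberately in code.
    protected $fillable = ['status', 'notes'];
}
```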

## The Route File

```bash
php artisan route:list
```

What I'm checking:

**Route count.** A 500-route application probably needs to be split into modules or at least organized into route groups with clear boundaries.

**Missing middleware.** Specifically, routes that accept POST, PUT, or DELETE requests without authentication. I've found admin endpoints with no auth middleware more than once.

**Closures in route files.** Routes defined with closures can't be cached with `php artisan route:cache`. On a production app with hundreds of routes, that's a performance penalty on every single request.
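The fix is mechanical: move each closure body into a controller method. A sketch (`StatusController` is an illustrative name):

```php
// Not cacheable: route:cache fails when any route uses a closure
Route::get('/status', function () {
    return response()->json(['ok' => true]);
});

// Cacheable: same response, moved into a controller method
Route::get('/status', [StatusController::class, 'show']);
```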

I also look for routes that handle sensitive operations without rate limiting:

```php
// No rate limiting on a public endpoint
Route::post('/api/login', [AuthController::class, 'login']);

// Better
Route::post('/api/login', [AuthController::class, 'login'])
    ->middleware('throttle:5,1');
```

A login endpoint without throttling is an open invitation for credential stuffing.

## The Controller Layer

I don't read every controller. I pick the three largest ones by file size and scan them.

Controller size is the single most reliable indicator of overall code quality I've found. If the largest controller is under 100 lines, the architecture is probably solid. If it's 800 lines with database queries, validation logic, business rules, and email notifications all crammed into one method, I know what I'm dealing with.

The contrast looks like this:

```php
// Separation exists
public function store(StoreOrderRequest $request): OrderResource
{
    $order = app(CreateOrder::class)->execute($request->validated());

    return new OrderResource($order);
}
```

```php
// Everything in one place
public function store(Request $request)
{
    $request->validate([/* 30 lines of rules */]);

    $user = auth()->user();
    $order = new Order();
    $order->user_id = $user->id;
    // ... 80 more lines of queries, logic, and notifications

    return response()->json($order);
}
```

Both versions produce the same result. But the second one is a maintenance burden that compounds with every feature addition. And returning a raw model instead of an API Resource means the API response is coupled to the database schema. Add a column, and your API contract changes silently.
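An API Resource makes that contract explicit. A minimal sketch of what an `OrderResource` might look like (field names are illustrative):

```php
use Illuminate\Http\Resources\Json\JsonResource;

class OrderResource extends JsonResource
{
    // Only these fields leave the application; adding a database
    // column changes nothing here until someone opts in.
    public function toArray($request): array
    {
        return [
            'id' => $this->id,
            'status' => $this->status,
            'total' => $this->total,
        ];
    }
}
```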

## The Test Suite

```bash
php artisan test
```

Three possible outcomes:

1. **Tests pass.** Good starting point.
2. **Tests fail.** The codebase has drifted from its test suite, meaning tests aren't part of the regular workflow.
3. **No tests exist.** Honestly, the most common result.

When tests do exist, I check what they actually verify. A suite full of `$response->assertOk()` without checking response bodies creates false confidence. Tests that only assert HTTP status codes catch less than you'd expect.
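The difference in what a test actually pins down, sketched against a hypothetical orders endpoint:

```php
// Weak: passes as long as *something* returns a 200
$this->getJson('/api/orders/1')->assertOk();

// Stronger: pins both the shape and the values of the response
$this->getJson('/api/orders/1')
    ->assertOk()
    ->assertJsonPath('data.status', 'paid')
    ->assertJsonStructure(['data' => ['id', 'status', 'total']]);
```

The second test fails the moment a field is renamed, removed, or silently changes value; the first keeps passing.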

The ratio between feature tests and unit tests tells a story too. An application with 200 feature tests and zero unit tests means the team routes everything through the HTTP layer. That approach works for smaller codebases but becomes painfully slow as the application grows.

## Error Handling

I look at `bootstrap/app.php` (Laravel 11+) or `app/Exceptions/Handler.php` (older versions).

What I hope to find:

- Exceptions reported to an external service: Laravel Nightwatch, Sentry, Flare, Bugsnag, something.
- Custom rendering for API routes (structured JSON errors, not HTML 500 pages).
- Specific exception types handled intentionally, not a blanket catch-all.

What I usually find: the default handler, completely untouched. Production errors go to `storage/logs/laravel.log`, and nobody checks it until a customer emails support. By then, the context that would have helped debug the issue has been buried under thousands of unrelated log lines.
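What the API-rendering piece looks like in the Laravel 11+ style, as a sketch (the error payload shape is illustrative):

```php
// bootstrap/app.php (Laravel 11+) -- fragment of the configure() chain
use Illuminate\Foundation\Configuration\Exceptions;
use Symfony\Component\HttpKernel\Exception\NotFoundHttpException;

->withExceptions(function (Exceptions $exceptions) {
    // Structured JSON for API consumers instead of an HTML 500 page
    $exceptions->render(function (NotFoundHttpException $e, $request) {
        if ($request->is('api/*')) {
            return response()->json(['message' => 'Resource not found.'], 404);
        }
    });
})
```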

## The Deployment Setup

I check for evidence that deployments are automated and optimized:

- A `Dockerfile`, CI/CD pipeline, or deployment tool configuration (Forge, Envoyer, Ploi, GitHub Actions).
- Whether `config:cache`, `route:cache`, and `view:cache` run during deployment.
- Queue worker configuration (Supervisor, systemd, or Horizon).
- Scheduled commands and whether a cron entry exists to trigger them.

Missing cache commands means every single request parses config files, route definitions, and Blade templates from disk. On an application handling real traffic, that overhead adds up fast.
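The deployment tail I expect to find somewhere, whether in a Forge script, a GitHub Actions step, or a plain shell script. A sketch:

```bash
# After code checkout and `composer install --no-dev` finish:
php artisan config:cache    # env() now resolves only inside config files
php artisan route:cache     # fails loudly if any route still uses a closure
php artisan view:cache      # precompile Blade templates
php artisan migrate --force # run pending migrations non-interactively
php artisan queue:restart   # workers pick up the new code on the next job
```

The order matters less than the presence: if none of these appear anywhere in the deployment story, nothing is cached in production.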

## Red Flags That Predict Everything

After doing this enough times, certain patterns become predictive:

**Environment files committed to the repository.** Files like `.env.production` or `.env.staging` in version control mean secrets are in git history. Even after deletion, `git log` remembers.

**A multi-gigabyte `storage/logs` directory.** Nobody is watching the logs. Log rotation was never configured: Laravel ships a `daily` driver, but the default channel was left writing to a single file since the first deployment.
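The fix is a channel switch plus a retention cap. A sketch of the relevant `config/logging.php` fragment (14 days is an arbitrary choice):

```php
// config/logging.php -- fragment
'default' => env('LOG_CHANNEL', 'daily'),

'channels' => [
    'daily' => [
        'driver' => 'daily',
        'path' => storage_path('logs/laravel.log'),
        'level' => env('LOG_LEVEL', 'debug'),
        'days' => 14, // retention cap; pick what fits your disk
    ],
],
```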

**The `vendor/` directory committed to git.** The repository is bloated, dependency updates are painful, and the team probably had a bad CI experience once and decided to commit everything. Fixable, but it points to a missing or broken deployment pipeline.

**Uploaded files living in `public/`.** No object storage, no CDN. User uploads served from the same disk as the application code. One traffic spike and the server runs out of space.

**Raw SQL queries in controllers.** Usually this means someone decided Eloquent was "too slow" for a query that just needed an index or proper eager loading. The raw queries then bypass model events, scopes, and soft deletes, creating data inconsistencies that show up weeks later.

## What I Skip

Code style. Formatting. Tabs versus spaces. PSR-12 compliance. I skip all of this in the first 30 minutes. Laravel Pint fixes it with one command. Debating style before addressing architecture is the wrong priority.

I also skip reading individual business logic until the structural checks are done. Understanding what the app does comes second. Understanding how it's built comes first.

A well-structured application with wrong business logic is fixable. Spaghetti code that happens to produce correct results is a rewrite waiting to happen.

## After the 30 Minutes

By this point, the picture is clear. I can tell a client:

- Their application is solid and needs targeted improvements (rare but satisfying).
- There are specific problems with a clear path to fix them (the most common outcome).
- The architecture needs significant work before new features are safe to build on top of.
- Starting fresh would genuinely be faster than untangling what exists (rare, but I've said it when it was true).

None of this required reading thousands of lines of source code. Structural checks reveal how an application was built, how it's maintained, and how it behaves under pressure. The line-by-line code review comes later, once the foundation is understood.

If you're inheriting a codebase or evaluating a contractor's work, try this sequence before diving into source files. Thirty minutes of structural checks will save you weeks of surprises.