Blog · April 20, 2025

Mohamed Elbarry
Testing Strategies
I spent a full afternoon chasing a flaky test that passed locally and failed in CI. The cause wasn't timing or randomness: we were asserting on implementation details, and a refactor had changed the DOM. That's when I started taking the pyramid and test design seriously. Here's how I think about it now. On Lumin Search we used this as a gate so broken builds never reached production.

The idea's been around forever (Martin Fowler's test pyramid bliki is a good read if you want the backstory). A common rule of thumb is 70% unit, 20% integration, 10% E2E. That's not gospel, but it reflects the trade-off: unit tests are fast and cheap, while E2E catches real flows but is slow and brittle. A lot of teams end up with a "diamond" or an ice-cream cone (too many E2E tests, not enough unit tests) because E2E feels like "testing like a user"; the result is flaky CI and long feedback loops. I aim for unit tests in the tens of milliseconds so I can run them on every save. The Practical Test Pyramid goes deeper with examples.

Unit tests should exercise one behavior, run in isolation, and never touch the network or the database. If a test is slow or flaky, I treat that as a smell. I use Vitest and Testing Library. Here's an example for a service that talks to an API we don't want to hit in unit tests:
Ts
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { UserService } from '../services/UserService';

// Mock the API boundary so the unit test never touches the network
vi.mock('../api/userApi', () => ({
  fetchUser: vi.fn(),
  updateUser: vi.fn()
}));

describe('UserService', () => {
  let userService: UserService;

  beforeEach(() => {
    userService = new UserService();
  });

  describe('createUser', () => {
    it('should create a user with valid data', async () => {
      const userData = {
        name: 'John Doe',
        email: 'john@example.com'
      };

      const result = await userService.createUser(userData);

      expect(result).toMatchObject({
        id: expect.any(String),
        name: userData.name,
        email: userData.email
      });
    });
  });
});
Test what the user sees and does, not class names or internal state. That way refactors don’t break tests for no reason.
Tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent } from '@testing-library/react';
import { UserCard } from '../components/UserCard';

describe('UserCard', () => {
  const mockUser = {
    id: '1',
    name: 'John Doe',
    email: 'john@example.com',
    avatar: 'https://example.com/avatar.webp'
  };

  it('should render user information', () => {
    render(<UserCard user={mockUser} />);

    expect(screen.getByText('John Doe')).toBeInTheDocument();
    expect(screen.getByText('john@example.com')).toBeInTheDocument();
  });

  it('should call onEdit when edit button is clicked', () => {
    const mockOnEdit = vi.fn();
    render(<UserCard user={mockUser} onEdit={mockOnEdit} />);

    fireEvent.click(screen.getByRole('button', { name: /edit/i }));

    expect(mockOnEdit).toHaveBeenCalledWith(mockUser);
  });
});
Integration tests hit the real DB or a test DB and check that layers work together. I keep the count low and the scenarios high-value (e.g. “create user then fetch by id”) so the suite stays under a few minutes.
Tsx
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import request from 'supertest';
import app from '../app';
// These DB helpers are assumed; adjust the import to your own test setup
import { setupTestDatabase, cleanupTestDatabase } from '../test/db';

describe('User API Integration', () => {
  beforeEach(async () => {
    await setupTestDatabase();
  });

  afterEach(async () => {
    await cleanupTestDatabase();
  });

  describe('POST /api/users', () => {
    it('should create a new user', async () => {
      const userData = {
        name: 'John Doe',
        email: 'john@example.com',
        password: 'password123'
      };

      const response = await request(app)
        .post('/api/users')
        .send(userData)
        .expect(201);

      expect(response.body).toMatchObject({
        id: expect.any(String),
        name: userData.name,
        email: userData.email
      });
      expect(response.body.password).toBeUndefined();
    });
  });
});
E2E runs in a real browser and catches UI and flow bugs that unit and integration tests miss. The downside: they’re slower and more brittle. I use them for a small set of critical paths (login, checkout, key settings) and run them in CI, not on every commit. Playwright’s docs on writing tests are solid if you’re setting this up.
Ts
import { test, expect } from '@playwright/test';

test.describe('User Management', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('/login');
    await page.fill('[data-testid="email"]', 'admin@example.com');
    await page.fill('[data-testid="password"]', 'password123');
    await page.click('[data-testid="login-button"]');
    await expect(page).toHaveURL('/dashboard');
  });

  test('should create a new user', async ({ page }) => {
    await page.goto('/users');

    await page.click('[data-testid="create-user-button"]');

    await page.fill('[data-testid="user-name"]', 'John Doe');
    await page.fill('[data-testid="user-email"]', 'john@example.com');
    await page.fill('[data-testid="user-password"]', 'password123');

    await page.click('[data-testid="submit-button"]');

    await expect(page.locator('[data-testid="user-list"]')).toContainText('John Doe');
  });
});
Write the test first, see it fail, then implement. I don’t do it for every line, but for non-trivial logic (validation, calculations) it keeps the API clear and avoids over-testing after the fact.
Ts
// 1. Write a failing test (UserService is the class implemented in step 2)
import { describe, it, expect } from 'vitest';

describe('UserService', () => {
  describe('validateEmail', () => {
    it('should return true for valid email', () => {
      const userService = new UserService();
      expect(userService.validateEmail('test@example.com')).toBe(true);
    });

    it('should return false for invalid email', () => {
      const userService = new UserService();
      expect(userService.validateEmail('invalid-email')).toBe(false);
    });
  });
});

// 2. Write minimal code to make test pass
export class UserService {
  validateEmail(email: string): boolean {
    const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
    return emailRegex.test(email);
  }
}

// 3. Refactor if needed; 4. Repeat
Mock external calls so unit tests don’t depend on APIs or DBs. Too many mocks and the test becomes a mirror of the implementation; I mock at boundaries (HTTP, DB) and keep the rest real.
Ts
import { describe, it, expect, vi } from 'vitest';
// vi.mock calls are hoisted, so this import receives the mocked module
import { fetchUser } from '../services/apiService';
import { UserService } from '../services/UserService';

vi.mock('../services/apiService', () => ({
  fetchUser: vi.fn(),
  updateUser: vi.fn()
}));

vi.mock('../database/connection', () => ({
  query: vi.fn()
}));

describe('UserService with mocks', () => {
  it('should fetch user from API', async () => {
    const mockFetchUser = vi.mocked(fetchUser);
    mockFetchUser.mockResolvedValue({
      id: '1',
      name: 'John Doe',
      email: 'john@example.com'
    });

    const userService = new UserService();
    const user = await userService.getUser('1');

    expect(mockFetchUser).toHaveBeenCalledWith('1');
    expect(user).toMatchObject({
      id: '1',
      name: 'John Doe',
      email: 'john@example.com'
    });
  });
});
For APIs that need to hold load, I run Artillery (or similar) against a staging environment. I’ve found that targeting a concrete number (e.g. p99 under 200ms for a given endpoint) makes the results actionable.
Yaml
# Load testing with Artillery
config:
  target: 'http://localhost:3000' # point this at staging for real runs
  phases:
    - duration: 60
      arrivalRate: 10
    - duration: 120
      arrivalRate: 20

scenarios:
  - name: "User API Load Test"
    weight: 70
    flow:
      - post:
          url: "/api/users"
          json:
            name: "Test User {{ $randomString() }}"
            email: "test{{ $randomString() }}@example.com"
            password: "password123"
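That target can live in the config itself, so the run fails loudly instead of producing numbers someone has to interpret. A sketch, assuming Artillery's `ensure` plugin; the plugin syntax and metric names vary between Artillery versions, so check the docs for yours:

```yaml
# Assumed Artillery v2 ensure syntax: fail the run when a threshold is breached
config:
  target: 'http://localhost:3000'
  plugins:
    ensure: {}
  ensure:
    thresholds:
      # Exit non-zero if p99 response time exceeds 200ms
      - http.response_time.p99: 200
```

With this in place, a CI job running Artillery turns "p99 under 200ms" from a chart someone eyeballs into a pass/fail gate.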
Test behavior, not implementation, so refactors don't break tests for no reason. Keeping each test focused on one assertion, where practical, makes it easier to see what broke. Descriptive test names that read like a sentence ("should return 401 when token is expired") help a lot. And keep the unit suite fast so it's cheap to run often; save the slow stuff for CI.

If you only do one thing this week: find one test that's testing implementation details and rewrite it to assert on observable behavior instead. The Vitest docs and Testing Library are good references.

Hope that helps. I'm currently looking for new challenges in the AI and Full Stack space. If you're building something interesting, let's chat.
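To make "behavior, not implementation" concrete, here's a minimal sketch using a hypothetical `isSessionValid` helper (not from any real codebase). The assertions only touch the observable return value, and each check's message reads like a sentence:

```typescript
// Hypothetical helper: is a session still valid at time `now` (epoch ms)?
type Session = { token: string; expiresAt: number };

function isSessionValid(session: Session, now: number): boolean {
  return session.token.length > 0 && session.expiresAt > now;
}

const now = 1_700_000_000_000;

// Behavior-focused checks: assert on the return value, not on how it's computed
console.assert(
  isSessionValid({ token: 'abc', expiresAt: now + 60_000 }, now) === true,
  'should accept a session that expires in the future'
);
console.assert(
  isSessionValid({ token: 'abc', expiresAt: now - 1 }, now) === false,
  'should reject a session that has already expired'
);
```

If `isSessionValid` later switched from comparing timestamps to decoding an expiry claim out of the token, these checks would keep passing as long as the behavior held; a test that asserted on the internal date math would break for no user-visible reason.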