Best practices for building major applications (2026)
Consolidated guidance for this organization: timeless engineering principles plus practices that work well with AI-assisted coding. Use this page for onboarding; record binding choices in ADRs under docs/adrs/.
Core engineering principles
- KISS — Simplicity matters more when tools can generate complexity quickly. Simple code is easier for humans and assistants to reason about, debug, and change.
- DRY (pragmatic) — Avoid premature abstraction. Rule of three: tolerate duplication twice, abstract on the third occurrence (see the sketch after this list).
- YAGNI — Do not build for hypothetical futures. Prefer adding features when needs are concrete.
- SOLID — Still the backbone of maintainable object-oriented design.
- Separation of concerns — Clear boundaries between presentation, business logic, and data access make refactors safer.
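To make the rule of three concrete, the sketch below extracts a shared helper only once the same formatting logic is needed in a third place; the Person type and formatDisplayName helper are invented for illustration, not real project code.

```ts
// Hypothetical example: "Last, First" formatting had already been duplicated in
// two places; on the third occurrence we extract a shared helper instead of
// copying the logic again. All names here are illustrative.
interface Person {
  firstName: string;
  lastName: string;
}

// The shared abstraction, introduced only at the third duplication.
export function formatDisplayName(person: Person): string {
  return `${person.lastName}, ${person.firstName}`;
}

// Call sites that previously each carried their own copy of the logic.
console.log(formatDisplayName({ firstName: 'Ada', lastName: 'Lovelace' }));
console.log(formatDisplayName({ firstName: 'Alan', lastName: 'Turing' }));
```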
Architecture decisions
- Modular monolith first — Operational cost of microservices is rarely justified before product–market fit; extract services when scaling, team, or deployment boundaries are clear (see ADR: Modular monolith first).
- Document ADRs — Short Markdown records in apps/docs/docs/adrs/ capture why a decision was made; assistants and reviewers use them for consistency.
- Observability from day one — Structured logging, metrics, and distributed tracing (OpenTelemetry as default); see the sketch after this list and ADR: Observability.
- Boring technology for the core — Postgres-class databases, mainstream languages, well-supported frameworks; reserve novelty for differentiators.
- API-first design — Even internal services benefit from explicit contracts (OpenAPI, gRPC, GraphQL); see ADR: API-first.
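As a sketch of what day-one tracing can look like, the snippet below assumes a Node.js service using the OpenTelemetry NodeSDK with auto-instrumentation; the service name and collector endpoint are placeholders, and the binding choices live in ADR: Observability.

```ts
// Minimal tracing bootstrap, assuming @opentelemetry/sdk-node and
// @opentelemetry/auto-instrumentations-node are installed. Load this file
// before the rest of the app starts.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'checkout-service', // placeholder service name
  traceExporter: new OTLPTraceExporter({
    url: 'http://otel-collector:4318/v1/traces', // placeholder collector endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Flush remaining spans on shutdown so the last requests are not lost.
process.on('SIGTERM', () => {
  sdk.shutdown().finally(() => process.exit(0));
});
```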
AI-assisted coding
- Treat AI output as a junior engineer’s PR, not finished code—especially for security, edge cases, and architectural fit.
- Maintain repo context via a root AGENTS.md and .cursor/rules/ so assistants align with conventions.
- Tests as specification — Tests should be human-written or human-reviewed; AI-generated tests alone do not validate AI-generated behavior (see the example after this list).
- Use AI for the boring 80%; reserve humans for core business logic, security-sensitive paths, and architectural judgment.
- Small, reviewable diffs — Large unreviewed AI patches are a leading source of regressions.
- Pin dependencies and verify package names before install (slopsquatting / hallucinated package names are a real risk).
- Do not paste secrets, customer data, or proprietary algorithms into third-party AI unless data handling is approved.
- Review AI-generated SQL, regex, and security-sensitive code with extra scrutiny.
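As an example of tests as specification, the snippet below is a human-written spec in a Vitest-style API; applyDiscount and its rules are hypothetical, and the point is that a person states the expected behavior before any generated implementation is accepted.

```ts
// Hypothetical spec for a discount rule, written (or at least reviewed) by a
// human before an AI-generated implementation is merged. Vitest-style API
// assumed; adapt to the repo's actual test runner.
import { describe, it, expect } from 'vitest';
import { applyDiscount } from './pricing'; // hypothetical module under test

describe('applyDiscount', () => {
  it('applies a 10% discount to a 100-unit order', () => {
    expect(applyDiscount(100, 0.1)).toBe(90);
  });

  it('never returns a negative price, even for oversized discounts', () => {
    expect(applyDiscount(50, 1.5)).toBe(0);
  });
});
```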
Code quality and process
- Continuous integration with strong gates: lint, type-check, tests, security scanning, dependency audits—fail fast (see ADR: CI quality gates).
- Type systems where practical—types are executable documentation that humans and assistants alike rely on (see the example after this list).
- Code reviews remain mandatory to catch hallucinations, plausible-wrong logic, and pattern drift.
- Trunk-based development with feature flags where appropriate; long-lived branches plus heavy AI churn increase merge pain.
- Conventional commits and semantic versioning improve history clarity for humans and tools.
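A small illustration of types as executable documentation; the RefundRequest and RefundResult shapes are hypothetical, but they show how a reviewer or an assistant can read the contract directly from the signature instead of inferring it from call sites.

```ts
// Hypothetical domain types: the signature documents what a refund needs and
// what it can return, without digging through usage.
type Cents = number; // money handled in integer cents, never floats

interface RefundRequest {
  orderId: string;
  amount: Cents;
  reason: 'damaged' | 'not_delivered' | 'customer_request';
}

type RefundResult =
  | { status: 'refunded'; amount: Cents }
  | { status: 'rejected'; message: string };

export function refund(request: RefundRequest): RefundResult {
  if (request.amount <= 0) {
    return { status: 'rejected', message: 'Refund amount must be positive' };
  }
  // A real implementation would call the payment provider here.
  return { status: 'refunded', amount: request.amount };
}
```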
Security
- Shift-left — Threat modeling during design; SAST in CI; dependency scanning (Dependabot, Snyk, or similar); secrets scanning on commits.
- Least privilege for IAM, databases, API keys, and agent credentials.
- Do not rely on obscurity—assume training data and prompts can leak.
- Supply chain — Commit lockfiles; consider SBOM requirements for regulated environments.
- Treat AI agents like contractors: scoped credentials, audited actions, no casual production access.
Data and state
- The database is the most precious asset—keep migrations reversible where possible, human-reviewed, and tested against realistic data; never run unreviewed migrations from an agent (see the sketch after this list).
- Backup and restore — Test restores on a schedule; untested backups do not exist.
- PII — Minimize data; GDPR, CCPA, and state laws make minimization a legal baseline.
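A sketch of a reversible migration, assuming a Knex-style up/down API; the table and column names are placeholders, and the actual tool and review process should follow the team's migration conventions.

```ts
// Hypothetical reversible migration in a Knex-style API. Both directions are
// explicit so a bad deploy can be rolled back; review and test against
// realistic data before running it anywhere near production.
import type { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable('users', (table) => {
    table.string('locale').notNullable().defaultTo('en'); // placeholder column
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable('users', (table) => {
    table.dropColumn('locale');
  });
}
```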
Operational excellence
- Infrastructure as code for production—avoid one-off console changes.
- Progressive delivery — Feature flags, canaries, blue-green; fast rollback is mandatory.
- SLOs over informal SLAs — Define measurable “good enough”; use error budgets to balance features vs reliability (see the worked example after this list).
- Runbooks — Keep on-call docs current; draft postmortems blamelessly and share learnings.
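A worked example of turning an SLO into an error budget; the 99.9% target and 30-day window are illustrative numbers, not a commitment.

```ts
// Illustrative error-budget arithmetic: a 99.9% availability SLO over a
// 30-day window allows roughly 43 minutes of downtime. Numbers are examples,
// not policy.
const sloTarget = 0.999;            // 99.9% availability
const windowMinutes = 30 * 24 * 60; // 30-day rolling window = 43,200 minutes

const errorBudgetMinutes = windowMinutes * (1 - sloTarget); // ~43.2 minutes

// If incidents have already consumed part of the budget, the remainder tells
// you whether to keep shipping features or prioritize reliability work.
const consumedMinutes = 12; // example: downtime observed so far this window
const remainingMinutes = errorBudgetMinutes - consumedMinutes;

console.log(`Budget: ${errorBudgetMinutes.toFixed(1)} min, remaining: ${remainingMinutes.toFixed(1)} min`);
```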
Team and process
- Prefer small teams owning vertical slices — Conway’s Law applies whether you plan for it or not.
- Document why, not what—the code shows what; ADRs and focused comments show intent.
- Onboarding should work for humans and assistants via the README, AGENTS.md, and this site.
Anti-patterns (2026)
- Auto-merging AI output without human review.
- Shipping large volumes of code nobody understands or can maintain.
- Skipping tests because “the model looked right.”
- Outsourcing architecture without business context.
- Accumulating AI debt—opaque code with no local ownership.
- Using AI to skip fundamentals—you still need judgment to evaluate output.