Backend Engineering
Interview Preparation

Questions are easy to memorise and forget. This section is built differently — every topic follows a logical mental model so the answer becomes something you reason to, not something you recall. Understand the chain, and you never blank on an answer again.

How to Use This Section

Most interview prep fails because it treats questions as isolated facts to memorise. You memorise 200 answers, panic on question 201, and forget half of the 200 under pressure. This section takes a different approach: every topic is anchored to a mental model — a logical chain that, once understood, regenerates the answer from first principles.

The Mental Model Approach
❌ Memorisation Approach
"ACID means Atomicity, Consistency, Isolation, Durability"

You remember the acronym. But when asked "why does Atomicity matter for payment systems?" you freeze. The fact exists in isolation.
✓ Mental Model Approach
"A transaction either fully succeeds or fully undoes itself. Think: bank transfer. If debit succeeds but credit fails, the money vanishes. Atomicity prevents that."

You now own the answer, can apply it to any scenario, and cannot forget it.
Interview Round Types & What Each Tests
💻 Coding Round: Data structures, algorithms, Java fundamentals. LeetCode Medium.
⚙️ System Design: Design a URL shortener, order system, notification platform. Architecture thinking.
🔬 Technical Deep-Dive: Spring internals, JVM, transactions, concurrency, database internals.
🐝 Code Review: Find bugs in provided code. Identify race conditions, N+1 queries, security holes.
👤 Behavioural: Past experience, technical decisions, team conflict, production incidents.

Topic 1 — Spring Core & Internals

The mental model: Spring is a container that owns object lifetimes and wires them together. Every Spring question is an extension of this idea. If you understand how beans are created, proxied, and destroyed, you can reason through any Spring question.

Spring Mental Model Chain
You write a class (OrderService.java) → Spring scans it (creates a BeanDefinition) → instantiates it (new OrderService(repo)) → wraps it in a proxy (CGLIB, for @Transactional) → you get the proxy (not the real object)
Q: What is the difference between @Component, @Service, @Repository, and @Controller?
Chain: All four are stereotypes of @Component — they all trigger component scanning and register a bean. The difference is intent and behaviour. @Repository additionally enables Spring’s persistence exception translation (converts JDBC/JPA exceptions to Spring’s DataAccessException hierarchy). @Controller enables request mapping resolution. @Service and @Component are functionally identical — @Service is just documentation that the class is a business service. Interview tip: Mention exception translation for @Repository — most candidates miss this.
Q: What is the difference between @Bean and @Component?
Chain: Both register beans, but the control level differs. @Component is class-level — Spring instantiates the class via its constructor. You write the class. @Bean is method-level on a @Configuration class — you write the instantiation logic yourself. Use @Bean when: the class comes from a third-party library you can’t annotate, you need to configure the object with multiple parameters, or you need conditional creation logic. Use @Component for classes you own.
Q: Explain bean scopes. When would you use prototype scope?
Chain: Scope defines how many instances exist and for how long. Singleton (default): one instance per ApplicationContext, shared by all users — safe only if stateless. Prototype: new instance every time getBean() is called. Request: one per HTTP request. Session: one per HTTP session. Use prototype when the bean maintains mutable state that must not be shared: a stateful command object, a non-thread-safe parser, or a builder accumulating data. The critical gotcha: a prototype bean injected into a singleton is only created once (at singleton creation) — use ObjectProvider<T> to get a fresh prototype each time.
Q: Why does @Transactional on a private method silently fail?
Chain: @Transactional works via a CGLIB proxy (a subclass of your bean). Subclasses can only override public and protected methods. A private method cannot be overridden, so the proxy never intercepts the call. Spring doesn’t throw an error — it simply creates the proxy without overriding the method, and your annotation is silently ignored. Same applies to final methods. Fix: Make the method public. If the method should not be part of the public API, this is also a hint that you should extract it to a separate package-private service bean.
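The proxy chain above can be sketched in plain Java with a JDK dynamic proxy (a simplified stand-in for Spring's CGLIB subclass proxy; PaymentService and the method names are illustrative). The proxy intercepts external calls, but an internal this-call never reaches it, which is the same mechanism behind both the private-method and self-invocation pitfalls:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

interface PaymentService {
    String pay();
    String audit();
}

class PaymentServiceImpl implements PaymentService {
    public String pay() {
        // Internal call: goes straight to `this`, never through the proxy,
        // so any proxy-applied behaviour (like @Transactional) is skipped.
        return "pay->" + audit();
    }
    public String audit() { return "audited"; }
}

public class ProxyDemo {
    static final List<String> intercepted = new ArrayList<>();

    static PaymentService proxyFor(PaymentService target) {
        return (PaymentService) Proxy.newProxyInstance(
            PaymentService.class.getClassLoader(),
            new Class<?>[] { PaymentService.class },
            (Object p, Method m, Object[] a) -> {
                intercepted.add(m.getName()); // "advice" (e.g. open a transaction) runs here
                return m.invoke(target, a);
            });
    }

    public static void main(String[] args) {
        PaymentService proxy = proxyFor(new PaymentServiceImpl());
        proxy.pay();
        // Only the external call was intercepted; the internal audit() was not:
        System.out.println(intercepted); // [pay]
    }
}
```

The same reasoning explains why a @Transactional method called from another method of the same bean runs without a transaction: the inner call bypasses the proxy entirely.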

Topic 2 — Database & JPA

The mental model: JPA is a translation layer between Java objects and relational rows. Every JPA question is about the cost of that translation, its failure modes, and when to bypass it entirely.

ACID — The Mental Anchor
A · Atomicity: All or nothing. Bank transfer: debit AND credit must both succeed, or neither happens.
C · Consistency: DB constraints always hold. Foreign keys, unique constraints, check constraints are never violated.
I · Isolation: Concurrent transactions don’t see each other’s in-progress changes. Levels: READ COMMITTED, REPEATABLE READ, SERIALIZABLE.
D · Durability: Once committed, data survives crashes. Achieved via the write-ahead log (WAL) in PostgreSQL.
Q: What is the N+1 query problem? How do you detect and fix it?
Chain: You load N entities, then JPA fires one query per entity to load a lazy association — totalling N+1 queries. Example: load 100 orders, then access order.getCustomer() for each — 101 queries instead of 1. Detect: Enable SQL logging (spring.jpa.show-sql=true) or use Hibernate Statistics. Look for repetitive queries with only the ID changing. In tests, use the @QueryCount assertion library or assert that Hibernate’s statistics show the expected query count. Fix: Use JPQL JOIN FETCH: SELECT o FROM Order o JOIN FETCH o.customer; or use @EntityGraph on the repository method; or use a DTO projection with a single query. For collections (one-to-many), use @BatchSize(size=25) as a middle ground — reduces N+1 to N/25+1.
Q: What are isolation levels and what anomalies does each prevent?
Chain — easiest to remember as a ladder: Each level adds one more protection. READ UNCOMMITTED: you can read rows another transaction hasn’t committed yet (dirty read). Almost never used. READ COMMITTED (PostgreSQL default): prevents dirty reads — you only see committed data. But if you read the same row twice in one transaction, another transaction might commit between reads (non-repeatable read). REPEATABLE READ: prevents non-repeatable reads — your transaction sees a consistent snapshot. But a new row inserted by another transaction may appear (phantom read). SERIALIZABLE: prevents all anomalies — transactions execute as if they ran one at a time. Maximum correctness, maximum locking cost. For payment systems: use REPEATABLE READ or SERIALIZABLE when reading then writing the same data (check balance then debit).
Q: What is the difference between optimistic and pessimistic locking? When do you use each?
Chain — think about where you pay the cost: Pessimistic locking (SELECT FOR UPDATE) pays the cost upfront — it locks the row immediately, blocks other transactions from reading or writing it. Use when conflicts are frequent (multiple users trying to book the last seat simultaneously) or when the cost of a failed transaction is high (payment, inventory deduction). Optimistic locking (@Version column) pays no upfront cost — it checks at commit time whether anyone else modified the row since you read it. If yes, throws OptimisticLockException. Use when conflicts are rare (user editing their own profile — unlikely two tabs update simultaneously). The tradeoff: pessimistic = guaranteed success but lower throughput; optimistic = high throughput but requires retry logic on conflict.
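A minimal in-memory sketch of the optimistic check, assuming an AtomicReference stands in for the row and compareAndSet plays the role of JPA's "UPDATE ... WHERE id = ? AND version = ?" (all names illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// In-memory stand-in for a row with a @Version column.
public class OptimisticDemo {
    record VersionedRow(int version, int balance) {}

    static final AtomicReference<VersionedRow> row =
        new AtomicReference<>(new VersionedRow(0, 100));

    /** Returns true if the commit succeeded, false on a version conflict. */
    static boolean withdraw(VersionedRow snapshot, int amount) {
        VersionedRow updated = new VersionedRow(snapshot.version() + 1,
                                                snapshot.balance() - amount);
        // Fails if another transaction has already bumped the version:
        return row.compareAndSet(snapshot, updated);
    }

    public static void main(String[] args) {
        VersionedRow txA = row.get(); // both "transactions" read version 0
        VersionedRow txB = row.get();
        boolean aCommitted = withdraw(txA, 30); // wins: version 0 -> 1
        boolean bCommitted = withdraw(txB, 30); // loses: row is no longer at version 0
        System.out.println(aCommitted + " " + bCommitted
            + " balance=" + row.get().balance()); // true false balance=70
    }
}
```

The losing side is exactly where JPA throws OptimisticLockException, and where your retry logic belongs.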
Q: When would you NOT use JPA/Hibernate?
Chain — JPA shines for CRUD, breaks for bulk operations: Avoid JPA when: (1) Bulk updates/deletes: UPDATE orders SET status='CANCELLED' WHERE created_at < '2020-01-01' in JPA loads every row into memory as an entity, then updates/deletes each one. Use @Modifying @Query with JPQL or native SQL instead. (2) Complex reporting queries — multi-join aggregations with GROUP BY, HAVING, and window functions are unnatural in JPQL. Use native SQL or a reporting tool. (3) High-write throughput — JPA’s entity change detection (dirty checking) adds overhead on every flush. Use JDBC batch inserts or Spring Data JDBC. (4) Non-relational data — graph relationships, document storage, time-series data — use the appropriate database and driver directly.

Topic 3 — Concurrency & Thread Safety

The mental model: concurrency bugs only happen when shared mutable state is accessed without coordination. If state is immutable, or if it’s not shared, there is no bug. Every concurrency question reduces to: what is shared, what is mutable, and who coordinates access?

Concurrency Decision Tree
Is the data shared between threads?
↳ NO → No concurrency concern. Local variables, method params — thread-safe by definition.
↳ YES ↓
Is it mutable?
↳ NO → Immutable shared state is safe. final fields, records, String — no coordination needed.
↳ YES ↓
Is it a single variable?
↳ YES → Use AtomicInteger / AtomicReference / volatile.
↳ NO → Use synchronized block, ReentrantLock, or ConcurrentHashMap. Consider making it immutable.
Q: Are Spring beans thread-safe? What happens if a singleton bean has instance variables?
Chain: Spring beans are not inherently thread-safe. A singleton bean is shared across all requests — if it has mutable instance variables, concurrent requests will race to read/write those variables. A counter field incremented without synchronized or AtomicLong will produce incorrect counts under concurrent access. The correct approach: Singleton beans should be stateless — no mutable instance state. Any state that must exist per-request goes in local method variables (stack-allocated, not shared). If you genuinely need per-request state accessible across layers, use a ThreadLocal (MDC uses this) but always clear it in a finally block. @RequestScope beans are also thread-safe since each request gets its own instance.
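The "counter field in a singleton" failure is easy to demonstrate in plain Java; here the static fields stand in for instance fields on a shared singleton bean:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterRace {
    static int unsafe = 0;                       // like a mutable field on a singleton bean
    static final AtomicLong safe = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafe++;                        // read-modify-write: not atomic, updates get lost
                safe.incrementAndGet();          // atomic compare-and-swap
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unsafe=" + unsafe + " safe=" + safe.get());
        // safe is always 200000; unsafe is usually less, because concurrent
        // increments overwrite each other
    }
}
```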
Q: What is a race condition? Give a real Spring Boot example.
Chain: A race condition occurs when the outcome depends on which thread executes first. Classic example in Spring: a service checks a condition and then acts on it, but another thread changes the condition between the check and the action. Inventory: Thread A reads stock = 5. Thread B reads stock = 5. Thread A decrements to 4 and saves. Thread B also decrements from 5 to 4 and saves — you’ve sold the same item twice. Fix options: (1) Database-level: UPDATE inventory SET stock = stock - 1 WHERE product_id = ? AND stock > 0 — atomic SQL, no race. (2) Pessimistic lock: SELECT FOR UPDATE acquires a row lock before the check. (3) Optimistic lock: @Version column — if two threads update simultaneously, one gets OptimisticLockException and retries. (4) Redis: use DECR command (single-threaded by nature — no race possible).
Q: What is a deadlock? How would you prevent it in a Spring application?
Chain: A deadlock occurs when two transactions each hold a lock the other needs. Transaction A locks row X, then tries to lock row Y. Transaction B locks row Y, then tries to lock row X. Both wait forever. Prevention strategies: (1) Consistent lock ordering — always acquire locks in the same order. If you always lock product before order (never the reverse), circular wait is impossible. (2) Short transactions — hold locks for as little time as possible. Never call external APIs or do slow operations inside a transaction. (3) Timeout — set @Transactional(timeout=5) so a deadlocked transaction gives up after 5 seconds. (4) Detect via PostgreSQL: SELECT * FROM pg_stat_activity WHERE wait_event_type = 'Lock' shows waiting transactions. PostgreSQL detects deadlocks automatically and aborts one transaction, but it’s better to prevent them.
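Strategy (1), consistent lock ordering, can be sketched with ReentrantLock standing in for the row locks a database takes via SELECT FOR UPDATE (names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Always acquire the lower-id lock first, so two transfers between the
// same pair of accounts can never wait on each other in a cycle.
public class LockOrdering {
    static class Account {
        final long id;
        long balance;
        final ReentrantLock lock = new ReentrantLock();
        Account(long id, long balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, long amount) {
        Account first = from.id < to.id ? from : to;   // canonical lock order
        Account second = first == from ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }

    /** Runs 1000 transfers in each direction concurrently; returns final balances. */
    static long[] crossTransfers() throws InterruptedException {
        Account a = new Account(1, 100), b = new Account(2, 100);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return new long[] { a.balance, b.balance };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] balances = crossTransfers();  // opposite directions, yet no deadlock
        System.out.println(balances[0] + " " + balances[1]); // 100 100
    }
}
```

Without the canonical ordering (each thread locking "its" from-account first), the same two threads could deadlock on the very first iteration.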

Topic 4 — REST API Design

The mental model: a good API is a contract. It makes correct usage easy, incorrect usage impossible, and evolution painless. Every API design question asks: does this contract serve clients well under change?

REST API Design Checklist
Resource Design
✓ Nouns, not verbs: /orders not /getOrders
✓ Plural collections: /orders not /order
✓ Nested for owned resources: /orders/{id}/items
✓ Actions as sub-resources: POST /orders/{id}/cancel
✓ Consistent case: kebab-case for URLs
HTTP Semantics
✓ GET: safe & idempotent (no side effects)
✓ POST: create, not idempotent
✓ PUT: full replace, idempotent
✓ PATCH: partial update
✓ DELETE: idempotent (404 is OK on repeat)
Status Codes
✓ 201 Created with Location header on POST
✓ 204 No Content on DELETE
✓ 400 for invalid input (with error detail)
✓ 404 for missing resource (not 400)
✓ 409 for state conflicts (duplicate order)
✓ 422 for semantically invalid input
Resilience
✓ Idempotency-Key header for POST
✓ Pagination on all list endpoints
✓ Rate limiting with 429 + Retry-After
✓ Consistent error body: code, message, details
✓ Versioning strategy before first client
Q: How would you version a REST API? What are the tradeoffs of each approach?
Chain — three strategies, each with a different cost: (1) URI versioning (/api/v1/orders): explicit, easy to route in load balancers, easy to deprecate by removing the route. Cost: clients must update URLs; pollutes the URL with infrastructure concerns. Most widely adopted. (2) Header versioning (Accept: application/vnd.company.v1+json): keeps URLs clean, RESTfully correct. Cost: not visible in browser, harder to test, requires custom content negotiation in Spring (RequestMappingHandlerMapping needs customisation). (3) Query param versioning (/orders?version=1): easiest to implement. Cost: version gets included in cache keys inconsistently; semantically wrong (version is not a filter). Recommendation: URI versioning for public APIs, header versioning for internal microservice APIs where all clients are controlled.
Q: What is idempotency and why does it matter for payment APIs?
Chain: An operation is idempotent if performing it multiple times has the same effect as performing it once. GET is naturally idempotent — reading the same data twice changes nothing. POST is not — submitting a payment twice charges the customer twice. In distributed systems, retries are inevitable: network timeouts, mobile clients reconnecting, load balancer retries. Without idempotency, a retry becomes a double-charge. Implementation: The client generates a UUID Idempotency-Key header. The server stores the response keyed by this UUID. If the same key appears again, return the stored response without re-executing. Key design decisions: (1) store keys in the database in the same transaction as the operation, so a crash can’t cause "key stored but operation not executed"; (2) expire keys after 24-48 hours; (3) validate that the same key isn’t used for different request bodies — reject with 422 if the body changed.
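A toy in-memory version of that flow (a real implementation would persist keys in the same database transaction as the operation, as noted above; the names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// The first request with a key executes the charge; replays with the
// same key get the stored response without re-executing the side effect.
public class IdempotencyDemo {
    static final Map<String, String> responses = new ConcurrentHashMap<>();
    static final AtomicInteger chargesExecuted = new AtomicInteger();

    static String charge(String idempotencyKey, int amountCents) {
        return responses.computeIfAbsent(idempotencyKey, key -> {
            chargesExecuted.incrementAndGet();   // the side effect runs once per key
            return "charged " + amountCents + " (txn for key " + key + ")";
        });
    }

    public static void main(String[] args) {
        String first = charge("key-42", 1999);
        String retry = charge("key-42", 1999);   // a network retry replays the call
        System.out.println(first.equals(retry)
            + " executions=" + chargesExecuted.get()); // true executions=1
    }
}
```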

Topic 5 — Caching

The mental model: a cache is a bet that you’ll read the same data again before it changes. Every caching question is about whether the bet is worth making, and what happens when you lose it.

Cache Strategy Decision Guide
Does the data change frequently?
↳ YES, every request → Don’t cache. Per-user dynamic data, live inventory counts.
↳ NO ↓
Is stale data harmful?
↳ YES (prices, stock levels) → Short TTL (seconds) or event-driven invalidation.
↳ NO (product descriptions, config) → Long TTL (hours/days). Cache-aside or write-through.
Is it shared across users?
↳ YES → Distributed cache (Redis). One cache hit serves all users.
↳ NO → Local cache (Caffeine). No network hop. Per-instance.
Q: What is cache stampede and how do you prevent it?
Chain: When a popular cache entry expires, thousands of concurrent requests all miss the cache simultaneously, all go to the database, and the database collapses under the load. The problem is that cache misses cluster at the exact moment of expiry. Prevention strategies: (1) Probabilistic early expiration: before the TTL fully expires, a small percentage of requests proactively refresh the cache — spreading the refresh load over time rather than letting it all hit at once. (2) Cache locking: only the first miss thread fetches from DB (with a distributed lock); all other threads wait and then serve the newly populated cache entry. (3) Background refresh: always serve from cache (even stale), but trigger an async refresh when the entry is near expiry. (4) Staggered TTLs: add random jitter to TTLs (TTL + random(0, TTL * 0.2)) so entries don’t all expire at the same moment.
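Strategy (2) in miniature: inside a single JVM, ConcurrentHashMap.computeIfAbsent already provides per-key cache locking, so sixteen simultaneous misses produce one database load (a distributed fleet would need a Redis lock instead; names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StampedeDemo {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbCalls = new AtomicInteger();

    static String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            dbCalls.incrementAndGet();       // expensive DB load, serialised per key
            return "value-for-" + k;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        CountDownLatch start = new CountDownLatch(1);
        for (int i = 0; i < 16; i++) {
            pool.submit(() -> { start.await(); return get("hot-key"); });
        }
        start.countDown();                   // release all 16 threads at once
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("dbCalls=" + dbCalls.get()); // dbCalls=1
    }
}
```

The first thread to miss runs the loader; the other fifteen block briefly and then serve the freshly populated entry, which is exactly the cache-locking behaviour described above.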
Q: What is the difference between cache-aside, write-through, and write-behind caching?
Chain — the question is when you write to cache: Cache-aside (most common): application reads from cache; on miss, reads from DB and populates cache. On write, updates DB and invalidates (or updates) cache. Application owns all cache logic. Risk: cache and DB can be inconsistent between the write-to-DB and the cache update. Write-through: every DB write goes through the cache — write to cache, cache writes to DB synchronously. Always consistent. Cost: every write is a cache write (even for data nobody will read), and writes are slower (two hops). Write-behind (write-back): write to cache, return success immediately; cache asynchronously writes to DB. Fastest writes. Risk: if cache node crashes before DB write, data is lost. Use only for non-critical data or when you can accept some loss.

Topic 6 — Security

The mental model: security is about answering three questions: Who are you? What are you allowed to do? Can I trust this request? Authentication answers #1, authorisation answers #2, and everything else (CSRF, CORS, rate limiting) answers #3.

JWT Authentication Flow
CLIENT: POST /auth/login { email, password }
SERVER: Verify credentials → Sign JWT { sub: userId, roles: [...], exp: ... } with secret
CLIENT: Stores JWT. Every request: Authorization: Bearer <token>
SERVER: Validates signature → Checks expiry → Extracts claims → Authorises
KEY POINT: No DB lookup needed for auth — the token is self-contained. Stateless.
Q: What are the tradeoffs of JWT vs session-based authentication?
Chain — stateless vs stateful: JWT: stateless — the server stores nothing. Every request is independently verifiable by signature. Scales horizontally without a shared session store. Weakness: you cannot invalidate a JWT before its expiry. If a user logs out or a token is stolen, you can’t revoke it without building a token blacklist — which reintroduces state. Sessions: stateful — the server stores session data (in Redis for multi-instance). Token revocation is instant (delete the session). Weakness: every instance needs access to the shared session store; one more dependency. Recommendation: JWT with short expiry (15min) + refresh tokens (longer-lived, stored in HttpOnly cookie, checked against a database). This gives stateless access tokens with revocable refresh tokens — the best of both worlds.
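The "validates signature" step can be sketched with the JDK's built-in HMAC support (HmacSHA256 is the HS256 algorithm; this sketch skips the base64url-encoded JSON header and payload of a real JWT and uses placeholder strings instead):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Why a JWT needs no DB lookup: the server re-computes
// HMAC(header.payload, secret) and compares it to the token's signature.
public class JwtSketch {
    static String sign(String headerAndPayload, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] sig = mac.doFinal(headerAndPayload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "server-side-secret".getBytes(StandardCharsets.UTF_8);
        String token = "header.payload" + "." + sign("header.payload", secret);

        String[] parts = token.split("\\.");
        // Untampered token verifies; a modified payload does not:
        boolean valid = sign(parts[0] + "." + parts[1], secret).equals(parts[2]);
        boolean tampered = sign(parts[0] + ".evil", secret).equals(parts[2]);
        System.out.println(valid + " " + tampered); // true false
    }
}
```

Note what this also shows: anyone can read the payload (it is only encoded, not encrypted); the secret is needed only to forge or verify the signature.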
Q: What is the difference between authentication and authorisation? How does Spring Security implement both?
Chain: Authentication = proving identity (who are you?). Authorisation = checking permission (what can you do?). You must authenticate before you can be authorised. In Spring Security: Authentication is handled by the filter chain. UsernamePasswordAuthenticationFilter or your custom JWT filter validates credentials and stores an Authentication object in SecurityContextHolder. Authorisation is handled after authentication. FilterSecurityInterceptor checks @PreAuthorize expressions and HTTP security rules against the Authentication object’s granted authorities. The AccessDecisionManager (or the newer AuthorizationManager) makes the final allow/deny decision. Roles are coarse-grained (ROLE_ADMIN); use method security (@PreAuthorize("@orderSecurity.isOwner(#orderId, principal)")) for fine-grained per-resource access control.

Topic 7 — System Design Framework

The mental model: system design interviews test whether you think like an engineer under uncertainty. There is no single correct answer. The interviewer watches how you decompose a problem, make tradeoffs explicit, and handle constraints. Follow a consistent framework so you never freeze.

The S.C.A.L.E. Framework for System Design
S · Scope & Requirements (5 min): Clarify functional requirements (what must it do?). Define non-functional: scale (users, RPS), availability (99.9%?), latency (p99 < 200ms?), consistency. Ask before designing — requirements drive every decision.
C · Capacity Estimation (3 min): Back-of-envelope: daily active users, requests per second, storage per day. These numbers tell you whether you need 1 server or 1,000. Example: 10M DAU × 100 actions/day = 1B requests/day ≈ 12,000 RPS average; budget 2–3× that for peak. A single PostgreSQL node handles ~5,000 writes/s, so you need sharding or read replicas at minimum.
A · API Design (5 min): Define the key endpoints or events. Inputs, outputs, HTTP methods, status codes. This forces you to think about the data model before jumping to infrastructure. APIs define contracts — get them right early.
L · Low-Level Design (10 min): Database schema, data model, component architecture. Which database? (relational for transactional data, Redis for caching, Cassandra for time-series). Caching strategy? Async vs sync for each operation? This is the core of the interview.
E · Evolution & Edge Cases (5 min): How does this system fail? What breaks at 10x scale? What are the hardest edge cases (duplicate requests, network partition, thundering herd)? Show you think beyond the happy path. Propose improvements: sharding, event sourcing, CDN, multi-region.
Q: How would you approach the question "Design a rate limiter"?
Chain using S.C.A.L.E.: Scope: What are we rate limiting? Per-user, per-IP, per-API-key? What limits? (100 req/min). Should it be distributed (multiple service instances)? Capacity: 10M users × 100 req/min = 1B/min potential events. In-memory per instance won’t work for a distributed fleet. Need Redis. Algorithm choice: Token Bucket: replenishes at fixed rate, allows bursts (user can use their 100-req bucket all at once). Sliding Window: precise, higher memory cost. Fixed Window: simple but allows double-rate at window boundary. Implementation: Redis with the INCR + EXPIRE pattern for fixed window: INCR user:123:minute:1234, set TTL on first increment. For token bucket: Redis Lua script for atomic check-and-decrement. Edge cases: Redis failure — fail open (allow) or fail closed (reject)? Race between INCR and EXPIRE? (Use Lua script for atomicity). Distributed clock skew affecting window boundaries?
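A single-JVM token-bucket sketch (a distributed limiter would move this logic into an atomic Redis Lua script, as noted above; names and parameters are illustrative):

```java
// Token bucket: refills at a fixed rate, allows bursts up to capacity.
public class TokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;                 // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    /** Returns true if the request is allowed (consumes one token). */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Top up based on elapsed time, capped at capacity:
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(5, 1); // 5-token burst, 1 token/s refill
        int allowed = 0;
        for (int i = 0; i < 10; i++) {
            if (bucket.tryAcquire()) allowed++;
        }
        System.out.println("allowed=" + allowed); // allowed=5: burst spent, rest rejected
    }
}
```

The `synchronized` keyword handles races within one instance; across a fleet, the same check-and-decrement must be made atomic server-side, which is why the Redis Lua script comes up in the edge cases.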
Q: How do you choose between SQL and NoSQL databases?
Chain — SQL unless you have a specific reason not to: Choose SQL (PostgreSQL) when: data has relationships and you need JOINs; you need ACID transactions across multiple tables; schema is well-defined; data integrity constraints matter (foreign keys, unique constraints). SQL handles 99% of backend applications well. Choose NoSQL for specific problems: Redis for caching, session storage, rate limiting, leaderboards (in-memory, sub-millisecond, data structures). MongoDB for document data with variable schema (CMS content, product catalogues with different attributes per category). Cassandra for write-heavy time-series data at massive scale (IoT, event logging, ~1M writes/s). Elasticsearch for full-text search, log aggregation, faceted search. The mistake most candidates make: recommending MongoDB for everything. MongoDB is appropriate for genuinely document-shaped data with unpredictable schema — not as a default because it’s "simpler".

Topic 8 — Microservices & Distributed Systems

The mental model: microservices trade code complexity for operational complexity. Every microservices question is about whether that trade is worth making, and how to manage the operational complexity once you’ve made it.

CAP Theorem — Visual
Consistency (C), Availability (A), Partition Tolerance (P): pick two, and P is non-negotiable in a distributed system.
CA: traditional single-node RDBMS (no partitions to tolerate).
CP: HBase, ZooKeeper, MongoDB (strict mode).
AP: Cassandra, CouchDB, DNS, CDN.
Q: How do you handle distributed transactions across microservices?
Chain — you can’t use a database transaction, so you need a protocol: Two-Phase Commit (2PC): a coordinator asks all participants to prepare (lock resources), then commits if all agree. Guarantees strong consistency but blocks if coordinator crashes — operationally fragile in practice. Saga Pattern (industry standard): break the transaction into a sequence of local transactions, each publishing an event/message that triggers the next. On failure, execute compensating transactions (reverse operations) in reverse order. Two implementations: (1) Choreography — each service listens for events and decides what to do next (decentralised, harder to reason about); (2) Orchestration — a saga orchestrator sends commands and receives replies (centralised, easier to visualise with tools like Temporal). Key insight: Sagas provide eventual consistency, not strong consistency. If you need strong consistency across services, your service boundaries are wrong — put that data in one service with one database.
Q: What is a circuit breaker and why is it essential in microservices?
Chain — think of it as electrical circuit protection: Without a circuit breaker, a slow downstream service causes your thread pool to fill with waiting threads. All requests queue up, your entire service becomes unresponsive, and the failure cascades upstream. A circuit breaker monitors failure rates and, when a threshold is exceeded (e.g., 50% failures in 30 seconds), it opens — fast-failing all calls to the downstream service without trying. This prevents resource exhaustion and gives the downstream service time to recover. After a timeout (e.g., 60 seconds), it enters half-open — allows one test request. If it succeeds, the circuit closes (normal operation). If it fails, it re-opens. In Spring Boot: Resilience4j’s @CircuitBreaker annotation. Configure with a fallback method that returns a cached response, a default value, or a meaningful error. Circuit breakers are also the answer to "how do you handle cascading failures" — it’s the same root answer.
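A toy version of the state machine described above, using consecutive-failure counting rather than Resilience4j's sliding-window failure rate (all names and thresholds are illustrative):

```java
// CLOSED -> OPEN after N consecutive failures; OPEN -> HALF_OPEN after a
// cool-down; the half-open probe decides whether to close or re-open.
public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int failureThreshold;
    private final long openMillis;
    private long openedAt;

    CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    synchronized boolean allowRequest(long nowMillis) {
        if (state == State.OPEN && nowMillis - openedAt >= openMillis) {
            state = State.HALF_OPEN;             // cool-down elapsed: allow one probe
        }
        return state != State.OPEN;              // open circuit fast-fails
    }

    synchronized void recordSuccess() {
        consecutiveFailures = 0;
        state = State.CLOSED;
    }

    synchronized void recordFailure(long nowMillis) {
        consecutiveFailures++;
        if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
            state = State.OPEN;
            openedAt = nowMillis;
        }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(3, 1000);
        for (int i = 0; i < 3; i++) cb.recordFailure(0);  // threshold reached: opens
        System.out.println(cb.allowRequest(500));         // false: still open
        System.out.println(cb.allowRequest(1500));        // true: half-open probe allowed
        cb.recordSuccess();                               // probe succeeded
        System.out.println(cb.allowRequest(1600));        // true: closed again
    }
}
```

In production you would not hand-roll this; the sketch only makes the closed/open/half-open transitions concrete before reaching for Resilience4j's @CircuitBreaker.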
Q: What is eventual consistency and when is it acceptable?
Chain: Eventual consistency means that, given no new updates, all replicas will converge to the same value — but there is a window during which different parts of the system may see different values. This is acceptable when: the business can tolerate brief inconsistency (showing a product as "in stock" when it’s actually sold out for 100ms — the user will get an error at checkout anyway, so the inconsistency caused no harm); the alternative (distributed locking or 2PC) would reduce availability or performance unacceptably. It is not acceptable when: financial accuracy is required in real-time (bank balance must always reflect actual funds); compliance requires an audit trail with guaranteed ordering (medical records). The practical test: ask what bad outcome results from the inconsistency. If the answer is "the user sees slightly stale data briefly", eventual consistency is fine. If the answer is "the user loses money", it’s not.

Mock Interview: Live Debugging Round

Debugging rounds give you a broken application and ask you to diagnose it. The pattern: look at symptoms, form a hypothesis, find evidence, confirm or eliminate, propose fix. Never guess — always reason from evidence.

Debugging Thought Process
Symptom: p99 latency spiked from 80ms to 4s. Error rate unchanged at 0.1%. Started 2 hours ago.
Hypotheses: DB slow query? Connection pool exhaustion? External API timeout? GC pressure? Hot key in Redis? Deployment 2h ago?
Evidence: Grafana hikaricp_connections_pending is spiking. A thread dump shows threads blocked on HikariPool.getConnection(). PostgreSQL pg_stat_activity shows 3 queries running > 60s.
Root Cause: A new @Scheduled job runs every 2 hours and executes a full-table scan without pagination. It holds a DB connection for 90 seconds, starving the pool.
Fix: Paginate the query. Set a connection timeout. Use a separate connection pool for batch jobs. Add a slow-query alert (>10s) to catch this class of bug before production impact.

Common Code Review Bugs to Spot

Code Review — Find All the Bugs
@RestController
public class UserController {

    @Autowired
    private UserService userService;  // BUG 1: Field injection — prefer constructor

    private List<User> userCache = new ArrayList<>(); // BUG 2: Mutable instance state in singleton

    @GetMapping("/users/{id}")
    public User getUser(@PathVariable String id) {
        User user = userCache.stream()
            .filter(u -> u.getId().equals(id))
            .findFirst()
            .orElse(null);  // BUG 3: Return null — should throw or return Optional

        if (user == null) {
            user = userService.findById(id); // BUG 4: No null check on service result
            userCache.add(user); // BUG 5: Race condition — not thread-safe, unbounded growth
        }
        return user; // BUG 6: Returns null if not found — should be 404
    }

    @DeleteMapping("/users/{id}")
    @Transactional
    public void deleteUser(@PathVariable String id) {
        userService.delete(id); // BUG 7: @Transactional on a controller method is an
                                // anti-pattern; the service layer owns transaction boundaries
        userCache.remove(id);   // BUG 8: Should also invalidate cache, but List.remove(String)
    }                           // won't work — need to remove by predicate
}

// FIXED VERSION:
@RestController
@RequiredArgsConstructor
public class UserController {

    private final UserService userService; // Constructor injection

    @GetMapping("/users/{id}")
    public ResponseEntity<UserResponse> getUser(@PathVariable UUID id) {
        return userService.findById(id)
            .map(UserResponse::from)
            .map(ResponseEntity::ok)
            .orElse(ResponseEntity.notFound().build());
    }

    @DeleteMapping("/users/{id}")
    public ResponseEntity<Void> deleteUser(@PathVariable UUID id) {
        userService.delete(id); // Transaction boundary belongs in service layer
        return ResponseEntity.noContent().build();
    }
}

Rapid-Fire Revision Cards

The questions interviewers ask in the first 5 minutes to gauge your level. Get these right without hesitation.

🔮 SPRING BOOT
@SpringBootApplication = @Configuration + @EnableAutoConfiguration + @ComponentScan
Auto-config order: user beans > auto-config beans (@ConditionalOnMissingBean)
Embedded server: Tomcat (default), Jetty, Undertow — swap by excluding tomcat starter
@Value vs @ConfigurationProperties: single value vs typed config class (prefer @ConfigurationProperties)
💸 JPA & TRANSACTIONS
Persistence Context: first-level cache; tracks entity changes; flushed on transaction commit
Lazy loading: works only within open session — LazyInitializationException outside transaction
@Transactional(readOnly=true): hints Hibernate to skip dirty checking; faster flush
N+1 fix: JOIN FETCH, @EntityGraph, or @BatchSize(size=N)
⛳ REST & HTTP
Idempotent methods: GET, PUT, DELETE, HEAD — safe to retry. POST is NOT.
201 vs 200: 201 = resource created (POST); include Location header
HATEOAS: responses include links to related actions; clients don’t hard-code URLs
Content negotiation: Accept: application/json tells server desired response format
🔑 SECURITY
JWT structure: header.payload.signature — only signature is verified; payload is base64 (not encrypted)
CSRF: relevant for cookie-based auth; not needed for JWT (Bearer token)
BCrypt: adaptive hash — work factor increases over time to counter faster hardware
OAuth2: delegated authorisation — "Login with Google" grants limited access without sharing password
⚡ PERFORMANCE
Connection pool sizing: connections = (core_count * 2) + effective_spindle_count (HikariCP formula)
Cache eviction: LRU (Least Recently Used) vs LFU (Least Frequently Used) vs TTL-based
GC pause impact: STW (stop-the-world) pauses cause p99 spikes; G1GC/ZGC minimise pauses
Index types: B-tree (range queries), Hash (equality), GIN (JSON/arrays), BRIN (sequential)
🌎 DISTRIBUTED
CAP theorem: partition tolerance is given; choose C or A under partition
Saga vs 2PC: saga = eventual consistency + compensation; 2PC = strong consistency + blocking
Kafka guarantees: at-least-once by default; exactly-once with idempotent producer + transactions
Consistent hashing: adding/removing nodes rehashes only K/n keys (not all keys)
🎉

Platform Complete — You Did It

You have completed the Spring Boot Engineering Platform — all 15 sections from Java foundations to production engineering to interview mastery. You now have the mental models, the architectural intuition, and the production awareness of a senior backend engineer. Go build something that matters.