Ace Your Interviews 🎯

Browse our collection of interview questions across various technologies.

Node.js · Beginner

What is Node.js and how does it differ from browser JavaScript?

Answer

Node.js is a JavaScript runtime built on Chrome's V8 engine that runs JavaScript outside the browser — on servers, in CLIs, and in build tools. Differences: Node.js has no DOM or browser APIs (no window, document, localStorage), but has access to the file system (fs), network (http), OS (os), and child processes. Node.js uses CommonJS modules (require) historically but supports ESM (import) since Node.js 12+. Node.js has the global object instead of window.

Node.js · Beginner

What is the Event Loop in Node.js?

Answer

The event loop is Node.js's mechanism for handling async operations on a single thread. When an async operation (DB query, file read, HTTP call) is initiated, Node.js offloads it to libuv, registers a callback, and immediately continues processing other work. When the async operation completes, libuv places the callback in the appropriate queue. The event loop continuously checks these queues (timers, I/O callbacks, poll, check, close callbacks) and executes callbacks in order. This is what makes Node.js non-blocking — the thread is never idle waiting for I/O.

Node.js · Beginner

What is the difference between synchronous and asynchronous code in Node.js?

Answer

Synchronous code executes sequentially — each line blocks until complete. fs.readFileSync() waits for the file to be read before the next line executes. In Node.js, synchronous code blocks the event loop — no other requests can be processed during that time. Asynchronous code registers a callback and returns immediately — the actual work happens in the background. fs.promises.readFile() returns a Promise, Node.js offloads the I/O to libuv, and the callback runs when complete. Always use async APIs in Node.js server code.

Node.js · Beginner

What is Express.js and what does it add over the built-in http module?

Answer

Express.js is a minimal web framework for Node.js. Node's built-in http module gives you raw HTTP handling — you parse URLs, headers, and bodies manually, match routes manually, and write raw HTTP responses. Express adds: declarative routing (app.get('/users', handler)), middleware (composable functions that process requests), request/response helpers (res.json(), req.params, req.query, req.body), and a plugin ecosystem. Express eliminates most of the routing and parsing boilerplate you'd write with raw http.

Node.js · Beginner

What is middleware in Express.js?

Answer

Middleware is a function with (req, res, next) parameters that runs in the request pipeline before route handlers. Middleware can: modify req/res (add headers, parse body), end the request (send an error response), or call next() to pass control to the next middleware. Middleware is registered with app.use() and runs for all routes, or it can be route-specific. Order matters — middleware runs in the order it's registered. Common middleware: cors(), helmet(), express.json() (body parser), authentication checker.
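The mechanics are easy to see in miniature. This is not Express itself — a toy pipeline runner showing how (req, res, next) chaining works and why order matters:

```javascript
// Run each middleware in order; any one can end the request or pass control on.
function runPipeline(middlewares, req, res) {
  let i = 0;
  function next(err) {
    if (err) { res.statusCode = 500; res.body = { error: err.message }; return; }
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

const logger = (req, res, next) => { req.log = `${req.method} ${req.path}`; next(); };
const auth = (req, res, next) => {
  if (!req.headers.authorization) { res.statusCode = 401; res.body = { error: 'Unauthorized' }; return; }
  next(); // pass control to the next middleware only when authenticated
};
const handler = (req, res) => { res.statusCode = 200; res.body = { ok: true }; };

const denied = {};
runPipeline([logger, auth, handler], { method: 'GET', path: '/orders', headers: {} }, denied);
console.log(denied.statusCode); // 401 — auth ended the request before the handler ran

const allowed = {};
runPipeline([logger, auth, handler],
  { method: 'GET', path: '/orders', headers: { authorization: 'Bearer x' } }, allowed);
console.log(allowed.statusCode); // 200
```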

Node.js · Beginner

What is the difference between req.params, req.query, and req.body?

Answer

req.params: URL path parameters — GET /products/:id where id is a param. Accessed as req.params.id. Part of the URL path. req.query: URL query string parameters — GET /products?category=electronics&page=2. Accessed as req.query.category. After the ? in the URL. Always strings. req.body: Request body — for POST/PUT/PATCH requests. Contains the JSON or form data sent in the request body. Requires express.json() or express.urlencoded() middleware to parse. Not available on GET requests.
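The "query values are always strings" point is worth seeing directly. A small sketch with the built-in URL class (req.query is roughly this object in Express):

```javascript
const url = new URL('http://api.local/products?category=electronics&page=2');

const query = Object.fromEntries(url.searchParams); // roughly what req.query holds
console.log(query);             // { category: 'electronics', page: '2' }
console.log(typeof query.page); // 'string' — convert before doing math
const page = Number(query.page) || 1;
console.log(page + 1);          // 3, not '21'
```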

Node.js · Beginner

What is a Mongoose Schema and Model?

Answer

A Schema defines the structure of documents in a MongoDB collection — field names, types, validation rules, default values, and options. const productSchema = new Schema({ name: { type: String, required: true }, price: Number }). A Model is a constructor compiled from a Schema that provides an interface to MongoDB — ProductModel.find(), ProductModel.create(), ProductModel.findById(). Schema describes the shape; Model is the query interface. One Schema → one Model → one MongoDB collection.
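Not Mongoose itself — a toy sketch of the Schema → Model idea: the schema describes the shape and rules, the model enforces them and exposes the query interface:

```javascript
// A hypothetical makeModel compiles a schema description into a tiny "model".
function makeModel(schema) {
  const docs = []; // stands in for a MongoDB collection
  return {
    create(data) {
      for (const [field, rules] of Object.entries(schema)) {
        if (rules.required && data[field] === undefined)
          throw new Error(`${field} is required`); // validation from the schema
      }
      const doc = { _id: docs.length + 1, ...data };
      docs.push(doc);
      return doc;
    },
    find: () => [...docs],
  };
}

const Product = makeModel({ name: { required: true }, price: {} });
Product.create({ name: 'Mouse', price: 25 });
console.log(Product.find().length); // 1
```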

Node.js · Beginner

What is JWT and how is it used for authentication in Node.js?

Answer

JWT (JSON Web Token) is a compact, URL-safe token format that encodes a payload (user ID, role, expiry) as a signed string. In Node.js: on login, sign a JWT with the user's data and a secret key (jsonwebtoken.sign(payload, secret, { expiresIn: '15m' })). Send the token to the client. On subsequent requests, client sends Authorization: Bearer <token>. Verify with jwt.verify(token, secret) — if valid, returns the payload. If tampered or expired, throws an error. No session storage needed — the token itself carries authentication state.

Node.js · Beginner

What is the purpose of .env files in Node.js projects?

Answer

.env files store environment-specific configuration that shouldn't be hardcoded in code — database connection strings, API keys, JWT secrets, port numbers. The dotenv package reads this file and loads variables into process.env. .env files are listed in .gitignore — never committed to version control — so secrets aren't exposed in the repository. Different environments (development, test, production) have different .env values. .env.example is committed as a template showing which variables are needed without their values.
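Not the dotenv package — a sketch of its core behavior: parse KEY=VALUE lines into process.env without overriding variables that are already set (the DEMO_-prefixed names are illustrative):

```javascript
const raw = [
  'DEMO_PORT=4000',
  'DEMO_DB_URL=mongodb://localhost:27017/shop',
  'DEMO_JWT_SECRET=change-me',
].join('\n');

for (const line of raw.split('\n')) {
  const [key, ...rest] = line.split('=');
  // Real dotenv also handles comments, quoting, and multiline values.
  if (key && process.env[key] === undefined) process.env[key] = rest.join('=');
}

console.log(process.env.DEMO_PORT); // '4000' — env values are always strings
console.log(process.env.DEMO_DB_URL);
```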

Node.js · Beginner

What HTTP status codes should a REST API use for common scenarios?

Answer

200 OK: successful GET or PATCH. 201 Created: successful POST that created a resource. 204 No Content: successful DELETE. 400 Bad Request: invalid input, validation failure. 401 Unauthorized: not authenticated (no token or invalid token). 403 Forbidden: authenticated but insufficient permissions. 404 Not Found: resource doesn't exist. 409 Conflict: duplicate resource (email already exists). 422 Unprocessable Entity: valid format but business rule violation. 429 Too Many Requests: rate limit exceeded. 500 Internal Server Error: unexpected server error.
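One way to keep these consistent is a single error-name → status-code map in a central error handler — a sketch (the error class names are illustrative):

```javascript
const STATUS_BY_ERROR = {
  ValidationError: 400,
  AuthError: 401,
  ForbiddenError: 403,
  NotFoundError: 404,
  ConflictError: 409,
  RateLimitError: 429,
};

function toStatus(err) {
  return STATUS_BY_ERROR[err.name] ?? 500; // anything unknown is a server error
}

const dup = new Error('email already exists');
dup.name = 'ConflictError';
console.log(toStatus(dup));            // 409
console.log(toStatus(new Error('?'))); // 500
```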

Node.js · Intermediate

Explain the difference between authentication and authorization.

Answer

Authentication is verifying who you are — confirming your identity. In Node.js: the authenticate middleware verifies the JWT token and attaches req.user to the request. Authorization is verifying what you're allowed to do — confirming your permissions. In Node.js: the authorize middleware checks req.user.role against the required roles for the route. They're separate concerns: you can be authenticated (valid token) but unauthorized (wrong role). The order matters: authenticate always runs before authorize.

Node.js · Intermediate

What is an N+1 query problem and how do you solve it in Mongoose?

Answer

N+1 occurs when fetching N records triggers N additional queries. Example: fetch 20 orders (1 query), then loop and fetch each order's user (20 queries) = 21 total queries. Solution: use Mongoose's .populate() to fetch related documents in a second efficient query (2 total queries instead of 21). For deeply nested relationships: use aggregation with $lookup. For very large datasets: denormalize by embedding frequently-needed fields directly (store product name and price in order items at creation time — no join needed).
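The query counts are easy to demonstrate with a toy in-memory "database" and a counter — the naive loop versus one batched lookup (which is effectively what .populate() does):

```javascript
const users = new Map([[1, { id: 1, name: 'Ada' }], [2, { id: 2, name: 'Lin' }]]);
const orders = [{ id: 'a', userId: 1 }, { id: 'b', userId: 2 }, { id: 'c', userId: 1 }];

let queries = 0;
const findOrders = () => { queries++; return orders.map((o) => ({ ...o })); };
const findUser = (id) => { queries++; return users.get(id); };
const findUsersByIds = (ids) => { queries++; return ids.map((id) => users.get(id)); };

// N+1: one query for the orders, then one per order.
queries = 0;
for (const order of findOrders()) order.user = findUser(order.userId);
const naiveCount = queries;
console.log('N+1 queries:', naiveCount); // 4 (1 + 3)

// Batched: one query for the orders, one for all related users.
queries = 0;
const batch = findOrders();
const related = findUsersByIds([...new Set(batch.map((o) => o.userId))]);
const byId = new Map(related.map((u) => [u.id, u]));
for (const order of batch) order.user = byId.get(order.userId);
const batchedCount = queries;
console.log('batched queries:', batchedCount); // 2
```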

Node.js · Intermediate

How does the Service Layer pattern improve a Node.js codebase?

Answer

The Service Layer separates business logic from HTTP concerns. Controllers handle: parsing req.body, calling service methods, formatting res.json(). Services handle: business rules, database queries, external API calls. Benefits:

1. Services are testable without an HTTP server — call service.createOrder() directly in unit tests.

2. Services are reusable — the same order creation logic can be called from a REST route, a Socket.io event, or a scheduled job.

3. Controllers stay thin and readable.

4. Separation of concerns makes debugging easier — an HTTP 400 is a controller problem; a business rule failure is a service problem.
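A minimal sketch of the pattern — a service class with its repository injected, exercised directly with a fake, no HTTP server involved (OrderService and the repository shape are illustrative):

```javascript
class OrderService {
  constructor(orderRepo) { this.orderRepo = orderRepo; } // dependency injection
  createOrder(userId, items) {
    if (!items.length) throw new Error('Order must contain at least one item'); // business rule
    const total = items.reduce((sum, i) => sum + i.price * i.qty, 0);
    return this.orderRepo.save({ userId, items, total });
  }
}

// "Unit test" without Express: inject a fake repository instead of a real DB model.
const saved = [];
const fakeRepo = { save: (o) => { saved.push(o); return { id: saved.length, ...o }; } };

const service = new OrderService(fakeRepo);
const order = service.createOrder(7, [{ price: 10, qty: 2 }]);
console.log(order.total); // 20
```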

Node.js · Intermediate

How do you handle file uploads in Node.js and store them in production?

Answer

Use Multer for parsing multipart/form-data requests. For production storage: never save to local disk (ephemeral in containers, not shared across multiple server instances). Use Cloudinary (images/videos) or AWS S3 (any file type) with a streaming upload approach. multer-storage-cloudinary uploads directly to Cloudinary from the multer middleware — the file is never written to disk. Validate file type (MIME type check in fileFilter) and size (limits.fileSize) in Multer. Store the returned CDN URL in your database, not the file itself.

Node.js · Intermediate

What is the difference between Mongoose's .lean() and regular queries?

Answer

A regular Mongoose query returns Mongoose Document objects — they have methods (.save(), .validate()), getters, setters, virtual properties, and the full Mongoose overhead. .lean() returns plain JavaScript objects — no Mongoose methods, no overhead. Performance difference: lean queries are 2–5x faster and use significantly less memory. Use .lean() for: read-only data that you'll serialize to JSON (API responses). Don't use .lean() for: documents you need to modify and .save(), or documents where you need Mongoose virtuals or methods.

Node.js · Intermediate

How do you implement pagination in a Node.js API?

Answer

Offset pagination (skip/limit): calculate skip = (page - 1) * limit. Simple to implement, but slow for large offsets (MongoDB must scan and skip documents). Always return total count with Promise.all([Model.find().skip(skip).limit(limit), Model.countDocuments()]). Cursor-based pagination: instead of skip, filter by _id > lastId. Efficient regardless of dataset size (uses index). Better for infinite scroll. Return a nextCursor (the last ID) that the client sends in the next request. Use offset for page-number UIs (page 1, 2, 3); cursor for infinite scroll.
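Both strategies in miniature, against an in-memory array standing in for a collection (in real code the slice/filter would be .skip().limit() and a { _id: { $gt: lastId } } query on an index):

```javascript
const items = Array.from({ length: 95 }, (_, i) => ({ _id: i + 1 }));

// Offset pagination: skip = (page - 1) * limit.
function offsetPage(page, limit) {
  const skip = (page - 1) * limit;
  return {
    data: items.slice(skip, skip + limit),
    total: items.length,
    pages: Math.ceil(items.length / limit),
  };
}
console.log(offsetPage(3, 10).data[0]._id); // 21

// Cursor pagination: filter by _id > lastId — no skipping, index-friendly.
function cursorPage(lastId, limit) {
  const data = items.filter((d) => d._id > lastId).slice(0, limit);
  return { data, nextCursor: data.length ? data[data.length - 1]._id : null };
}
console.log(cursorPage(20, 10).nextCursor); // 30 — client sends this back next time
```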

Node.js · Intermediate

How do you secure a Node.js API against common vulnerabilities?

Answer

NoSQL injection: use express-mongo-sanitize (strips $ and . from inputs). XSS: helmet() sets security headers (X-Content-Type-Options, etc.); sanitize user-generated content before rendering. CSRF: use CSRF tokens for cookie-based sessions (not needed for JWT in an Authorization header). Rate limiting: express-rate-limit on all endpoints, stricter on auth. SQL injection (for PostgreSQL): use parameterized queries (Prisma and pg both do this by default — never string-concatenate SQL). DoS: limit request body size (express.json({ limit: '10kb' })) and rate limit.
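A sketch of what express-mongo-sanitize does and why it matters: drop keys that start with $ or contain a dot, so a payload like { email: { $gt: '' } } can't turn a findOne into a match-everything query:

```javascript
// Recursively strip MongoDB operator keys from user input (toy version).
function sanitize(obj) {
  if (obj === null || typeof obj !== 'object') return obj;
  const clean = Array.isArray(obj) ? [] : {};
  for (const [key, value] of Object.entries(obj)) {
    if (key.startsWith('$') || key.includes('.')) continue; // drop operator injection
    clean[key] = sanitize(value);
  }
  return clean;
}

// Without sanitizing, { email: { $gt: '' } } matches every user in a naive query.
const malicious = { email: { $gt: '' }, name: 'Ada' };
console.log(sanitize(malicious)); // { email: {}, name: 'Ada' }
```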

Node.js · Intermediate

What is Redis and what problems does it solve in a Node.js API?

Answer

Redis is an in-memory data store (much faster than a database). Node.js use cases:

1. Caching — store expensive query results (product listings, user profiles) for 5 minutes. Reduces DB load 10x for read-heavy APIs.

2. Session storage — store JWT refresh tokens or server-side sessions.

3. Rate limiting counters — cross-process-safe increment (in-memory per-process counters don't work with clustering).

4. Job queues — BullMQ uses Redis as a reliable queue backend.

5. Pub/Sub — the Socket.io Redis adapter allows multi-process WebSocket broadcasting.
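The caching use case follows the cache-aside pattern. A sketch with a Map standing in for Redis and a hypothetical loadProduct standing in for the slow DB query:

```javascript
const cache = new Map(); // in production this would be a Redis client
let dbHits = 0;

async function loadProduct(id) { // pretend this is an expensive database query
  dbHits++;
  return { id, name: `Product ${id}` };
}

async function getProduct(id, ttlMs = 300_000) { // 5-minute TTL
  const hit = cache.get(id);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit — no DB work
  const value = await loadProduct(id);                   // cache miss — query and store
  cache.set(id, { value, expires: Date.now() + ttlMs });
  return value;
}

getProduct(1)
  .then(() => getProduct(1)) // second call is served from the cache
  .then(() => console.log('DB hits:', dbHits)); // 1
```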

Node.js · Intermediate

How do you implement background jobs in Node.js?

Answer

Use BullMQ (built on Redis) for reliable job queues. Create a queue: const queue = new Queue('emails', { connection: redisConnection }). Add jobs: queue.add('send-order-confirmation', { orderId, email }). Process jobs in a worker: const worker = new Worker('emails', async (job) => { await sendEmail(job.data) }, { connection: redisConnection }). BullMQ handles: retry on failure, delayed jobs, scheduled (cron) jobs, job priorities, concurrency control, and job completion events. Use cases: sending emails, generating reports, processing images, sending notifications.

Node.js · Intermediate

What is the difference between monolithic and microservices architecture for Node.js?

Answer

Monolith: one Node.js application contains all features (auth, products, orders, payments). Simple to develop, deploy, and debug; one database. Appropriate for: MVPs, small teams, early-stage products. Microservices: separate Node.js services per domain (UserService, ProductService, OrderService), each deployed independently with its own database. Services communicate via REST, gRPC, or message queues. Appropriate for: large teams (each team owns a service), independent scaling (scale ProductService without scaling UserService), and fault isolation. Don't start with microservices — extract services from a well-structured monolith when team size or scaling requirements demand it.

Node.js · Intermediate

How do you handle database transactions in Mongoose and Prisma?

Answer

Mongoose: await mongoose.startSession(), session.startTransaction(), pass { session } to all operations in the transaction, session.commitTransaction() or session.abortTransaction() in the catch. Requires a MongoDB replica set (even a 1-member replica set for local dev). Prisma: prisma.$transaction(async (tx) => { ... }). Pass tx (the transactional client) to all operations inside. If any operation throws, all changes roll back automatically. Use transactions for: any multi-document/multi-table operation that must be atomic (order creation + stock decrement, payment + order status update).

Node.js · Intermediate

How do you write testable Node.js services?

Answer

Design services as plain TypeScript classes with no Express dependencies (no req/res objects). Inject dependencies (the database model, other services, external API clients) through the constructor or as parameters — this enables injecting mocks in tests. Example: class OrderService { constructor(private db: typeof OrderModel, private emailService: EmailService) {} }. In tests: new OrderService(mockOrderModel, mockEmailService). Services should throw AppError for expected errors (not found, unauthorized) — tests can assert on these with expect().rejects.toThrow(). Use mongodb-memory-server for integration tests that need a real database.

Node.js · Advanced

How does Node.js handle CPU-intensive tasks given its single-threaded model?

Answer

Node.js's single thread is ideal for I/O-bound work (most API work) but blocking for CPU-bound work. Options:

1. Worker Threads (worker_threads module) — spawn separate threads for CPU work, communicate via postMessage.

2. Child Processes — spawn separate Node.js processes for truly independent CPU work.

3. Cluster — multiple Node.js processes sharing the same port, leveraging all CPU cores.

4. Offload to an external service — send image processing to Cloudinary, PDF generation to a Lambda function, heavy computation to a microservice.

Most API servers don't encounter CPU-bound work at all — profile first.

Node.js · Advanced

How would you architect a Node.js application to handle 100,000 concurrent users?

Answer

Horizontal scaling: multiple Node.js instances behind a load balancer (nginx, AWS ALB). Stateless app: JWT tokens (no server sessions), Redis for shared state (rate limit counters, socket rooms). Caching: Redis cache for read-heavy endpoints, CDN for static assets. Database optimization: read replicas for queries, proper indexing, connection pooling (max 10–20 per process). Async operations: message queues (BullMQ/Kafka) for work that can be deferred. PM2 cluster mode (one process per CPU) + horizontal scaling + Redis for shared state is the Node.js scalability pattern.

Node.js · Advanced

Explain how you would design a rate limiting system for a public API.

Answer

Sliding window rate limiting (most accurate, prevents boundary bursts): for each request, store timestamps in a Redis sorted set (ZADD with score = timestamp). Count entries within the window (ZCOUNT from now-window to now). If count >= limit, reject with 429. Remove old entries (ZREMRANGEBYSCORE). Alternatively, use token bucket: bucket starts full (max tokens), each request consumes a token, tokens refill at a fixed rate. For a public API: different limits per tier (free: 100/hour, pro: 10000/hour), keyed by API key not IP, return Retry-After header on 429. Use Lua scripts in Redis for atomic check-and-increment.
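The sliding-window logic in miniature, with an in-memory array of timestamps playing the role of the Redis sorted set (timestamps are passed in explicitly so the example is deterministic):

```javascript
const windows = new Map(); // key → array of request timestamps (ZSET stand-in)

function isAllowed(key, limit, windowMs, now = Date.now()) {
  // Keep only timestamps inside the window (ZREMRANGEBYSCORE equivalent).
  const ts = (windows.get(key) ?? []).filter((t) => t > now - windowMs);
  if (ts.length >= limit) { windows.set(key, ts); return false; } // would be a 429
  ts.push(now); // record this request (ZADD equivalent)
  windows.set(key, ts);
  return true;
}

let allowed = 0;
for (let i = 0; i < 7; i++) {
  if (isAllowed('api-key-1', 5, 60_000, 1_000_000 + i)) allowed++;
}
console.log(allowed); // 5 — requests 6 and 7 rejected
```

In Redis the filter-count-add sequence must be a single Lua script so concurrent requests can't race past the limit.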

Node.js · Advanced

How do you implement event-driven architecture in Node.js?

Answer

Internal: Node.js's built-in EventEmitter for in-process events. An OrderService emits 'order.created'; EmailService, InventoryService, and AnalyticsService listen and react independently — decoupled. External: message brokers for cross-service events. RabbitMQ (AMQP): durable queues, message acknowledgment, dead-letter queues for failures. Apache Kafka: high-throughput event streaming, consumer groups, message replay from any offset. BullMQ: job queues backed by Redis. Choose by scale: EventEmitter for in-process, BullMQ for async jobs, Kafka for high-throughput event streams, RabbitMQ for reliable message delivery between services.

Node.js · Advanced

How do you handle database migrations in production Node.js applications?

Answer

Prisma Migrate: define schema in schema.prisma, run prisma migrate dev (development) to generate SQL migration files, prisma migrate deploy (production) to apply pending migrations. Migrations are committed to git and run in CI before deploying the application. TypeORM: synchronize: false in production, use migrations directory with Up/Down methods. Mongoose: no built-in migration system — use migrate-mongo or mongodb-migrate. Key principles: migrations run before code deployment (not after), always idempotent, tested in staging first, have a rollback plan (Down migration), never run synchronize: true or autoMigrate in production.

Node.js · Advanced

What is tRPC and how does it change the Node.js + TypeScript developer experience?

Answer

tRPC (TypeScript Remote Procedure Call) creates end-to-end type safety between a Node.js backend and a TypeScript frontend with no code generation. Define procedures on the server (input schema with Zod, output type inferred). Import the router type on the client — client calls are fully typed with input validation and return type inference. The client uses React Query under the hood: trpc.product.list.useQuery() is a typed React Query hook. Eliminates: REST endpoint documentation (the types are the docs), client-side type definitions duplicating server types, runtime type mismatch bugs. Trade-off: requires TypeScript on both ends; not suitable for public APIs consumed by non-TypeScript clients.

Node.js · Advanced

How do you implement comprehensive observability in a Node.js API?

Answer

Three pillars:

1. Logs — structured JSON logging with Pino (fast, low overhead). Log every request with ID, method, path, status, and duration. Log errors with stack traces. Ship to a log aggregator (Datadog, Loki, CloudWatch).

2. Metrics — expose a /metrics endpoint with Prometheus counters, histograms, and gauges. Track: request rate, error rate, response duration (P50/P95/P99), active connections, event loop lag. Visualize with Grafana.

3. Traces — distributed tracing with OpenTelemetry. Each request gets a trace ID that spans across services, showing the full call graph. Essential for microservices. Correlate logs to traces via the trace ID.
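A structured-logging sketch in the spirit of Pino (not Pino itself): one JSON object per line, so aggregators filter by field instead of grepping text. The field names are illustrative:

```javascript
function logRequest({ requestId, method, path, status, durationMs }) {
  const entry = {
    level: status >= 500 ? 'error' : 'info',
    time: new Date().toISOString(),
    requestId, // correlates this line with traces and other logs for the request
    method, path, status, durationMs,
  };
  console.log(JSON.stringify(entry)); // machine-parseable, one event per line
  return entry;
}

const entry = logRequest({
  requestId: 'req-1', method: 'GET', path: '/orders', status: 200, durationMs: 12,
});
console.log(entry.level); // 'info'
```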

Node.js · Advanced

How do you prevent and detect memory leaks in Node.js?

Answer

Common sources: event listeners never removed (use emitter.on and store the reference to call .off() in cleanup), closures keeping large objects alive, global Maps/Sets that grow without bounds, connection leaks (database connections not released), module-level caches that grow indefinitely. Detection: monitor process.memoryUsage().heapUsed in metrics. Memory that grows and never decreases is a leak. Use Node.js's --inspect flag + Chrome DevTools Memory tab for heap snapshots — compare two snapshots to find objects that grew. Use clinic.js heapprofiler for automated heap analysis. Fix: explicitly remove event listeners, implement LRU cache instead of unbounded Map, use weak references (WeakMap/WeakSet) for caches.
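The "bounded cache instead of an unbounded Map" fix in miniature — a minimal LRU sketch that exploits Map's insertion-order iteration to evict the least recently used entry:

```javascript
class LRUCache {
  constructor(max) { this.max = max; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);       // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max)
      this.map.delete(this.map.keys().next().value); // evict least recently used
  }
}

const cache = new LRUCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a' so 'b' is now least recently used
cache.set('c', 3); // evicts 'b' — memory stays bounded
console.log([...cache.map.keys()]); // ['a', 'c']
```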

Node.js · Advanced

Explain how you would implement a multi-tenant SaaS backend in Node.js.

Answer

Tenant isolation strategies:

1. Database per tenant — each tenant has their own MongoDB or PostgreSQL database. Complete isolation. Prisma: switch the database URL per request based on tenant. High operational overhead.

2. Schema per tenant (PostgreSQL) — each tenant has their own schema in one database. Good isolation, manageable overhead.

3. Row-level isolation — single database, a tenantId field on every table, every query filters by tenantId. Simplest to implement, lowest cost.

Tenant resolution: subdomain (tenant1.app.com → extract 'tenant1' in middleware, look up the tenant in the DB), JWT claim (token includes tenantId), or API key header. Middleware attaches the tenant to req.tenant; all service methods receive and filter by tenant. Most critical: ensure every query includes the tenantId filter — a missing filter is a data leak.
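One way to make the tenantId filter unforgettable is to route every read through a tenant-scoped accessor — a row-level-isolation sketch with illustrative data:

```javascript
const rows = [
  { id: 1, tenantId: 't1', name: 'Invoice A' },
  { id: 2, tenantId: 't2', name: 'Invoice B' },
  { id: 3, tenantId: 't1', name: 'Invoice C' },
];

function tenantScope(tenantId) {
  return {
    // The tenant filter is applied here, once — call sites can't forget it.
    find: (pred = () => true) =>
      rows.filter((r) => r.tenantId === tenantId && pred(r)),
  };
}

const db = tenantScope('t1'); // tenantId would come from req.tenant in middleware
console.log(db.find().map((r) => r.name)); // ['Invoice A', 'Invoice C'] — t2 never leaks
```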

Node.js · Advanced

How do you design API versioning for a public Node.js API?

Answer

URL versioning (/api/v1, /api/v2) is the most explicit and widely used: clients always know their version, easy to route to different handlers. Header versioning (Accept: application/vnd.api.v2+json) is cleaner but harder to debug and test. Strategy: mount v1 and v2 routers separately. When making breaking changes: create v2 router with the new behavior, keep v1 running. Set deprecation headers on v1 (Deprecation: true, Sunset: date). Give clients 6–12 months migration window. Never make breaking changes within a version. Breaking changes: removing fields, changing field types, changing status codes, removing endpoints, changing authentication mechanism.

Node.js · Advanced

How does Prisma improve over Mongoose for large TypeScript Node.js projects?

Answer

Prisma advantages over Mongoose:

1. Type safety — every query result is fully typed from the schema; no casting or any types needed.

2. Schema-first — schema.prisma is the single source of truth; TypeScript types are auto-generated.

3. Migrations — a first-class migration system with SQL migration files in version control.

4. Relations — Prisma's include/select syntax for joins is type-safe and explicit; Mongoose's populate is loosely typed.

5. Prisma Studio — a visual database editor.

Mongoose advantages: simpler for MongoDB-specific features (array operators, embedded documents, aggregation pipeline), and schema flexibility that PostgreSQL can't match. Choose Prisma for PostgreSQL and TypeScript-first projects; Mongoose for MongoDB with complex document structures or aggregation-heavy workloads.
