High Level Design Patterns

Network Protocols

1. OSI Model 7 Layers

The OSI (Open Systems Interconnection) model is a conceptual framework used to understand network communication. It consists of seven layers, each responsible for specific functions in the process of communication between devices.

  1. Physical Layer (Layer 1)

    • Function: Transmits raw bit streams over physical media (cables, fiber optics).

    • Example: Ethernet cables, radio frequencies.

  2. Data Link Layer (Layer 2)

    • Function: Provides error detection and correction. Responsible for node-to-node data transfer.

    • Example: MAC addresses, switches, Ethernet.

  3. Network Layer (Layer 3)

    • Function: Routes packets from source to destination across different networks.

    • Example: IP addresses, routers.

  4. Transport Layer (Layer 4)

    • Function: Provides reliable or unreliable data transfer, manages flow control, error checking, and segmentation.

    • Protocols: TCP (reliable, connection-oriented), UDP (unreliable, connectionless).

    • Example:

      • TCP: Ensures ordered and error-free delivery of packets (e.g., file transfers, web browsing).

      • UDP: Faster but doesn’t guarantee delivery (e.g., video streaming, VoIP).

  5. Session Layer (Layer 5)

    • Function: Establishes, maintains, and terminates communication sessions between applications.

    • Example: Network authentication protocols, RPC (Remote Procedure Call).

  6. Presentation Layer (Layer 6)

    • Function: Translates data between the application layer and the network format (e.g., encryption, data compression).

    • Example: SSL/TLS (encryption).

  7. Application Layer (Layer 7)

    • Function: Provides network services to applications (e.g., file transfer, email, remote login).

    • Protocols: HTTP, FTP, SMTP, IMAP, POP.

    • Example: Web browsers, email clients, file-sharing services.

Focus on Application Layer: This layer interacts with end-user applications and defines the protocols for data exchange (e.g., HTTP for web browsing, SMTP for email). It directly serves the user and manages communication between software applications.

Focus on Transport Layer: The transport layer is responsible for the reliability and efficiency of data transfer. It ensures that data sent by the application layer is correctly segmented, transmitted, and reassembled on the receiving end. TCP and UDP are the most common transport layer protocols.
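
To make the TCP/UDP contrast concrete, here is a minimal sketch using Python's standard socket module (the host names and ports are placeholders, not real services). TCP sets up a connection before any data moves and the operating system handles ordering and retransmission; UDP simply emits datagrams with no delivery guarantee.

  import socket

  # TCP: connection-oriented. connect() performs the handshake, and the OS
  # delivers the bytes of this stream in order, retransmitting lost segments.
  tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  tcp.connect(("example.com", 80))                       # placeholder host/port
  tcp.sendall(b"hello over tcp")
  tcp.close()

  # UDP: connectionless. sendto() fires a single datagram that may be lost,
  # duplicated, or reordered without the application being told.
  udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  udp.sendto(b"hello over udp", ("example.com", 9999))   # placeholder host/port
  udp.close()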


2. Network Protocols

Client-Server Protocol

  • In the client-server model, the client requests resources or services from the server, which responds with the requested data.

  • Client-Server Protocol: The set of rules governing how data is requested and sent between clients and servers.

    • Example: HTTP, where the client (browser) sends a request to a web server, which responds with the requested web page.

Peer-to-Peer (P2P) Protocol

  • In the peer-to-peer (P2P) model, each node in the network can act as both a client and a server, sharing resources directly with other peers without centralized control.

  • P2P Protocol: Used in decentralized file-sharing systems like BitTorrent.

    • Example: Distributed file sharing in P2P networks like BitTorrent, where files are transferred directly between peers without an intermediary server.

Client-Server Model

  • Definition: A network architecture where clients (devices or applications) request resources or services from a centralized server.

  • Advantages: Centralized control, easy to manage, scalable.

  • Example: A web application where a client browser interacts with a web server.
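
A minimal sketch of the client-server model using Python's standard http.server module (the port and response text are arbitrary): one centralized process owns the resource and answers every request, and any browser or HTTP client plays the client role.

  from http.server import HTTPServer, BaseHTTPRequestHandler

  class HelloHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # The server decides what to return for each client request.
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")
          self.end_headers()
          self.wfile.write(b"hello from the server\n")

  # Clients (e.g., a browser pointed at http://localhost:8000) send requests here.
  HTTPServer(("localhost", 8000), HelloHandler).serve_forever()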

Peer-to-Peer (P2P) Model

  • Definition: A decentralized network architecture where each peer (node) acts as both a client and a server, directly sharing resources with other peers.

  • Advantages: Distributed load, no single point of failure, scalable for sharing large amounts of data.

  • Example: File-sharing systems like BitTorrent, blockchain networks.

WebSockets

  • Definition: A protocol providing full-duplex communication channels over a single TCP connection. It allows real-time data exchange between client and server.

  • Use Case: Web applications needing real-time updates like chat apps, online gaming, or live stock prices.

  • Example: A chat application where WebSockets allow messages to be instantly sent and received by all participants.
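
A small client-side sketch of that chat scenario, assuming the third-party websockets package and a hypothetical endpoint URL: once the single TCP connection is upgraded, either side can send at any moment, with no per-message request/response cycle.

  import asyncio
  import websockets   # third-party package: pip install websockets

  async def chat():
      # One long-lived connection, upgraded from HTTP to WebSocket at handshake time.
      async with websockets.connect("ws://chat.example.com/ws") as ws:   # hypothetical URL
          await ws.send("hello, room")     # client -> server, whenever we like
          reply = await ws.recv()          # server -> client, pushed without polling
          print("received:", reply)

  asyncio.run(chat())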


3. HTTP vs TCP vs UDP vs FTP vs SMTP (POP, IMAP)

HTTP (Hypertext Transfer Protocol)

  • Layer: Application (Layer 7)

  • Definition: The protocol used for transmitting web pages over the internet.

  • Features: Stateless, request-response model (GET, POST, etc.).

  • Use Case: Web browsing (loading websites), APIs.

  • Example: A user visiting https://www.example.com sends an HTTP request to retrieve the web page.
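
The request-response cycle can be sketched with Python's standard http.client module (www.example.com stands in for any web server). Each request is self-contained, and HTTP itself keeps no session state between requests.

  import http.client

  conn = http.client.HTTPSConnection("www.example.com")
  conn.request("GET", "/")            # one self-contained request
  resp = conn.getresponse()           # one response: status line, headers, body
  print(resp.status, resp.reason)     # e.g. 200 OK
  body = resp.read()
  conn.close()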

TCP (Transmission Control Protocol)

  • Layer: Transport (Layer 4)

  • Definition: A reliable, connection-oriented protocol that ensures the ordered delivery of data between systems.

  • Features: Error-checking, flow control, data segmentation and reassembly.

  • Use Case: File transfers, web browsing, email (where reliability is critical).

  • Example: Downloading a file over the internet where TCP ensures the entire file is received in order and without errors.

UDP (User Datagram Protocol)

  • Layer: Transport (Layer 4)

  • Definition: A connectionless protocol that sends data without ensuring it arrives or is in the correct order.

  • Features: Low overhead, faster than TCP but unreliable.

  • Use Case: Real-time applications like video streaming, online gaming, VoIP.

  • Example: Streaming a video where slight data loss is acceptable to maintain speed.

FTP (File Transfer Protocol)

  • Layer: Application (Layer 7)

  • Definition: A protocol used for transferring files between a client and server.

  • Features: Supports both binary and text file transfers, requires login credentials.

  • Use Case: Uploading or downloading files to/from a server.

  • Example: A user uploading images to a website's server using FTP.
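
A hedged upload sketch with Python's standard ftplib (host, credentials, and file name are placeholders): the client logs in and STOR transfers the file to the server in binary mode.

  from ftplib import FTP

  ftp = FTP("ftp.example.com")                # placeholder server
  ftp.login("user", "password")               # FTP typically requires credentials
  with open("photo.jpg", "rb") as f:
      ftp.storbinary("STOR photo.jpg", f)     # binary upload to the server
  ftp.quit()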

SMTP (Simple Mail Transfer Protocol)

  • Layer: Application (Layer 7)

  • Definition: A protocol used to send emails from a client to a server or between mail servers.

  • Features: Used to send (but not retrieve) emails.

  • Use Case: Sending emails from an email client to a server.

  • Example: Sending an email from yourname@example.com via an SMTP server.
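
A minimal sending sketch with Python's standard smtplib and email modules (host, port, addresses, and password are placeholders). Note that SMTP only pushes the message toward the recipient's mail server; retrieving it is the job of POP or IMAP below.

  import smtplib
  from email.message import EmailMessage

  msg = EmailMessage()
  msg["From"] = "yourname@example.com"
  msg["To"] = "friend@example.com"
  msg["Subject"] = "Hello"
  msg.set_content("Sent via SMTP.")

  # Submission over STARTTLS on port 587 is a common setup, though not universal.
  with smtplib.SMTP("smtp.example.com", 587) as server:
      server.starttls()
      server.login("yourname@example.com", "app-password")
      server.send_message(msg)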

POP (Post Office Protocol)

  • Layer: Application (Layer 7)

  • Definition: A protocol used by email clients to retrieve emails from a mail server.

  • Features: Downloads emails to the client and typically deletes them from the server.

  • Use Case: Accessing emails on a single device (e.g., desktop).

  • Example: Downloading emails to your computer using a mail client like Outlook with POP3.

IMAP (Internet Message Access Protocol)

  • Layer: Application (Layer 7)

  • Definition: A protocol used by email clients to retrieve emails from a mail server.

  • Features: Synchronizes emails across multiple devices without removing them from the server.

  • Use Case: Accessing the same email account from multiple devices.

  • Example: Reading emails on your phone and then seeing the same emails on your desktop, with full synchronization.
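
A retrieval sketch with Python's standard imaplib, assuming a hypothetical IMAP server and credentials. Because the messages stay on the server, every device running this sees the same mailbox state.

  import imaplib

  mail = imaplib.IMAP4_SSL("imap.example.com")        # placeholder server
  mail.login("yourname@example.com", "app-password")
  mail.select("INBOX")                                # messages remain on the server

  status, data = mail.search(None, "UNSEEN")          # IDs of unread messages
  for num in data[0].split():
      status, msg_data = mail.fetch(num, "(RFC822)")  # fetch a copy; the server keeps the original
      print("fetched message", num.decode())

  mail.logout()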

CAP Theorem (Brewer's Theorem)

The CAP theorem states that in a distributed data system, it is impossible to simultaneously achieve all three of the following properties:

  1. Consistency (C): Every read receives the most recent write or an error.

  2. Availability (A): Every request (read or write) receives a non-error response, even if the data returned is not the most recent.

  3. Partition Tolerance (P): The system continues to operate, even if there is a network partition that causes communication failure between nodes.

Key Points of CAP Theorem:

  • A distributed system can only guarantee two out of the three properties at the same time.

  • Partition tolerance (P) is generally a must in distributed systems, as network issues can always happen (e.g., a node or link going down). This means that in practice, you must make a trade-off between consistency (C) and availability (A).

Breakdown of CAP:

1. Consistency (C):

  • All nodes in the distributed system have the same data at the same time.

  • Example: In a consistent system, if you write data to one node and then immediately read it from another node, the read will return the data you just wrote (assuming no errors).

  • Trade-off: May increase response time since the system has to ensure every node has the most recent data before responding.

2. Availability (A):

  • The system is operational and able to respond to any request (read or write), even if the response may not reflect the latest write.

  • Example: In an available system, you can always read or write data, but the data you read may not be the latest.

  • Trade-off: In highly available systems, the system may return stale or outdated data during failures to maintain response times.

3. Partition Tolerance (P):

  • The system can handle network failures or partitions, where communication between nodes is interrupted.

  • Example: A network partition occurs, but the system continues to function and serve requests even though some nodes cannot communicate with each other.


CAP Trade-offs:

  1. CP (Consistency + Partition Tolerance):

    • The system sacrifices availability to ensure that data is always consistent across nodes.

    • If a network partition occurs, some parts of the system may become unavailable until the partition is resolved.

    • Example: Traditional relational databases (e.g., MySQL in a distributed setup) often prioritize CP.

  2. AP (Availability + Partition Tolerance):

    • The system sacrifices strict consistency to ensure that the system remains available even during network partitions.

    • Data may be out of sync between nodes, leading to eventual consistency, meaning data will eventually become consistent once the partition is resolved.

    • Example: NoSQL databases like Cassandra and DynamoDB, which prioritize high availability and partition tolerance.

  3. CA (Consistency + Availability):

    • The system sacrifices partition tolerance, meaning it cannot function if a network partition occurs.

    • This is typically not achievable in large-scale distributed systems, as network partitions are inevitable.

    • Example: A single-node relational database (e.g., PostgreSQL on one server) can offer CA because there is no partition to worry about.


Real-World Examples:

  • CP Example (Consistency + Partition Tolerance):

    • MongoDB in a configuration where consistency is prioritized (with strong consistency settings).

    • HBase, which sacrifices availability to ensure strong consistency across nodes.

  • AP Example (Availability + Partition Tolerance):

    • Cassandra and DynamoDB prioritize availability over consistency, meaning they will return a response even if the data is stale during network issues.

  • CA Example (Consistency + Availability):

    • Relational databases (RDBMS) like MySQL or PostgreSQL in a single-node setup (no network partitions) provide consistency and availability, but they can't scale to distributed systems where partition tolerance becomes a factor.


Microservice Patterns

Microservice patterns are design strategies for building modular, loosely coupled services that work together to form a distributed system. These patterns help in designing microservices to ensure they are scalable, maintainable, and easily deployable. Some key patterns include:

  1. Decomposition Patterns (breaking monoliths into microservices)

  2. Database per Service (each microservice owns its own database)

  3. API Gateway (handles routing, security, and client requests)

  4. Circuit Breaker (to handle failures gracefully; see the sketch after this list)

  5. Service Registry and Discovery (locating services dynamically)
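
The Circuit Breaker pattern (item 4) can be sketched in a few lines; this is a simplified illustration, not a production library. After a configured number of consecutive failures the breaker opens and further calls fail fast instead of hammering a struggling downstream service; after a cool-down it lets one trial call through (half-open) and closes again on success. A caller would wrap each outbound request, e.g. breaker.call(fetch_orders, user_id) for some hypothetical client function.

  import time

  class CircuitBreaker:
      def __init__(self, max_failures=3, reset_after=30.0):
          self.max_failures = max_failures
          self.reset_after = reset_after
          self.failures = 0
          self.opened_at = None        # None means the circuit is closed (healthy)

      def call(self, func, *args, **kwargs):
          if self.opened_at is not None:
              if time.time() - self.opened_at < self.reset_after:
                  raise RuntimeError("circuit open: failing fast")   # protect the downstream service
              self.opened_at = None    # half-open: allow a single trial call
          try:
              result = func(*args, **kwargs)
          except Exception:
              self.failures += 1
              if self.failures >= self.max_failures:
                  self.opened_at = time.time()    # trip the breaker
              raise
          self.failures = 0            # success closes the circuit again
          return result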


Monolithic Architecture

Definition:

A monolithic architecture is a traditional model of software development where the entire application (UI, business logic, database access) is built as a single, tightly coupled unit.

Characteristics:

  • Single Codebase: All functionalities are in one large codebase.

  • Tight Coupling: Components of the application are interdependent.

  • Single Deployment: The entire application is deployed as a single unit.

Example:

An e-commerce application where all the modules (authentication, inventory, payment, shipping) are built and deployed as one large application.


Microservice Architecture

Definition:

In a microservice architecture, an application is built as a collection of small, loosely coupled services, each responsible for a specific piece of business functionality. Each service can be developed, deployed, and scaled independently.

Characteristics:

  • Small, Single-Responsibility Services: Each service is focused on one piece of functionality.

  • Independent Deployment: Services are independently deployable.

  • Technology Heterogeneity: Services can be developed in different programming languages or technologies.

  • Decentralized Data Management: Each microservice typically manages its own database.

Example:

In the same e-commerce example, the application is broken into separate services like User Service, Product Service, Payment Service, Order Service, each independently managed and deployed.


Microservice Phases

To transition to or build a microservice architecture, here are key phases involved:

  1. Monolith Identification and Planning:

    • Identify the boundaries of your monolithic application and plan the transition to microservices by decomposing the monolith.
  2. Service Identification:

    • Identify services that can operate independently with minimal shared dependencies.

    • Ensure each service adheres to the Single Responsibility Principle.

  3. Service Development:

    • Develop the microservices, typically with different teams working on individual services.

    • Implement appropriate communication mechanisms (e.g., REST APIs, messaging).

  4. Database Decoupling:

    • Implement the Database per Service pattern where each microservice manages its own database.
  5. Service Integration:

    • Use patterns like API Gateway, Service Registry and Discovery, Circuit Breaker, and messaging systems for integrating microservices and handling failures (a minimal registry sketch follows this list).
  6. Testing and Monitoring:

    • Implement testing at multiple levels (unit, integration, end-to-end) and monitoring for individual services.
  7. Deployment and Scaling:

    • Use CI/CD pipelines for automating deployment.

    • Services can be independently scaled based on their resource needs.
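
As referenced in phase 5, here is a deliberately tiny in-memory sketch of Service Registry and Discovery (real systems use tools such as Consul, Eureka, or Kubernetes DNS; the names and addresses below are made up): instances register themselves on startup, and callers look peers up by name instead of hard-coding network locations.

  import random

  registry: dict[str, list[str]] = {}     # service name -> live instance addresses

  def register(name: str, address: str) -> None:
      # Called by an instance at startup (and removed on shutdown or failed health checks).
      registry.setdefault(name, []).append(address)

  def discover(name: str) -> str:
      # Called by clients; choosing a random instance gives naive load balancing.
      instances = registry.get(name)
      if not instances:
          raise LookupError(f"no live instances of {name}")
      return random.choice(instances)

  register("order-service", "10.0.0.5:8080")    # hypothetical addresses
  register("order-service", "10.0.0.6:8080")
  print(discover("order-service"))              # e.g. 10.0.0.6:8080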


Decomposition Patterns in Microservices (in detail)

Decomposition patterns are strategies for breaking down a monolithic application into smaller, independent services. There are three main ways to decompose a monolith:

  1. Decompose by Business Capability:

    • Focus on the core business functions and split the monolith based on business capabilities (e.g., order management, payment processing, user management).

    • Each business capability can be encapsulated as its own service.

    • Example: An e-commerce app can be decomposed into services such as User Service, Order Service, Payment Service, Inventory Service.

  2. Decompose by Subdomain (Bounded Context):

    • A bounded context in Domain-Driven Design (DDD) represents a boundary within which a particular model applies.

    • Decomposing the monolith into bounded contexts means each microservice operates in its own subdomain.

    • Example: A CRM system may have subdomains like Sales, Support, and Billing, each managed by separate microservices.

  3. Decompose by Layer:

    • Decompose the monolith by separating its layers (e.g., UI, business logic, data access).

    • Example: Break the monolithic data access logic into a separate microservice that handles database operations.

Challenges:

  • Identifying proper boundaries is difficult and requires careful planning.

  • Dependencies between services need to be managed effectively to avoid tightly coupling microservices.


Advantages and Disadvantages of Monolithic and Microservice Architectures

Monolithic Architecture:

Advantages:

  1. Simpler Development:

    • Easy to develop and test in the early stages since everything is in a single codebase.

    • No inter-service communication issues or network overhead.

  2. Easier Deployment:

    • Entire application is deployed as a single package. No need for complex CI/CD pipelines for multiple services.
  3. Performance:

    • No latency due to network calls between services. All function calls are within the same process.
  4. Simple Debugging:

    • Debugging is simpler because you only need to troubleshoot one application as opposed to multiple services.

Disadvantages:

  1. Tight Coupling:

    • Any small change requires rebuilding and redeploying the entire application.
  2. Scaling:

    • Scaling a monolith often means scaling the entire application, even if only one part (e.g., payment processing) needs more resources.
  3. Large Codebase:

    • Over time, the codebase becomes large and difficult to maintain, making development slow and error-prone.
  4. Technological Lock-in:

    • Harder to adopt new technologies because the entire application is tied to a single technology stack.

Microservice Architecture:

Advantages:

  1. Independent Deployment:

    • Each microservice can be deployed independently, allowing for more frequent releases.
  2. Scalability:

    • Each service can be scaled independently, allowing efficient use of resources. For example, you can scale the Order Service without affecting the Payment Service.
  3. Flexibility in Technology:

    • Microservices can be built using different technology stacks, depending on the use case and team expertise.
  4. Resilience:

    • Failures in one microservice do not necessarily bring down the entire system. Techniques like circuit breakers can be used to handle failures gracefully.
  5. Team Autonomy:

    • Small, cross-functional teams can independently develop, test, and deploy their services, reducing bottlenecks in development.

Disadvantages:

  1. Complexity:

    • Managing a system with many services introduces complexity in terms of deployment, testing, and monitoring.
  2. Inter-Service Communication:

    • Services must communicate over the network (e.g., via HTTP/REST or messaging), which adds latency and complexity. You need to handle issues like network failures and retries.
  3. Data Management:

    • Microservices often require decentralized data management, which can make transactions and data consistency more difficult (e.g., eventual consistency).
  4. Operational Overhead:

    • More services mean more operational overhead, including setting up CI/CD pipelines, managing infrastructure, monitoring, logging, and security across services.

Monolithic vs Microservices: A Summary

Aspect           | Monolithic Architecture                   | Microservice Architecture
Deployment       | Single unit                               | Independent, per service
Development      | Single, large team                        | Small, autonomous teams
Scalability      | Entire application is scaled together     | Independent scaling for each service
Maintenance      | Harder as the application grows           | Easier due to modularity
Flexibility      | Tightly coupled, one tech stack           | Each service can use different technologies
Fault Tolerance  | Entire app can fail due to one component  | Failure in one service doesn't affect others
Data Consistency | Easier to maintain                        | Eventual consistency, harder to maintain
Communication    | Internal calls, no network overhead       | Network calls introduce latency, more complex