Mastering Periodic Polling and Streaming for Systems Design

A Jenkins job polls a source repository to build the code periodically. Kafka streams records between applications and systems such as Postgres or Amazon S3.

We often use periodic polling and streaming in our day-to-day jobs, and it’s also crucial to know what these concepts mean for systems design.

In Jenkins, we often configure polling to build our apps periodically. Kafka uses streaming, providing a distributed and fault-tolerant platform that allows applications to publish, subscribe to, and process streams of records in real time.

What Is Periodic Polling?

Polling is a technique in software development where a client periodically checks a server or a data source for updates or changes. Here are some advantages of using polling in software development:

Simplicity: Polling doesn’t require complex event-driven architectures or real-time streaming infrastructure; it can be implemented easily using standard HTTP requests or similar communication protocols.

Compatibility: Polling can be used in various environments and technologies. It is compatible with most programming languages and frameworks, and it can work with various types of data sources, such as databases, APIs, or file systems.

Control over Request Frequency: With polling, you control the frequency of requests made to the server or data source. You can adjust the polling interval based on the specific requirements of your application. This flexibility allows you to balance the trade-off between real-time updates and the load on the server.

Support for Non-Real-Time Updates: Polling suits scenarios where real-time updates are not critical. If the data or information you are retrieving doesn’t need to be immediately up-to-date, polling can be an effective solution. It allows you to retrieve updates at a reasonable interval without the need for continuous streaming or event-driven architectures.

Lower Complexity on the Server-Side: Polling puts less burden on the server or data source than real-time streaming or event-driven systems. The server only needs to respond to requests when polled by the client rather than maintaining persistent connections or handling continuous data streams.

Compatibility with Caching: The client can cache the response received from the server during a polling request and use it for subsequent requests until new data is available. Caching can reduce server load and improve performance by serving responses from the cache instead of making frequent requests.
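The caching idea above can be sketched with a version tag, analogous to an HTTP ETag with `If-None-Match`. All names here (`FakeServer`, `PollingClient`) are illustrative stand-ins, not a real HTTP client:

```python
# Sketch of cache-friendly polling: the client remembers a version tag
# (like an HTTP ETag) and the server sends a body only when data changed.
class FakeServer:
    """Stands in for an HTTP endpoint supporting conditional requests."""
    def __init__(self):
        self.version = 1
        self.data = "initial payload"

    def get(self, if_none_match=None):
        if if_none_match == self.version:
            return 304, None, self.version        # Not Modified: no body sent
        return 200, self.data, self.version       # full response + new tag

class PollingClient:
    def __init__(self, server):
        self.server = server
        self.etag = None
        self.cached = None

    def poll(self):
        status, body, etag = self.server.get(if_none_match=self.etag)
        if status == 200:                          # data changed: refresh cache
            self.cached, self.etag = body, etag
        return self.cached                         # 304 path: serve from cache

server = FakeServer()
client = PollingClient(server)
print(client.poll())   # first poll: full fetch
print(client.poll())   # unchanged: served from the local cache
server.data, server.version = "updated payload", 2
print(client.poll())   # change detected: cache refreshed
```

Between changes, the server does almost no work and no payload crosses the wire, which is where the bandwidth savings come from.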

It’s important to note that polling also has some limitations. It can introduce latency as the client needs to wait for the next polling interval to receive updates. It can also result in unnecessary network traffic if updates are infrequent or the client polls too frequently. In scenarios where real-time updates are critical, or network efficiency is a concern, alternative approaches like streaming or event-driven architectures may be more appropriate.

In general, polling can be a simple and effective solution for scenarios where real-time updates are not required, and periodic checks for updates are sufficient.

What Problems Does Streaming Solve?

Streaming solves several problems in software engineering, particularly in the context of data processing and real-time applications. Here are some critical problems that streaming helps address:

Real-time data processing: Traditional batch processing approaches have limitations when processing and analyzing data in real-time. Streaming allows for continuous data processing as it arrives, enabling real-time insights and actions. Streaming is crucial in applications that require immediate processing and response, such as fraud detection, monitoring systems, or real-time analytics.

Scalability: Streaming architectures provide scalability by distributing data processing across multiple nodes or partitions. Instead of processing data centrally, streaming systems can scale horizontally by adding more processing nodes as the workload increases. Scaling allows applications to handle large volumes of data and accommodate fluctuating traffic or demand without sacrificing performance.

Low-latency processing: Streaming systems enable low-latency processing by reducing data ingestion and processing time. Unlike batch processing, which typically operates on fixed intervals or large datasets, streaming processes data incrementally as it arrives. This near real-time processing minimizes the delay between data generation and analysis, making it suitable for applications that require immediate or near-immediate responses.

Continuous data integration: Streaming facilitates the integration of diverse data sources in real-time. It allows for the seamless ingestion and processing of data from various systems, devices, or sensors, enabling a unified view of the data. Streaming is valuable in scenarios where data is generated from multiple sources and needs to be processed and analyzed together, such as IoT applications or real-time monitoring systems.

Fault tolerance and resilience: Streaming systems are designed to handle failures and ensure data integrity. They often provide data replication, fault detection, and automatic recovery mechanisms. In the event of a failure, streaming systems can continue processing data without significant interruptions, minimizing the impact on applications and maintaining data consistency.

Event-driven architecture: Streaming enables event-driven architectures, allowing applications to respond to real-time events. Instead of relying on polling or manual triggers, streaming systems deliver events to subscribing applications in real time. This approach improves responsiveness, agility, and scalability, as applications can process events as they occur. Streaming technologies like Apache Kafka, Apache Flink, and Amazon Kinesis are commonly used for event-driven architectures, enabling seamless communication and integration between components in distributed systems.
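The event-driven model above can be illustrated with a toy in-memory publish/subscribe bus: producers publish records to a topic, and subscribers are invoked as each record arrives instead of polling for changes. This is only a sketch of the idea; real systems like Kafka add persistence, partitioning, and fault tolerance on top of it:

```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub bus: handlers run as soon as an event is published."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, record):
        for handler in self.subscribers[topic]:
            handler(record)   # delivered immediately, as the event occurs

bus = EventBus()
seen = []
bus.subscribe("payments", seen.append)
# A second subscriber reacting in real time, e.g. a fraud-style alert:
bus.subscribe("payments", lambda r: print("alert!") if r["amount"] > 1000 else None)

bus.publish("payments", {"amount": 250})
bus.publish("payments", {"amount": 5000})   # triggers the alert handler
print(seen)
```

Note the inversion of control compared with polling: the consumers never ask "is there anything new?"; the bus pushes each record to them the moment it exists.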

Technologies That Use Periodic Polling

Various technologies and frameworks use periodic polling to check for updates or perform tasks at regular intervals. Some examples include:

HTTP Polling: Web applications often use periodic HTTP polling to check for updates or fetch new data from a server at regular intervals. Clients request the server at predetermined intervals to retrieve the latest information.

Database Polling: In some scenarios, applications may use periodic polling to check for new or updated records in a database. This check can be helpful in tasks such as data synchronization or detecting changes in a database table.

Messaging Systems: Consumers of messaging systems like RabbitMQ or Apache ActiveMQ can periodically poll a queue or topic to check for new messages, retrieving and processing any that have arrived since the last check.

Job Scheduling: Job scheduling frameworks like Quartz or cron-based systems use periodic polling to execute scheduled tasks at predefined intervals. The scheduler periodically checks for pending jobs and triggers their execution based on the configured schedule.

Monitoring and Health Checks: Systems monitoring tools or health check frameworks often employ periodic polling to check the status or health of various components or services. These checks run at regular intervals to ensure the system is functioning correctly.

These are just a few examples of technologies that utilize periodic polling to check for updates, perform tasks, or monitor resources at regular intervals. The specific use cases and implementation details can vary depending on the requirements of the application or system being developed.
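The database-polling pattern mentioned above can be sketched with SQLite: each poll queries only for rows created since the last one, tracked by the highest id seen so far. The table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

def poll_new_rows(conn, last_seen_id):
    """Fetch rows inserted since the last poll, keyed on an increasing id."""
    rows = conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    new_last = rows[-1][0] if rows else last_seen_id
    return rows, new_last

last_id = 0
conn.execute("INSERT INTO events (payload) VALUES ('a'), ('b')")
rows, last_id = poll_new_rows(conn, last_id)
print(rows)          # both rows are new on the first poll

rows, last_id = poll_new_rows(conn, last_id)
print(rows)          # nothing inserted between polls -> empty result

conn.execute("INSERT INTO events (payload) VALUES ('c')")
rows, last_id = poll_new_rows(conn, last_id)
print(rows)          # only the row inserted since the previous poll
```

In production, the cursor would typically be a timestamp or sequence column with an index, and the poll would run on the scheduler's interval.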

How Does Streaming Work?

Streaming can be facilitated through socket connections. Here’s a simplified explanation of how streaming works using socket-based communication:

Server-Side Setup: The streaming process begins with setting up a server that listens for incoming connections. The server establishes a socket, a communication endpoint, and binds it to a specific IP address and port.

Client-Side Connection: On the client side, a connection is established to the server by creating a socket and providing the server’s IP address and port number. The client socket connects to the server socket, forming a connection between the two.

Data Transmission: Once the connection is established, data can be transmitted from the server to the client in a streaming fashion. The server can continuously send data in smaller chunks or packets to the client over the open socket connection.

Data Reception: On the client side, the socket continuously receives the incoming data packets as the server transmits them. The client can process or consume the received data immediately without waiting for the entire data set.

Real-time Processing: As the client receives the data packets, it can perform real-time processing on the received data. The processing can include tasks such as data analysis, transformations, storage, or any other application-specific operations.

Continuous Communication: The data transmission and reception between the server and the client continue in a constant and ongoing manner. The server keeps sending data packets, and the client keeps receiving and processing them as they arrive.
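The steps above can be sketched with a connected socket pair: one thread plays the server and streams data in small chunks, while the main thread plays the client and processes each chunk as it arrives rather than waiting for the whole payload. This is a minimal illustration, not a production server:

```python
import socket
import threading

# socket.socketpair() gives us two already-connected sockets, standing in
# for the bind/listen/connect steps described above.
server_sock, client_sock = socket.socketpair()

def server():
    for chunk in (b"chunk-1|", b"chunk-2|", b"chunk-3|"):
        server_sock.sendall(chunk)     # continuous transmission in small pieces
    server_sock.close()                # closing signals end of stream

threading.Thread(target=server).start()

received = []
while True:
    data = client_sock.recv(16)
    if not data:                       # empty read: the server closed the stream
        break
    received.append(data)              # real-time processing would happen here
client_sock.close()

print(b"".join(received).decode())
```

Note that TCP gives no chunk boundaries: the client may receive the data split differently than it was sent, which is why streaming protocols layer their own framing on top.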

Socket-based communication provides a reliable and efficient way to establish a streaming connection between a server and a client. It allows for bidirectional communication, enabling both data transmission from the server to the client and potential feedback or responses from the client to the server.

It’s important to note that while socket-based communication is a common method for implementing streaming, there are also other technologies and protocols specifically designed for streaming, such as HTTP-based streaming protocols (e.g., HLS or DASH) or message queue systems (e.g., Apache Kafka or RabbitMQ). The choice of technology depends on the specific requirements and characteristics of the streaming application.

What Is the Use Case of Streaming?

Streaming is well-suited for various use cases in software engineering where real-time data processing, continuous data integration, or immediate response is required. Here are some common use cases where streaming is beneficial:

Real-time Analytics: Streaming is valuable for real-time analytics applications where data is continuously generated, and immediate insights are needed. Examples include monitoring systems, fraud detection, stock market analysis, social media sentiment analysis, and network traffic analysis. Streaming enables the processing and analysis of incoming data as it arrives, allowing organizations to make timely decisions based on up-to-date information.

Internet of Things (IoT): Streaming is essential in IoT applications, where many devices generate a continuous stream of sensor data. Streaming enables real-time processing and analysis of IoT data, enabling applications like smart home automation, industrial monitoring and control, predictive maintenance, or environmental monitoring. By processing IoT data in real-time, organizations can respond to events, trigger actions, or detect anomalies promptly.

Log Processing and Monitoring: Streaming is valuable for processing and analyzing log data in real-time. Applications can continuously stream logs from various sources, such as servers, applications, or network devices, and perform real-time analysis to detect errors, performance issues, or security breaches. Streaming log data allows organizations to proactively monitor systems, identify problems, and take immediate corrective actions.

Real-time Collaboration and Messaging: Streaming is useful in building real-time collaboration and messaging applications. Examples include chat applications, collaborative document editing, multiplayer gaming, or video conferencing. Streaming enables instant message delivery, real-time updates, and synchronized communication between multiple users or devices.

Event-driven Architectures: Streaming is foundational for event-driven architectures, where applications respond to events or triggers in real-time. Streaming allows applications to process and react to events as they occur, enabling event-driven workflows, notifications, or automated actions based on specific events. Event-driven architectures are relevant in domains like workflow automation, event processing systems, or event-driven microservices.

Continuous Data Integration: Streaming is beneficial for integrating and processing data from multiple sources in real-time. For example, streaming can capture data from various systems, databases, or APIs in a data pipeline and perform real-time transformations, aggregations, or enrichments. Streaming gives organizations a unified and up-to-date view of their data, crucial for applications like real-time dashboards, data synchronization, or data-driven decision-making.

These are just a few examples of how streaming can be applied in software engineering. The suitability of streaming depends on the application’s specific requirements, the need for real-time processing, and the nature of the data being handled.
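The incremental-processing idea running through these use cases can be shown with a tiny example: a moving average computed over a stream one element at a time, emitting a result as soon as each value arrives, instead of loading the whole dataset the way a batch job would. The sensor feed here is simulated:

```python
from collections import deque

def moving_average(stream, window=3):
    """Yield a rolling average as each value arrives from the stream."""
    buf = deque(maxlen=window)         # only the window is kept in memory
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)      # a result per element, not per batch

readings = iter([10, 20, 30, 40])      # stand-in for a live sensor feed
averages = list(moving_average(readings))
print(averages)   # [10.0, 15.0, 20.0, 30.0]
```

The same shape (bounded state, one output per input) is what frameworks like Flink or Spark Streaming generalize with windowing, checkpointing, and distribution.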

Technologies that Use Streaming

Several technologies and frameworks leverage streaming as a fundamental concept for real-time processing and delivering data. Here are some notable examples:

Apache Kafka: Kafka is a distributed streaming platform that provides a high-throughput, fault-tolerant, and scalable solution for handling real-time data streams. It is widely used for building event-driven architectures, real-time analytics, and data pipeline applications.

Apache Flink: Flink is an open-source stream processing framework that enables distributed, fault-tolerant, and low-latency data processing of streaming data. It supports event time processing, windowing, state management, and complex event processing.

Apache Spark Streaming: Spark Streaming is a component of the Apache Spark ecosystem that allows processing and analyzing real-time data streams. It provides high-level abstractions for stream processing and integrates with Spark’s batch processing, machine learning, and graph processing capabilities.

Amazon Kinesis: Amazon Kinesis is a managed service by Amazon Web Services (AWS) for handling real-time streaming data at scale. It offers capabilities to collect, process, and analyze streaming data from various sources, such as logs, IoT devices, and clickstreams.

Apache Storm: Storm is a distributed real-time computation system for stream processing and real-time analytics. It provides fault-tolerant processing of streams with scalable and reliable data processing capabilities.

Confluent Platform: Confluent Platform is built around Apache Kafka and provides an enterprise-ready streaming platform. It offers additional features and tooling for managing and monitoring Kafka clusters and integrating them with other data systems.

Microsoft Azure Stream Analytics: Azure Stream Analytics is a real-time analytics service on the Microsoft Azure cloud platform. It enables processing and analyzing streaming data from various sources, including IoT devices, social media, and logs.

These technologies empower developers to efficiently handle and process real-time data streams, enabling use cases such as real-time analytics, event-driven architectures, IoT data processing, and more.

Summary of Polling and Streaming

Both polling and streaming are crucial concepts when designing systems: tools like Jenkins use polling, and Kafka uses streaming, for example. Once we master these concepts, it’s much easier to understand the tools that implement them. To remember the key points, let’s recap:

Polling:

  • Periodically checking a server or data source for updates or changes.
  • Simple approach.
  • Compatibility with various technologies.
  • Control over request frequency.
  • Suitable for non-real-time updates.
  • Places less burden on the server side.
  • Works well with caching mechanisms.

Streaming:

  • Continuous flow of data in real-time.
  • Data is transmitted and processed incrementally.
  • Enables real-time processing and analysis.
  • Suitable for real-time analytics, IoT, log processing, and collaboration applications.
  • Supports event-driven architectures.
  • Provides immediate insights and actions.
  • Requires specialized infrastructure and protocols.
  • Handles large volumes of data.
  • Enables synchronized communication and real-time collaboration.
Written by Rafael del Nero