Kafka Usecases 2


We discussed some of the use cases of Apache Kafka in the previous post. We will continue to explore other use cases in this post.

Metrics:

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.
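
For example, a service can publish its operational counters to a metrics topic with a plain Kafka producer, and downstream consumers can aggregate those per-host feeds into a central dashboard. A minimal sketch in Java, assuming a broker at localhost:9092 and a topic named app-metrics (both hypothetical names):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class MetricsPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key = host name, value = a simple metric reading; consumers of this
                // topic can aggregate the per-host readings into a centralized feed.
                producer.send(new ProducerRecord<>("app-metrics", "host-1", "requests_per_sec=1250"));
            }
        }
    }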

Log Aggregation:

Log aggregation means collecting the physical log files from the servers and placing them in a centralized location, either on a file server or in the Hadoop Distributed File System (HDFS), for processing.

Log management plays a vital role in an IT infrastructure, and Kafka offers performance comparable to log aggregation tools such as Scribe or Flume.
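
Instead of shipping whole log files, each application server can publish individual log lines to a topic, and a separate consumer can write them to HDFS or a file server. A minimal producer-side sketch, assuming a hypothetical app-logs topic, an example log file path, and a broker at localhost:9092:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Properties;
    import java.util.stream.Stream;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LogShipper {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
                 Stream<String> lines = Files.lines(Paths.get("/var/log/app/app.log"))) { // example path
                // Each log line becomes one record; keying by host lets a downstream
                // consumer group lines per server when writing them out to HDFS.
                lines.forEach(line ->
                    producer.send(new ProducerRecord<>("app-logs", "host-1", line)));
            }
        }
    }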

Stream processing:

Stream processing is an important use case of Kafka, in which the raw data from a topic is aggregated, enriched, or transformed into new topics for further processing/consumption.

Stream processing is used effectively in many fields to improve the user experience. Some of its applications are:

  • Stock market surveillance
  • Smart device applications
  • Geofencing/vehicle tracking
  • Sports tracking

Kafka Streams is the library Apache Kafka provides to achieve stream processing (a minimal example follows below). Some of the organizations using Kafka Streams are:

  • The New York Times
  • Pinterest
  • Rabobank
  • Line
  • Trivago

More details are available at https://kafka.apache.org/documentation/streams/
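
The sketch below shows a minimal Kafka Streams topology that reads records from a raw topic, transforms them, and writes the results to a new topic for further consumption. The topic names raw-events and enriched-events, the application id, and the broker address are assumptions for illustration:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class EnrichmentApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "enrichment-app");    // assumed application id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> raw = builder.stream("raw-events");
            // Transform each raw record and publish it to a new topic.
            raw.mapValues(value -> value.toUpperCase())
               .to("enriched-events");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

The transformation here only upper-cases values to keep the sketch short; real applications would enrich records with lookups, join streams, or aggregate them into windows.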


Event Sourcing:

Event Sourcing is a style of application design in which every state change of an application is captured and maintained in a time-ordered sequence of records.

The idea behind Event Sourcing is to record each state change in an event object, in the same order in which the changes occur, so that the current state can always be rebuilt by replaying those events.

Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.
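
As an illustration (not taken from the Kafka documentation), every state change of an account can be appended as an event keyed by the account ID. The account-events topic and the event payloads below are hypothetical:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AccountEventLog {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Every state change is appended as an event in the order it happened.
                // Records with the same key go to the same partition, so per-account
                // ordering is preserved and the state can be rebuilt by replaying them.
                producer.send(new ProducerRecord<>("account-events", "account-42", "AccountOpened{balance=0}"));
                producer.send(new ProducerRecord<>("account-events", "account-42", "MoneyDeposited{amount=100}"));
                producer.send(new ProducerRecord<>("account-events", "account-42", "MoneyWithdrawn{amount=30}"));
            }
        }
    }

Replaying the topic from the beginning with a consumer reproduces the same sequence of state changes, which is exactly what an event-sourced application needs in order to rebuild its current state.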

Commit log:

In simple terms, a commit log is a record of transactions that keeps track of the latest events/commits in a large distributed system. Every commit is recorded in the log by a central system before it is applied, so the log can be used to detect failed transactions and to retry or resume in-flight operations that did not complete.

Kafka can act as an external commit log for a distributed system. The log compaction feature in Kafka supports this usage (Apache BookKeeper is another project that serves a similar purpose).

Log compaction ensures that the last known value for each record key within a topic is retained. In other words, for a series of events in a distributed system, Kafka keeps the last accepted state for every key, so in case of a crash or failure a node can restore its state from that point.
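
A compacted topic is simply a topic created with cleanup.policy=compact. The sketch below uses the Java AdminClient to create one; the topic name state-changelog, the partition/replication counts, and the broker address are assumptions:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CreateCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

            try (AdminClient admin = AdminClient.create(props)) {
                // cleanup.policy=compact tells the broker to keep at least the latest
                // value for every key instead of deleting old segments purely by time or size.
                NewTopic topic = new NewTopic("state-changelog", 1, (short) 1)
                        .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }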


Let's look at some useful admin activities and commands in the next post...


