Spark Structured Streaming: Micro-batch execution model. Treats a stream as an unbounded, continuously appended table. Familiar DataFrame/SQL APIs.
from pyspark.sql.functions import window

df = (spark.readStream.format('kafka')
      .option('kafka.bootstrap.servers', 'localhost:9092')  # required option (placeholder broker)
      .option('subscribe', 'events')                        # required option (placeholder topic)
      .load())
result = df.groupBy(window('timestamp', '5 minutes')).count()
Apache Flink: True record-at-a-time streaming. Typically lower latency than Spark's micro-batches. Strong complex event processing (CEP) support. Steeper learning curve.
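The micro-batch vs. true-streaming distinction can be made concrete with a toy sketch (plain Python, not Flink or Spark code; a running sum stands in for any aggregation):

```python
# Micro-batch style (Spark): output is produced only when a batch completes.
def micro_batch(events, batch_size):
    batch, outputs = [], []
    for e in events:
        batch.append(e)
        if len(batch) == batch_size:
            outputs.append(sum(batch))  # one result per completed batch
            batch = []
    return outputs

# Record-at-a-time style (Flink): an updated result can be emitted per event,
# so downstream consumers see it without waiting for a batch boundary.
def per_record(events):
    total, outputs = 0, []
    for e in events:
        total += e
        outputs.append(total)  # one result per event
    return outputs
```

With the same input, `per_record` emits after every event while `micro_batch` emits only at batch boundaries, which is the intuition behind Flink's latency advantage.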
Kafka Streams: A library, not a cluster framework. Runs inside your own application process (JVM). Good for simple transformations close to Kafka.
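Kafka Streams itself is a Java library, but the "runs in your application" idea is easy to sketch in Python: the processing topology is just code you embed in your own process, transforming key-value records one at a time (the record data here is made up for illustration):

```python
# Conceptual stand-in for a Kafka Streams-style mapValues transformation:
# a per-record pipeline embedded in the application, no separate cluster.
def uppercase_values(records):
    for key, value in records:
        yield key, value.upper()

out = list(uppercase_values([('k1', 'hello'), ('k2', 'world')]))
```

In real Kafka Streams the equivalent is a one-liner on a `KStream` (e.g. `mapValues`), and scaling out means simply running more instances of the application.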
For interviews, know Spark Structured Streaming basics. Flink knowledge is a bonus for senior roles. Understand windowing concepts (tumbling, sliding, session) regardless of framework.
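The core windowing idea is framework-agnostic and worth being able to reproduce from scratch: a tumbling window assigns each event to exactly one window by flooring its timestamp to the window size. A minimal sketch (timestamps in seconds, 5-minute windows):

```python
from collections import defaultdict

def tumbling_window_counts(timestamps, window_seconds=300):
    """Count events per 5-minute tumbling window, keyed by window start."""
    counts = defaultdict(int)
    for ts in timestamps:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)
```

This is exactly what `groupBy(window('timestamp', '5 minutes')).count()` computes in Spark; sliding windows differ only in that one event can land in several overlapping windows.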