
Table of Contents

Back to Main Project README

1. How does AbstractQueuedSynchronizer function internally?

Answer

AbstractQueuedSynchronizer (AQS) is a framework for building custom synchronizers in Java. It parks threads that fail to acquire the lock in a FIFO queue of waiting nodes.

To customize lock behavior, implement tryAcquire/tryRelease for exclusive locks or tryAcquireShared/tryReleaseShared for shared locks. These methods define when a lock can be acquired or released.

The state field tracks the lock's status, and AbstractQueuedSynchronizer automatically manages the queue and wake-up process for waiting threads.
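As a sketch of the exclusive path, here is a minimal non-reentrant mutex built on AQS; the class name `SimpleMutex` is hypothetical, and production code would also validate ownership in `tryRelease`:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal non-reentrant mutex: state 0 = unlocked, 1 = locked.
class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // Atomically flip state from 0 to 1; on failure AQS enqueues the thread.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0); // volatile write; AQS then unparks the next queued thread
            return true;
        }

        boolean isLocked() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }
    public void unlock()     { sync.release(1); }
    public boolean isLocked() { return sync.isLocked(); }
}
```

Note that the subclass only defines the acquire/release conditions; queuing, blocking, and wake-up are all inherited from AQS.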

2. How do you solve CORS issues?

Answer

  1. Return the necessary CORS headers for all requests directly from the server.
  2. Set up a reverse proxy server (e.g., Nginx) to intercept both preflight and regular requests, adding the necessary CORS headers.
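For the reverse-proxy approach, a minimal Nginx sketch might look like the following; the allowed origin, header list, and upstream name are hypothetical placeholders to adapt to your deployment:

```nginx
location /api/ {
    # Attach CORS headers to every response, including error responses.
    add_header Access-Control-Allow-Origin  "https://example.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;

    # Answer the preflight request directly instead of forwarding it upstream.
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://backend;  # hypothetical upstream
}
```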

3. How to ensure Kafka messages are not lost?

Answer

  1. Keep all broker replicas in sync. Out-of-sync replicas (OSR) may lack the latest messages, risking data loss if one is elected as the leader, so disable unclean leader election to ensure only in-sync replicas can become the leader.

    server.properties:

    unclean.leader.election.enable = false
  2. Flush messages to disk at controlled intervals.

    server.properties:

    # When the number of messages in a log segment reaches 10000, Kafka forces a flush to disk to persist the data.
    log.flush.interval.messages = 10000
    # Forces a flush operation after the specified time interval in milliseconds. 
    log.flush.interval.ms = 1000
    # Sets the interval in milliseconds at which Kafka checks whether a flush is needed. 
    log.flush.scheduler.interval.ms = 3000
  3. In the Kafka producer, setting acks=all ensures that the producer waits for acknowledgments from all in-sync replicas (ISRs) of the partition before considering the message successfully sent.

    Additionally, a callback function should be set for each sent message to handle success, failure, and retries.
    The callback provides a way to receive feedback from the broker, allowing you to log successes or handle any delivery failures (e.g., network issues or broker unavailability) with appropriate actions such as retries or compensatory measures.
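The producer settings above can be sketched as plain client configuration; the broker address is a placeholder, and the commented-out `send` call shows the callback shape when `kafka-clients` is on the classpath:

```java
import java.util.Properties;

// Producer configuration sketch for at-least-once delivery.
class ProducerConfigSketch {
    static Properties durableProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all");                // wait for all in-sync replicas
        props.put("retries", "3");               // retry transient send failures
        props.put("enable.idempotence", "true"); // avoid duplicates on retry
        return props;
    }

    // With kafka-clients available, each send would register a callback:
    // producer.send(record, (metadata, exception) -> {
    //     if (exception != null) {
    //         // log the failure and retry or take compensatory action
    //     }
    // });
}
```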

  4. In the Kafka consumer, setting enable.auto.commit=false and manually committing the message offset ensures greater control over message processing.

    • Relying on automatic offset commits can lead to data loss if the consumer crashes before committing the latest offsets.
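The consumer side can be sketched the same way; the group id and broker address are placeholders, and the commented-out loop shows where the manual commit belongs:

```java
import java.util.Properties;

// Consumer configuration sketch with manual offset commits.
class ConsumerConfigSketch {
    static Properties manualCommitProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "example-group");           // placeholder group
        props.put("enable.auto.commit", "false"); // commit only after processing
        return props;
    }

    // With kafka-clients available, the poll loop commits after processing:
    // var records = consumer.poll(Duration.ofMillis(100));
    // process(records);          // business logic first
    // consumer.commitSync();     // offsets advance only once processing succeeds
}
```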

4. Why must you override hashCode() if you override equals()?

Answer

  1. equals() compares objects for logical equality; by default it only checks whether two references point to the same object in memory.
    hashCode() returns an integer used to locate the object in hash-based collections.

  2. Hash-based collections like HashMap and HashSet use hashCode() first to find the bucket, then use equals() to compare keys.
    If you override equals() but not hashCode(), two equal objects may have different hash codes, causing duplicates in the map or set.
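A short example of keeping the two methods consistent; the `Point` class is hypothetical:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Value class overriding both equals() and hashCode(), so hash-based
// collections treat logically equal instances as the same key.
final class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Must agree with equals(): equal points produce equal hashes.
        return Objects.hash(x, y);
    }
}
```

With both methods overridden, adding `new Point(1, 2)` to a HashSet twice leaves a single element; without hashCode(), the two instances would usually land in different buckets and both be stored.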