AbstractQueuedSynchronizer (AQS) is a framework for building custom synchronizers in Java around a FIFO queue of waiting threads.
To customize lock behavior, implement tryAcquire/tryRelease for exclusive locks or tryAcquireShared/tryReleaseShared for shared locks.
These methods define when a lock can be acquired or released.
The state field tracks the lock's status, and AbstractQueuedSynchronizer automatically manages the queue and wake-up process for waiting threads.
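A minimal sketch of such a synchronizer, assuming a non-reentrant exclusive lock (the class name SimpleMutex is illustrative): the subclass only decides when acquisition succeeds, and AQS does the queuing and wake-up.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Sketch: a non-reentrant exclusive lock built on AQS.
// state == 0 means unlocked, state == 1 means locked.
public class SimpleMutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // Atomically flip state 0 -> 1; on failure AQS parks the thread in its queue.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setExclusiveOwnerThread(null);
            setState(0); // AQS then unparks the next queued thread
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }
    public void unlock()     { sync.release(1); }
    public boolean tryLock() { return sync.tryAcquire(1); }
}
```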
- Return the necessary CORS headers for all requests directly from the server (a sketch for a Spring Boot server follows this list).
- Set up a reverse proxy server (e.g., Nginx) to intercept both preflight and regular requests, adding the necessary CORS headers.
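For the first option, assuming a Spring Boot web application, a minimal sketch using Spring's built-in CorsFilter could look like this; the allowed origin is a placeholder.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;

@Configuration
public class CorsConfig {
    @Bean
    public CorsFilter corsFilter() {
        CorsConfiguration config = new CorsConfiguration();
        config.addAllowedOrigin("https://example.com"); // hypothetical front-end origin
        config.addAllowedHeader("*");
        config.addAllowedMethod("*");

        // Apply the CORS headers to every route; the filter also answers
        // preflight (OPTIONS) requests before they reach the controllers.
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", config);
        return new CorsFilter(source);
    }
}
```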
- Keep all broker replicas in sync and forbid out-of-sync replicas (OSR) from being elected leader: OSR nodes may lack the latest messages, risking data loss if one becomes leader.
server.properties:
unclean.leader.election.enable = false
- Flush messages to disk frequently, so little unflushed data is lost if a broker crashes.
server.properties:
# When the number of messages in a log segment reaches 10000, Kafka forces a flush to disk to persist the data.
log.flush.interval.messages = 10000
# Forces a flush operation after the specified time interval in milliseconds.
log.flush.interval.ms = 1000
# Sets the interval in milliseconds at which Kafka checks whether a flush is needed.
log.flush.scheduler.interval.ms = 3000
- In the Kafka producer, setting acks=all ensures that the producer waits for acknowledgments from all in-sync replicas (ISRs) of the partition before considering a message successfully sent. Additionally, a callback should be set for each sent message to handle success, failure, and retries. The callback provides feedback from the broker, allowing you to log successes or handle delivery failures (e.g., network issues or broker unavailability) with actions such as retries or compensating measures (see the first sketch after this list).
- In the Kafka consumer, setting enable.auto.commit=false and manually committing offsets gives you precise control over when a message counts as processed (see the second sketch after this list). Relying on automatic offset commits can lead to data loss if the consumer crashes after offsets were auto-committed but before the corresponding messages were fully processed.
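A minimal producer sketch with acks=all and a per-message callback; the broker address, topic, and key are placeholder assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReliableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas to acknowledge
        props.put(ProducerConfig.RETRIES_CONFIG, 3);  // retry transient send failures

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "key-1", "hello"); // hypothetical topic
            // The callback fires once the broker responds, or once the send fails for good.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Delivery failed after retries: log and trigger a compensating action.
                    System.err.println("Send failed: " + exception.getMessage());
                } else {
                    System.out.printf("Sent to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any pending sends before returning
    }
}
```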
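And a minimal consumer sketch with auto-commit disabled, committing only after processing; the group id and topic are placeholder assumptions.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-workers");           // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit offsets manually

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // do the real work first...
                }
                // ...then commit, so a crash before this line only causes re-delivery, not loss.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}
```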
- equals() compares objects for equality; by default it checks whether two references point to the same object in memory (same address).
- hashCode() returns a number used to locate the object in hash-based collections.
- Hash-based collections like HashMap and HashSet use hashCode() first to find the bucket, then use equals() to compare keys. If you override equals() but not hashCode(), two equal objects may have different hash codes, causing duplicates in the map or set (see the sketch after this list).
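A minimal sketch of the pitfall; the Point class is illustrative. It overrides equals() but not hashCode(), so two equal points usually land in different HashSet buckets and equals() is never consulted.

```java
import java.util.HashSet;
import java.util.Set;

// Overrides equals() but NOT hashCode(): the classic mistake.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
    // Missing hashCode(); the fix would be:
    // @Override public int hashCode() { return java.util.Objects.hash(x, y); }
}

public class HashPitfall {
    public static void main(String[] args) {
        Set<Point> set = new HashSet<>();
        set.add(new Point(1, 2));
        set.add(new Point(1, 2)); // equals() says duplicate, but identity hash codes differ
        System.out.println(set.size()); // very likely prints 2 instead of the expected 1
    }
}
```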