Store remote segment metadata in RocksDB #66
ying-zheng wants to merge 5 commits into harshach:tiered-storage from
Conversation
satishd left a comment
Thanks @ying-zheng for the PR. Left a comment on avoiding the in-memory map of <topic-partition, <offset, segment-id>>.
Cache cache;
RLSMSerDe.RLSMSerializer serializer = new RLSMSerDe.RLSMSerializer();
RLSMSerDe.RLSMDeserializer deserializer = new RLSMSerDe.RLSMDeserializer();
private Map<TopicPartition, NavigableMap<Long, RemoteLogSegmentId>> partitionsWithSegmentIds =
Keeping this as an in-memory map will not scale to a large number of partitions and log segments. It is better to keep them in RocksDB. One possibility is to build a key like <topic-partition>:<offset>; we need to check whether that would scale with a large number of partitions and segments.
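For illustration, a composite key could be encoded along these lines (the RemoteSegmentKeys helper and the exact byte layout are hypothetical, not part of this PR; the point is that a big-endian offset suffix keeps RocksDB's lexicographic order aligned with offset order within a partition):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of a <topic-partition>:<offset> composite key.
// Big-endian, fixed-width offsets make byte order match numeric order,
// so all segments of one partition sort by start offset in RocksDB.
final class RemoteSegmentKeys {

    static byte[] key(String topic, int partition, long startOffset) {
        byte[] prefix = prefix(topic, partition);
        ByteBuffer buf = ByteBuffer.allocate(prefix.length + Long.BYTES);
        buf.put(prefix);
        buf.putLong(startOffset);   // ByteBuffer is big-endian by default
        return buf.array();
    }

    // Prefix shared by all segment entries of one topic-partition.
    static byte[] prefix(String topic, int partition) {
        return (topic + "-" + partition + ":").getBytes(StandardCharsets.UTF_8);
    }
}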
Updated the PR, moved the map into RocksDB.
satishd left a comment
Thanks @ying-zheng for the update. Left a comment about deserializing and creating a map for every update/get operation.
public void update(TopicPartition tp, RemoteLogSegmentMetadata metadata) {
    try {
        WriteBatch batch = new WriteBatch();
        final NavigableMap<Long, RemoteLogSegmentId> segmentIds = getSegmentIds(tp);
Deserializing and building a map for every get or update may be costly. Did you consider searching with a prefix-based key instead of realizing all the entries and building a map?
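For example, something like the following sketch (RemoteSegmentKeys is the hypothetical key helper sketched above, not code in this PR), which finds the segment covering an offset with a single seek instead of materializing the per-partition map:

import java.util.Arrays;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

// Sketch: look up the segment covering targetOffset by seeking to the
// greatest key <= <topic-partition>:<targetOffset>, then checking that the
// hit still belongs to the same partition prefix.
final class RemoteSegmentLookup {

    static byte[] findSegmentValue(RocksDB db, String topic, int partition, long targetOffset) {
        byte[] prefix = RemoteSegmentKeys.prefix(topic, partition);
        byte[] seekKey = RemoteSegmentKeys.key(topic, partition, targetOffset);
        try (RocksIterator it = db.newIterator()) {
            it.seekForPrev(seekKey);                 // floor lookup on the sorted key space
            if (it.isValid() && hasPrefix(it.key(), prefix)) {
                return it.value();                   // deserialize only this one entry
            }
        }
        return null;                                 // offset precedes the first remote segment
    }

    private static boolean hasPrefix(byte[] key, byte[] prefix) {
        return key.length >= prefix.length
            && Arrays.equals(Arrays.copyOfRange(key, 0, prefix.length), prefix);
    }
}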
We also need to check whether the current model causes high contention, since we have a single DB instance at each broker for all the topic-partition updates it receives from the RLMM topic.
For each topic-partition, the segment-id map / list is accessed in 2 cases:
- Metadata updates, when a new segment is shipped to remote storage or a segment is deleted.
- When a consumer tries to consume a remote segment of the topic-partition.
Depending on the configuration, case 1 happens every few minutes to every couple of hours. Case 2 only happens when a consumer is consuming very old data. Neither case happens very frequently, so the performance shouldn't be a problem.
As long as the data is cached in memory, serializing / deserializing should be pretty fast (compared with serializing / deserializing the corresponding Kafka request / response messages, and with the network latency).
We can also split the remote-segment list into smaller pieces, so that appends only happen on the last piece and deletes only happen on the first piece. But when we look up an offset, we still need an index to find the segment containing that offset and the "next" offset after a given offset. This means we will need an index over the offsets. If the index is also stored in RocksDB, the data structure would be something like a B+ tree.
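For reference, the two index operations described above are the floor/higher lookups the current in-memory NavigableMap already gives us; any RocksDB layout would have to answer the same two queries (the segment names and offsets below are made up for illustration):

import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustration of the two lookups: "which segment contains offset X" and
// "what is the next start offset after X", on a startOffset -> segmentId map.
public class OffsetIndexExample {
    public static void main(String[] args) {
        NavigableMap<Long, String> segmentIds = new ConcurrentSkipListMap<>();
        segmentIds.put(0L, "segment-0");
        segmentIds.put(1000L, "segment-1");
        segmentIds.put(2000L, "segment-2");

        long offset = 1500L;

        // 1. Segment containing the offset: greatest start offset <= offset.
        Map.Entry<Long, String> containing = segmentIds.floorEntry(offset);  // 1000 -> "segment-1"

        // 2. "Next" offset: smallest start offset strictly greater than offset.
        Long nextStart = segmentIds.higherKey(offset);                       // 2000

        System.out.println(containing + ", next start offset = " + nextStart);
    }
}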
Ser/des may not be an issue, but creating transient data like map instances may put pressure on minor GC.
The RLMM on any broker can be subscribed to a large number of remote log segment metadata partitions in a cluster and may have to process all of those events. We can discuss/explore different approaches once we run a perf test on this standalone store with a large ingress of messages onto the remote log segment metadata topic.