First patch for SpillableChannel, capturing feedback.
Review Request #9544 - Created Feb. 21, 2013 and discarded
Initial patch for capturing feedback on SpillableChannel.

Open Issues:

a) Config & Overflow Specification: The patch currently allows configuration (as shown below) for selecting any other channel as the overflow. The current config subsystem doesn't support this mechanism well. To make this happen, I have modified the config subsystem: a post-config step is introduced that enables a channel to obtain a reference to another configured channel by name. One suggested alternative has been to hard-code the file channel as the overflow channel at compile time. The downside is that if a new SSD-optimized channel is introduced in the future, users of the Spillable channel would not be able to take advantage of it.

b) Max Transaction Size: This implementation requires a config param, maxTransactionBatchSize, which must be at least the largest batchSize setting used by the sinks and sources attached to this channel. This helps a good deal in optimizing the internal implementation of put() and take(). Consequently, unlike the transactionCapacity used in other channels, this value should be much smaller and close to the actual batchSize settings of the sources/sinks. Hari had mentioned some concerns in this regard, which I think had to do with not all sources being able to know their batchSize upfront. I'm not totally clear on why that is the case; would appreciate more insight.

Notes:

a) Basic algorithm: This implementation avoids copying events between the in-memory queue and the overflow, for better performance. At put() time, a decision is made to place the entire transaction either in memory or in overflow, based on maxTransactionBatchSize and the slots available in the memory queue. Another queue (the 'drain order queue') is used to memorize the order in which the events were inserted. For example, the values +1000, -200, +400, -100 in the drain order queue indicate that the first 1000 events are in memory, the next 200 (the negative entry) are in overflow, the next 400 are in memory, and so on. This is used at take() time to drain events in the right order directly from memory or overflow, avoiding copying back and forth between memory and overflow. Any put/take transaction is restricted to either memory or overflow; allowing it to operate on both would violate the correctness of rollback/commit by requiring nested transactions.

b) Performance: Currently my measurements show decent performance improvements over the MemoryChannel when there is no overflow occurring (between 7% and 70%, depending on the number of sources/sinks attached). For the case where overflow does occur, I don't have many measurements to provide right now; this will also depend on what we do with point a).

Sample configuration:

# Name the components on this agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = spillChannel1 fChannel

agent1.sources.source1.type = SEQ
agent1.sources.source1.batchSize = 10
agent1.sources.source1.channels = spillChannel1

agent1.sinks.sink1.type = null
agent1.sinks.sink1.channel = spillChannel1
agent1.sinks.sink1.batchSize = 10

agent1.channels.spillChannel1.type = org.apache.flume.channel.SpillableMemoryChannel
agent1.channels.spillChannel1.maxTransactionBatchSize = 10
agent1.channels.spillChannel1.overflowChannel = fChannel
agent1.channels.spillChannel1.memoryCapacity = 100     # memory only
agent1.channels.spillChannel1.totalCapacity = 10000    # memory + overflow
agent1.channels.spillChannel1.keep-alive = 3

agent1.channels.fChannel.type = file
agent1.channels.fChannel.checkpointDir = /tmp/fchannel/checkpoint
agent1.channels.fChannel.dataDirs = /tmp/fchannel/data
agent1.channels.fChannel.keep-alive = 0
agent1.channels.fChannel.capacity = 10000
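To illustrate the drain-order bookkeeping described above, here is a minimal, hypothetical Java sketch (the class and method names are mine, not the patch's): positive entries record a run of events stored in memory, negative entries a run in overflow, and take() always drains the head run first so events leave in insertion order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the drain-order queue (not the patch's actual class).
// Positive entries = a run of events in memory; negative = a run in overflow.
public class DrainOrderQueueSketch {
  private final Deque<Integer> queue = new ArrayDeque<>();

  public void putInMemory(int count)   { push(+count); }
  public void putInOverflow(int count) { push(-count); }

  private void push(int signedRun) {
    Integer tail = queue.peekLast();
    if (tail != null && Integer.signum(tail) == Integer.signum(signedRun)) {
      queue.pollLast();
      queue.addLast(tail + signedRun);  // merge adjacent runs from the same store
    } else {
      queue.addLast(signedRun);
    }
  }

  // Drain up to 'want' events from the head run. Returns a signed count:
  // positive = take that many from memory, negative = from overflow, 0 = empty.
  public int takeUpTo(int want) {
    if (queue.isEmpty()) return 0;
    int head = queue.pollFirst();
    int available = Math.abs(head);
    int taken = Math.min(want, available);
    int sign = Integer.signum(head);
    if (taken < available) queue.addFirst(sign * (available - taken));
    return sign * taken;
  }
}
```

With the example runs from the description (+1000, -200, +400), successive takes drain 1000 events from memory, then 200 from overflow, then 400 from memory, preserving insertion order without ever copying events between the two stores.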
Tests have been included in this patch.
Open issue (Mike Percy, March 6, 2013, 6:14 a.m.): "This hook violates Flume's isolation of managed components (of the same role) from each other. This could be a big ..."
Review request changed
Updated (Feb. 21, 2013, 8:40 p.m.)
Posted (March 6, 2013, 6:14 a.m.)
Hi Roshan,

First off, thanks for the patch! I have some high-level feedback and some requests:

1. I'd like to suggest a different channel composition mechanism. Today, all channels are given a chance to access all other active channels in the postConfigure() hook; see my comments below about that. It would be better to specify the underlying channel types via configuration of the spillable channel and instantiate those underlying implementations in the configure() method. start() and stop() can start and stop the underlying implementation as they do today.

2. What happens when the Flume agent is stopped and then restarted while events are in the channel? To what extent does this channel recover events?

3. Could you provide a summary of the design you went with here? Just some kind of small design doc or explanation, since on the JIRA we discussed multiple directions. Specifically, I'm interested in the algorithm being used (i.e. with the drain order queue) and the expected behavior under boundary conditions: full channels, permanently failed channels (bad disk), and concurrency under mixed success/failure.

4. What assumptions are made / invariants are required for the underlying channels?
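The composition mechanism suggested in point 1 could look roughly like the following self-contained sketch. Every name here (LifecycleChannel, FileChannelStub, the overflowChannelType key) is a hypothetical stand-in, not a real Flume type; the point is only that the wrapper instantiates and owns its overflow channel from its own configuration, rather than receiving a reference to another managed channel through a postConfigure() hook.

```java
import java.util.Map;

// Hypothetical minimal channel lifecycle, standing in for Flume's real interfaces.
interface LifecycleChannel {
  void configure(Map<String, String> context);
  void start();
  void stop();
  boolean isStarted();
}

// Stand-in for a file-backed overflow channel.
class FileChannelStub implements LifecycleChannel {
  private boolean started;
  public void configure(Map<String, String> context) { /* no-op in this sketch */ }
  public void start() { started = true; }
  public void stop()  { started = false; }
  public boolean isStarted() { return started; }
}

public class SpillableChannelSketch implements LifecycleChannel {
  private LifecycleChannel overflow;
  private boolean started;

  public void configure(Map<String, String> context) {
    // Instantiate the overflow implementation privately from a type key in
    // this channel's own config, instead of looking up a shared channel by name.
    String type = context.getOrDefault("overflowChannelType", "file");
    if ("file".equals(type)) {
      overflow = new FileChannelStub();
    } else {
      throw new IllegalArgumentException("unknown overflow type: " + type);
    }
    overflow.configure(context);
  }

  // The wrapper owns the underlying channel's lifecycle.
  public void start() { overflow.start(); started = true; }
  public void stop()  { started = false; overflow.stop(); }
  public boolean isStarted() { return started; }
  public LifecycleChannel getOverflow() { return overflow; }
}
```

Because the overflow instance is private to the wrapper, no other component of the same role can reach it, which addresses the isolation concern raised in the open issue.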
Review request changed
Updated (July 18, 2013, 6:54 p.m.)
- changed from pending to discarded