WIP: Fix crash in pipeline environment #115
vogtp wants to merge 1 commit into logstash-plugins:main from
Conversation
Hi, I can't accept this PR, for 2 main reasons:
That said, given your error and your explanations, your problem seems to be that you have (at least) two different pipelines with the same "aggregate_maps_path" setting. So, at Logstash startup, the first pipeline loads the file and then deletes it; the second pipeline then tries to load the file but fails, because it has been deleted by the first one in the meantime. The good solution is to set a different "aggregate_maps_path" in each pipeline.
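The startup race described above can be sketched in plain Ruby (file path and map contents here are illustrative, not taken from the plugin):

```ruby
require 'tmpdir'

# Sketch of the race: two pipelines share one aggregate_maps_path.
# The first pipeline loads the saved maps and deletes the file;
# the second pipeline then has nothing left to load.
path = File.join(Dir.tmpdir, "aggregate_maps_demo")
File.binwrite(path, Marshal.dump({ "task1" => { "count" => 1 } }))

# Pipeline 1 registers: restore the maps, then delete the file.
maps = File.open(path, "rb") { |f| Marshal.load(f) }
File.delete(path)

# Pipeline 2 registers: the file is already gone, so its load fails.
second_load_possible = File.exist?(path)
puts maps.inspect          # the maps restored by pipeline 1
puts second_load_possible  # false
```

With distinct `aggregate_maps_path` values, each pipeline reads and deletes its own file, and the race disappears.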
Hi
I never expected this crap to be merged.
I completely agree.
Again agreed, but does it work in any multi-pipeline environment other than mine?
Correct. I have 4 active pipelines correlating the syslog output of a 4-node mail cluster.
That's where I and the support case started.

```
ruby {
  code => '
    event.set("[aggregateCache]", "/var/tmp/logstash_aggregate_" + execution_context.pipeline.pipeline_id)
  '
}
aggregate {
  task_id => "%{connection_id}"
  code => "map['pipeline_id'] = execution_context.pipeline.pipeline_id"
  push_map_as_event_on_timeout => false
  timeout_task_id_field => "connection_id"
  timeout => 6000 # 10 minutes timeout
  #aggregate_maps_path => "/var/tmp/aggregate"   # version 1: only one file created -> seems not to work
  #aggregate_maps_path => "%{aggregateCache}"    # version 2: file does not get created
  aggregate_maps_path => [aggregateCache]        # version 3: file does not get created
}
```

I would like to get around the bug I am hitting...
Regards
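A likely reason versions 2 and 3 never create a file is that `aggregate_maps_path` is read once at plugin registration, before any event exists, so event field references like `%{aggregateCache}` cannot be resolved there. Under that assumption, the maintainer's suggested fix is a distinct literal path per pipeline config (path and file names below are illustrative):

```
# smtp11.conf -- one distinct literal path per pipeline
filter {
  aggregate {
    task_id => "%{connection_id}"
    code => "map['pipeline_id'] = execution_context.pipeline.pipeline_id"
    timeout => 6000
    aggregate_maps_path => "/var/tmp/aggregate_smtp11"
  }
}
```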
Hi
I am trying to save the maps of the aggregate filter in a Logstash setup that uses 8 pipelines.
But the plugin crashes with the following exception:
```
[2021-06-18T10:32:25,956][ERROR][logstash.javapipeline    ][smtp11] Pipeline error {:pipeline_id=>"smtp11", :exception=>#<TypeError: nil is not a string>, :backtrace=>[
  "org/jruby/RubyMarshal.java:138:in `load'",
  "/opt/logstash-7.10.2/vendor/bundle/jruby/2.5.0/gems/logstash-filter-aggregate-2.9.2/lib/logstash/filters/aggregate.rb:132:in `block in register'",
  "org/jruby/RubyIO.java:1158:in `open'",
  "/opt/logstash-7.10.2/vendor/bundle/jruby/2.5.0/gems/logstash-filter-aggregate-2.9.2/lib/logstash/filters/aggregate.rb:132:in `block in register'",
  "org/jruby/ext/thread/Mutex.java:164:in `synchronize'",
  "/opt/logstash-7.10.2/vendor/bundle/jruby/2.5.0/gems/logstash-filter-aggregate-2.9.2/lib/logstash/filters/aggregate.rb:97:in `register'",
  "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'",
  "org/jruby/RubyArray.java:1809:in `each'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:586:in `maybe_setup_out_plugins'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:240:in `start_workers'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'",
  "/opt/logstash-7.10.2/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"
], "pipeline.sources"=>[...], :thread=>"#<Thread:0x153f3631 run>"}
```

(See Elastic Case: https://support.elastic.co/customers/s/case/5004M00000i7jHN)
Appending the pipeline ID to the file name prevents this crash.
My preferred solution would be to add a flag that defaults to true only when more than one pipeline is in use, but I have no clue how to do this.
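The workaround mentioned above can be sketched as a small helper (hypothetical name; the actual PR wires this into the plugin's register/close logic):

```ruby
# Hypothetical helper illustrating the idea behind this PR:
# make the persisted map file unique per pipeline by appending
# the pipeline id to the configured aggregate_maps_path.
def per_pipeline_maps_path(configured_path, pipeline_id)
  "#{configured_path}_#{pipeline_id}"
end

puts per_pipeline_maps_path("/var/tmp/aggregate", "smtp11")
# => /var/tmp/aggregate_smtp11
```

Each pipeline then reads and deletes only its own file at startup, so no pipeline can delete another pipeline's maps.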
Regards
Patrick