failed to flush chunk

Describe the bug

Fluent Bit's Elasticsearch output keeps logging "failed to flush chunk". The Elasticsearch bulk response shows every document in the batch rejected with HTTP 400:

  {"took":3473,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}

The same mapper_parsing_exception repeats for every rejected document (other _ids in the batch, e.g. 5OMmun8BI6SaBP9luq99 and zeMnun8BI6SaBP9lo-jn, fail with the identical reason). The records that were accepted are visible in Kibana, but the rejected ones never show up.

Environment: Infrastructure: Kubernetes; Deployment tool: helm.

Representative debug output around a flush attempt:

  [2022/03/24 04:20:25] [debug] [upstream] KA connection #102 to 10.3.4.84:9200 has been assigned (recycled)
  [2022/03/25 07:08:28] [debug] [http_client] not using http_proxy for header
  [2022/03/25 07:08:21] [debug] [input chunk] update output instances with new chunk size diff=633
  [2022/03/24 04:20:25] [debug] [outputes.0] task_id=2 assigned to thread #1
  [2022/03/24 04:20:36] [debug] [retry] re-using retry for task_id=0 attempts=4
  [2022/03/25 07:08:30] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 is now available

A separate report describes a similar symptom on a different stack: a pretty basic Loki setup writing to a Cassandra backend, where Loki just isn't creating any chunks.
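The rejections above are a mapping conflict: kubernetes.labels.app was first indexed as a plain text value, so Elasticsearch cannot later treat app as an object when a dotted label such as app.kubernetes.io/instance arrives (dots in field names imply nested objects). One common mitigation is the es output's Replace_Dots option, which rewrites dots in key names to underscores so the label no longer collides. A minimal sketch (the Host value is a placeholder, and Trace_Error is optional but prints the failing bulk response):

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            elasticsearch.example.local   # placeholder
    Port            9200
    Logstash_Format On
    Replace_Dots    On    # app.kubernetes.io/instance -> app_kubernetes_io/instance
    Trace_Error     On    # log the Elasticsearch error response on failure
```

Alternatively, the conflicting index (or its template) can be changed so the field mappings no longer clash; Replace_Dots only prevents the collision for newly ingested records.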
A minimal configuration showing the same retry behavior (this variant uses a kafka output; the es case is configured the same way):

  [SERVICE]
      Flush        5
      Daemon       Off
      Log_Level    ${LOG_LEVEL}
      Parsers_File parsers.conf
      Plugins_File plugins.conf
      HTTP_Server  On
      HTTP_Listen  0.0.0.0
      HTTP_Port    2020

  [INPUT]
      Name dummy
      Rate 1
      Tag  dummy.log

  [OUTPUT]
      Name  stdout
      Match *

  [OUTPUT]
      Name          kafka
      Match         *
      Brokers       ${BROKER_ADDRESS}
      Topics        bit
      Timestamp_Key @timestamp
      Retry_Limit   false

Write_Operation upsert was also tried on the es output (left commented out). When a flush fails, the engine schedules a retry with a growing delay:

  [2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920205.172447077.flb', retry in 912 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)

Log loss with "failed to flush chunk" was also seen on fluent-bit 1.6.10. The tail input is picking up files normally:

  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] 4 new files found on path '/var/log/containers/.log'
  [2022/03/25 07:08:50] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
Others report the same symptom:

- "My fluentbit (td-agent-bit) fails to flush chunks: [engine] failed to flush chunk '3743-1581410162.822679017.flb', retry in 617 seconds: task_id=56, input=systemd.1 > output=es.0. This is the only log entry that shows up."
- "I am getting these errors while trying to send the logs of my apps, running on an ECS Fargate cluster, to Elastic Cloud."

In the debug log, successful flushes (HTTP Status=200) are interleaved with failing ones; in these messages, "es.0" is the name or alias of the output instance:

  [2022/03/25 07:08:44] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
  [2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192119.62045721.flb', retry in 18 seconds: task_id=13, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:46] [ warn] [engine] failed to flush chunk '1-1648192118.5008496.flb', retry in 21 seconds: task_id=12, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/24 04:20:00] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 25 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:49] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
  [2022/03/24 04:19:49] [debug] [retry] re-using retry for task_id=1 attempts=3
The tail input itself is healthy; when the output is stuck, new records can no longer be appended:

  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] scanning path /var/log/containers/.log
  [2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=34055641 removing file name /var/log/containers/hello-world-bjfnf_argo_main-0b26876c79c5790bdaf62ba2d9512269459746b1c5711a6445256dc5a4930b65.log
  [2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1885019 removing file name /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log
  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/helm-install-traefik-j2ncv_kube-system_helm-4554d6945ad4a135678c69aae3fb44bf003479edc450b256421a51ce68a37c59.log, inode 622082
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 events: IN_ATTRIB

A wrong endpoint produces the same retry loop with an explicit HTTP 404 (the 400 variant is discussed in "fluent bit giving 400 with elastic search" on Stack Overflow):

  [2022/03/18 11:23:17] [ warn] [engine] failed to flush chunk '1-1647602596.725620402.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/18 11:23:17] [error] [output:es:es.0] HTTP status=404 URI=/_bulk, response: {"error":"404 page .

Related failure modes from other stacks:

- fluentd (1.4.2, elasticsearch-plugin 7.1.0, elasticsearch 7.1.0): a single oversized event makes its chunk unflushable outright: next_retry=2019-01-27 19:00:14 -0500 error_class="ArgumentError" error="Data too big (189382 bytes), would create more than 128 chunks!" plugin_id="object:3fee25617fbc". Because of this, cache memory grows and td-agent fails to deliver messages to Graylog.
- TLS: "TLS error: unexpected EOF" (fluent/fluent-bit issue #6165).
- The es output can also fail before parsing the response at all: [2022/03/24 04:20:06] [error] [outputes.0] could not pack/validate JSON response

After fixing the mapping conflict: "I don't see the previous index error; that's good :)."
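The fluentd "Data too big ... would create more than 128 chunks!" failure is governed by the buffer section's size limits. A sketch of the relevant knobs (host and path are placeholders, and the values are illustrative assumptions, not tuned recommendations):

```
<match **>
  @type elasticsearch
  host elasticsearch.example.local   # placeholder
  port 9200
  <buffer>
    @type file
    path /var/log/td-agent/buffer/es   # placeholder
    chunk_limit_size 8MB     # one event must fit comfortably inside a chunk
    total_limit_size 512MB   # cap total buffered data so the buffer cannot grow unbounded
    retry_max_interval 30
  </buffer>
</match>
```

Raising chunk_limit_size (or trimming the oversized event upstream) is the usual way out; capping total_limit_size keeps a stuck output from exhausting memory or disk.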
Bug Report. Logging continues in pod fluent-bit-84pj9:

  [2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920930.175942635.flb', retry in 11 seconds: task_i.
  [2022/03/25 07:08:37] [ warn] [engine] failed to flush chunk '1-1648192108.829100670.flb', retry in 16 seconds: task_id=7, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-6lqzf_argo_main-5f73e32f330b82717357220ce404309cd9c3f62e1d75f241f74cbc3086597fa4.log
  [2022/03/24 04:19:22] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
  [2022/03/25 07:08:41] [debug] [out coro] cb_destroy coro_id=14

Note that when Retry_Limit is set to no_retries, retries are disabled and the Scheduler drops the chunk after its first failed flush. The problem persisted after redeploying with the updated values.yaml. Related issues: "Fluentbit gets stuck [multiple issues]" (#3581) and "Chunk cannot be retried, failed to flush chunk" (#5916) on GitHub; see also "failed to flush the buffer fluentd" on Stack Overflow. What versions are you using?
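The retry messages above are driven by the output's Retry_Limit setting. A sketch of the three modes on the es output (the configurations in this report use False):

```
[OUTPUT]
    Name        es
    Match       *
    Retry_Limit False        # retry forever, as in the configs quoted here
    # Retry_Limit 5          # give up on a chunk after 5 failed retries
    # Retry_Limit no_retries # disable retries: drop the chunk on first failure
```

Retrying forever is safe only if the failure is transient; for a permanent rejection like the mapping conflict here, the same chunk will fail on every attempt until the mapping is fixed.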
The failing cycle repeats while files are rotated and deleted underneath:

  [2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
  [2022/03/25 07:08:41] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
  [2022/03/25 07:08:31] [debug] [retry] re-using retry for task_id=4 attempts=2
  [2022/03/25 07:08:32] [debug] [task] created task=0x7ff2f183a2a0 id=10 OK
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048905 removing file name /var/log/containers/hello-world-ctlp5_argo_wait-f817c7cb9f30a0ba99fb3976757b495771f6d8f23e1ae5474ef191a309db70fc.log
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=1931990 file has been deleted: /var/log/containers/hello-world-swxx6_argo_main-8738378bea8bd6d3dfd18bf8ef2c5a5687c900539317432114c7472eff9e63c2.log
  [2022/03/25 07:08:37] [debug] [retry] re-using retry for task_id=7 attempts=2
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104226845 file has been deleted: /var/log/containers/hello-world-dsfcz_argo_wait-3a9bd9a90cc08322e96d0b7bcc9b6aeffd7e5e6a71754073ca1092db862fcfb7.log

Every rejected bulk item carries the same reason: mapper_parsing_exception, "Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."
Restarting helps only temporarily: the output always starts working again after a restart. Meanwhile the input keeps rotating files normally:

  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=69464185 file has been deleted: /var/log/containers/hello-world-ctlp5_argo_main-276b9a264b409e931e48ca768d7a3f304b89c6673be86a8cc1e957538e9dd7ce.log
  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=1756313 removing file name /var/log/containers/hello-world-7mwzw_argo_wait-970c00b906c36cb89ed77fe3fa3cd1abc2702078fee737da0062d3b25680bf9c.log
  [2022/03/25 07:08:49] [debug] [task] created task=0x7ff2f183b380 id=19 OK
  [2022/03/25 07:08:46] [debug] [retry] re-using retry for task_id=12 attempts=2

Another failing bulk response, with the same rejection reason for every item:

  {"took":2217,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"yeMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}

Please check my YAML for the input and output plugins of Fluent Bit.
Another report: fluentbit fails to communicate with fluentd. The recurring question is the same: why do I get the "failed to flush chunk" error in fluent-bit? (Compare the fluentd-side error "buffer space has too many data" on Stack Overflow.) One warning stands out, showing the HTTP client refusing to grow its response buffer past the configured cap:

  [2022/03/24 04:20:36] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000

  [2022/03/25 07:08:22] [debug] [retry] new retry created for task_id=4 attempts=1
  [2022/03/25 07:08:44] [debug] [retry] new retry created for task_id=17 attempts=1
  [2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:42] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY

All other settings in values.yaml were kept at their defaults.
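The "cannot increase buffer: ... max=512000" warning suggests the output's HTTP response buffer is capped; a bulk response larger than the cap cannot be read back, which can then surface as "could not pack/validate JSON response". A sketch of lifting the cap, assuming the es output's Buffer_Size option:

```
[OUTPUT]
    Name        es
    Match       *
    Buffer_Size False   # no limit on the HTTP response buffer
    # Buffer_Size 1MB   # or raise the cap to a fixed size instead
```

An unlimited buffer trades memory for the guarantee that large error responses (like the per-document rejections here) can always be parsed and logged.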
  [2022/03/24 04:21:08] [debug] [retry] re-using retry for task_id=1 attempts=5
  [2022/03/22 03:57:47] [ warn] [engine] failed to flush chunk '1-1647920384.178898202.flb', retry in 181 seconds: task_id=190, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=35359369 file has been deleted: /var/log/containers/hello-world-swxx6_argo_wait-dc29bc4a400f91f349d4efd144f2a57728ea02b3c2cd527fcd268e3147e9af7d.log
  [2022/03/25 07:08:49] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
  [2022/03/25 07:08:50] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

One theory: "Under this scenario what I believe is happening is that the buffer is filled with junk but Fluent ." With Retry_Limit False the chunk is retried indefinitely; if that fixes it, we can close this issue. A similar report against a different backend: "Fluentbit to Splunk HEC forwarding issue" (#2150 on GitHub).
More of the failing cycle, with the output plugin in use (Name es):

  [2022/03/25 07:08:46] [debug] [outputes.0] task_id=12 assigned to thread #1
  [2022/03/25 07:08:30] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
  [2022/03/25 07:08:49] [debug] [retry] re-using retry for task_id=16 attempts=2
  [2022/03/25 07:08:50] [debug] [out coro] cb_destroy coro_id=21
  [2022/03/25 07:08:51] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
  [2022/03/25 07:08:40] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY

  {"took":3354,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"MeMmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
Several Loki-side reports show adjacent failure modes:

- Running the Loki 1.2.0 Docker image, Loki reports it cannot write chunks to disk ("no space left on device") although there appears to be plenty of space; if the volume is not mounted, the link fails to resolve.
- A cleanup script itself worked great, but the container configuration did not: Loki's PSP enforces non-root execution, which caused crond to fail (it must elevate to run the job). The workaround was a simple while loop that wraps the script execution with a 1m sleep.
- Credentials: "NoCredentialProviders", grafana/loki issue #483.

Back on the Fluent Bit side, note that the HTTP status can be zero when the response is unparseable (the source uses atoi()), yet flb_http_do() still returns successfully. Meanwhile the engine keeps cycling:

  [2022/03/25 07:08:49] [debug] [outputes.0] task_id=16 assigned to thread #0
  [2022/03/25 07:08:50] [debug] [task] created task=0x7ff2f183b560 id=20 OK
  [2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log
  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_blog add(): dismissed: /var/log/containers/local-path-provisioner-7ff9579c6-mcwsb_kube-system_local-path-provisioner-47a630b5c79ea227664d87ae336d6a7b80fdce7028230c6031175099461cd221.log, inode 444123
The es output is configured without TLS ("no tls required for es") and with Logstash_Format On. When the response cannot be parsed, the output reports:

  [2022/03/24 04:19:54] [error] [outputes.0] could not pack/validate JSON response

  [2022/03/25 07:08:51] [debug] [outputes.0] task_id=21 assigned to thread #0
  [2022/03/24 04:19:21] [debug] [task] created task=0x7f7671e387c0 id=2 OK
  [2022/03/25 07:08:30] [ warn] [engine] failed to flush chunk '1-1648192109.839317289.flb', retry in 8 seconds: task_id=8, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:31] [debug] [out coro] cb_destroy coro_id=6

  {"took":2414,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"juMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
After reallocating resources, the two OOMKilled fluentd pods turned out to have logged many "failed to flush the buffer" errors beforehand. The same "failed to flush chunk" error was reproduced on fluent-bit 1.8.12, 1.8.15, and 1.9.0:

  [2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920894.173241698.flb', retry in 58 seconds: task_id=700, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 events: IN_ATTRIB
  [2022/03/25 07:08:31] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
  [2022/03/24 04:20:20] [debug] [input:tail:tail.0] inode=103386716 removing file name /var/log/containers/hello-world-6lqzf_argo_main-5f73e32f330b82717357220ce404309cd9c3f62e1d75f241f74cbc3086597fa4.log

The rejected bulk items (e.g. _ids AuMmun8BI6SaBP9l_8rZ, YuMnun8BI6SaBP9lLtm1, ZOMnun8BI6SaBP9lLtm1) all carry the same reason: "Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."
  [2022/03/25 07:08:27] [debug] [retry] re-using retry for task_id=2 attempts=2
  [2022/03/24 04:21:08] [ warn] [engine] failed to flush chunk '1-1648095560.254537600.flb', retry in 108 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
  [2022/03/25 07:08:51] [debug] [http_client] not using http_proxy for header
  [2022/03/25 07:08:49] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY

The corresponding bulk rejection (_id NeMmun8BI6SaBP9luqVq) again reads: mapper_parsing_exception, "Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."
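The growing delays in the retry messages ("retry in 8 / 108 / 912 seconds") come from the engine's exponential-backoff scheduler. In recent Fluent Bit versions the backoff can be bounded from the [SERVICE] section; a sketch, assuming the scheduler.base and scheduler.cap options are available in the version in use:

```
[SERVICE]
    Flush          5
    Log_Level      debug   # surfaces the [retry]/[task] messages quoted above
    scheduler.base 5       # base wait before the first retry (seconds)
    scheduler.cap  300     # upper bound on the retry delay
```

Capping the delay keeps a transiently failing output from parking chunks for many minutes, but it does not help with permanent rejections like the mapping conflict, which fail on every attempt regardless of timing.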

