Merged
content/docs/logging-infrastructure/fluentd.md (5 additions, 1 deletion)
@@ -96,7 +96,9 @@ kubectl get fluentdconfig example -o jsonpath='{.status}' | jq .
}
```

-If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a `FluentdConfig` is already registered to a `Logging` resource and you create another `FluentdConfig` resource in the same namespace, then the first `FluentdConfig` is left intact, while the second one should have the following status:
+If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. The previously associated `FluentdConfig` continues to operate normally, and log forwarding remains uninterrupted while you resolve the excess configuration.
Contributor Author

Added clarification that the aggregator keeps running, based on PR #2232, which fixes the bug where creating a second `FluentdConfig` would tear down the running aggregator. The fix in `repository.go` preserves the previously associated configuration recorded in `Logging.Status.FluentdConfigName`.

Source: kube-logging/logging-operator#2232


+For example, if a `FluentdConfig` is already registered to a `Logging` resource and you create another `FluentdConfig` resource in the same namespace, the first `FluentdConfig` is left intact and its aggregator keeps running, while the second one should have the following status:

```shell
kubectl get fluentdconfig example2 -o jsonpath='{.status}' | jq .
@@ -125,6 +127,8 @@ kubectl get logging example -o jsonpath='{.status}' | jq .
}
```
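The conflict scenario described above can be reproduced with two `FluentdConfig` resources in the same namespace. A minimal sketch, assuming the `logging.banzaicloud.io/v1beta1` API group used by the operator and a hypothetical `logging` namespace (the `example`/`example2` names match the status commands above):

```yaml
# First FluentdConfig: already registered to the Logging resource,
# so its aggregator keeps running.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  name: example
  namespace: logging    # hypothetical namespace
spec: {}                # details elided; any valid spec works here
---
# Second FluentdConfig in the same namespace: the controller rejects
# this one and records the Conflict problem in its status.
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentdConfig
metadata:
  name: example2
  namespace: logging
spec: {}
```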

+To resolve this conflict, delete the excess `FluentdConfig` resource. The active aggregator will continue running throughout.
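The resolution step can be sketched as a pair of commands, assuming the resource names from the status examples above:

```shell
# Delete the excess FluentdConfig; the active aggregator is untouched.
kubectl delete fluentdconfig example2

# Afterwards, the remaining config should report no problems.
kubectl get fluentdconfig example -o jsonpath='{.status}' | jq .
```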

## Custom pvc volume for Fluentd buffers

content/docs/logging-infrastructure/syslog-ng.md (5 additions, 1 deletion)
@@ -99,7 +99,9 @@ kubectl get syslogngconfig example -o jsonpath='{.status}' | jq .
}
```

-If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. For example, if a `syslogNGConfig` is already registered to a `Logging` resource and you create another `syslogNGConfig` resource in the same namespace, then the first `syslogNGConfig` is left intact, while the second one should have the following status:
+If there is a conflict, the controller adds a problem to both resources so that both the operations team and the tenant users can notice the problem. The previously associated `SyslogNGConfig` continues to operate normally, and log forwarding remains uninterrupted while you resolve the excess configuration.
Contributor Author

Added clarification that the aggregator keeps running, based on PR #2232, which fixes the bug where creating a second `SyslogNGConfig` would tear down the running aggregator. The fix in `repository.go` preserves the previously associated configuration recorded in `Logging.Status.SyslogNGConfigName`.

Source: kube-logging/logging-operator#2232


+For example, if a `SyslogNGConfig` is already registered to a `Logging` resource and you create another `SyslogNGConfig` resource in the same namespace, the first `SyslogNGConfig` is left intact and its aggregator keeps running, while the second one should have the following status:

```shell
kubectl get syslogngconfig example2 -o jsonpath='{.status}' | jq .
@@ -128,6 +130,8 @@ kubectl get logging example -o jsonpath='{.status}' | jq .
}
```
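The conflicting pair described above can be sketched as two `SyslogNGConfig` resources in one namespace. A minimal illustration, assuming the `logging.banzaicloud.io/v1beta1` API group used by the operator and a hypothetical `logging` namespace:

```yaml
# Already registered to the Logging resource; its aggregator keeps running.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGConfig
metadata:
  name: example
  namespace: logging    # hypothetical namespace
spec: {}                # details elided
---
# Second config in the same namespace: ends up in the Conflict state.
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGConfig
metadata:
  name: example2
  namespace: logging
spec: {}
```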

+To resolve this conflict, delete the excess `SyslogNGConfig` resource. The active aggregator will continue running throughout.
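To sketch the resolution, assuming the resource names from the status examples above:

```shell
# Remove the conflicting SyslogNGConfig; the registered one is unaffected.
kubectl delete syslogngconfig example2

# The remaining config should then report no problems.
kubectl get syslogngconfig example -o jsonpath='{.status}' | jq .
```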

## Volume mount for buffering

The following example sets a volume mount that syslog-ng can use for buffering messages on the disk (if {{% xref "/docs/configuration/plugins/syslog-ng-outputs/disk_buffer.md" %}} is configured in the output).