6 changes: 3 additions & 3 deletions docs/content.zh/docs/dev/datastream/testing.md
Original file line number Diff line number Diff line change
@@ -112,7 +112,7 @@ public class StatefulFlatMapTest {
//instantiate user-defined function
statefulFlatMapFunction = new StatefulFlatMapFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
testHarness = new OneInputStreamOperatorTestHarness<>(new StreamFlatMap<>(statefulFlatMapFunction));

// optionally configured the execution environment
@@ -158,7 +158,7 @@ public class StatefulFlatMapFunctionTest {
//instantiate user-defined function
statefulFlatMapFunction = new StatefulFlatMapFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
testHarness = new KeyedOneInputStreamOperatorTestHarness<>(new StreamFlatMap<>(statefulFlatMapFunction), new MyStringKeySelector(), Types.STRING);

// open the test harness (will also call open() on RichFunctions)
@@ -204,7 +204,7 @@ public class PassThroughProcessFunctionTest {
//instantiate user-defined function
PassThroughProcessFunction processFunction = new PassThroughProcessFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
OneInputStreamOperatorTestHarness<Integer, Integer> harness = ProcessFunctionTestHarnesses
.forProcessFunction(processFunction);

docs/content.zh/docs/ops/debugging/debugging_classloading.md
@@ -54,7 +54,7 @@ Flink应用程序运行时，JVM会随着时间不断加载各种不同的类。

If you package a Flink job/application such that your application treats Flink like a library (JobManager/TaskManager daemons as spawned as needed),
then typically all classes are in the *application classpath*. This is the recommended way for container-based setups where the container is specifically
-created for an job/application and will contain the job/application's jar files.
+created for a job/application and will contain the job/application's jar files.

-->

2 changes: 1 addition & 1 deletion docs/content.zh/docs/ops/events.md
@@ -28,7 +28,7 @@ under the License.

# Events

-Flink exposes a event reporting system that allows gathering and exposing events to external systems.
+Flink exposes an event reporting system that allows gathering and exposing events to external systems.

## Reporting events

2 changes: 1 addition & 1 deletion docs/content.zh/docs/sql/reference/ddl/create.md
@@ -230,7 +230,7 @@ CREATE TABLE MyTable (

Metadata columns are an extension to the SQL standard and allow to access connector and/or format specific
fields for every row of a table. A metadata column is indicated by the `METADATA` keyword. For example,
-a metadata column can be be used to read and write the timestamp from and to Kafka records for time-based
+a metadata column can be used to read and write the timestamp from and to Kafka records for time-based
operations. The [connector and format documentation]({{< ref "docs/connectors/table/overview" >}}) lists the
available metadata fields for every component. However, declaring a metadata column in a table's schema
is optional.
2 changes: 1 addition & 1 deletion docs/content.zh/release-notes/flink-2.2.md
@@ -116,7 +116,7 @@ BINARY and VARBINARY should now correctly consider the target length.

##### [FLINK-38209](https://issues.apache.org/jira/browse/FLINK-38209)

-This is considerable optimization and an breaking change for the StreamingMultiJoinOperator.
+This is considerable optimization and a breaking change for the StreamingMultiJoinOperator.
As noted in the release notes, the operator was launched in an experimental state for Flink 2.1
since we're working on relevant optimizations that could be breaking changes.

2 changes: 1 addition & 1 deletion docs/content/docs/connectors/table/formats/raw.md
@@ -58,7 +58,7 @@ CREATE TABLE nginx_log (
)
```

-Then you can read out the raw data as a pure string, and split it into multiple fields using an user-defined-function for further analysing, e.g. `my_split` in the example.
+Then you can read out the raw data as a pure string, and split it into multiple fields using a user-defined-function for further analysing, e.g. `my_split` in the example.

```sql
SELECT t.hostname, t.datetime, t.url, t.browser, ...
2 changes: 1 addition & 1 deletion docs/content/docs/dev/datastream/application_parameters.md
@@ -94,7 +94,7 @@ parameters.getNumberOfParameters();
```

You can use the return values of these methods directly in the `main()` method of the client submitting the application.
-For example, you could set the parallelism of a operator like this:
+For example, you could set the parallelism of an operator like this:

```java
ParameterTool parameters = ParameterTool.fromArgs(args);
6 changes: 3 additions & 3 deletions docs/content/docs/dev/datastream/testing.md
@@ -111,7 +111,7 @@ public class StatefulFlatMapTest {
//instantiate user-defined function
statefulFlatMapFunction = new StatefulFlatMapFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
testHarness = new OneInputStreamOperatorTestHarness<>(new StreamFlatMap<>(statefulFlatMapFunction));

// optionally configured the execution environment
@@ -157,7 +157,7 @@ public class StatefulFlatMapFunctionTest {
//instantiate user-defined function
statefulFlatMapFunction = new StatefulFlatMapFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
testHarness = new KeyedOneInputStreamOperatorTestHarness<>(new StreamFlatMap<>(statefulFlatMapFunction), new MyStringKeySelector(), Types.STRING);

// open the test harness (will also call open() on RichFunctions)
@@ -203,7 +203,7 @@ public class PassThroughProcessFunctionTest {
//instantiate user-defined function
PassThroughProcessFunction processFunction = new PassThroughProcessFunction();

-// wrap user defined function into a the corresponding operator
+// wrap user defined function into the corresponding operator
OneInputStreamOperatorTestHarness<Integer, Integer> harness = ProcessFunctionTestHarnesses
.forProcessFunction(processFunction);

8 changes: 4 additions & 4 deletions docs/content/docs/dev/table/functions/udfs.md
@@ -1666,7 +1666,7 @@ def retract(accumulator: ACC, [user defined inputs]): Unit
* be noted that the accumulator may contain the previous aggregated
* results. Therefore user should not replace or clean this instance in the
* custom merge method.
-* param: iterable an java.lang.Iterable pointed to a group of accumulators that will be
+* param: iterable a java.lang.Iterable pointed to a group of accumulators that will be
* merged.
*/
public void merge(ACC accumulator, java.lang.Iterable<ACC> iterable)
@@ -1682,7 +1682,7 @@ public void merge(ACC accumulator, java.lang.Iterable<ACC> iterable)
* be noted that the accumulator may contain the previous aggregated
* results. Therefore user should not replace or clean this instance in the
* custom merge method.
-* param: iterable an java.lang.Iterable pointed to a group of accumulators that will be
+* param: iterable a java.lang.Iterable pointed to a group of accumulators that will be
* merged.
*/
def merge(accumulator: ACC, iterable: java.lang.Iterable[ACC]): Unit
@@ -2031,7 +2031,7 @@ def retract(accumulator: ACC, [user defined inputs]): Unit
* be noted that the accumulator may contain the previous aggregated
* results. Therefore user should not replace or clean this instance in the
* custom merge method.
-* param: iterable an java.lang.Iterable pointed to a group of accumulators that will be
+* param: iterable a java.lang.Iterable pointed to a group of accumulators that will be
* merged.
*/
public void merge(ACC accumulator, java.lang.Iterable<ACC> iterable)
@@ -2047,7 +2047,7 @@ public void merge(ACC accumulator, java.lang.Iterable<ACC> iterable)
* be noted that the accumulator may contain the previous aggregated
* results. Therefore user should not replace or clean this instance in the
* custom merge method.
-* param: iterable an java.lang.Iterable pointed to a group of accumulators that will be
+* param: iterable a java.lang.Iterable pointed to a group of accumulators that will be
* merged.
*/
def merge(accumulator: ACC, iterable: java.lang.Iterable[ACC]): Unit
4 changes: 2 additions & 2 deletions docs/content/docs/ops/batch/batch_shuffle.md
@@ -173,7 +173,7 @@ Here are some exceptions you may encounter (rarely) and the corresponding soluti
| :--------- | :------------------ |
| Insufficient number of network buffers | This means the amount of network memory is not enough to run the target job and you need to increase the total network memory size. Note that since 1.15, `Sort Shuffle` has become the default blocking shuffle implementation and for some cases, it may need more network memory than before, which means there is a small possibility that your batch jobs may suffer from this issue after upgrading to 1.15. If this is the case, you just need to increase the total network memory size. |
| Too many open files | This means that the file descriptors is not enough. If you are using `Hash Shuffle`, please switch to `Sort Shuffle`. If you are already using `Sort Shuffle`, please consider increasing the system limit for file descriptor and check if the user code consumes too many file descriptors. |
-| Connection reset by peer | This usually means that the network is unstable or or under heavy burden. Other issues like SSL handshake timeout mentioned above may also cause this problem. If you are using `Hash Shuffle`, please switch to `Sort Shuffle`. If you are already using `Sort Shuffle`, increasing the [network backlog]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-server-backlog) may help. |
+| Connection reset by peer | This usually means that the network is unstable or under heavy burden. Other issues like SSL handshake timeout mentioned above may also cause this problem. If you are using `Hash Shuffle`, please switch to `Sort Shuffle`. If you are already using `Sort Shuffle`, increasing the [network backlog]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-server-backlog) may help. |
| Network connection timeout | This usually means that the network is unstable or under heavy burden and increasing the [network connection timeout]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-client-connectTimeoutSec) or enable [connection retry]({{< ref "docs/deployment/config" >}}#taskmanager-network-retries) may help. |
| Socket read/write timeout | This may indicate that the network is slow or under heavy burden and increasing the [network send/receive buffer size]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-sendReceiveBufferSize) may help. If the job is running in Kubernetes environment, using [host network]({{< ref "docs/deployment/config" >}}#kubernetes-hostnetwork-enabled) may also help. |
| Read buffer request timeout | This can happen only when you are using `Sort Shuffle` and it means a fierce contention of the shuffle read memory. To solve the issue, you can increase [taskmanager.memory.framework.off-heap.batch-shuffle.size]({{< ref "docs/deployment/config" >}}#taskmanager-memory-framework-off-heap-batch-shuffle-size) together with [taskmanager.memory.framework.off-heap.size]({{< ref "docs/deployment/config" >}}#taskmanager-memory-framework-off-heap-size). |
@@ -188,7 +188,7 @@ Here are some exceptions you may encounter (rarely) and the corresponding soluti
| Exceptions | Potential Solutions |
|:--------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Insufficient number of network buffers | This means the amount of network memory is not enough to run the target job and you need to increase the total network memory size. | |
-| Connection reset by peer | This usually means that the network is unstable or or under heavy burden. Other issues like SSL handshake timeout may also cause this problem. Increasing the [network backlog]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-server-backlog) may help. |
+| Connection reset by peer | This usually means that the network is unstable or under heavy burden. Other issues like SSL handshake timeout may also cause this problem. Increasing the [network backlog]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-server-backlog) may help. |
| Network connection timeout | This usually means that the network is unstable or under heavy burden and increasing the [network connection timeout]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-client-connectTimeoutSec) or enable [connection retry]({{< ref "docs/deployment/config" >}}#taskmanager-network-retries) may help. |
| Socket read/write timeout | This may indicate that the network is slow or under heavy burden and increasing the [network send/receive buffer size]({{< ref "docs/deployment/config" >}}#taskmanager-network-netty-sendReceiveBufferSize) may help. If the job is running in Kubernetes environment, using [host network]({{< ref "docs/deployment/config" >}}#kubernetes-hostnetwork-enabled) may also help. |
| Read buffer request timeout | This means a fierce contention of the shuffle read memory. To solve the issue, you can increase [taskmanager.memory.framework.off-heap.batch-shuffle.size]({{< ref "docs/deployment/config" >}}#taskmanager-memory-framework-off-heap-batch-shuffle-size) together with [taskmanager.memory.framework.off-heap.size]({{< ref "docs/deployment/config" >}}#taskmanager-memory-framework-off-heap-size). |
2 changes: 1 addition & 1 deletion docs/content/docs/ops/debugging/debugging_classloading.md
@@ -57,7 +57,7 @@ Java classpath. The classes from all jobs/applications that are submitted agains

If you package a Flink job/application such that your application treats Flink like a library (JobManager/TaskManager daemons as spawned as needed),
then typically all classes are in the *application classpath*. This is the recommended way for container-based setups where the container is specifically
-created for an job/application and will contain the job/application's jar files.
+created for a job/application and will contain the job/application's jar files.

-->

2 changes: 1 addition & 1 deletion docs/content/docs/ops/events.md
@@ -28,7 +28,7 @@ under the License.

# Events

-Flink exposes a event reporting system that allows gathering and exposing events to external systems.
+Flink exposes an event reporting system that allows gathering and exposing events to external systems.

## Reporting events

2 changes: 1 addition & 1 deletion docs/content/docs/ops/upgrading.md
@@ -141,7 +141,7 @@ DataStream<String> mappedEvents = events

**Important:** As of 1.3.x this also applies to operators that are part of a chain.

-By default all state stored in a savepoint must be matched to the operators of a starting application. However, users can explicitly agree to skip (and thereby discard) state that cannot be matched to an operator when starting a application from a savepoint. Stateful operators for which no state is found in the savepoint are initialized with their default state. Users may enforce best practices by calling `ExecutionConfig#disableAutoGeneratedUIDs` which will fail the job submission if any operator does not contain a custom unique ID.
+By default all state stored in a savepoint must be matched to the operators of a starting application. However, users can explicitly agree to skip (and thereby discard) state that cannot be matched to an operator when starting an application from a savepoint. Stateful operators for which no state is found in the savepoint are initialized with their default state. Users may enforce best practices by calling `ExecutionConfig#disableAutoGeneratedUIDs` which will fail the job submission if any operator does not contain a custom unique ID.

#### Stateful Operators and User Functions

2 changes: 1 addition & 1 deletion docs/content/docs/sql/reference/ddl/create.md
@@ -225,7 +225,7 @@ CREATE TABLE MyTable (

Metadata columns are an extension to the SQL standard and allow to access connector and/or format specific
fields for every row of a table. A metadata column is indicated by the `METADATA` keyword. For example,
-a metadata column can be be used to read and write the timestamp from and to Kafka records for time-based
+a metadata column can be used to read and write the timestamp from and to Kafka records for time-based
operations. The [connector and format documentation]({{< ref "docs/connectors/table/overview" >}}) lists the
available metadata fields for every component. However, declaring a metadata column in a table's schema
is optional.
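For readers of the metadata-column hunk above, the construct being documented looks roughly like this (a sketch modeled on the Kafka example in the Flink docs; the table and field names are illustrative, not part of this change):

```sql
CREATE TABLE KafkaOrders (
  `user_id` BIGINT,
  `amount` DOUBLE,
  -- metadata column: reads (and, on insert, writes) the Kafka record timestamp
  `record_time` TIMESTAMP_LTZ(3) METADATA FROM 'timestamp'
) WITH (
  'connector' = 'kafka',
  'format' = 'json'
);
```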
2 changes: 1 addition & 1 deletion docs/content/docs/sql/reference/queries/hints.md
@@ -604,7 +604,7 @@ e.g., if there is a row in the `Customers` table:
```gitexclude
id=100, country='CN'
```
-When processing an record with 'id=100' in the order stream, in 'jdbc' connector, the corresponding
+When processing a record with 'id=100' in the order stream, in 'jdbc' connector, the corresponding
lookup result is null (`country='CN'` does not satisfy the condition `c.country = 'US'`) because both
`c.id` and `c.country` are used as lookup keys, so this will trigger a retry.

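As context for the lookup-retry hunk above: such a retry is typically requested via a `LOOKUP` hint on the lookup join. A sketch with assumed table and column names (the option values are illustrative; see the hints page for the exact keys):

```sql
SELECT /*+ LOOKUP('table'='Customers', 'retry-predicate'='lookup_miss',
                  'retry-strategy'='fixed_delay', 'fixed-delay'='10s',
                  'max-attempts'='3') */
  o.order_id, c.country
FROM Orders AS o
JOIN Customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.id = c.id AND c.country = 'US';
```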