
cdc(ddl): ensure strict ordering for multi-table DDLs after split (#12450) #12458

Closed
ti-chi-bot wants to merge 1 commit into pingcap:release-6.5 from
ti-chi-bot:cherry-pick-12450-to-release-6.5

Conversation

@ti-chi-bot
Member

This is an automated cherry-pick of #12450

What problem does this PR solve?

Issue Number: close #12449

What is changed and how it works?

This PR addresses an issue where DDLs split from a multi-table RENAME statement could be executed out of order downstream: the split events share the same CommitTs, and the iteration order when ranging over a map is non-deterministic, so no stable ordering was guaranteed.
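The root cause and the fix's tie-breaking idea can be sketched as follows. This is an illustrative standalone example, not the actual cdc code: `ddlEvent` and `orderSplitDDLs` are hypothetical stand-ins for `model.DDLEvent` and the ordering logic, assuming a `Seq` field records each event's position in the original statement.

```go
package main

import (
	"fmt"
	"sort"
)

// ddlEvent is a simplified stand-in for cdc's model.DDLEvent;
// the field names are illustrative, not the real struct.
type ddlEvent struct {
	CommitTs uint64
	Seq      uint64 // position within the original multi-table statement
	Query    string
}

// orderSplitDDLs restores the original statement order for events that
// share a CommitTs, using Seq as the tie-breaker. A sketch of the
// fix's approach, not the actual cdc implementation.
func orderSplitDDLs(events []ddlEvent) []ddlEvent {
	sort.SliceStable(events, func(i, j int) bool {
		if events[i].CommitTs != events[j].CommitTs {
			return events[i].CommitTs < events[j].CommitTs
		}
		return events[i].Seq < events[j].Seq
	})
	return events
}

func main() {
	// Three DDLs split from one RENAME TABLE share a CommitTs. When they
	// are collected by ranging over a map, Go yields them in a
	// non-deterministic order, which is the bug being fixed.
	byTable := map[int64]ddlEvent{
		101: {CommitTs: 500, Seq: 0, Query: "RENAME TABLE a TO x"},
		102: {CommitTs: 500, Seq: 1, Query: "RENAME TABLE b TO y"},
		103: {CommitTs: 500, Seq: 2, Query: "RENAME TABLE c TO z"},
	}
	var events []ddlEvent
	for _, e := range byTable {
		events = append(events, e)
	}
	// Sorting by (CommitTs, Seq) makes the order deterministic again.
	for _, e := range orderSplitDDLs(events) {
		fmt.Println(e.Query)
	}
}
```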

Check List

Tests

  • Unit test
  • Integration test
  • Manual test

Questions

Will it cause performance regression or break compatibility?

None

Do you need to update user documentation, design documentation or monitoring documentation?

None

Release note

Fix the incorrect execution order of split DDLs generated from a multi-table DDL statement (e.g., RENAME TABLE).

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot ti-chi-bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. type/cherry-pick-for-release-6.5 This PR is cherry-picked to release-6.5 from a source PR. labels Dec 10, 2025
@ti-chi-bot
Contributor

ti-chi-bot bot commented Dec 10, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign charlescheung96 for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot
Contributor

ti-chi-bot bot commented Dec 10, 2025

This cherry pick PR is for a release branch and has not yet been approved by triage owners.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick:

  1. It must first be approved by the approvers.
  2. After it has been approved by approvers, please wait for the cherry-pick merging approval from triage owners.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot
Member Author

@wlwilliamx This PR has conflicts, so I have put it on hold.
Please resolve them, or ask others to resolve them, then comment /unhold to remove the hold label.

@ti-chi-bot
Contributor

ti-chi-bot bot commented Dec 10, 2025

@ti-chi-bot: If you want to know how to resolve it, please read the guide in the TiDB Dev Guide.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot ti-chi-bot bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 10, 2025
@gemini-code-assist

Summary of Changes

Hello @ti-chi-bot, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a crucial fix to ensure the strict ordering of Data Definition Language (DDL) events, particularly those originating from multi-table DDL statements like RENAME TABLE. Previously, such operations could lead to out-of-order execution downstream due to shared commit timestamps and non-deterministic processing. By introducing a sequence number to split DDL events and updating the DDL management logic to respect this order, the change guarantees data consistency and reliability for complex schema modifications.

Highlights

  • DDL Ordering Fix: Resolved an issue where Data Definition Language (DDL) operations involving multiple tables, such as RENAME TABLE, could be executed out of order in downstream systems. This was due to DDL events sharing the same commit timestamp and the non-deterministic iteration order of maps.
  • Sequence Number for DDL Events: Introduced a new Seq field to the DDLEvent struct. This sequence number is assigned to individual DDL events that are split from a multi-table DDL statement, ensuring their strict ordering.
  • Enhanced DDL Event Building: Refactored the DDL event building logic in schema_storage.go to properly handle multi-table DDLs like RENAME TABLE and assign the new Seq field to maintain order.
  • Prioritized DDL Execution: Modified the DDL manager's logic to consider the Seq field when selecting the next DDL event for execution, specifically when multiple DDLs share the same CommitTs.
  • Integration Tests: Added new integration tests to validate that multi-table RENAME TABLE DDLs are correctly applied in the intended order in the downstream system.
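The "Prioritized DDL Execution" point above can be sketched as a selection rule. This is an illustrative example, not the actual ddl manager API: `ddlEvent` and `nextDDL` are hypothetical names, assuming only that events carry a `CommitTs` and a `Seq` field as described.

```go
package main

import "fmt"

// ddlEvent is a simplified stand-in for cdc's model.DDLEvent;
// the field names are illustrative, not the real struct.
type ddlEvent struct {
	CommitTs uint64
	Seq      uint64
	Query    string
}

// nextDDL returns the index of the event to execute next: the smallest
// CommitTs wins, and Seq breaks ties among events with the same
// CommitTs. A sketch of the tie-breaking rule described above, not the
// actual ddl manager implementation.
func nextDDL(pending []ddlEvent) int {
	best := 0
	for i := 1; i < len(pending); i++ {
		if pending[i].CommitTs < pending[best].CommitTs ||
			(pending[i].CommitTs == pending[best].CommitTs &&
				pending[i].Seq < pending[best].Seq) {
			best = i
		}
	}
	return best
}

func main() {
	pending := []ddlEvent{
		{CommitTs: 500, Seq: 1, Query: "RENAME TABLE b TO y"},
		{CommitTs: 500, Seq: 0, Query: "RENAME TABLE a TO x"},
	}
	// Both events share CommitTs 500, so Seq decides: the Seq=0 event
	// (the first table in the original statement) is selected first.
	fmt.Println(pending[nextDDL(pending)].Query)
}
```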


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to ensure strict ordering for multi-table DDLs by introducing a sequence number (Seq) to DDLEvent. This is a good approach to handle DDLs that share the same commit timestamp. However, the pull request contains several critical issues, including unresolved merge conflicts in multiple files, which will prevent the code from compiling. These must be addressed before merging.

Comment on lines +294 to 436
<<<<<<< HEAD
return !job.IsSynced() && !job.IsDone()
=======
return !job.IsDone()
}

// BuildDDLEvents by parsing the DDL job
func (s *schemaStorage) BuildDDLEvents(
ctx context.Context, job *timodel.Job,
) (ddlEvents []*model.DDLEvent, err error) {
switch job.Type {
case timodel.ActionRenameTables:
// The result contains more than one DDLEvent for a rename tables job.
ddlEvents, err = s.buildRenameEvents(ctx, job)
if err != nil {
return nil, errors.Trace(err)
}
case timodel.ActionCreateTables:
if job.BinlogInfo != nil && job.BinlogInfo.MultipleTableInfos != nil {
querys, err := ddl.SplitQueries(job.Query)
if err != nil {
return nil, errors.Trace(err)
}
multiTableInfos := job.BinlogInfo.MultipleTableInfos
for index, tableInfo := range multiTableInfos {
newTableInfo := model.WrapTableInfo(job.SchemaID, job.SchemaName, job.BinlogInfo.FinishedTS, tableInfo)
job.Query = querys[index]
event := new(model.DDLEvent)
event.FromJob(job, nil, newTableInfo)
ddlEvents = append(ddlEvents, event)
}
} else {
return nil, errors.Errorf("there is no multiple table infos in the create tables job: %s", job)
}
default:
// parse preTableInfo
preSnap, err := s.GetSnapshot(ctx, job.BinlogInfo.FinishedTS-1)
if err != nil {
return nil, errors.Trace(err)
}
preTableInfo, err := preSnap.PreTableInfo(job)
if err != nil {
return nil, errors.Trace(err)
}

// parse tableInfo
var tableInfo *model.TableInfo
err = preSnap.FillSchemaName(job)
if err != nil {
log.Error("build DDL event fail", zap.Any("job", job), zap.Error(err))
return nil, errors.Trace(err)
}
// TODO: find a better way to refactor this. For example, drop table job should not
// have table info.
if job.BinlogInfo != nil && job.BinlogInfo.TableInfo != nil {
tableInfo = model.WrapTableInfo(job.SchemaID, job.SchemaName, job.BinlogInfo.FinishedTS, job.BinlogInfo.TableInfo)

// TODO: remove this after job is fixed by TiDB.
// ref: https://github.com/pingcap/tidb/issues/43819
if job.Type == timodel.ActionExchangeTablePartition {
oldTableInfo, ok := preSnap.PhysicalTableByID(job.BinlogInfo.TableInfo.ID)
if !ok {
return nil, cerror.ErrSchemaStorageTableMiss.GenWithStackByArgs(job.TableID)
}
tableInfo.SchemaID = oldTableInfo.SchemaID
tableInfo.TableName = oldTableInfo.TableName
}
} else {
// Just retrieve the schema name for a DDL job that does not contain TableInfo.
// Currently supported by cdc are: ActionCreateSchema, ActionDropSchema,
// and ActionModifySchemaCharsetAndCollate.
tableInfo = &model.TableInfo{
TableName: model.TableName{Schema: job.SchemaName},
Version: job.BinlogInfo.FinishedTS,
}
}
event := new(model.DDLEvent)
event.FromJob(job, preTableInfo, tableInfo)
ddlEvents = append(ddlEvents, event)
}
return ddlEvents, nil
}

// GetNewJobWithArgs returns a new job with the given args
func GetNewJobWithArgs(job *timodel.Job, args timodel.JobArgs) (*timodel.Job, error) {
job.FillArgs(args)
bytes, err := job.Encode(true)
if err != nil {
return nil, errors.Trace(err)
}
encodedJob := &timodel.Job{}
if err = encodedJob.Decode(bytes); err != nil {
return nil, errors.Trace(err)
}
return encodedJob, nil
}

// TODO: find a better way to refactor this function.
// buildRenameEvents gets a list of DDLEvent from a rename tables DDL job.
func (s *schemaStorage) buildRenameEvents(
ctx context.Context, job *timodel.Job,
) ([]*model.DDLEvent, error) {
var ddlEvents []*model.DDLEvent
args, err := timodel.GetRenameTablesArgs(job)
if err != nil {
return nil, errors.Trace(err)
}

multiTableInfos := job.BinlogInfo.MultipleTableInfos
if len(multiTableInfos) != len(args.RenameTableInfos) {
return nil, cerror.ErrInvalidDDLJob.GenWithStackByArgs(job.ID)
}

preSnap, err := s.GetSnapshot(ctx, job.BinlogInfo.FinishedTS-1)
if err != nil {
return nil, errors.Trace(err)
}

for i, tableInfo := range multiTableInfos {
info := args.RenameTableInfos[i]
newSchema, ok := preSnap.SchemaByID(info.NewSchemaID)
if !ok {
return nil, cerror.ErrSnapshotSchemaNotFound.GenWithStackByArgs(
info.NewSchemaID)
}
newSchemaName := newSchema.Name.O
oldSchemaName := info.OldSchemaName.O
event := new(model.DDLEvent)
preTableInfo, ok := preSnap.PhysicalTableByID(tableInfo.ID)
if !ok {
return nil, cerror.ErrSchemaStorageTableMiss.GenWithStackByArgs(
job.TableID)
}

tableInfo := model.WrapTableInfo(info.NewSchemaID, newSchemaName,
job.BinlogInfo.FinishedTS, tableInfo)
event.FromJobWithArgs(job, preTableInfo, tableInfo, oldSchemaName, newSchemaName)
event.Seq = uint64(i)
ddlEvents = append(ddlEvents, event)
}
return ddlEvents, nil
>>>>>>> 3c7fd0a1fd (cdc(ddl): ensure strict ordering for multi-table DDLs after split (#12450))
}


critical

This file contains unresolved merge conflict markers (<<<<<<<, =======, >>>>>>>), which will cause a compilation failure. Please resolve the conflict.

Additionally, there are a couple of issues in the new code:

  1. The receiver for BuildDDLEvents and buildRenameEvents is *schemaStorage, which is an interface. This will cause a compilation error. It should probably be *schemaStorageImpl.
  2. The function BuildDDLEvents calls ddl.SplitQueries, but the ddl package (github.com/pingcap/tidb/ddl) is not imported.

Here is a suggested fix that resolves these issues.

return !job.IsDone()
}

// BuildDDLEvents by parsing the DDL job
func (s *schemaStorageImpl) BuildDDLEvents(
	ctx context.Context, job *timodel.Job,
) (ddlEvents []*model.DDLEvent, err error) {
	switch job.Type {
	case timodel.ActionRenameTables:
		// The result contains more than one DDLEvent for a rename tables job.
		ddlEvents, err = s.buildRenameEvents(ctx, job)
		if err != nil {
			return nil, errors.Trace(err)
		}
	case timodel.ActionCreateTables:
		if job.BinlogInfo != nil && job.BinlogInfo.MultipleTableInfos != nil {
			querys, err := ddl.SplitQueries(job.Query)
			if err != nil {
				return nil, errors.Trace(err)
			}
			multiTableInfos := job.BinlogInfo.MultipleTableInfos
			for index, tableInfo := range multiTableInfos {
				newTableInfo := model.WrapTableInfo(job.SchemaID, job.SchemaName, job.BinlogInfo.FinishedTS, tableInfo)
				job.Query = querys[index]
				event := new(model.DDLEvent)
				event.FromJob(job, nil, newTableInfo)
				ddlEvents = append(ddlEvents, event)
			}
		} else {
			return nil, errors.Errorf("there is no multiple table infos in the create tables job: %s", job)
		}
	default:
		// parse preTableInfo
		preSnap, err := s.GetSnapshot(ctx, job.BinlogInfo.FinishedTS-1)
		if err != nil {
			return nil, errors.Trace(err)
		}
		preTableInfo, err := preSnap.PreTableInfo(job)
		if err != nil {
			return nil, errors.Trace(err)
		}

		// parse tableInfo
		var tableInfo *model.TableInfo
		err = preSnap.FillSchemaName(job)
		if err != nil {
			log.Error("build DDL event fail", zap.Any("job", job), zap.Error(err))
			return nil, errors.Trace(err)
		}
		// TODO: find a better way to refactor this. For example, drop table job should not
		// have table info.
		if job.BinlogInfo != nil && job.BinlogInfo.TableInfo != nil {
			tableInfo = model.WrapTableInfo(job.SchemaID, job.SchemaName, job.BinlogInfo.FinishedTS, job.BinlogInfo.TableInfo)

			// TODO: remove this after job is fixed by TiDB.
			// ref: https://github.com/pingcap/tidb/issues/43819
			if job.Type == timodel.ActionExchangeTablePartition {
				oldTableInfo, ok := preSnap.PhysicalTableByID(job.BinlogInfo.TableInfo.ID)
				if !ok {
					return nil, cerror.ErrSchemaStorageTableMiss.GenWithStackByArgs(job.TableID)
				}
				tableInfo.SchemaID = oldTableInfo.SchemaID
				tableInfo.TableName = oldTableInfo.TableName
			}
		} else {
			// Just retrieve the schema name for a DDL job that does not contain TableInfo.
			// Currently supported by cdc are: ActionCreateSchema, ActionDropSchema,
			// and ActionModifySchemaCharsetAndCollate.
			tableInfo = &model.TableInfo{
				TableName: model.TableName{Schema: job.SchemaName},
				Version:   job.BinlogInfo.FinishedTS,
			}
		}
		event := new(model.DDLEvent)
		event.FromJob(job, preTableInfo, tableInfo)
		ddlEvents = append(ddlEvents, event)
	}
	return ddlEvents, nil
}

// GetNewJobWithArgs returns a new job with the given args
func GetNewJobWithArgs(job *timodel.Job, args timodel.JobArgs) (*timodel.Job, error) {
	job.FillArgs(args)
	bytes, err := job.Encode(true)
	if err != nil {
		return nil, errors.Trace(err)
	}
	encodedJob := &timodel.Job{}
	if err = encodedJob.Decode(bytes); err != nil {
		return nil, errors.Trace(err)
	}
	return encodedJob, nil
}

// TODO: find a better way to refactor this function.
// buildRenameEvents gets a list of DDLEvent from a rename tables DDL job.
func (s *schemaStorageImpl) buildRenameEvents(
	ctx context.Context, job *timodel.Job,
) ([]*model.DDLEvent, error) {
	var ddlEvents []*model.DDLEvent
	args, err := timodel.GetRenameTablesArgs(job)
	if err != nil {
		return nil, errors.Trace(err)
	}

	multiTableInfos := job.BinlogInfo.MultipleTableInfos
	if len(multiTableInfos) != len(args.RenameTableInfos) {
		return nil, cerror.ErrInvalidDDLJob.GenWithStackByArgs(job.ID)
	}

	preSnap, err := s.GetSnapshot(ctx, job.BinlogInfo.FinishedTS-1)
	if err != nil {
		return nil, errors.Trace(err)
	}

	for i, tableInfo := range multiTableInfos {
		info := args.RenameTableInfos[i]
		newSchema, ok := preSnap.SchemaByID(info.NewSchemaID)
		if !ok {
			return nil, cerror.ErrSnapshotSchemaNotFound.GenWithStackByArgs(
				info.NewSchemaID)
		}
		newSchemaName := newSchema.Name.O
		oldSchemaName := info.OldSchemaName.O
		event := new(model.DDLEvent)
		preTableInfo, ok := preSnap.PhysicalTableByID(tableInfo.ID)
		if !ok {
			return nil, cerror.ErrSchemaStorageTableMiss.GenWithStackByArgs(
				job.TableID)
		}

		tableInfo := model.WrapTableInfo(info.NewSchemaID, newSchemaName,
			job.BinlogInfo.FinishedTS, tableInfo)
		event.FromJobWithArgs(job, preTableInfo, tableInfo, oldSchemaName, newSchemaName)
		event.Seq = uint64(i)
		ddlEvents = append(ddlEvents, event)
	}
	return ddlEvents, nil
}

Comment on lines +723 to +734
<<<<<<< HEAD
=======
IsBootstrap bool `msg:"-"`
// BDRRole is the role of the TiDB cluster, it is used to determine whether
// the DDL is executed by the primary cluster.
BDRRole string `msg:"-"`
SQLMode mysql.SQLMode `msg:"-"`
// Seq is used to order the DDLs with the same commit ts
// Only used in the splited DDLEvent generated by a multi-table DDL,
// we need to keep the order of the original multi-table DDL
Seq uint64 `msg:"seq"`
>>>>>>> 3c7fd0a1fd (cdc(ddl): ensure strict ordering for multi-table DDLs after split (#12450))


critical

This file contains unresolved merge conflict markers (<<<<<<<, =======, >>>>>>>), which will cause a compilation failure. Please resolve the conflict before merging.

	IsBootstrap  bool             `msg:"-"`
	// BDRRole is the role of the TiDB cluster, it is used to determine whether
	// the DDL is executed by the primary cluster.
	BDRRole string        `msg:"-"`
	SQLMode mysql.SQLMode `msg:"-"`
	// Seq is used to order the DDLs with the same commit ts
	// Only used in the splited DDLEvent generated by a multi-table DDL,
	// we need to keep the order of the original multi-table DDL
	Seq uint64 `msg:"seq"`

@ti-chi-bot
Contributor

ti-chi-bot bot commented Dec 10, 2025

@ti-chi-bot: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-cdc-integration-kafka-test b25b8e9 link true /test pull-cdc-integration-kafka-test
pull-verify b25b8e9 link true /test pull-verify
pull-cdc-integration-mysql-test b25b8e9 link true /test pull-cdc-integration-mysql-test

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@ti-chi-bot ti-chi-bot bot closed this Dec 30, 2025
@ti-chi-bot
Contributor

ti-chi-bot bot commented Dec 30, 2025

This pull request is closed because automatic cherry-picking has been disabled for its target version.
If it's still needed, you can reopen it or regenerate it using the bot,
see:

https://prow.tidb.net/command-help#cherrypick
https://book.prow.tidb.net/#/plugins/cherrypicker

