feat(ipi): tie ipi to smp feature, add sync mechanism for ipi and support lazy flush tlb #29
li041 wants to merge 10 commits into Starry-OS:dev from
Conversation
Pull request overview
This PR introduces IPI (Inter-Processor Interrupt) synchronization mechanisms and ties the IPI feature to SMP support, enabling lazy TLB flushing across multiple CPUs. The changes include adding a wait/sync mechanism for IPI operations, tracking secondary CPU startup state, and implementing TLB flush operations that leverage the IPI system.
Key changes:
- Added synchronization support to IPI operations with a wait parameter and completion tracking via AtomicBool flags (sketched below)
- Introduced a SECONDARY_CPUS_STARTED flag to track when secondary CPUs are ready
- Implemented a TLB flush interface that uses IPI for cross-CPU coordination
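As a rough illustration of the wait/sync idea, the sketch below shows a sender attaching an AtomicBool to each event and spinning on it while the receiving CPU sets it after running the callback. The types and names here are simplified stand-ins, not the PR's exact definitions in modules/axipi.

```rust
use std::sync::{Arc, atomic::{AtomicBool, Ordering}};

/// Simplified stand-in for an IPI event carrying a name, a callback,
/// and a completion flag the sender can optionally wait on.
struct IpiEvent {
    name: &'static str,
    callback: Box<dyn Fn() + Send>,
    done: Arc<AtomicBool>,
}

impl IpiEvent {
    /// What the receiving CPU does when it drains its event queue.
    fn handle(&self) {
        (self.callback)();
        // Release pairs with the sender's Acquire load, so the callback's
        // side effects are visible by the time `done` reads true.
        self.done.store(true, Ordering::Release);
    }
}

/// What the sender does when `wait == true`.
fn wait_for_completion(done: &AtomicBool) {
    while !done.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
}
```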
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| modules/axtask/src/task.rs | Adds on_cpu_mask() method to TaskExt trait for CPU affinity tracking |
| modules/axruntime/src/lib.rs | Initializes axipi module and signals secondary CPU startup completion |
| modules/axmm/Cargo.toml | Enables SMP feature for page_table_multiarch dependency |
| modules/axipi/src/tlb.rs | New file implementing TLB flush interface using IPI mechanism |
| modules/axipi/src/queue.rs | Updates IPI event queue to track event names and completion flags |
| modules/axipi/src/lib.rs | Adds synchronization support, secondary CPU tracking, and new multicast functions |
| modules/axipi/src/event.rs | Extends IpiEvent structure with name field and completion flag |
| modules/axipi/Cargo.toml | Adds dependencies for TLB flush implementation |
| api/axfeat/Cargo.toml | Ties IPI feature to SMP and adds axmm dependency |
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 4 comments.
modules/axipi/src/lib.rs
Outdated
    let cpu_num = axconfig::plat::CPU_NUM;
    let callback = callback.into();

    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::new(cpu_num - 1);
Vec::new() does not accept a capacity parameter. Use Vec::with_capacity(cpu_num - 1) instead to preallocate the vector with the correct capacity.
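A minimal illustration of the difference, in plain standard Rust rather than the PR's code:

```rust
use std::sync::{Arc, atomic::AtomicBool};

fn main() {
    let cpu_num = 4;
    // Vec::new() takes no arguments; a capacity hint is requested separately
    // via with_capacity, which avoids reallocations as flags are pushed.
    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::with_capacity(cpu_num - 1);
    assert!(done_flags.capacity() >= cpu_num - 1);
    done_flags.push(Arc::new(AtomicBool::new(false)));
    println!("len = {}, capacity = {}", done_flags.len(), done_flags.capacity());
}
```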
modules/axipi/src/lib.rs
Outdated
    // Execute callback on current CPU immediately
    callback.clone().call();

    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::new(cpu_num);
Vec::new() does not accept a capacity parameter. Use Vec::with_capacity(cpu_num) instead to preallocate the vector with the correct capacity.
modules/axipi/src/lib.rs
Outdated
    if done_flags.is_empty() {
        return;
The early return for empty done_flags in run_on_each_cpu_except_self prevents IPI from being sent when wait is false. This logic appears inconsistent with run_on_bitmask_except_self (lines 107-111) which always sends IPIs regardless of wait flag. Consider whether IPIs should be sent even when wait=false and done_flags is empty.
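One way to resolve the inconsistency is to decouple sending from waiting, as sketched below. The helper names (this_cpu_id, enqueue_ipi_event, send_ipi_to_others) and the stub bodies are placeholders so the sketch stands alone; only the control flow is the point.

```rust
use std::sync::{Arc, atomic::{AtomicBool, Ordering}};

const CPU_NUM: usize = 4;

fn this_cpu_id() -> usize { 0 }

#[derive(Clone)]
struct MulticastCallback;

fn enqueue_ipi_event(_dest_cpu: usize, _cb: MulticastCallback, done: Arc<AtomicBool>) {
    // Stub: a real implementation pushes onto the destination CPU's event
    // queue; marking `done` here just lets the wait loop below terminate.
    done.store(true, Ordering::Release);
}

fn send_ipi_to_others() {}

/// Always enqueue the event and send the IPI; only collect and spin on the
/// completion flags when `wait` is true.
fn run_on_each_cpu_except_self_sketch(callback: MulticastCallback, wait: bool) {
    let this_cpu = this_cpu_id();
    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::new();

    for cpu in 0..CPU_NUM {
        if cpu == this_cpu {
            continue;
        }
        let done = Arc::new(AtomicBool::new(false));
        enqueue_ipi_event(cpu, callback.clone(), done.clone());
        if wait {
            done_flags.push(done);
        }
    }

    // Sending no longer depends on `wait` or on `done_flags` being non-empty.
    send_ipi_to_others();

    for flag in &done_flags {
        while !flag.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
    }
}
```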
    log.workspace = true
    percpu.workspace = true
    crate_interface = "0.1.4"
    axcpu = { git = "https://github.com/arceos-org/axcpu.git", tag = "dev-v03" }
The axcpu dependency is added but does not appear to be used in any of the modified axipi source files. Consider removing this dependency if it's not needed, or add it when it's actually required.
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
    fn flush_all(vaddr: Option<VirtAddr>) {
        if axconfig::plat::CPU_NUM == 1 || !secondary_cpus_ready() {
            // local
            axhal::asm::flush_tlb(None);
The local TLB flush is called with None regardless of the vaddr parameter value. This should pass vaddr to maintain consistency with the remote flush behavior on line 20.
Suggested change:
    - axhal::asm::flush_tlb(None);
    + axhal::asm::flush_tlb(vaddr);
    let cpu_num = axconfig::plat::CPU_NUM;
    let callback = callback.into();

    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::with_capacity(cpu_num - 1);
The vector is pre-allocated with cpu_num - 1 capacity but this is only accurate when all other CPUs will receive the IPI. Consider pre-allocating with the exact capacity needed or using Vec::new() without pre-allocation.
    // Execute callback on current CPU immediately
    callback.clone().call();

    let mut done_flags: Vec<Arc<AtomicBool>> = Vec::with_capacity(cpu_num);
The capacity is set to cpu_num but only cpu_num - 1 flags will be added since the current CPU is excluded. The capacity should be cpu_num - 1 to avoid over-allocation.
Suggested change:
    - let mut done_flags: Vec<Arc<AtomicBool>> = Vec::with_capacity(cpu_num);
    + let mut done_flags: Vec<Arc<AtomicBool>> = Vec::with_capacity(cpu_num - 1);
        } else {
            let callback = MulticastCallback::new(move || {
                axhal::asm::flush_tlb(vaddr);
            });
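Putting the two review points together, a plausible shape for the flush path in modules/axipi/src/tlb.rs is sketched below. This is a reconstruction, not the PR's exact code: it assumes the imports already present in that file (VirtAddr, MulticastCallback, run_on_each_cpu, secondary_cpus_ready), and the "flush_tlb" event name string is illustrative.

```rust
/// Sketch only: flush the TLB on every CPU, falling back to a purely local
/// flush while only the boot CPU is running.
fn flush_all(vaddr: Option<VirtAddr>) {
    if axconfig::plat::CPU_NUM == 1 || !secondary_cpus_ready() {
        // Local-only flush: forward `vaddr` instead of hard-coding `None`,
        // as the review comment suggests.
        axhal::asm::flush_tlb(vaddr);
    } else {
        // Cross-CPU flush: run the same flush on every CPU through the IPI
        // layer and wait for completion so the page-table update is visible
        // everywhere before returning (the lazy-flush path).
        let callback = MulticastCallback::new(move || {
            axhal::asm::flush_tlb(vaddr);
        });
        run_on_each_cpu("flush_tlb", callback, true); // event name is illustrative
    }
}
```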
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
modules/axipi/src/lib.rs
Outdated
    - pub fn run_on_cpu<T: Into<Callback>>(dest_cpu: usize, callback: T) {
    -     info!("Send IPI event to CPU {dest_cpu}");
    + pub fn run_on_cpu<T: Into<Callback>>(name: &'static str, dest_cpu: usize, callback: T, wait: bool) {
    +     let gurad = kernel_guard::NoPreempt::new();
Corrected spelling of 'gurad' to 'guard'.
modules/axipi/src/lib.rs
Outdated
      /// Executes a callback on all other CPUs via IPI.
    - pub fn run_on_each_cpu<T: Into<MulticastCallback>>(callback: T) {
    + pub fn run_on_each_cpu<T: Into<MulticastCallback>>(name: &'static str, callback: T, wait: bool) {
    +     let gurad = kernel_guard::NoPreempt::new();
Corrected spelling of 'gurad' to 'guard'.
modules/axipi/src/lib.rs
Outdated
            }
        }
    }
    drop(gurad); // rescheduling may occur when preemption is re-enabled.
Corrected spelling of 'gurad' to 'guard'.
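For context on what these comments touch, here is a hedged sketch of the wait handling around run_on_cpu with the guard spelled correctly. kernel_guard::NoPreempt and the new signature come from the diff above; the Callback stand-in and enqueue_and_send helper are placeholders, not the crate's real API.

```rust
use std::sync::{Arc, atomic::{AtomicBool, Ordering}};

// Placeholder for the crate's callback type and its queue/send helper.
struct Callback(Box<dyn Fn() + Send>);
fn enqueue_and_send(_dest_cpu: usize, _cb: Callback, done: Arc<AtomicBool>) {
    done.store(true, Ordering::Release); // stub: pretend the remote CPU finished
}

/// Preemption stays disabled while the event is queued, sent, and optionally
/// waited on; dropping the guard at the end re-enables preemption, which is
/// why the original code notes that rescheduling may occur there.
fn run_on_cpu_sketch(name: &'static str, dest_cpu: usize, callback: Callback, wait: bool) {
    let guard = kernel_guard::NoPreempt::new();

    let done = Arc::new(AtomicBool::new(false));
    enqueue_and_send(dest_cpu, callback, done.clone());

    if wait {
        // Spin until the destination CPU marks the event as handled.
        while !done.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
    }
    let _ = name; // the real code records the event name in the IpiEvent

    drop(guard); // rescheduling may occur when preemption is re-enabled
}
```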
Related: