refactor(net): Refactor TCP listening backlog to expand dynamically #1316
Mico-rstar wants to merge 1 commit into DragonOS-Community:master
Conversation
- refactor(Listening): use a queue to maintain inners
- debug(Listening): fix inners not being expanded correctly
- refactor(poll): switch from batched notify of multiple packets to per-packet notify
- feat: add a unit test
Why is this change needed?
Pull Request Overview
This PR refactors the TCP listening backlog implementation to use dynamic expansion instead of pre-allocated sockets. The core changes involve moving from a fixed Vec of bound inners to a queue-based approach that dynamically creates new listening sockets as needed.
- Refactored TCP listening to use a VecDeque with dynamic socket creation
- Modified polling to process packets individually with immediate notification
- Added unit test for backlog functionality
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| user/apps/c_unitest/test_backlogc.c | New unit test for TCP backlog functionality |
| kernel/src/net/socket/inet/stream/inner.rs | Refactored Listening struct to use dynamic socket management with VecDeque |
| kernel/src/driver/net/mod.rs | Changed polling strategy from batch to single-packet processing |
```c
}
printf("the %dth connection\n", ++cnt);
close(*client_sockfd);
```
Memory leak: the allocated client_sockfd is never freed after the socket is closed. Add free(client_sockfd); after line 80.
Suggested change:
```c
free(client_sockfd);
```
```rust
let mut inners_guard = self.inners.write();
let inner = match inners_guard.back() {
    Some(inner) => inner,
    None => return debug!("the tcp socket inners is empty"),
```
Using debug! macro in a return statement is unconventional. Consider using a separate debug! call followed by return, or return an appropriate value/error.
Suggested change:
```rust
None => {
    debug!("the tcp socket inners is empty");
    return;
}
```
```rust
    )),
) {
    Ok(inner) => inner,
    Err(e) => return debug!("bind err: {:#?}", e),
```
Using debug! macro in a return statement is unconventional. Consider using a separate debug! call followed by return, or return an appropriate value/error.
Also, what problem does this PR solve? Are there test cases or a reproduction case for the problem?
The current tcp listening backlog implementation creates backlog sockets waiting for connections right after the sys_listen system call, so it occupies a fixed amount of memory even when idle, and that amount depends on the backlog size the user program specifies. This PR maintains the sockets with a queue: a socket that completes the TCP three-way handshake is pushed onto the queue, and accept pops a socket off it, so the queue size varies with the number of concurrent connections and the client's processing capacity.
smoltcp does not provide a listen interface, so a new socket cannot be created automatically when a connection arrives. The current implementation therefore always keeps (number of connections + 1) smoltcp sockets bound; with single-packet processing, update_io_event can bind a new socket in time for the next connection.
At first glance, this implementation looks like it could get stuck in poll for a long time if packets keep arriving. @sparkzky @Samuka007 could you help review?
```rust
let mut sockets = self.sockets.lock_irqsave();
let result = interface.poll_ingress_single(timestamp, device, &mut sockets);
drop(sockets);
match result {
    smoltcp::iface::PollIngressSingleResult::None => break,
    smoltcp::iface::PollIngressSingleResult::PacketProcessed => {}
    smoltcp::iface::PollIngressSingleResult::SocketStateChanged => {
        self.bounds.read_irqsave().iter().for_each(|bound_socket| {
            bound_socket.notify();
            let _woke = bound_socket
                .wait_queue()
                .wakeup(Some(ProcessState::Blocked(true)));
        });
    }
}
```
Can you explain why you process packets one at a time here?
The previous code could process multiple packets in one pass, then wake the bound_sockets, waking several sockets that were waiting for data at once.
With your current approach, every arriving packet wakes the entire bound_sockets set, yet in the end only one socket actually receives data. Isn't that somewhat unnecessary?
The previous code could process multiple packets at once because it bound backlog sockets in advance; this PR aims for something like Linux's dynamically expanding wait_queue mechanism. I considered two approaches:
- Process packets one at a time, shrinking the processing granularity so the upper-layer TCP socket can grow the queue in time for new connections
- At the smoltcp level, provide a listen interface and maintain a wait_queue that expands dynamically with the number of connections

Approach 2 is more elegant, and it would also allow optimizing smoltcp's dispatch, e.g. maintaining the sockets set with a hashmap, but it requires modifying smoltcp.
Is there a better approach, or are there plans to optimize smoltcp?
```rust
Inner::Listening(listen) => Some(listen.inners.read().front().unwrap().iface().clone()),
Inner::Established(est) => Some(est.inner.iface().clone()),
```
Are you really going to unwrap directly here? Add an empty check and return None when the queue is empty.

refactor(Listening): use a queue to maintain inners
debug(Listening): fix inners not being expanded correctly
refactor(poll): switch from batched notify of multiple packets to per-packet notify
feat: add a unit test