http://blog.csdn.net/zhuichao001/article/details/5599539
2010
The whole point of implementing the lwIP protocol stack is, of course, to let application code do socket programming on top of it. So let's start from the socket. For compatibility, lwIP provides the standard socket interface functions. Indeed, in src/include/lwip/sockets.h you will find the following macro definitions:
#if LWIP_COMPAT_SOCKETS
#define accept(a,b,c) lwip_accept(a,b,c)
#define bind(a,b,c) lwip_bind(a,b,c)
#define shutdown(a,b) lwip_shutdown(a,b)
#define closesocket(s) lwip_close(s)
#define connect(a,b,c) lwip_connect(a,b,c)
#define getsockname(a,b,c) lwip_getsockname(a,b,c)
#define getpeername(a,b,c) lwip_getpeername(a,b,c)
#define setsockopt(a,b,c,d,e) lwip_setsockopt(a,b,c,d,e)
#define getsockopt(a,b,c,d,e) lwip_getsockopt(a,b,c,d,e)
#define listen(a,b) lwip_listen(a,b)
#define recv(a,b,c,d) lwip_recv(a,b,c,d)
#define recvfrom(a,b,c,d,e,f) lwip_recvfrom(a,b,c,d,e,f)
#define send(a,b,c,d) lwip_send(a,b,c,d)
#define sendto(a,b,c,d,e,f) lwip_sendto(a,b,c,d,e,f)
#define socket(a,b,c) lwip_socket(a,b,c)
#define select(a,b,c,d,e) lwip_select(a,b,c,d,e)
#define ioctlsocket(a,b,c) lwip_ioctl(a,b,c)
#if LWIP_POSIX_SOCKETS_IO_NAMES
#define read(a,b,c) lwip_read(a,b,c)
#define write(a,b,c) lwip_write(a,b,c)
#define close(s) lwip_close(s)
#endif /* LWIP_POSIX_SOCKETS_IO_NAMES */
#endif /* LWIP_COMPAT_SOCKETS */
Leaving the actual implementations aside for a moment: these macros alone already spell out the interface a standard socket API must provide.
Now for the implementations themselves, which live in src/api/sockets.c. Let's start with the function that accepts a connection; this one is TCP-specific.
Prototype: int lwip_accept(int s, struct sockaddr *addr, socklen_t *addrlen)
Notice that the socket type parameter s here is actually just an int.
The first call inside this function is sock = get_socket(s);
The sock variable is of type struct lwip_socket, defined as follows:
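With LWIP_COMPAT_SOCKETS enabled, ordinary BSD-style application code therefore compiles straight onto the lwip_* functions. A minimal, purely hypothetical TCP server snippet to make the mapping concrete (error handling omitted):

#include <string.h>
#include "lwip/sockets.h"   /* pulls in the compatibility macros above */

void listen_example(void)
{
  struct sockaddr_in sa;
  int listen_fd, conn_fd;

  listen_fd = socket(AF_INET, SOCK_STREAM, 0);          /* -> lwip_socket() */
  memset(&sa, 0, sizeof(sa));
  sa.sin_family = AF_INET;
  sa.sin_port = htons(7);                               /* port 7, just an example */
  sa.sin_addr.s_addr = INADDR_ANY;
  bind(listen_fd, (struct sockaddr *)&sa, sizeof(sa));  /* -> lwip_bind() */
  listen(listen_fd, 1);                                 /* -> lwip_listen() */
  conn_fd = accept(listen_fd, NULL, NULL);              /* -> lwip_accept() */
  closesocket(conn_fd);                                 /* -> lwip_close() */
  closesocket(listen_fd);
}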
/** Contains all internal pointers and states used for a socket */
struct lwip_socket {
/** sockets currently are built on netconns, each socket has one netconn */
struct netconn *conn;
/** data that was left from the previous read */
struct netbuf *lastdata;
/** offset in the data that was left from the previous read */
u16_t lastoffset;
/** number of times data was received, set by event_callback(),
tested by the receive and select functions */
u16_t rcvevent;
/** number of times data was ACKed (free send buffer), set by event_callback(),
tested by select */
u16_t sendevent;
/** socket flags (currently, only used for O_NONBLOCK) */
u16_t flags;
/** last error that occurred on this socket */
int err;
};
Let's set this structure aside for the moment and look at how get_socket() is implemented [also in src/api/sockets.c]. There we find the statement sock = &sockets[s]; and sock is also the return value: the function simply uses the descriptor passed in as an index into the sockets array and returns the address of that element. So where do the elements of this sockets array get assigned?
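For reference, get_socket() itself is tiny; here is a sketch along the lines of src/api/sockets.c (assuming lwIP 1.3.x, where the array is a file-scope static):

/* the pool of sockets, indexed by the descriptor the application holds */
static struct lwip_socket sockets[NUM_SOCKETS];

static struct lwip_socket *
get_socket(int s)
{
  struct lwip_socket *sock;

  if ((s < 0) || (s >= NUM_SOCKETS)) {
    set_errno(EBADF);          /* descriptor out of range */
    return NULL;
  }
  sock = &sockets[s];
  if (!sock->conn) {
    set_errno(EBADF);          /* slot not in use: no netconn behind it */
    return NULL;
  }
  return sock;                 /* the address of the array element */
}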
To answer that, we should start where standard socket programming itself starts: the socket() function. Its actual implementation is this function:
int lwip_socket(int domain, int type, int protocol) [src/api/sockets.c]
Based on the protocol type, i.e. the type parameter, this function creates a netconn structure and obtains a pointer to it, then passes that pointer to alloc_socket(). Here is that function's implementation:
static int alloc_socket(struct netconn *newconn)
{
int i;
/* Protect socket array */
sys_sem_wait(socksem);
/* allocate a new socket identifier */
for (i = 0; i < NUM_SOCKETS; ++i) {
if (!sockets[i].conn) {
sockets[i].conn = newconn;
sockets[i].lastdata = NULL;
sockets[i].lastoffset = 0;
sockets[i].rcvevent = 0;
sockets[i].sendevent = 1; /* TCP send buf is empty */
sockets[i].flags = 0;
sockets[i].err = 0;
sys_sem_signal(socksem);
return i;
}
}
sys_sem_signal(socksem);
return -1;
}
Right, this is the moment when the elements of the global sockets array get assigned.
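The tail of lwip_socket() then closes the loop: the index returned by alloc_socket() becomes the descriptor handed to the application. Roughly (abridged, assuming lwIP 1.3.x):

/* end of lwip_socket(), abridged */
i = alloc_socket(conn);
if (i == -1) {
  netconn_delete(conn);   /* no free slot, give the netconn back */
  set_errno(ENOBUFS);
  return -1;
}
conn->socket = i;         /* tell the netconn which descriptor it belongs to */
set_errno(0);
return i;                 /* this int is what the application calls a "socket" */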
And since we are here anyway, let's take a look at the netconn structure as well. Its formal name is "netconn descriptor":
/** A netconn descriptor */
struct netconn
{
/** type of the netconn (TCP, UDP or RAW) */
enum netconn_type type;
/** current state of the netconn */
enum netconn_state state;
/** the lwIP internal protocol control block */
union {
struct ip_pcb *ip;
struct tcp_pcb *tcp;
struct udp_pcb *udp;
struct raw_pcb *raw;
} pcb;
/** the last error this netconn had */
err_t err;
/** sem that is used to synchronously execute functions in the core context */
sys_sem_t op_completed;
/** mbox where received packets are stored until they are fetched
by the netconn application thread (can grow quite big) */
sys_mbox_t recvmbox;
/** mbox where new connections are stored until processed
by the application thread */
sys_mbox_t acceptmbox;
/** only used for socket layer */
int socket;
#if LWIP_SO_RCVTIMEO
/** timeout to wait for new data to be received
(or connections to arrive for listening netconns) */
int recv_timeout;
#endif /* LWIP_SO_RCVTIMEO */
#if LWIP_SO_RCVBUF
/** maximum amount of bytes queued in recvmbox */
int recv_bufsize;
#endif /* LWIP_SO_RCVBUF */
u16_t recv_avail;
/** TCP: when data passed to netconn_write doesn't fit into the send buffer,
this temporarily stores the message. */
struct api_msg_msg *write_msg;
/** TCP: when data passed to netconn_write doesn't fit into the send buffer,
this temporarily stores how much is already sent. */
int write_offset;
#if LWIP_TCPIP_CORE_LOCKING
/** TCP: when data passed to netconn_write doesn't fit into the send buffer,
this temporarily stores whether to wake up the original application task
if data couldn't be sent in the first try. */
u8_t write_delayed;
#endif /* LWIP_TCPIP_CORE_LOCKING */
/** A callback function that is informed about events for this netconn */
netconn_callback callback;
}; [src/include/lwip/api.h]
That gives us a rough idea of what this structure holds.
Next, let's take the SOCK_STREAM type as an example and follow how a netconn gets created.
In lwip_socket() we have:
case SOCK_STREAM:
conn = netconn_new_with_callback(NETCONN_TCP, event_callback);
break;
#define netconn_new_with_callback(t, c) netconn_new_with_proto_and_callback(t, 0, c)
A trimmed-down implementation of that function:
struct netconn*
netconn_new_with_proto_and_callback(enum netconn_type t, u8_t proto, netconn_callback callback)
{
struct netconn *conn;
struct api_msg msg;
conn = netconn_alloc(t, callback);
if (conn != NULL )
{
msg.function = do_newconn;
msg.msg.msg.n.proto = proto;
msg.msg.conn = conn;
TCPIP_APIMSG(&msg);
}
return conn;
}
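A quick aside before following TCPIP_APIMSG: netconn_alloc() is where the netconn's semaphore and receive mailbox come to life. A sketch (abridged; error handling and mailbox sizing omitted; assuming lwIP 1.3.x):

struct netconn *
netconn_alloc(enum netconn_type t, netconn_callback callback)
{
  struct netconn *conn = memp_malloc(MEMP_NETCONN);
  if (conn == NULL) {
    return NULL;
  }
  conn->err = ERR_OK;
  conn->type = t;
  conn->pcb.tcp = NULL;
  conn->op_completed = sys_sem_new(0);  /* used to wait for core-side completion */
  conn->recvmbox = sys_mbox_new();      /* incoming data will land here */
  conn->acceptmbox = SYS_MBOX_NULL;     /* created later, by do_listen() */
  conn->state = NETCONN_NONE;
  conn->socket = -1;                    /* not attached to a socket yet */
  conn->callback = callback;            /* event_callback, for the socket layer */
  return conn;
}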
The interesting part is TCPIP_APIMSG. This macro has two definitions: one for LWIP_TCPIP_CORE_LOCKING and a non-locking one. Let's analyze the two variants separately.
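The selection between the two lives in src/include/lwip/tcpip.h and looks roughly like this (abridged):

#if LWIP_TCPIP_CORE_LOCKING
#define TCPIP_APIMSG(m)     tcpip_apimsg_lock(m)
#define TCPIP_APIMSG_ACK(m)                      /* nothing to signal */
#else /* LWIP_TCPIP_CORE_LOCKING */
#define TCPIP_APIMSG(m)     tcpip_apimsg(m)
#define TCPIP_APIMSG_ACK(m) sys_sem_signal(m->conn->op_completed)
#endif /* LWIP_TCPIP_CORE_LOCKING */

This also explains the TCPIP_APIMSG_ACK(msg) we will meet in do_newconn() below: in the non-locking case it signals op_completed to wake the application thread.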
/**
 * Call the lower part of a netconn_* function
 * This function has exclusive access to lwIP core code by locking it
 * before the function is called.
 */
err_t tcpip_apimsg_lock(struct api_msg *apimsg) /* the core-locking variant */
{
LOCK_TCPIP_CORE();
apimsg->function(&(apimsg->msg));
UNLOCK_TCPIP_CORE();
return ERR_OK;
}
/**
 * Call the lower part of a netconn_* function
 * This function is then running in the thread context
 * of tcpip_thread and has exclusive access to lwIP core code.
 */
err_t tcpip_apimsg(struct api_msg *apimsg) /* the non-locking variant */
{
struct tcpip_msg msg;
if (mbox != SYS_MBOX_NULL) {
msg.type = TCPIP_MSG_API;
msg.msg.apimsg = apimsg;
sys_mbox_post(mbox, &msg);
sys_arch_sem_wait(apimsg->msg.conn->op_completed, 0);
return ERR_OK;
}
return ERR_VAL;
}
In the end both do the same job, namely invoke apimsg->function; only the route differs, as their comments above make clear. So the call through apimsg->function is what matters here. From the implementation of netconn_new_with_proto_and_callback() we know this function is do_newconn():
void do_newconn(struct api_msg_msg *msg)
{
if(msg->conn->pcb.tcp == NULL) {
pcb_new(msg);
}
/* Else? This "new" connection already has a PCB allocated. */
/* Is this an error condition? Should it be deleted? */
/* We currently just are happy and return. */
TCPIP_APIMSG_ACK(msg);
}
Sticking with TCP: in pcb_new() we find the following code:
case NETCONN_TCP:
msg->conn->pcb.tcp = tcp_new();
if(msg->conn->pcb.tcp == NULL) {
msg->conn->err = ERR_MEM;
break;
}
setup_tcp(msg->conn);
break;
We can see that this is where the TCP protocol control block gets created. As for tcp_new() itself, that mighty function deserves an introduction of its own later.
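setup_tcp(), for its part, just wires the netconn into the new PCB's callbacks; a sketch along the lines of src/api/api_msg.c (assuming lwIP 1.3.x):

static void
setup_tcp(struct netconn *conn)
{
  struct tcp_pcb *pcb = conn->pcb.tcp;

  tcp_arg(pcb, conn);        /* hand the netconn to every callback below */
  tcp_recv(pcb, recv_tcp);   /* incoming data  -> conn->recvmbox */
  tcp_sent(pcb, sent_tcp);   /* ACKed data     -> sendevent */
  tcp_poll(pcb, poll_tcp, 4);
  tcp_err(pcb, err_tcp);
}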
Alright, let's get back to the accept function.
With sock obtained, the next step is newconn = netconn_accept(sock->conn); which fetches the new connection from a mailbox. A fair guess is that this new connection has something to do with listen, so let's interrupt once more and look at the listen operation. The call chain is:
lwip_listen --> netconn_listen_with_backlog --> do_listen -->
tcp_arg(msg->conn->pcb.tcp, msg->conn);
tcp_accept(msg->conn->pcb.tcp, accept_function); /* registers the accept callback */
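Those two lines are the tail of do_listen(). Just before them, the connection's PCB is swapped for a lightweight listening PCB and the accept mailbox is created. Roughly (abridged; mailbox sizing omitted; assuming lwIP 1.3.x):

/* inside do_listen(), before the registration above */
struct tcp_pcb *lpcb = tcp_listen(msg->conn->pcb.tcp);
if (lpcb == NULL) {
  msg->conn->err = ERR_MEM;
} else {
  if (msg->conn->acceptmbox == SYS_MBOX_NULL) {
    msg->conn->acceptmbox = sys_mbox_new();  /* where accept_function() will post */
  }
  msg->conn->state = NETCONN_LISTEN;
  msg->conn->pcb.tcp = lpcb;   /* tcp_listen() freed the old, full-size pcb */
}

And here is the registered callback itself: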
/**
 * Accept callback function for TCP netconns.
 * Allocates a new netconn and posts that to conn->acceptmbox.
 */
static err_t accept_function(void *arg, struct tcp_pcb *newpcb, err_t err)
{
struct netconn *newconn;
struct netconn *conn;
conn = (struct netconn *)arg;
/* We have to set the callback here even though
* the new socket is unknown. conn->socket is marked as -1. */
newconn = netconn_alloc(conn->type, conn->callback);
if (newconn == NULL) {
return ERR_MEM;
}
newconn->pcb.tcp = newpcb;
setup_tcp(newconn);
newconn->err = err;
/* Register event with callback */
API_EVENT(conn, NETCONN_EVT_RCVPLUS, 0);
if (sys_mbox_trypost(conn->acceptmbox, newconn) != ERR_OK)
{
/* When returning != ERR_OK, the connection is aborted in tcp_process(),
so do nothing here! */
newconn->pcb.tcp = NULL;
netconn_free(newconn);
return ERR_MEM;
}
return ERR_OK;
}
And there it is: the connection that accept() fetches from the mailbox is exactly the one posted here.
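netconn_accept() is the consuming side of that mailbox; a sketch (abridged, assuming lwIP 1.3.x):

struct netconn *
netconn_accept(struct netconn *conn)
{
  struct netconn *newconn;

  /* block until accept_function() posts a new connection */
  sys_arch_mbox_fetch(conn->acceptmbox, (void **)&newconn, 0);
  /* Register event with callback: one pending connection was consumed */
  API_EVENT(conn, NETCONN_EVT_RCVMINUS, 0);
  return newconn;
}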
Back in accept: with the new connection obtained, the next step is to allocate a sock for it. And then? Then it is simply up to the user to start receiving and sending data.
That wraps up the socket layer on the application side, i.e. everything above the transport layer. To finish, here is a summary of the call paths we walked through:
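socket(): socket --> lwip_socket --> netconn_new_with_callback --> netconn_new_with_proto_and_callback --> TCPIP_APIMSG --> do_newconn --> tcp_new + setup_tcp, then alloc_socket() claims a sockets[] slot
listen(): listen --> lwip_listen --> netconn_listen_with_backlog --> do_listen --> tcp_listen + tcp_accept(accept_function)
accept(): accept --> lwip_accept --> get_socket --> netconn_accept --> fetch the newconn that accept_function() posted to acceptmbox --> alloc_socket() for the new connection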
This article comes from the "bluefish" blog; please keep this attribution: http://bluefish.blog.51cto.com/214870/158413