The previous article used MediaPlayerService as an example to walk through how a system service process registers itself with ServiceManager; the counterpart to that is the process of obtaining a service. Before getting there, though, we need to understand ServiceManager itself. We saw earlier that the defaultServiceManager function returns a BpServiceManager, which sends commands to the server behind handle 0 — and that server is ServiceManager. So this article starts with ServiceManager's startup process, which makes both service registration and service lookup easier to understand.
1. ServiceManager's entry function
frameworks/native/cmds/servicemanager/service_manager.c
```c
int main(int argc, char **argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;

    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }

    bs = binder_open(driver, 128 * 1024);
    if (!bs) {
        while (true) {
            sleep(UINT_MAX);
        }
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    binder_loop(bs, svcmgr_handler);

    return 0;
}
```
First, a look at the binder_state struct used in main, which stores three pieces of Binder state:
frameworks/native/cmds/servicemanager/binder.c
```c
struct binder_state
{
    int fd;
    void *mapped;
    size_t mapsize;
};
```
fd: file descriptor of the binder device
mapped: start address where the binder device file is mapped into the process's address space
mapsize: size of the mapped region allocated by the system; defaults to 128KB
In main, binder_open opens the binder device and requests a 128KB memory mapping. binder_become_context_manager then registers ServiceManager as the context manager of Binder IPC, reachable at handle 0. Finally, binder_loop is called to wait for and handle client requests in a loop; note the svcmgr_handler function pointer passed in.
1.1- Opening the binder device
frameworks/native/cmds/servicemanager/binder.c
```c
struct binder_state *binder_open(const char *driver, size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open %s (%s)\n",
                driver, strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
```
Having already analyzed ProcessState, this function should look quite familiar. It first opens the binder device file with open, then calls ioctl to query the Binder version, and finally calls mmap to map the binder device file into the process's address space, with a mapping of 128KB. Once these steps complete, the file descriptor, the mapping's start address, and the mapping size are saved in the binder_state struct.
1.2- Becoming the context manager
frameworks/native/cmds/servicemanager/binder.c
```c
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
```
The ioctl call ends up in the Binder driver's binder_ioctl, which is kernel-side (Kernel Binder) code that I'm less familiar with. Inside, a switch handles the BINDER_SET_CONTEXT_MGR command, ultimately registering the current ServiceManager process as the context manager of the Binder mechanism.
1.3- binder_loop: waiting in a loop

Once ServiceManager has successfully registered as the context manager, it becomes the hub of the Binder mechanism: all inter-process communication goes through it, yet there is no way to know when clients will send requests. To handle client requests at any moment while the system is running, an infinite loop is the obvious approach, and that is exactly what ServiceManager does. The implementation is the function binder_loop:
frameworks/native/cmds/servicemanager/binder.c
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER; // 1
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // 2
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
bwr was mentioned in the previous article: it is the binder_write_read struct used to exchange data with the Binder driver. At comment 1, readbuf stores the BC_ENTER_LOOPER command, which binder_write then writes into the Binder driver. Inside the infinite loop, the ioctl at comment 2 is called repeatedly: with the BINDER_WRITE_READ command it asks the Binder driver whether there are new requests. If there are, they are handed to binder_parse for processing; if not, the current thread goes to sleep inside the Binder driver, waiting for new inter-process requests.
1) First, the binder_write function:
frameworks/native/cmds/servicemanager/binder.c
```c
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}
```
binder_write fills in bwr and then calls ioctl with the BINDER_WRITE_READ command to send bwr's data to the Binder driver — again crossing into kernel (Kernel Binder) code.
On the Kernel Binder side (the kernel's binder.c):
When binder_ioctl handles the BINDER_WRITE_READ command, it first uses copy_from_user to copy the userspace binder_write_read data into a kernel binder_write_read bwr. If write_size in the kernel bwr shows there is input data, binder_thread_write is called to process the BC_ENTER_LOOPER protocol, which sets the target thread's state to BINDER_LOOPER_STATE_ENTERED — the target thread being the Binder thread. Afterwards, copy_to_user copies the kernel-space bwr back to user space.
2) Back in binder_loop's infinite loop: when a request is received it is handed to binder_parse, which ultimately calls func to handle it. The function passed into binder_loop is svcmgr_handler, whose code is:
frameworks/native/cmds/servicemanager/service_manager.c
```c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    uint32_t dumpsys_priority;

    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);

    switch (txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        dumpsys_priority = bio_get_uint32(msg);
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
                           allow_isolated, dumpsys_priority, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);
        uint32_t req_dumpsys_priority = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid, txn->sender_euid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                  txn->sender_euid);
            return -1;
        }
        si = svclist;
        while (si) {
            if (si->dumpsys_priority & req_dumpsys_priority) {
                if (n == 0) break;
                n--;
            }
            si = si->next;
        }
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
```
Note the SVC_MGR_ADD_SERVICE case. Remember the call to BpServiceManager's addService in the previous article?
frameworks/native/libs/binder/IServiceManager.cpp::BpServiceManager
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated, int dumpsysPriority) {
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    data.writeInt32(dumpsysPriority);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
Its ADD_SERVICE_TRANSACTION corresponds to the SVC_MGR_ADD_SERVICE case, so do_add_service gets called:
```c
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   uint32_t dumpsys_priority, pid_t spid) {
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid, uid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
              str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                  str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                  str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void *) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->dumpsys_priority = dumpsys_priority;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);

    return 0;
}
```
To summarize: after the ServiceManager process starts, binder_loop runs an infinite loop that keeps querying the Binder driver for new requests with BINDER_WRITE_READ. If there are none, the current thread sleeps inside the Binder driver; if there are, they are handed to binder_parse, which eventually triggers svcmgr_handler, whose switch/case dispatches to do_add_service, saving the service's information into svclist.
2. The significance of ServiceManager
ServiceManager centrally manages all of the system's services, so it can enforce permission control and unified management in one place — for example, not every process is allowed to register a service.
ServiceManager supports looking up the corresponding Service by a string name.
For all kinds of reasons, a Service process may die at any time. If every Client had to detect this on its own, things would become hard to control. With ServiceManager managing services centrally, a Client only needs to query ServiceManager and can leave the rest alone.