Binder is an IPC (inter-process communication) mechanism provided by the Android system. Since Android is built on the Linux kernel, other IPC mechanisms such as pipes and sockets are also available, but Binder is more flexible and convenient than the alternatives. For anyone learning Android, the Binder mechanism is both essential and the hardest part to master, because the Android system can essentially be viewed as a client/server architecture built on Binder communication. Like a network, Binder ties the different parts of the Android system together, so to understand Android well, Binder is a hurdle you cannot walk around.
1-Binder Overview
Binder follows a client/server architecture, but besides the Client and the Server, Android also has a global ServiceManager endpoint whose job is to manage the various services (Service). The relationship between Client, Server, and ServiceManager is shown in the figure below:
A Server process must first register its service with the ServiceManager, so with respect to the ServiceManager the Server acts as a client and the ServiceManager acts as the server.
When a Client process wants to use a Service, it must first obtain that Service's information from the ServiceManager, so the Client is also a client of the ServiceManager.
Using the Service information it obtained, the Client establishes a connection to the process hosting the Service and can then interact with it directly, so the Client is also a client of the Server.
All of the interaction among these three parties is itself based on Binder communication.
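Before diving into the source, it may help to see what this three-way relationship looks like from a native client's point of view. The following is only a minimal sketch using the libbinder APIs defaultServiceManager, getService, and interface_cast; the service name "media.player" and IMediaPlayerService are simply the example used later in this article, header paths may differ across Android versions, and error handling is omitted:

// Minimal client-side sketch (assumes linking against libbinder/libmedia).
#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>

using namespace android;

sp<IMediaPlayerService> getMediaPlayerService() {
    // 1. Ask the ServiceManager for the IBinder registered under "media.player".
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.player"));
    if (binder == nullptr) return nullptr;
    // 2. Wrap the remote IBinder in a typed proxy so we can call its methods.
    return interface_cast<IMediaPlayerService>(binder);
}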
To understand the Binder mechanism we need an entry point. Since the interaction among Client, Server, and ServiceManager is all based on Binder, any pair of them can illustrate how Binder communication works. Let's take the MediaPlayer framework as the example:
As the figure above shows, MediaPlayer and MediaPlayerService communicate via Binder: MediaPlayer is the client and MediaPlayerService is the server. MediaPlayerService is one of the system's multimedia services, and these services are hosted by the mediaserver process, an executable that is started when the Android system boots. Its entry function is as follows:
frameworks/av/media/mediaserver/main_mediaserver.cpp
int main(int argc __unused, char **argv __unused)
{
    signal(SIGPIPE, SIG_IGN);

    sp<ProcessState> proc(ProcessState::self());        // 1
    sp<IServiceManager> sm(defaultServiceManager());    // 2
    ALOGI("ServiceManager: %p", sm.get());
    MediaPlayerService::instantiate();                   // 3
    ResourceManagerService::instantiate();
    registerExtensions();
    ::android::hardware::configureRpcThreadpool(16, false);
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    ::android::hardware::joinRpcThreadpool();
}
Note 1: obtain the ProcessState instance. During this step the /dev/binder device is opened, and mmap maps a region of virtual address space that the Binder driver uses to deliver incoming data.
Note 2: obtain the IServiceManager; through it the current process can talk to the ServiceManager process, which already involves Binder communication.
Note 3: register MediaPlayerService.
2-The ProcessState Instance
A ProcessState instance represents the state of the current process; each process has exactly one ProcessState, so it is unique per process. Let's start with the self function:
frameworks/native/libs/binder/ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState("/dev/binder");
    return gProcess;
}
The self function follows the singleton pattern and creates a ProcessState instance with the parameter /dev/binder. Next is the ProcessState constructor.
ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
2.1-Opening the binder device
open_driver opens the /dev/binder device, a virtual device that Android adds to the kernel specifically for inter-process communication. The implementation is as follows:
static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                  vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}
It opens the /dev/binder device and sets the maximum number of Binder threads, which effectively establishes a channel to the Binder driver in the kernel. In Linux, when a process opens a file with open, the kernel returns a file descriptor for that file, and every subsequent operation on the file takes this fd as a parameter.
PS: a file descriptor can be thought of as an index into the process's file descriptor table; if you picture that table as an array, the fd is the array index. When an I/O operation is performed, the fd is passed in, the corresponding entry is looked up in the process's file descriptor table to get the handle of the already-opened file, and through that handle the system file table and the file's inode are reached, locating the actual file so the I/O can be carried out.
mmap is then called on the returned fd, so the Binder driver allocates a region of memory that will be used to receive data.
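Condensed into one place, the channel setup that ProcessState performs looks roughly like the sketch below. The ioctl codes and the overall flow match the code above; kVmSize, the header path, and the minimal error handling are simplifications and assumptions of this sketch:

// Hedged sketch of the open + ioctl + mmap sequence performed by ProcessState.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>   // UAPI header providing BINDER_* ioctl codes

// Simplified; the real BINDER_VM_SIZE is roughly 1 MB minus two pages.
static const size_t kVmSize = (1 * 1024 * 1024) - 4096 * 2;

int openBinderChannel(void** vmStart) {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);   // fd is the channel to the driver
    if (fd < 0) return -1;

    int vers = 0;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0) { close(fd); return -1; }

    size_t maxThreads = 15;                              // DEFAULT_MAX_BINDER_THREADS
    ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

    // Read-only mapping that the driver fills with incoming transaction data.
    *vmStart = mmap(nullptr, kVmSize, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    return fd;
}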
That completes the main analysis of ProcessState. Next is the second part, defaultServiceManager.
3-The defaultServiceManager Function
defaultServiceManager is implemented in IServiceManager.cpp. It returns an IServiceManager object through which the current process can interact with another process, the Service Manager. How does it manage that? Keep the question in mind as we read on.
frameworks/native/libs/binder/IServiceManager.cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
How the IServiceManager object is created here is a bit convoluted: interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)). Look at the argument first, ProcessState::self()->getContextObject(NULL): self returns the ProcessState instance, and then getContextObject is called on it. Note that the parameter is NULL, i.e. 0.
frameworks/native/libs/binder/ProcessState.cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
getStrongProxyForHandle first calls lookupHandleLocked with handle 0, which looks up the resource entry for that index. If lookupHandleLocked finds no existing entry, a new one is created and returned, and a BpBinder with handle 0 is created and assigned to result, which is what gets returned. So the earlier expression, interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)), can be rewritten as interface_cast<IServiceManager>(new BpBinder(0)).
3.1-BpBinder
At this point a new class, BpBinder, has appeared. Before introducing it we should mention its twin brother, BBinder. The two of them represent the two ends of Android's Binder communication, and both derive from IBinder.
BpBinder is the proxy class through which a client talks to a server, while BBinder represents the server side. BpBinder and BBinder correspond one to one, and a BpBinder finds its matching BBinder through a handle.
We just created a BpBinder in defaultServiceManager, which raises two questions.
Why create a BpBinder?
Because at this point we are a client of the ServiceManager, so we need a BpBinder proxy on our side in order to interact with the ServiceManager.
We said BpBinder and BBinder correspond one to one, so how does a BpBinder identify the BBinder it belongs to?
The Binder system identifies the corresponding BBinder through the handle.
Note that the handle passed to the BpBinder constructor here is 0, and this 0 has a special meaning throughout the Binder system: handle 0 always refers to the BBinder that belongs to the ServiceManager.
BpBinder::BpBinder(int32_t handle, int32_t trackedUid)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
    , mTrackedUid(trackedUid)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle, this);
}
Looking at BpBinder's constructor, we find none of the direct Binder operations we saw earlier in ProcessState. So why do we say BpBinder is related to communication? In fact BpBinder and BBinder are only tools; the important object in this constructor is IPCThreadState, whose name already hints at its role. We won't lift that veil just yet since it is covered later; for now, back to interface_cast<IServiceManager>(new BpBinder(0)).
3.2-interface_cast, the Sleight of Hand
In Deng Fanping's《深入理解Android》Volume 1, interface_cast is described as a sleight of hand, and the first time I read it, it genuinely threw me: I assumed interface_cast was some kind of pointer cast, but it is actually just a function, one whose machinery is wrapped up in the two macros DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE.
Let's look at its implementation:
libs/binder/include/binder/IInterface.h
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
In the current scenario INTERFACE is IServiceManager, so after substitution the code becomes:
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
    return IServiceManager::asInterface(obj);
}
libs/binder/include/binder/IInterface.h
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const ::android::String16 descriptor;                       \
    static ::android::sp<I##INTERFACE> asInterface(                    \
            const ::android::sp<::android::IBinder>& obj);             \
    virtual const ::android::String16& getInterfaceDescriptor() const; \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();                                            \

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const ::android::String16 I##INTERFACE::descriptor(NAME);           \
    const ::android::String16&                                          \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(              \
            const ::android::sp<::android::IBinder>& obj)               \
    {                                                                   \
        ::android::sp<I##INTERFACE> intr;                               \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
IServiceManager.cpp uses the IMPLEMENT_META_INTERFACE macro in a single line, shown below:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
In this use of IMPLEMENT_META_INTERFACE, INTERFACE is ServiceManager and NAME is "android.os.IServiceManager". Expanding the macro gives the following code:
const ::android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const ::android::String16& IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}

::android::sp<IServiceManager> IServiceManager::asInterface(
        const ::android::sp<::android::IBinder>& obj)
{
    ::android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
At this point the secret of interface_cast is out: in the end IServiceManager::asInterface is called with the BpBinder object as its argument, and it creates a new BpServiceManager object.
Note: in the newer Android 12 sources, the implementation of BpServiceManager has moved to the AIDL-generated output under out: soong/.intermediates/frameworks/native/libs/binder/libbinder/android_arm64_armv8-a_shared/gen/aidl/android/os/IServiceManager.cpp
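These two macros are also what you would use when hand-writing your own native Binder interface. The following hypothetical IHello interface is only a sketch showing where each macro goes; the interface name, the descriptor string, and the SAY_HELLO code are all made up for illustration:

// IHello.h — hypothetical interface, for illustration only
#include <binder/IInterface.h>

namespace android {

class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello)   // declares descriptor, asInterface(), getInterfaceDescriptor()

    enum { SAY_HELLO = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello(const String16& who) = 0;
};

}  // namespace android

// IHello.cpp — hypothetical implementation file.
// IMPLEMENT_META_INTERFACE expands to IHello::asInterface(), which wraps a remote
// IBinder in a Bp proxy, exactly as shown above for IServiceManager. Note that a
// BpHello class must be defined before this macro expands (a minimal BpHello is
// sketched further below).
IMPLEMENT_META_INTERFACE(Hello, "com.example.IHello")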
3.3-BpServiceManager
So what is BpServiceManager, and what does it do? With these questions in mind, let's start with its constructor:
IServiceManager.cpp ::BpServiceManager
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    explicit BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }
    ...
};
IInterface.h::BpInterface
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
    explicit BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder* onAsBinder();
};

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}
Binder.cpp::BpRefBase
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);
        mRefs = mRemote->createWeak(this);
    }
}
Only now does it become clear: BpServiceManager has a member mRemote that points to the BpBinder. Looking back over the whole defaultServiceManager flow, we end up with two key objects:
a BpBinder object whose handle value is 0;
a BpServiceManager object whose mRemote points to that BpBinder.
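In other words, every method on a Bp* proxy ultimately goes through this mRemote member, which BpRefBase exposes to its subclasses via remote(). Continuing the hypothetical IHello example from above, a hand-written proxy would look roughly like this sketch (SAY_HELLO and the marshalling format are made up; this mirrors the real BpServiceManager::addService shown later):

// Hypothetical BpHello proxy, for illustration only.
class BpHello : public BpInterface<IHello> {
public:
    // impl is the BpBinder that ends up stored in BpRefBase::mRemote.
    explicit BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}

    status_t sayHello(const String16& who) override {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(who);
        // remote() returns mRemote, i.e. the BpBinder(handle) proxy.
        return remote()->transact(SAY_HELLO, data, &reply);
    }
};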
3.4-The IServiceManager Family
We now know how BpBinder and BBinder relate to communication, and a whole series of classes around BpServiceManager has appeared. How do they relate to one another?
As the diagram shows:
BpBinder and BBinder are the communication-related classes, and both inherit from IBinder.
IServiceManager is the interface from which BpServiceManager is derived.
BpServiceManager inherits from BpInterface, and BpInterface in turn inherits from BpRefBase, so BpServiceManager contains the mRemote member, which points to the BpBinder through which the actual communication happens.
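The same relationships can be summarized in code form. The following stripped-down declarations only illustrate the inheritance described above; they are not the real class bodies (RefBase and several other details are omitted):

// Simplified view of the class relationships (not the actual declarations).
class IBinder                   { /* transact(), queryLocalInterface(), ... */ };
class BBinder  : public IBinder { /* server side: onTransact() is invoked here */ };
class BpBinder : public IBinder { /* client-side proxy: holds mHandle */ };

class IInterface                           { /* asBinder(), ... */ };
class IServiceManager : public IInterface  { /* addService(), getService(), ... */ };

class BpRefBase { protected: IBinder* mRemote; /* set to the BpBinder */ };

template <typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase { /* ... */ };

// BpServiceManager therefore IS an IServiceManager and, through BpRefBase,
// owns the mRemote (BpBinder) it uses to reach the real ServiceManager.
class BpServiceManager : public BpInterface<IServiceManager> { /* ... */ };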
4-Registering a System Service
With the groundwork of ProcessState::self and defaultServiceManager covered, let's return to the main entry function from the beginning of the article and the registration of MediaPlayerService.
frameworks/av/media/mediaserver/main_mediaserver.cpp
int main(int argc __unused, char **argv __unused)
{
    signal(SIGPIPE, SIG_IGN);

    sp<ProcessState> proc(ProcessState::self());        // 1
    sp<IServiceManager> sm(defaultServiceManager());    // 2
    ALOGI("ServiceManager: %p", sm.get());
    MediaPlayerService::instantiate();                   // 3
    ResourceManagerService::instantiate();
    registerExtensions();
    ::android::hardware::configureRpcThreadpool(16, false);
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    ::android::hardware::joinRpcThreadpool();
}
The code at note 3:
frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
defaultServiceManager returns a BpServiceManager, so we enter BpServiceManager's addService function:
IServiceManager.cpp ::BpServiceManager
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated, int dumpsysPriority) {
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    data.writeInt32(dumpsysPriority);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
addService simply packs the data into a Parcel and then hands it to BpBinder's transact function:
frameworks/native/libs/binder/BpBinder.cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
Here we meet IPCThreadState again. It was mentioned in section 3.1 when we looked at BpBinder's constructor; now it is time to analyze it properly.
4.1-IPCThreadState
IPCThreadState is the object that does the real work for the process. Let's start with its self function:
frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {                                                     // 1
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);  // 2
        if (st) return st;
        return new IPCThreadState;
    }

    if (gShutdown) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return NULL;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
Note 1: TLS stands for Thread Local Storage, a per-thread storage area that is not shared between threads, similar in spirit to ThreadLocal in the Java world. Note 2: this reads the TLS slot and assigns it to an IPCThreadState* pointer; if pthread_getspecific finds nothing, a new IPCThreadState object is created. This kind of creation flow is very common: first try to fetch an existing instance, and only create and store a new one if none exists, so that an object is not created on every call. So wherever pthread_getspecific is called, somewhere there must be a matching call to pthread_setspecific, presumably in the IPCThreadState constructor:
frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
As expected, the constructor calls pthread_setspecific to store itself in thread-local storage. Each IPCThreadState also owns an mIn and an mOut, both of type Parcel: mIn receives data coming from the Binder driver, and mOut holds data about to be sent to the Binder driver. Both default to a capacity of 256 bytes.
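The pthread_key_create / pthread_getspecific / pthread_setspecific trio used here is the standard POSIX pattern for a per-thread singleton. Stripped of the Binder specifics, the idea is roughly the following generic sketch (not the actual IPCThreadState code):

#include <pthread.h>

class PerThread {
public:
    static PerThread* self() {
        pthread_once(&sKeyOnce, createKey);               // create the TLS key once per process
        PerThread* st = static_cast<PerThread*>(pthread_getspecific(sKey));
        if (st) return st;                                 // this thread already has an instance
        return new PerThread();                            // first call on this thread
    }

private:
    PerThread() { pthread_setspecific(sKey, this); }       // remember "this" in the thread's slot

    static void createKey() { pthread_key_create(&sKey, destructor); }
    static void destructor(void* obj) { delete static_cast<PerThread*>(obj); }

    static pthread_key_t sKey;
    static pthread_once_t sKeyOnce;
};

pthread_key_t  PerThread::sKey;
pthread_once_t PerThread::sKeyOnce = PTHREAD_ONCE_INIT;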
4.2-transact
The transport is where the heavy lifting happens. We just saw that BpBinder::transact calls IPCThreadState::transact, and it is this function that actually carries out the Binder communication.
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;

    flags |= TF_ACCEPT_FDS;

    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
writeTransactionData is used to send the data. Its first parameter, BC_TRANSACTION, is the command code for messages sent to the Binder driver; messages the driver sends back to the application use codes prefixed with BR_. All of these codes are defined in binder_module.h.
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
In this function, the binder_transaction_data structure tr is the data structure used to talk to the Binder driver. The handle value is copied into target to identify the destination, where 0 means the ServiceManager. The data (the payload of the addService request) is then checked for errors, and if everything is fine it is copied into the tr structure. Finally, BC_TRANSACTION and the tr structure are written into mOut.
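For reference, the fields being filled in above come from the binder_transaction_data structure in the binder UAPI header. A commented version, slightly simplified, is shown below:

// binder_transaction_data (comments added; see the binder UAPI header for the authoritative definition).
struct binder_transaction_data {
    union {
        __u32            handle;   // target handle, 0 = ServiceManager (used by the sender)
        binder_uintptr_t ptr;      // target BBinder pointer (filled in by the driver for the receiver)
    } target;
    binder_uintptr_t cookie;       // local object cookie (the BBinder* on the receiving side)
    __u32 code;                    // transaction code, e.g. ADD_SERVICE_TRANSACTION
    __u32 flags;                   // TF_ONE_WAY, TF_ACCEPT_FDS, ...

    pid_t sender_pid;              // filled in by the driver
    uid_t sender_euid;

    binder_size_t data_size;       // size of the serialized Parcel data
    binder_size_t offsets_size;    // size of the object-offset array (positions of flat_binder_object)
    union {
        struct {
            binder_uintptr_t buffer;   // pointer to the Parcel data (data.ipcData())
            binder_uintptr_t offsets;  // pointer to the offsets array (data.ipcObjects())
        } ptr;
        __u8 buf[8];
    } data;
};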
Next comes the waitForResponse function. It has quite a few cases, so only part of the code is shown:
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
There is quite a lot of code in waitForResponse; the main things to focus on are the talkWithDriver call and the handling of the various commands, including executeCommand. The main code of talkWithDriver and executeCommand is shown below.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
Inside talkWithDriver, binder_write_read is the structure used to communicate with the Binder driver. The data in mIn and mOut is assigned to the corresponding fields of binder_write_read, and the ioctl call finally exchanges it with the Binder driver. This touches on kernel Binder internals, which I am not very familiar with; for our purposes it is enough to know that the kernel Binder records the service name and handle so that the service can be looked up later.
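binder_write_read itself is a small structure from the same UAPI header: it describes one outgoing buffer and one incoming buffer, and a single BINDER_WRITE_READ ioctl hands both to the driver at once. Comments below are added for orientation:

// binder_write_read from the binder UAPI header (comments added).
struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer (from mOut)
    binder_size_t    write_consumed;  // bytes the driver actually consumed
    binder_uintptr_t write_buffer;    // commands going to the driver (BC_*)
    binder_size_t    read_size;       // capacity of read_buffer (mIn)
    binder_size_t    read_consumed;   // bytes the driver filled in
    binder_uintptr_t read_buffer;     // commands coming back from the driver (BR_*)
};

// ...which is why the core of talkWithDriver is a single call:
//     ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr);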
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
        }
        break;

    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        }
        break;

    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;

    default:
        ALOGE("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}
To simplify, the registration of MediaPlayerService involves a Binder protocol exchange built from BC_ commands, BR_ commands, target, code, and data. Taking the MediaPlayerService client process registering its service with the ServiceManager as the example: at the IPC layer, target is 0, code is ADD_SERVICE_TRANSACTION, and data carries the MediaPlayerService payload. All of this is reflected in the code above. The figure below is a simplified view of the Binder communication flow, seen from the perspective of the processes. The client and the server communicate through Binder, which in plain terms means sending command protocols to the Binder driver; many protocol codes are involved, and I have reduced them to the few steps below.
The Client sends a BC_TRANSACTION command to the Binder driver, carrying the request data.
After receiving the request, the Binder driver sends the client an acknowledgement that the request was accepted.
The Binder driver then generates a BR_TRANSACTION command and sends it to the ServiceManager, waking up the server-side thread.
When the server side has finished registering the service, it generates a BC_REPLY command and sends it to the Binder driver.
The Binder driver acknowledges to the server side that the reply was sent successfully.
The Binder driver generates a BR_REPLY command and sends it to the Client, waking up the client's thread.
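The command codes involved in this round trip come from the binder UAPI header (mirrored by binder_module.h in libbinder). A partial list of the ones that appear in this article, for orientation only:

// Userspace -> driver (binder_driver_command_protocol, "BC_" = Binder Command):
//     BC_TRANSACTION            send a transaction (e.g. the addService request)
//     BC_REPLY                  send the reply to a received transaction
//     BC_FREE_BUFFER            return a received buffer to the driver
//     BC_ENTER_LOOPER / BC_REGISTER_LOOPER    binder thread-pool management
//
// Driver -> userspace (binder_driver_return_protocol, "BR_" = Binder Return):
//     BR_TRANSACTION            a transaction has arrived for this process
//     BR_REPLY                  the reply to a transaction we sent
//     BR_TRANSACTION_COMPLETE   the driver accepted our BC_TRANSACTION / BC_REPLY
//     BR_DEAD_REPLY / BR_FAILED_REPLY         error outcomes
//     BR_SPAWN_LOOPER           the driver asks the process to spawn another binder thread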