Android is one of the world’s most widely used operating systems, running mobile devices in more than 190 countries. It is installed on the mobile platform and provides the tools for creating apps that utilize the hardware capabilities of each device. Android automatically adapts its user interface to each device and provides easy, customizable control. This paper explores in detail Android operating system process management in terms of CPU scheduling, threads, process synchronization, and deadlocks.
The Android mobile operating system is based on the Linux kernel 2.6, which provides an open-source license and adaptability to user-driven apps. It incorporates all the basic features of an operating system, such as process scheduling, memory management, threading, and process synchronization. A fundamental feature of a successful operating system is process scheduling, which allocates resources in a manner that avoids conflicts. Thus, scheduling is essential to adapt the system to the requirements of a given application.
An operating system is an interface that enables communication between applications and hardware. Successful operating systems have a considerably high throughput rate, and task scheduling and memory management are the two principal requirements for attaining it. Mobile operating systems such as Android are embedded on devices and are designed to satisfy specific timelines for the execution of certain tasks. Priority-based task scheduling is one mechanism used to attain quick response times for important tasks such as messaging.
When an application component starts and no other component of that application is running, Android starts a new Linux process for the application with a single thread of execution. By default, all components of the same application run in the same process and thread; this thread is known as the “main” thread. If a component starts while a process for its application already exists (because another component of the application is running), it runs in that existing process and thread of execution, unless the developer arranges for different components to run in separate processes.
THREADS AND PROCESS SYNCHRONIZATION
The Android operating system tries to keep an application process alive as long as possible, while at the same time removing old processes to reclaim memory for new ones. An importance hierarchy places each process in an order based on the components running in it and their state; to reclaim and efficiently utilize resources, the system adheres to the rule of eliminating the processes of least importance first, so the most important process is killed last. Processes are ranked as follows:
- Foreground processes
- Visible processes
- Service processes
- Background processes
- Empty processes
Fig. 1 Priority ranking of processes in the Android operating system

Foreground processes
These are the processes required for what the user is currently doing. Android uses a number of conditions to determine whether a process is in the foreground; a process is a foreground process if any of the following hold:
- It hosts an Activity that the user is interacting with
- It hosts a Service that is bound to the Activity the user is interacting with
- It hosts a Service that is running in the foreground
- It hosts a Service that is executing one of its lifecycle callbacks
- It hosts a BroadcastReceiver that is executing its onReceive() method
Only a few processes exist in the foreground state at any instant, and they are killed only as a last resort, when memory is so low that they cannot all continue to run. At that point, known as the memory paging state, some foreground processes must be killed to keep the UI interactive.

Visible process
This is a process that has no foreground components but can still directly affect what the user sees on the screen. A process is visible if either of the following conditions is true: it hosts an Activity that is not in the foreground but is still visible to the user (its onPause() method has been called), for instance when the foreground activity has started a dialog that lets the previous activity be seen behind it; or it hosts a Service that is bound to a visible activity.
Service process
This is a process running a service that was started with startService() and that does not fall into either of the two previous categories. Service processes are not tied to what the user sees, but they handle things the user cares about, such as playing music in the background. Thus, the system keeps them running unless there is insufficient memory to retain them together with the foreground and visible processes.

Background processes
This kind of process holds an activity that is not currently visible to the user (its onStop() method has been called) and has no direct effect on the user experience, so the system can kill it at any time to reclaim memory for foreground, visible, and service processes. Background processes are kept in an LRU (least recently used) list, so that the process the user accessed most recently is the last to be killed. If such activities implement their lifecycle correctly, they can be restored after the process is killed.

Empty process
This kind of process holds no active application components; it is kept only as a cache to improve startup time the next time a component needs to run in it. Note that a process hosting a visible activity is ranked as a visible process, not as a service process.

THREADS
When an application is launched, the system creates a thread of execution for it. This thread is important because it is in charge of dispatching events to the appropriate user interface widgets, and it provides the platform through which the application interacts with components from the Android UI toolkit (components from the android.widget and android.view packages). For instance, when a user touches a button on the screen, the UI thread dispatches the touch event to the widget, which sets its pressed state and posts an invalidate request to the event queue. The UI thread then dequeues the request and notifies the widget that it should redraw itself.
An application performing intensive operations in response to user interaction will perform poorly unless it is implemented carefully. In particular, long-running operations such as database queries and network access tend to block the UI thread. A blocked UI thread cannot dispatch events, including drawing events, which the user perceives as a hung application. The “Application Not Responding” (ANR) dialog appears if the UI thread is blocked for more than 5 seconds. Thus, Android’s single-thread model operates under two rules:
- Do not block the UI thread
- Do not access the Android UI toolkit from outside the UI thread
WORKER THREADS
As a result of the single-thread model, it is appropriate to run long, non-instantaneous operations on separate threads, known as worker threads. For instance, consider a click listener that downloads an image on a separate thread and displays it in an ImageView.
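A hedged sketch of such a click listener, following the common pattern in the Android developer documentation (View, Bitmap, and ImageView are Android framework classes; loadImageFromNetwork and mImageView are illustrative names):

```java
// Sketch only: assumes an Activity with an mImageView field and a
// loadImageFromNetwork helper. This version is deliberately WRONG.
public void onClick(View v) {
    new Thread(new Runnable() {
        public void run() {
            Bitmap b = loadImageFromNetwork("http://example.com/image.png");
            // BUG: touches a View from a worker thread, violating rule two
            mImageView.setImageBitmap(b);
        }
    }).start();
}
```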
Such code is problematic because it violates the second rule of the single-thread model: it accesses the ImageView from a worker thread. Android provides a solution through the following methods:
- Activity.runOnUiThread(Runnable)
- View.post(Runnable)
- View.postDelayed(Runnable, long)
This implementation is thread-safe: the network operation is performed on a separate thread, while the ImageView is manipulated from the UI thread.
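A corrected sketch of the same listener using View.post, again with the illustrative names mImageView and loadImageFromNetwork:

```java
// Sketch only: the download still runs on a worker thread, but the
// View is updated via View.post, which runs the Runnable on the UI thread.
public void onClick(View v) {
    new Thread(new Runnable() {
        public void run() {
            final Bitmap b = loadImageFromNetwork("http://example.com/image.png");
            mImageView.post(new Runnable() {
                public void run() {
                    mImageView.setImageBitmap(b); // executed on the UI thread
                }
            });
        }
    }).start();
}
```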
Better ways of performing asynchronous operations include the AsyncTask class and thread-safe methods. AsyncTask performs the blocking operations in a worker thread and publishes the results on the UI thread, without requiring the developer to handle threads directly. The blocking work is implemented in doInBackground(), which runs on a background thread, while the task is started by calling execute() from the UI thread.
Thread-safe methods are required when a method can be invoked from more than one thread, such as methods that can be called remotely from a bound service.
HOW ANDROID MANAGES THREADS
Application developers on Android have come up with in-house mechanisms that control threads. Controlling threads of execution is difficult, and service containers such as the open-source Tymeac have been designed to manage application threads. Some activities need to perform background work without involving the user interface; examples include downloading files from the internet, CPU-intensive tasks, and networking. Managing service and application threads is complicated because all threads share the same execution context: address space, I/O buffers, save areas, and handlers. Therefore, a misbehaving thread can damage an application. In addition, declaring too many threads can ultimately impact other applications in the address space. Likewise, there is no safe way to kill a thread without risking the execution context and leaving shared objects in inconsistent states. Therefore, a mechanism is needed to control both the main thread and the requested threads in multi-threading.
Tymeac is an asynchronous thread manager and easy-assembly-of-components mechanism that places requests in queues to be processed by asynchronous threads. Tymeac works by placing each client request on a queue; it uses fork-join logic to divide a multi-part request into components and place each component into its appropriate queue.
A thread in the pool backing each queue receives the request and executes it through a user-written class, then returns the data from that class to where it fetched the request, and ultimately back to the caller. For a multi-part request, Tymeac concatenates the return data from all the components into a Parcelable join array and transfers the array to the caller.
Tymeac works by separating the request from the requester, thereby managing queues and threads. In addition, it handles multi-part requests, recursion when nested levels of access are required, persistence for shared threads, debugging, run-time alteration, extensions, and logging.
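The fork-join logic described above resembles Java's standard ForkJoinPool. A minimal sketch, where a multi-part "request" (summing an array) is split into components, each processed by a pooled thread, and the partial results are joined for the caller (the class and threshold are illustrative, not Tymeac's actual code):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative fork-join sketch: split a request into halves, process
// each half on a pool thread, and join the partial results.
class SumRequest extends RecursiveTask<Long> {
    private final int[] data;
    private final int lo, hi;

    SumRequest(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= 4) {                    // small enough: handle directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        SumRequest left = new SumRequest(data, lo, mid);
        SumRequest right = new SumRequest(data, mid, hi);
        left.fork();                           // queue left half for a pool thread
        return right.compute() + left.join();  // join the partial results
    }

    static long sum(int[] data) {
        return new ForkJoinPool().invoke(new SumRequest(data, 0, data.length));
    }
}
```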
Other applications and code have been developed to manage threading; a few are discussed here.
Consider an application that spawns a new thread whenever a user asks for an image filtering operation. If too many concurrent requests are made, the thread manager throws a RejectedExecutionException. The application can be programmed to add the rejected task to a queue and check that backlog every time a thread finishes. A possible setback is that a task rejected on the first attempt may keep being rejected in the future. A possible solution involves customizing the thread pool behind AsyncTask to raise the limit, by adjusting MAXIMUM_POOL_SIZE and backing the pool with a LinkedBlockingQueue.
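A sketch of this idea on a plain JVM, using ThreadPoolExecutor directly (the pool sizes and class name are illustrative): because the pool is backed by an unbounded LinkedBlockingQueue, bursts of requests queue up instead of triggering RejectedExecutionException.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a small pool whose LinkedBlockingQueue absorbs
// bursts of tasks, so none of them are rejected.
class FilterPool {
    static final int CORE_POOL_SIZE = 2;
    static final int MAXIMUM_POOL_SIZE = 4;

    static int runBurst(int tasks) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                CORE_POOL_SIZE, MAXIMUM_POOL_SIZE,
                30, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // queues instead of rejecting
        final AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            executor.execute(done::incrementAndGet);  // each "filter request"
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) { }
        return done.get(); // every queued task eventually ran
    }
}
```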
In addition, a class extending Thread can retrieve the user's location using LocationManager on a non-user-interface thread. Such a thread is created on request and contains a Looper object that can create the Handler for the LocationManager.
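A hedged sketch of such a thread (Looper and Handler are Android SDK classes; the class name and the commented-out LocationManager wiring are illustrative):

```java
// Sketch only: a worker thread that prepares its own Looper so that a
// Handler, and LocationManager callbacks, can be bound to this thread
// instead of the UI thread.
class LocationThread extends Thread {
    private Handler handler;

    @Override
    public void run() {
        Looper.prepare();                        // create a Looper for this thread
        handler = new Handler(Looper.myLooper());
        // locationManager.requestLocationUpdates(provider, minTimeMs,
        //         minDistanceM, listener, Looper.myLooper());
        Looper.loop();                           // process messages until quit()
    }
}
```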
This approach can be used with LocationManager so that its callbacks are delivered to the Looper associated with the current or a specific thread.
In multi-threading, Android uses the AsyncTask methods to execute operations. For instance, onPreExecute() is invoked on the UI thread before the task is executed; this method is used for task setup, such as showing a progress bar in the interface.
The main thread invokes the AsyncTask to perform some slow job. The AsyncTask performs the required computation and subsequently updates the main UI; the background work, for instance, drives the writing of lines into the text box and also controls the circular progress bar.
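A hedged sketch of this pattern (AsyncTask and View are Android framework classes; the task name and the progressBar, textBox, and downloadText names are illustrative):

```java
// Sketch only: each callback runs on the thread noted in its comment.
private class DownloadTask extends AsyncTask<String, Integer, String> {
    @Override
    protected void onPreExecute() {
        progressBar.setVisibility(View.VISIBLE);  // UI thread: set up progress bar
    }

    @Override
    protected String doInBackground(String... urls) {
        publishProgress(50);                      // worker thread: slow job runs here
        return downloadText(urls[0]);             // hypothetical helper
    }

    @Override
    protected void onProgressUpdate(Integer... values) {
        progressBar.setProgress(values[0]);       // UI thread
    }

    @Override
    protected void onPostExecute(String result) {
        textBox.append(result);                   // UI thread: update the main UI
    }
}
```

The task would be started from the UI thread with `new DownloadTask().execute(url)`.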
CPU SCHEDULING
CPU scheduling comprises the mechanisms that determine which process is attached to the CPU. It is founded on efficient algorithms that assign processes to the CPU in order of priority. Schedulers are designed to ensure that all applications and users are allocated a fair share of the CPU: the scheduler selects among the processes in memory that are ready to run and gives the CPU to one of them.
Scheduling decisions take place under four circumstances: (1) a process switches from the running to the waiting state; (2) a process switches from running to ready; (3) a process switches from waiting to ready; and (4) a process terminates. Scheduling under circumstances 1 and 4 is non-preemptive, while under 2 and 3 it is preemptive. The dispatcher module gives control of the CPU to the process chosen by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart it. Dispatch latency is the time it takes the dispatcher to stop one process and start another.
The following characteristics have been put into consideration in the Android OS to attain the best scheduling algorithm for optimum operation.
CPU utilization represents the fraction of time the CPU is in use; usually the range is between 40% and 90%.
Throughput dictates the number of jobs completed in a specified period of time.
Residence time is the time a request spends at a device, given by the service time plus the queueing time.
Think time refers to the time spent by the user of an interactive system to figure out the next request.
Scheduling Algorithms

First-Come, First-Served (FCFS) Scheduling
FCFS is the simplest scheduling algorithm and is utilized in many operating systems; it applies the principle of first-come, first-served. The process that asks for the CPU first is allocated it first, and the implementation of FCFS is managed with a FIFO queue. When a process enters the ready queue, its process control block is linked onto the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue, and the running process is then removed from the queue. However, the average waiting time under FCFS is often long: it is not necessarily minimal but varies with the processes' CPU burst times. FCFS also suffers from the convoy effect, in which short processes and I/O-bound processes sit idle waiting for one long CPU-bound process to finish. This results in CPU and device underutilization that could be avoided if the shorter processes were allocated the CPU first.
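The convoy effect can be seen in a small FCFS simulation (the class name and burst times in the usage below are illustrative): each process waits for the total burst time of everything ahead of it, so putting one long burst first inflates the average waiting time.

```java
// FCFS simulation: processes run in arrival order, so each process
// waits for the sum of the bursts of all processes ahead of it.
class Fcfs {
    static double averageWaitingTime(int[] burstTimes) {
        long elapsed = 0;        // time already consumed by earlier processes
        long totalWaiting = 0;   // sum of each process's waiting time
        for (int burst : burstTimes) {
            totalWaiting += elapsed;  // this process waited for all earlier bursts
            elapsed += burst;
        }
        return (double) totalWaiting / burstTimes.length;
    }
}
```

With bursts {24, 3, 3} the long process arrives first and the average wait is 17 time units; reordering to {3, 3, 24} drops it to 3.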
Android uses the Linux scheduling policy, where threads with higher “niceness” run less often than those with lower “niceness”. The default priority, Process.THREAD_PRIORITY_DEFAULT, runs more than Process.THREAD_PRIORITY_BACKGROUND. The CFS (Completely Fair Scheduler) integration is still under development to conform with the mainline Linux development kernel.
PROCESS SYNCHRONIZATION
Process synchronization is the task of getting processes to coordinate with each other over shared resources, acquiring locks to safeguard regions of memory. The classic formulation involves a producer process and a consumer process: the producer generates the information that is utilized by the consumer. On the data-synchronization side, Android applications can use SymmetricDS on the mobile platform thanks to its full-featured Java-based SymmetricDS client.
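The producer-consumer coordination described above can be sketched on a plain JVM with a BlockingQueue, which supplies the locking so neither thread corrupts the shared buffer (the class name and buffer size are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer-consumer sketch: the producer deposits items into a
// bounded buffer and the consumer removes them; put() blocks when the
// buffer is full and take() blocks when it is empty.
class ProducerConsumer {
    static int consumeAll(final int items) {
        final BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) buffer.put(i); // blocks when full
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int consumed = 0;
        try {
            for (int i = 0; i < items; i++) {
                buffer.take();                                 // blocks when empty
                consumed++;
            }
            producer.join();
        } catch (InterruptedException ignored) { }
        return consumed;
    }
}
```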
The SQLite database in Android is the centre of focus, since SymmetricDS coordinates the capture and transfer of information. SymmetricDS also directly supports deployment on Android devices by eliminating the need for external dependencies: its database access layer was abstracted so that an Android-specific access layer could be used, making operations on the SQLite database efficient.
SymmetricDS can be integrated with a NotePad application by registering the Android-specific SymmetricDS service in the AndroidManifest.xml file.
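A hedged sketch of such a registration (the service class name follows the SymmetricDS Android client and may differ between versions):

```xml
<!-- Declared inside the <application> element of AndroidManifest.xml.
     The class name is an assumption based on the SymmetricDS Android client. -->
<service android:name="org.jumpmind.symmetric.android.SymmetricService" />
```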
Likewise, the application must declare the internet permission in its manifest.
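The internet permission is a one-line manifest entry:

```xml
<!-- Declared inside the <manifest> element of AndroidManifest.xml -->
<uses-permission android:name="android.permission.INTERNET" />
```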
DEADLOCK
Deadlock arises when two or more processes request resources and enter a waiting state because the resources are not currently available. In some instances, a waiting process remains in that state forever because another process is holding the requested resources; that scenario is termed a deadlock. A real-life illustration is two trains approaching each other at a crossing, each stopped and waiting for the other to move. Operating systems generally have no capability to prevent deadlock, and thus programmers are posed with the challenge of designing deadlock-free applications. Deadlock problems are increasingly common given current trends: numerous processes, multithreaded programs, many resources within a system, and the preference for long-lived file and database servers over batch systems. Four conditions characterize deadlock: mutual exclusion, hold and wait, no preemption, and circular wait. Resource-allocation graphs illustrate scenarios with and without deadlock: a system resource-allocation graph contains a set of vertices of two different types of nodes, processes P = {P1, ..., Pn} and resources R = {R1, R2, ..., Rm}.
Deadlock-handling strategies in mobile applications running on Android include two essential mechanisms to ensure that deadlock does not occur in the first place: deadlock prevention and deadlock avoidance. Deadlock prevention is the set of procedures and mechanisms that ensure that at least one of the necessary conditions cannot hold; this is achieved by constraining how requests may be made.
Deadlock avoidance, by contrast, has the system allocate resources to each individual process in the mobile application in an order that keeps the system in a safe state. A sequence of processes <P1, P2, P3, ..., Pn> is safe if, for each Pi, the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. If the resources Pi needs cannot be immediately allocated, Pi waits until all such Pj have finished their operations and released the needed resources; each Pi can thus terminate, allowing Pi+1 to obtain its resources in turn. The contrary state is known as an unsafe state and can result from the lack of an algorithm or order for allocating resources. However, not all unsafe states lead to deadlock: the operating system cannot control how processes request resources, so the behavior of the processes determines whether an unsafe state proceeds to deadlock or not.
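The safe-sequence test described above can be sketched as a Banker's-style check: repeatedly pick any process whose remaining need fits in the currently available resources, let it finish, and reclaim its allocation (the class name and the matrices in the test are illustrative):

```java
// Banker's-style safe-state check: the system is safe if some order
// exists in which every process can obtain its remaining need and finish.
class SafeState {
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length;        // number of processes
        int m = available.length;         // number of resource types
        int[] work = available.clone();   // resources currently free
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && fits(need[i], work)) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j];
                    finished[i] = true;   // Pi finishes and releases its resources
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false; // someone is stuck
        return true;
    }

    private static boolean fits(int[] need, int[] work) {
        for (int j = 0; j < need.length; j++) if (need[j] > work[j]) return false;
        return true;
    }
}
```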
Android can use the Dimmunix design model to eliminate deadlock. Dimmunix is implemented in Android's Dalvik VM, the customized JVM in the Android OS that runs all applications. Deadlock immunity is implemented in the Dalvik VM because it cannot be implemented within the kernel space.
Fig. 2 Architecture of the Android Dimmunix
The implementation of this immunity allows the Android device to detect and avoid deadlocks initiated by lock inversion resulting from wait() calls. The inversion that leads to a deadlock arises as follows.
Threads t1 and t2 will deadlock when thread t1 finishes waiting for x and attempts to reacquire x while still holding monitor y, while thread t2 is waiting for y and holding x at the same time.
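The inverted acquisition can be sketched on a plain JVM with ReentrantLock; here, avoidance is approximated with a tryLock timeout, so a thread that would complete the circular wait backs off instead of blocking forever (the class name, timeout, and helper methods are illustrative, not Dimmunix's mechanism):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of lock inversion: one thread holds x while another, already
// holding y, tries to acquire x. tryLock lets the second thread detect
// the would-be deadlock and back off.
class LockInversion {
    static boolean tryAcquireBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            try {
                if (second.tryLock(100, TimeUnit.MILLISECONDS)) {
                    second.unlock();
                    return true;   // both locks were obtainable: no inversion
                }
            } catch (InterruptedException ignored) { }
            return false;          // backed off instead of deadlocking
        } finally {
            first.unlock();
        }
    }

    // Runs the inverted acquisition on a worker thread while this thread
    // optionally holds x, and reports whether the worker got both locks.
    static boolean runInverted(boolean holdX) {
        final ReentrantLock x = new ReentrantLock();
        final ReentrantLock y = new ReentrantLock();
        final boolean[] outcome = new boolean[1];
        if (holdX) x.lock();
        Thread t2 = new Thread(() -> outcome[0] = tryAcquireBoth(y, x));
        t2.start();
        try { t2.join(); } catch (InterruptedException ignored) { }
        if (holdX) x.unlock();
        return outcome[0];
    }
}
```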
In order to detect and avoid such a deadlock, the code of Object.wait() is altered such that Dimmunix is called before and after the reacquisition of x, at the end of the wait function.
Dimmunix consists of two components: the Dimmunix core and the integration code. The Dimmunix core implements the deadlock immunity, while the integration code contains the information used to call the core. The core is made up of 661 lines of code (LOC), while the integration code has 155 LOC.
The Dalvik VM implements the monitorenter, monitorexit, and Object.wait operations in the routines lockMonitor, unlockMonitor, and waitMonitor. Thus, if lockMonitor is changed to invoke the Dimmunix Request and Acquired routines, every monitor acquisition passes through Dimmunix.
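The instrumentation pattern can be sketched with a wrapper lock that calls a Request hook before acquisition and an Acquired hook after it, which is where Dimmunix-style avoidance logic would intervene (the class, hook names, and event log are assumptions for illustration, not Dimmunix's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: lockMonitor consults a hook before acquiring the
// monitor and notifies another hook once the monitor is held.
class InstrumentedLock {
    private final ReentrantLock monitor = new ReentrantLock();
    final List<String> events = new ArrayList<>();

    private void request()  { events.add("request");  } // avoidance check goes here
    private void acquired() { events.add("acquired"); }

    void lockMonitor() {
        request();       // hook called before the acquisition
        monitor.lock();
        acquired();      // hook called once the monitor is held
    }

    void unlockMonitor() {
        monitor.unlock();
    }
}
```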
Android Dimmunix does not handle deadlocks that involve native code, but it can be carefully integrated with the POSIX Threads library: the Android OS allows Dimmunix to intercept calls directed at the POSIX Threads synchronization routines only while native code is executing.
In conclusion, it is evident that the implementation of Dimmunix in the Android operating system, within the Dalvik VM, presents deadlock immunity to all applications. The performance and memory overheads are relatively low, at 4-5% and 4% respectively, making Dimmunix a practical remedy for deadlock bugs in the Android OS.