Mobile Activity Recognition

This article describes how to identify and monitor the physical activity of smartphone users. We cover cleaning training data, selecting features, choosing a suitable classification algorithm, and validating the model, and walk through the development of an activity recognition system for mobile devices.


One striking thing a smartphone app can do is sense the user's current physical activity, such as walking, driving, or standing still. Activity recognition enables numerous applications, from fitness and health monitoring to context-based marketing and workforce tracking. Context-aware apps can adapt their behavior to what the user is currently doing. For example, an app searching for nearby businesses could use a larger search radius when the user is driving and a smaller one when walking.

One app created for accumulating training data is Sensor Logger by IBM. It samples the accelerometer roughly 50 times per second and writes the readings to a local file, along with the current speed captured from GPS. Twenty volunteers installed this app on their mobile phones and used it to record data while performing various activities.

Feature Extraction

The files recorded by Sensor Logger were processed with a feature extraction program. The program divided the logged data into 3-second segments, then computed a representative feature vector for each segment. Most features were derived from a frequency analysis of the accelerometer data, computed with a fast Fourier transform (FFT). The FFT features were then obtained by splitting the series of FFT coefficients into subranges and summing the coefficients in each range. For example, one arrangement summed the coefficients in bands of 1 Hz, 2 Hz, and so on. Several alternative feature sets were computed, varying the band width and the minimum and maximum frequencies.
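As a sketch of this band-summing step (the function name, the 1 Hz band width, and the 10 Hz cutoff are illustrative; the 50 Hz sampling rate follows the description above):

```python
import numpy as np

def fft_band_sums(window, sample_rate=50.0, band_width_hz=1.0, max_freq_hz=10.0):
    """Sum FFT magnitudes in fixed-width frequency bands for one 3-second window."""
    spectrum = np.abs(np.fft.rfft(window))                    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    edges = np.arange(0.0, max_freq_hz + band_width_hz, band_width_hz)
    # sum the coefficients whose frequency falls inside each band
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]

# 3 s of samples at 50 Hz: a pure 2 Hz sine, so energy lands in the 2-3 Hz band
t = np.arange(150) / 50.0
window = np.sin(2 * np.pi * 2.0 * t)
features = fft_band_sums(window)
```

Each segment thus collapses from 150 raw samples to a short vector of band energies, which is what the classifier actually sees.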

Other features used alongside the FFT were the mean, the variance, and the energy of the signal, plus the speed provided by GPS. The measurements were saved in a database: one record per 3-second segment, containing the extracted features and the activity during which the data was captured. Metadata about each log file was stored in the database as well, including the phone model, the OS, the username, and the track name.
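A minimal sketch of these time-domain features, assuming "energy" means the average of the squared samples (conventions vary) and that the GPS speed is simply appended to the feature record:

```python
def time_domain_features(window, gps_speed):
    """Mean, variance, and energy for one 3-second accelerometer window,
    plus the GPS speed recorded during that window."""
    n = len(window)
    mean = sum(window) / n
    variance = sum((x - mean) ** 2 for x in window) / n
    energy = sum(x * x for x in window) / n   # average power; definitions vary
    return {"mean": mean, "variance": variance, "energy": energy, "speed": gps_speed}

feats = time_domain_features([0.0, 1.0, -1.0, 2.0], gps_speed=1.4)
```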

Data Cleaning

Before a new log file is accepted into the training data, it should be checked for classification accuracy using a decision tree built from already-validated data. If the classification accuracy on the new log file is noticeably lower than the accuracy obtained with random partitioning, the file needs to be audited: inspect it, and delete any data found to be wrong.
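The screening step might look like this sketch, with the trained decision tree abstracted as a `classify` function; the record layout and the tolerance below the random-partitioning accuracy are assumptions:

```python
def screen_log_file(records, classify, baseline_accuracy, tolerance=0.05):
    """Flag a log file for manual review when a previously trained classifier
    scores it well below the accuracy seen under random partitioning.

    records  -- list of (feature_vector, true_label) pairs from one log file
    classify -- trained model: feature_vector -> predicted label
    """
    correct = sum(1 for features, label in records if classify(features) == label)
    accuracy = correct / len(records)
    return accuracy >= baseline_accuracy - tolerance, accuracy

# toy example with a stand-in "model" that always predicts "walking"
records = [([0.1], "walking"), ([0.2], "walking"), ([5.0], "driving")]
ok, acc = screen_log_file(records, lambda f: "walking", baseline_accuracy=0.9)
```

Files that fail the check are only deleted from after a manual look; an honest-but-unusual recording may simply expose a weakness of the current tree.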

Model Validation

Under random partitioning, the log files of every track are randomly split between the training and test sets. As a result, the two sets can be correlated, because they contain records from the same file. Records belonging to the same track, logged by the same user with the same phone in the same situation, tend to be close to each other across a variety of features. Evaluation with random partitioning can therefore be unrepresentative and can hide overfitting. A better way to measure accuracy is to split by users: remove several users from the training data and use the data recorded by those users for testing. In at least one case, a feature set that scored well under random partitioning performed noticeably worse when split by users, most likely because random partitioning had masked overfitting.
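A sketch of splitting by users in plain Python (the record layout and the 20% test fraction are assumptions):

```python
import random

def split_by_users(records, test_fraction=0.2, seed=0):
    """Hold out whole users: every record from a test user goes to the test set,
    so training and test sets never share segments from the same track."""
    users = sorted({user for user, _features, _label in records})
    rng = random.Random(seed)
    rng.shuffle(users)
    n_test = max(1, int(len(users) * test_fraction))
    test_users = set(users[:n_test])
    train = [r for r in records if r[0] not in test_users]
    test = [r for r in records if r[0] in test_users]
    return train, test

# 10 users, 3 records each
records = [(f"user{u}", [u * 0.1], "walking") for u in range(10) for _ in range(3)]
train, test = split_by_users(records)
train_users = {u for u, _f, _l in train}
test_users = {u for u, _f, _l in test}
```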

Validating OS Independence

If you collect data from Android, iOS, and Windows Phone devices, it is important to check whether the data can be combined into a single decision tree. We ran experiments that split the data by operating system, for example building a decision tree from Android data and testing it on iOS data. The results showed that the accelerometer behaves the same on every OS, with no significant differences between the data sets. This justified mixing the data and building a single decision tree.

The tree can be exported as text, HTML, or a PMML file. The text export is a simple textual depiction of the tree in which each internal node is shown as the predicate associated with it, and line indentation conveys the branching structure.
The recognition model starts by collecting data from the accelerometer and GPS. Every 3 seconds, features are extracted from the buffered data using the FFT. Finally, the exported decision tree code is evaluated to produce a classification result: the recognized activity.
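The pipeline above can be sketched as a buffering loop; the feature extractor and classifier here are toy stand-ins, not the real FFT features or decision tree:

```python
SAMPLE_RATE = 50          # accelerometer samples per second, as described above
WINDOW_SECONDS = 3

def recognize_stream(samples, extract_features, classify):
    """Buffer the accelerometer stream and emit one activity label
    per complete 3-second window."""
    window, labels = [], []
    for sample in samples:
        window.append(sample)
        if len(window) == SAMPLE_RATE * WINDOW_SECONDS:
            labels.append(classify(extract_features(window)))
            window = []
    return labels

# toy stand-ins: the "feature" is the window mean, a high mean means "driving"
stream = [0.1] * 150 + [3.0] * 150
labels = recognize_stream(stream,
                          extract_features=lambda w: sum(w) / len(w),
                          classify=lambda m: "driving" if m > 1.0 else "walking")
```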

YARN Review

Apache Hadoop is regarded as one of the most in-demand platforms for big data processing, and it has been deployed successfully by many companies for years. Although Hadoop is already a trusted, scalable, and inexpensive option, it keeps receiving upgrades from a large community of developers. Version 2.0 brings several new features, among them Yet Another Resource Negotiator (YARN), HDFS Federation, and a highly available NameNode, which make a Hadoop cluster far more efficient, robust, and reliable. This article covers the features and advantages of YARN.

Apache Hadoop 2.0 includes YARN, which separates resource management from the processing components, so a YARN-based cluster is no longer limited to MapReduce. The article introduces YARN and its benefits, and shows how YARN improves a cluster's scalability, performance, and flexibility.

Overview of Apache Hadoop

Apache Hadoop is an open-source software framework that can be deployed on a cluster of computers so that the machines can communicate and work together to store and process huge volumes of data in a highly distributed way. Originally, Hadoop consisted of two basic elements: HDFS and a distributed computing engine that lets you run applications as MapReduce jobs.

MapReduce is a simple programming model popularized by Google. It is useful for processing large data sets in a parallel and scalable way. It is inspired by functional programming: users express their computation as map and reduce functions that process data as key-value pairs. Hadoop provides the runtime for executing MapReduce applications as a series of map and reduce tasks.
Importantly, the Hadoop framework handles all the hard parts of distributed processing: parallelization, scheduling, resource management, inter-machine communication, handling of software and hardware failures, and more.

The Rise of Hadoop

Although there have been other open-source implementations of the MapReduce model, Hadoop MapReduce quickly became the most popular. Hadoop is also one of the most exciting open-source projects in the world, thanks to a number of great benefits: a high-level API, near-linear scalability, an open-source license, the ability to run on commodity hardware, and fault tolerance. It has been deployed on huge numbers of servers across thousands of companies, and today it is the standard for large-scale distributed storage and processing. Early adopters such as Yahoo and Facebook built huge clusters in the range of 4,000 machines to satisfy their continuously growing data processing needs. Once those clusters were built, however, they started to notice the limitations of the Hadoop MapReduce framework.

The most significant limitations of MapReduce relate to scalability, resource utilization, and support for workloads other than MapReduce. Application execution is controlled by two types of processes:

  • JobTracker – a single master process that coordinates all jobs running on the cluster and assigns map and reduce tasks to TaskTrackers.
  • TaskTracker – a subordinate process that runs the assigned tasks and periodically reports its progress to the JobTracker.

In 2010, Yahoo engineers started work on a completely new architecture for Hadoop that addresses all of these limitations and adds new capabilities.

YARN – Next Generation of Hadoop

YARN changes the following terminology:

  • ResourceManager in place of the cluster manager.
  • ApplicationMaster in place of a dedicated, short-lived JobTracker.
  • NodeManager in place of TaskTracker.
  • A distributed application in place of a MapReduce job.

The YARN architecture consists of a global ResourceManager, which runs as a master daemon, usually on a dedicated machine. The ResourceManager tracks the number of live nodes and the resources available on the cluster, and coordinates which applications get those resources. Because it is the single process that holds this information, the ResourceManager can make allocation decisions in a shared, secure, and multi-tenant manner.
When a user submits an application, a lightweight process instance called the ApplicationMaster is started to coordinate the execution of all tasks within the application. This includes monitoring tasks, restarting failed tasks, speculatively running slow tasks, and totaling application counters. These duties were previously assigned to the single JobTracker. The ApplicationMaster and the tasks that belong to its application run in resource containers controlled by the NodeManagers.

The NodeManager is a more general and efficient version of the TaskTracker. Instead of offering a fixed number of map and reduce slots, the NodeManager hosts a number of dynamically created resource containers. A container's size is determined by the amount of resources it contains, such as memory, CPU, disk, and network IO; at present, only memory and CPU are supported. The number of containers on a node is determined by configuration parameters and the node's resources, excluding those dedicated to the slave daemons and the OS.
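On a real cluster, the memory and CPU a NodeManager offers to containers are set in yarn-site.xml. The property names below are the standard YARN ones, while the values are purely illustrative:

```xml
<!-- yarn-site.xml: illustrative values, tune for your hardware -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>24576</value>  <!-- RAM offered to containers, after OS/daemons -->
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>12</value>     <!-- virtual cores offered to containers -->
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>   <!-- smallest container the Scheduler will grant -->
  </property>
</configuration>
```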

When the ResourceManager accepts a new application submission, one of the first decisions the Scheduler makes is selecting a container in which the ApplicationMaster will run. From the moment it starts, the ApplicationMaster is responsible for the whole life cycle of the application. First, it sends resource requests to the ResourceManager to ask for the containers it needs. A resource request is simply a request for a number of containers that satisfy the application's resource requirements.


YARN is a completely rebuilt architecture for Hadoop, and it changes how distributed programs run on a cluster of commodity computers. YARN provides clear advantages in scalability, efficiency, and flexibility compared with the classic MapReduce engine in the first version of Hadoop. Both small and large Hadoop clusters benefit from YARN, and for end users the difference is barely visible, so there is no good reason not to migrate from MRv1 to YARN. Today YARN is successfully used in production by many companies, such as Yahoo, Xing, eBay, and Spotify.

Machine Data Harvesting

Machine data comes in many forms. Temperature sensors, health trackers, and even air-conditioning systems all produce large volumes of information, and it is hard to know which of it matters. This article covers some approaches to making large machine-data sets usable with Hadoop.

Keeping and exposing the data

Before examining concrete storage techniques, you should consider how, and for how long, the information will be stored.
One of the quirks of Hadoop is that it provides append-only storage for large volumes of data, which at first glance seems perfect for keeping machine data. It becomes a problem because the sheer volume of information adds needless load to the environment precisely when the data needs to be live and useful.
Using Hadoop to store big data therefore takes careful management and a strategy. If you want to use the information for live alerts, you do not want to sift through years of records to find the latest readings, so choose deliberately what to store and for how long.
To determine how much information you need to store, calculate the size of your records and how often they are refreshed. From these figures you can work out how much data will be created. For example, a three-field record is small, but saved every 15 seconds it creates about 45 KB of data per day; multiply that by 2,000 machines and you get roughly 92 MB per day.
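The arithmetic can be reproduced directly; the 8-byte record size is an assumption chosen to match the figures in the text:

```python
RECORD_BYTES = 8           # assumed size of one three-field record
INTERVAL_SECONDS = 15
MACHINES = 2000

records_per_day = 24 * 60 * 60 // INTERVAL_SECONDS            # 5,760 records
bytes_per_machine_per_day = records_per_day * RECORD_BYTES    # 46,080 B, ~45 KB
fleet_bytes_per_day = bytes_per_machine_per_day * MACHINES    # ~92 MB
```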
Ask yourself: how long does the data need to be available? Minute-by-minute information is rarely consulted after a week, because its value fades once the underlying problem has been solved.
You should also define a baseline, since readings only make sense in context. A baseline is a data point, or a matrix of points, that represents normal operation. With a baseline available, you can much more easily spot abnormal trends or spikes. Baselines are the comparison values you keep so you can tell when a new reading is outside the normal range. They come in three types:

  • Pre-existing baselines – baselines that are already known when you start monitoring the data.
  • Controlled baselines – for units and systems that are actively controlled; the baseline is determined by comparing the controlled value with the monitored value.
  • Historical baselines – for systems where the baseline is calculated from previously recorded values.

Historical baselines inevitably shift over time and, apart from exceptional cases, are never pinned to a hard figure. The baseline should adapt to the readings coming from the sensor and should be computed from past values. You therefore have to decide which quantity to compare and how far back to look.
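A minimal sketch of a historical baseline: a rolling mean over recent readings, with a simple threshold test. The window size and the 25% tolerance are assumptions:

```python
def update_baseline(history, new_value, window=96):
    """Historical baseline: the mean of the most recent readings.
    96 readings at 15-minute intervals is one day of history."""
    history.append(new_value)
    recent = history[-window:]
    return sum(recent) / len(recent)

def is_anomalous(value, baseline, tolerance=0.25):
    """Flag readings more than `tolerance` (as a fraction) away from baseline."""
    return abs(value - baseline) > tolerance * baseline

history = [20.0, 21.0, 19.0, 20.0]
baseline = update_baseline(history, 20.0)
```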
You may want to keep graphical representations of the information but, as with raw storage, you are unlikely to go back to one specific moment in time. Keeping the minimum, maximum, and average every 15 minutes is enough to draw the graph.

Storing the data in Hadoop

Hadoop is usually not a good fit as a live database for big data. It is a reasonable place to append information, but for serving stored data a near-line SQL database is a much better option. A practical way to load information is a continuous write into the Hadoop Distributed File System (HDFS), appending to the current file; Hadoop then acts as a concentrator.

One technique is to log each distinct stream of information into a dedicated file for a period of time and then copy that data into HDFS for processing. Alternatively, you can write directly into a file on HDFS that is accessible from Hive.
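One way to sketch the "concentrate locally, then append to one large HDFS file" approach. `hdfs dfs -appendToFile` is the stock HDFS CLI, while the paths, record layout, and helper names are hypothetical:

```python
import subprocess

HDFS_TARGET = "/data/sensors/current.log"   # hypothetical Hive-visible path

def format_record(host, metric, value, timestamp):
    """One tab-separated line, ready for a Hive table that splits on tabs."""
    return f"{timestamp}\t{host}\t{metric}\t{value}\n"

def append_batch_to_hdfs(lines, local_path="/tmp/sensor-batch.log"):
    """Write a batch locally, then append it to one large HDFS file.
    Batching many small readings into one big file suits HDFS far better
    than creating one file per reading."""
    with open(local_path, "w") as f:
        f.writelines(lines)
    subprocess.run(["hdfs", "dfs", "-appendToFile", local_path, HDFS_TARGET],
                   check=True)

line = format_record("host01", "temperature", 21.5, 1700000000)
```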

Within Hadoop, many small files are significantly less efficient and practical than a small number of bigger files, because larger files are spread across the cluster more effectively. For the same reason, the information is best distributed across several nodes of the cluster.

Assembling the readings from many data points into a few bigger files is therefore more effective.
You also have to make sure the data is spread widely across the system. On a 30-node cluster, you want the data split over the whole cluster for the best efficiency. That distribution gives the fastest transactions and response times, which is crucial if you want to use the information for monitoring and alerts.
The files can be fed through a single concentrator that gathers readings from the various hosts and writes them into these bigger files. Separating the information this way also means you can later partition your data systematically by host.

Generating Baselines

First of all, if you have neither hard nor controlled baselines, you need to generate statistics that define what is normal. This data will change over time, so you want to be able to determine what the baseline is over a given period by examining the available information. You can analyze the data with Hive, applying a suitable query to produce minimum, maximum, and average figures.
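Such a query might look like the following sketch, here built as a HiveQL string; the table and column names are hypothetical:

```python
# hypothetical table: sensors(host STRING, metric STRING, value DOUBLE, ts BIGINT)
def baseline_query(table="sensors", metric="temperature"):
    """HiveQL that computes per-host min/max/avg for one metric --
    the raw material for a historical baseline."""
    return (
        f"SELECT host, MIN(value), MAX(value), AVG(value) "
        f"FROM {table} WHERE metric = '{metric}' GROUP BY host"
    )

query = baseline_query()
```

The resulting string could be run with `hive -e` or through a HiveServer2 client; either way, the per-host aggregates become the comparison values described above.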

To avoid re-examining everything each time, write the results into a new table so that incoming streams can be compared against the stored definition of normal. For continuous checks, compute the metrics from both the latest data and the current reading; periodically scan the whole table and compute the reference values across the entire data set. You can also calculate additional values, such as the standard deviation or other measures of spread.

An important step in generating baselines is archiving the old data: first compress it, then label it appropriately. This calls for a modified form of the baseline query that summarizes the data using sets of maximum and minimum figures.
Make sure you keep the information in a form that can still be summarized, so that both the summary values and the readings that fall outside them can be recreated.


When you harvest and process raw machine data, getting the information into Hadoop and keeping it there is actually the least of your troubles. The real work is deciding what the data represents and how you want to analyze and report on it. Once you have the raw data and can run queries on it within Hive, compute your baselines. Then run a query that first establishes the baseline and then checks incoming data against it to locate readings beyond the baseline limits. This article has covered some methods for storing and summarizing the data, defining baselines, and ultimately detecting, live, the exceptions that should be reported, alerted on, and displayed in a monitoring application.