Activity Recognition

A working implementation of my real-time activity recognition system: 

Times at which I perform the various activities: Squats - 0:20, Pushups - 0:40, Bicep curls - 1:04, Tricep extensions - 1:20, Walking - 1:41, Rest - 2:00.

How does it work?
The setup involves two iPhones: one (an iPhone 4s) is worn on my forearm to capture motion, and the other (an iPhone 6, in the right corner of the video) displays the predicted activity.

The iPhone 4s is the brains of the operation. It captures my motion through the accelerometer and gyroscope, transforms this raw data into meaningful input vectors, and feeds them into a multi-layer neural network, which then outputs a prediction. This prediction is sent to and displayed on the iPhone 6 via Bluetooth. The whole process repeats every second to produce a local, real-time* activity recognition system.
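For a sense of what the capture side of this pipeline might look like, here's a minimal Swift sketch using Core Motion. This is illustrative rather than the project's actual code; the 50 Hz sample rate and the appendToWindow helper are assumptions on my part:

    import CoreMotion

    let motionManager = CMMotionManager()
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0   // assume 50 samples per second

    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        // Raw signals from the accelerometer and gyroscope.
        let acceleration = motion.userAcceleration   // CMAcceleration (x, y, z)
        let rotation = motion.rotationRate           // CMRotationRate (x, y, z)
        // In the real system these samples feed the 4-second sliding window,
        // which becomes the input vector for the neural network.
        appendToWindow(acceleration, rotation)       // hypothetical helper
    }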

*The recognition is not perfectly real-time. There is an inherent lag of roughly 3 to 4 seconds, because the system uses a 4-second window (sliding forward by 1 second at a time) to compute the input vectors for the neural network. As a result, the system settles on the correct label approximately 3 to 4 seconds after an activity has started.
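To make the window arithmetic concrete, here's a rough sketch of a 4-second window with a 1-second slide, assuming a 50 Hz sample rate (so 200 samples per window, advancing 50 samples at a time). The mean/variance features are placeholders for whatever the real input vectors contain:

    struct SlidingWindow {
        private var samples: [[Double]] = []   // one [ax, ay, az, gx, gy, gz] row per sample
        let windowSize = 200                   // 4 s * 50 Hz
        let slide = 50                         // 1 s * 50 Hz

        // Returns a feature vector once per second, nil otherwise.
        mutating func append(_ sample: [Double]) -> [Double]? {
            samples.append(sample)
            guard samples.count >= windowSize else { return nil }
            let window = Array(samples.suffix(windowSize))
            samples.removeFirst(slide)         // slide forward by one second
            return featureVector(from: window)
        }

        private func featureVector(from window: [[Double]]) -> [Double] {
            var features: [Double] = []
            for axis in 0..<window[0].count {
                let values = window.map { $0[axis] }
                let mean = values.reduce(0, +) / Double(values.count)
                let variance = values.map { ($0 - mean) * ($0 - mean) }.reduce(0, +) / Double(values.count)
                features.append(contentsOf: [mean, variance])
            }
            return features
        }
    }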

What’s with the random predictions at times?
These random predictions have a pattern. They occur during the transitional periods between exercises. The neural network does not know about transitions, so it tries to fit an activity to the observed motion. As the motion during these periods is sporadic, the predictions jump from activity to activity.

The video shows the raw output of the neural network. In my project, I have addressed this problem by employing a simple accumulator strategy. The raw prediction of the neural network is fed into an accumulator which requires a threshold (i.e. a streak of x consistent predictions) to be met before changing its prediction to a new activity.
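A minimal sketch of the accumulator idea is below; the names and the threshold value are illustrative, not the project's actual API. The displayed activity only changes once the raw network output has produced a long enough streak of identical predictions:

    struct PredictionAccumulator {
        let threshold = 3                          // e.g. 3 consistent predictions in a row
        private(set) var current = "Rest"          // what gets displayed
        private var candidate: String?
        private var streak = 0

        mutating func accept(_ prediction: String) -> String {
            if prediction == current {
                candidate = nil                    // same as what we show: nothing to do
                streak = 0
            } else if prediction == candidate {
                streak += 1
                if streak >= threshold {           // streak long enough: switch activity
                    current = prediction
                    candidate = nil
                    streak = 0
                }
            } else {
                candidate = prediction             // new candidate activity, start a streak
                streak = 1
            }
            return current
        }
    }

    var accumulator = PredictionAccumulator()
    for raw in ["Rest", "Squats", "Squats", "Squats"] {   // stand-in for the per-second raw output
        print(accumulator.accept(raw))                    // Rest, Rest, Rest, Squats
    }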

The app

The iOS application has three major functions:

  1. Viewer: This view looks for devices running the app in 'Tracker' mode so it can connect and display the input it receives.

  2. Tracker: Once the user hits 'Start Tracking', this view performs the 'activity-recognition' part and then sends its output to devices running in 'Viewer' mode.

  3. Trainer: Allows the user to perform additional training on any activity. Once the user's motion has been captured, the neural network learns from this data to better adapt to the user's form.
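To make the Viewer/Tracker link a bit more concrete, here's a hedged sketch of one way the two modes could discover each other and exchange predictions. The post only says the data goes over Bluetooth; MultipeerConnectivity (which can use Bluetooth or peer-to-peer Wi-Fi) is an assumption here, as are the service name and the omitted delegate wiring:

    import MultipeerConnectivity
    import UIKit

    let serviceType = "activity-rec"                      // hypothetical service name
    let peerID = MCPeerID(displayName: UIDevice.current.name)
    let session = MCSession(peer: peerID)

    // Tracker mode: advertise so Viewers can find and connect to this device.
    let advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil, serviceType: serviceType)
    advertiser.startAdvertisingPeer()

    // Viewer mode: browse for nearby Trackers.
    let browser = MCNearbyServiceBrowser(peer: peerID, serviceType: serviceType)
    browser.startBrowsingForPeers()

    // Tracker mode: push each new prediction to every connected Viewer.
    func send(prediction: String) {
        guard !session.connectedPeers.isEmpty,
              let data = prediction.data(using: .utf8) else { return }
        try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
    }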

Rationale for using a neural network
Neural networks provide a level of malleability that is very important for this project. The multi-layer network is implemented with online learning, which enables a level of personalisation. The neural network comes with a base training set (my training data), but the user is able to build on this by performing additional training. With this extra training, the network can adapt to fit the user's form.
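To illustrate what online learning buys you here, a toy single-layer sketch is below: one new labelled sample nudges the existing weights, with no full re-train. The real project uses a multi-layer network; this is just the shape of the idea, and all names are made up:

    import Foundation

    // Toy online learner: one stochastic gradient step per new sample.
    struct OnlineClassifier {
        var weights: [Double]
        var bias = 0.0
        let learningRate = 0.01

        func predict(_ features: [Double]) -> Double {
            var z = bias
            for (w, x) in zip(weights, features) { z += w * x }
            return 1.0 / (1.0 + exp(-z))             // sigmoid activation
        }

        // A single additional labelled sample updates the weights in place;
        // there is no need to rebuild the model from the full data set.
        mutating func learn(features: [Double], label: Double) {  // label: 0 or 1
            let error = predict(features) - label
            for i in weights.indices {
                weights[i] -= learningRate * error * features[i]
            }
            bias -= learningRate * error
        }
    }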

This level of malleability is not available in decision trees. A decision tree would have to be re-built each time to accommodate additional training from the user, which becomes computationally expensive as the data set grows.

I experimented with a naive Bayes classifier and although it's quicker to build and has no optimisation step, the neural network had higher accuracy in almost all of the test scenarios.

Test results
The neural network has high accuracy on my activity data. It achieves accuracy levels of ~93% on a test circuit containing 5 exercises (with transitional periods removed). Such high accuracies are to be expected, as it's trained on my data. To test it out on another person, I recruited one of my friends to complete the same test circuit. The system achieved accuracy levels of 78% without additional training, and 92% with 30 seconds' worth of additional training per activity. These results look promising, but I'm going to conduct more tests with additional participants over the next week to see if the results are reproducible.

Some random/highly specific questions you may have:

Can it count repetitions for gym exercises?
Not yet. I would love to add that functionality but I haven't had time to tackle that problem yet.

Wouldn’t a better approach be to initially train the neural network on more than just one person?
That is a great point! In fact, Microsoft's research arm published a paper last year doing exactly that. Although they achieved great results, their initial training cohort consisted of 94 participants! As a one-man team, I can't possibly duplicate that. This is why I created a system that can adapt, eliminating the need for Microsoft-level resources.

GIFME

Last month, a couple of friends and I participated in a 24-hour hackathon (UNIHACK). We were called the Swifites, named after both the programming language and, obviously, Ms. Taylor Swift.

We wanted to be a prepared team, so we decided on an idea before the competition. This all changed 10 minutes into the hackathon, after one of the mentors informed us that our idea had already been done. The app we planned on making was already out on the App Store, with basically the same core features and design aesthetics we had planned. We did Google our idea; obviously not well enough.

The moment (or two hours) of panic that followed resulted in a much more fun and organic idea: Gifme. Starting from the premise that "photos are hip", we landed on the idea of creating photo mosaics with infinite freedom. The idea for the app was simple: take a selfie, tell us what you love, and we'll re-create your picture out of the thing you love. It's better demonstrated by a video:

That's my face being re-created with pictures of Taylor Swift, all in real-time.

Weird that the app is called Gifme, right? When we started out, we wanted to eventually add GIF support. Instead of a still representation, it'd be a livelier, animated version of you! It turns out that downloading and displaying even a low-density GIF mosaic (a grid of 24 by 40 = 960 images) is not a trivial task, so we had to settle for regular old non-animating photos.

During the process we naturally hit a few challenges. Sourcing the photos was quite a task. We had to use Bing as our source because Google is very restrictive with their APIs, and with Bing's (terribly documented) API we weren't able to search for images by colour. Our workaround was to request monochrome images from Bing and then tint them according to the low-density representation of the selfie we had formed. This admittedly did not achieve the effect we were after, but it was the best we could do.
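For the curious, the tinting step could look something like this Core Image sketch. This is an assumption on my part; I don't know which framework was actually used, and the function name is made up:

    import CoreImage

    // Tint a monochrome tile towards the average colour of its cell in the
    // low-density representation of the selfie.
    func tint(_ tile: CIImage, towards cellColor: CIColor) -> CIImage {
        guard let filter = CIFilter(name: "CIColorMonochrome") else { return tile }
        filter.setValue(tile, forKey: kCIInputImageKey)
        filter.setValue(cellColor, forKey: kCIInputColorKey)
        filter.setValue(1.0, forKey: kCIInputIntensityKey)
        return filter.outputImage ?? tile
    }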

Even though our start to the hackathon was a bit of a mess, the end result was a cool and quirky app. 

Next time, I'll personally Google the shit out of the idea we decide on.

Apple Watch SDK

Apple's operations chief confirmed that they'll be releasing the native Apple Watch SDK at the upcoming WWDC. He revealed that the SDK will provide direct access to the Watch sensors. This is extremely relevant to me, given my final year project (I promise to do an update on progress soon). Now, I expected Apple to eventually roll this out, but the fact that I'll have something to play with in less than two weeks is extremely exciting!

Although I'm excited, I'm also realistic with my expectations. Given Apple's record with app functionality (*cough* *cough* background processes), I have a lot of questions about the Watch SDK. Will developers have access to the heart-rate sensor, or just the accelerometer and gyroscope? Given the battery limitations, will apps have any background functionality? Can apps continuously access sensor output? Are apps allowed to be Watch-only (pretty sure I know the answer to this one)?

So here's hoping for the best (a big fat YES to all my questions), but I'll settle for basically anything.

Bring on WWDC!

TIME

I love this post on 'lateness' by Greg Savage, titled "How did it get to be 'OK' for people to be late for everything?" Here is an excerpt:

And it is not that we lead ‘busy lives’. That’s a given, we all do, and it’s a cop out to use that as an excuse. It’s simply that some people no longer even pretend that they think your time is as important as theirs. And technology makes it worse. It seems texting or emailing that you are late somehow means you are no longer late.

Eliminating the Unnecessary

I was reading the Swift programming book published by Apple, and in the closures chapter, I came across a series of code snippets. The snippets relate to the closure expression syntax used in Swift, but that is not important for the purpose of this post. Putting aside the programmer perspective (code readability etc. etc.), let's just focus on the aesthetics of the collection. Each iteration does away with a seemingly 'necessary' element, and only through this process does it arrive at the bare essentials. 

[Images: the six code snippets, closure1 through closure6]
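For anyone reading without the images, the progression in that chapter looks roughly like this in current Swift syntax (reconstructed from memory of the book, so treat it as a paraphrase rather than a verbatim copy):

    let names = ["Chris", "Alex", "Ewa", "Barry", "Daniella"]

    // 1. A standalone function passed by name.
    func backward(_ s1: String, _ s2: String) -> Bool { return s1 > s2 }
    var reversed = names.sorted(by: backward)

    // 2. A full closure expression: explicit types, explicit return.
    reversed = names.sorted(by: { (s1: String, s2: String) -> Bool in return s1 > s2 })

    // 3. Parameter and return types inferred from context.
    reversed = names.sorted(by: { s1, s2 in return s1 > s2 })

    // 4. Implicit return for a single-expression closure.
    reversed = names.sorted(by: { s1, s2 in s1 > s2 })

    // 5. Shorthand argument names.
    reversed = names.sorted(by: { $0 > $1 })

    // 6. Just the operator.
    reversed = names.sorted(by: >)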

Although these are just snippets of code, the process of iterating, evaluating and doing away with the unnecessary applies to all aspects of our lives.

Health-Tech

The intersection between health and technology is my current obsession. Well, 'current' is not exactly correct; I've been interested in this idea for quite a while. The idea of using technology to measure, record and analyse my activities and health (i.e. the quantified self movement) is very attractive to me. My interest extends beyond recording steps or having devices tell me when to stand up or move, although that's a nice start. I want to monitor the complete state of my body, from cholesterol levels to gym routines. This idea of health-tech extends much further: what if we could help monitor the behaviour of patients with neurological disorders, or help elderly people increase their mobility? The point here is not to replace doctors, nurses or health assistants, but to provide them with the complete picture.

I know the area of health-tech is picking up pace and I want to be part of the effort towards making it a reality. But, you know, the journey of a thousand miles begins with ... etc etc.

So to begin this journey, I've focussed my honours/4th year on building an activity recognition system that works in real time on an iPhone. Now, this idea is not new. There are fitness bands and smartwatches that accomplish most of what I aim to do, but that is not the point. There is no single solution for this problem, so the point is to create my own implementation and to go through the process of solving a non-trivial problem.

Activity recognition can be separated into three main parts: a sensing module, feature analysis and classification. The sensing module is responsible for collecting the sensor data, and I've implemented this part, so here are some pretty pictures (data collected through an iPhone strapped to my wrist, visualising specifically the x-axis acceleration from the accelerometer):

How can you not be giddy with excitement after looking at these! We can distinguish most of these quite easily, and can even count the repetitions of the pull-ups! But even though the activities are relatively easy to identify by inspection, building a recognition model isn't as trivial. So my first goal is to take these four activities and try to classify them as activity vs non-activity. Then I'll progress to identifying specific activities and just keep building on it.
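For reference, the sensing module's data collection doesn't need to be fancy. A minimal sketch of logging timestamped samples to a CSV for this kind of plotting might look like the following (the file name, sample rate and saveRecording helper are assumptions):

    import CoreMotion
    import Foundation

    let motionManager = CMMotionManager()
    var csv = "timestamp,ax,ay,az,gx,gy,gz\n"

    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0   // assume 50 Hz
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let m = motion else { return }
        let a = m.userAcceleration, g = m.rotationRate
        csv += "\(m.timestamp),\(a.x),\(a.y),\(a.z),\(g.x),\(g.y),\(g.z)\n"
    }

    // Later, e.g. when the recording session ends:
    func saveRecording() {
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("squats.csv")            // hypothetical file name
        try? csv.write(to: url, atomically: true, encoding: .utf8)
    }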

That's the current state of the project, and as I hit milestones/roadblocks/insights with the project, I'll share the progress!