1) An article posted recently on TechReview
about Google's efforts in AI got me thinking that there's a neural network we
should be putting in the back end of our system: one that can learn from user
input, and about adverse events from the users and doctors labeling them as
such, giving us a history sufficient to serve as (sensor-based) training data
for detection and prediction (an early-warning system, if you will) of future
adverse health conditions. I'm made more confident in this approach by the
results of a visual recognition competition (computer vision - another form of
machine learning) published from Stanford recently:
ImageNet
Large Scale Visual Recognition Challenge 2011 (ILSVRC2011)
Here, the best model for recognizing objects (which could be useful in a
medical sense) was a very large neural network trained with unsupervised
machine learning - a technique that doesn't require a human to specify the
important characteristics/features to look for in the training set, but rather
picks them up on its own. We'll have to update our diagram, or publish an
entirely new back-end diagram, showing how such a system might interact with a
patient and a provider, as well as the other players in the healthcare
ecosystem, and what its potential use cases might be. A rough sketch of how
labeled events could feed a learner follows below.
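To make that concrete, here is a minimal sketch (in Java, to match the rest of
our stack) of how user/doctor-labeled sensor windows could become training data
for an early-warning classifier. All class and field names are illustrative,
and the simple logistic-regression learner is only a stand-in for the eventual
neural network, not the model we'd actually ship.

    import java.util.List;

    /** Illustrative sketch: user/doctor-labeled sensor windows become the
     *  training history for a trivial logistic-regression "early warning"
     *  baseline. Names and features here are placeholders. */
    public class AdverseEventLearner {

        /** One window of sensor readings plus the adverse/not-adverse label
         *  supplied by the user or provider. */
        static class LabeledWindow {
            final double[] features;   // e.g. mean heart rate, accel variance
            final int label;           // 1 = adverse event, 0 = normal
            LabeledWindow(double[] features, int label) {
                this.features = features;
                this.label = label;
            }
        }

        private final double[] weights;
        private double bias;

        AdverseEventLearner(int numFeatures) {
            this.weights = new double[numFeatures];
        }

        /** Sigmoid squashes the linear score into a probability of "adverse". */
        private double predict(double[] x) {
            double z = bias;
            for (int i = 0; i < weights.length; i++) z += weights[i] * x[i];
            return 1.0 / (1.0 + Math.exp(-z));
        }

        /** One pass of stochastic gradient descent over the labeled history. */
        void trainEpoch(List<LabeledWindow> history, double learningRate) {
            for (LabeledWindow w : history) {
                double p = predict(w.features);
                double error = w.label - p;   // gradient of the log-loss
                for (int i = 0; i < weights.length; i++)
                    weights[i] += learningRate * error * w.features[i];
                bias += learningRate * error;
            }
        }

        /** Flag a new sensor window if predicted risk crosses a threshold. */
        boolean earlyWarning(double[] features, double threshold) {
            return predict(features) >= threshold;
        }
    }

The same labeled history would later be handed to a proper neural-network
library instead of this hand-rolled update rule.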
2) Meanwhile, we have received an Android-compatible version of the Node sensor
from Dr. Yu and his team. The Android API
and a demo version of the app are available on the site now. It seems that a
minimum Android version of 2.3.3 is necessary to run the app and the device
(they're dependent on one another) - thankfully I can run it on my Thunderbolt.
I have been able to compile the demo app and tried to run it, but am having
issues pairing the device to my Droid via Bluetooth. Hopefully I can figure out
what the issue is (try out other, newer Droids and re-compile with Android
4.1).
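For my own debugging notes, here is roughly what a generic Bluetooth SPP
connection looks like on Android 2.3.3+. This is not the Node API itself - the
device-name prefix and the use of the standard SPP UUID are assumptions on my
part - but it's the kind of connection the demo app has to establish under the
hood.

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.bluetooth.BluetoothSocket;
    import java.io.IOException;
    import java.util.UUID;

    /** Rough sketch of a generic Bluetooth SPP connection on Android 2.3.3+.
     *  Requires the BLUETOOTH permission in the manifest. The device-name
     *  prefix and the SPP UUID below are assumptions, not the Node API. */
    public class SensorConnector {

        // Standard Serial Port Profile UUID; the Node may expose another one.
        private static final UUID SPP_UUID =
                UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

        /** Find an already-bonded device by name prefix and open a socket. */
        public BluetoothSocket connect(String namePrefix) throws IOException {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            if (adapter == null || !adapter.isEnabled()) {
                throw new IOException("Bluetooth unavailable or disabled");
            }
            for (BluetoothDevice device : adapter.getBondedDevices()) {
                String name = device.getName();
                if (name != null && name.startsWith(namePrefix)) {
                    BluetoothSocket socket =
                            device.createRfcommSocketToServiceRecord(SPP_UUID);
                    adapter.cancelDiscovery();  // discovery slows the connect
                    socket.connect();           // blocks; call off the UI thread
                    return socket;
                }
            }
            throw new IOException("No bonded device matching " + namePrefix);
        }
    }

If the device never shows up in getBondedDevices(), the problem is at the
pairing step itself rather than in the app, which is my current suspicion.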
3) I will be attending the AT&T
Mobile Hackathon this weekend, in the hopes of getting our system built out
further and getting some help from senior developers at AT&T. The goal is to
get the front end enhanced a bit, get more signals going, improve our
fall-detection model based on other studies (especially if we get the Node
working), and most importantly, get a back end going - most likely on Amazon
Web Services (AWS), where we'd be able to store data for future analysis. We
can simulate some signal readings, like heart rate and an EKG signal, to drive
outputs like stress levels and warnings on EKG signals.
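As a placeholder until the real sensors are streaming, a toy simulator along
these lines could generate heart-rate samples and a crude stress-level output
for the back end to ingest. The thresholds and smoothing constants are made up
for illustration; this is not a clinical model.

    import java.util.Random;

    /** Toy signal simulator for back-end testing: synthesizes heart-rate
     *  samples and emits a simple "stress" warning when the moving average
     *  drifts high. Thresholds are placeholders, not a clinical model. */
    public class VitalsSimulator {

        private final Random rng = new Random();
        private double movingAvg = 70.0;  // resting heart rate, beats/minute

        /** One simulated heart-rate sample: baseline plus Gaussian noise. */
        public double nextHeartRate(double baseline) {
            double sample = baseline + rng.nextGaussian() * 3.0;
            movingAvg = 0.9 * movingAvg + 0.1 * sample;  // exponential smoothing
            return sample;
        }

        /** Crude stand-in for a stress-level output derived from the stream. */
        public String stressLevel() {
            if (movingAvg > 110) return "WARNING: sustained elevated heart rate";
            if (movingAvg > 90)  return "elevated";
            return "normal";
        }

        public static void main(String[] args) {
            VitalsSimulator sim = new VitalsSimulator();
            for (int t = 0; t < 60; t++) {
                double baseline = (t < 30) ? 72 : 118;  // simulated stress episode
                double hr = sim.nextHeartRate(baseline);
                System.out.printf("t=%ds hr=%.1f -> %s%n", t, hr, sim.stressLevel());
            }
        }
    }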
4) We're in the process of on-boarding a part-time engineer to help us with the
back end. More on that as it develops. Still looking for front-end help and
HealthIT entrepreneur advisors.
5) We're looking at Groovy for back-end
analytics in the future (likely Development Round 2).
UPDATE:
6) Since the start of the AT&T Hackathon, we've added Apigee App Services to
the back-end technologies being explored.