

Showing posts from October, 2018

Summer - Week 10

This week, I am working to integrate the dynamic time warping (DTW) functionality into my program. The process includes re-processing the features to include the time series, putting each series back together when we construct sequences, and then performing DTW to generate a distance value that is used to compute the k-nearest neighbors (kNN) of each sequence, which can then be used for predictions with the models. The processing time for these steps has gone up significantly since we are using five different metrics with each of the F phase datasets. I am returning to school next week. Once I've completed the DTW processing, which is all that remains before we put together our second paper (the deadline for the journal we would like to submit to is October 1), I am hoping to have time to look again at the Agglomerative Hierarchical Clustering concept, which I did not successfully complete when we explored it earlier in the summer before changing focus to the paper. We heard back
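As a rough sketch of how a DTW distance can feed into the kNN step (the function names and the brute-force neighbor search here are my own illustration, not the project's actual code):

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]


def knn_indices(query, sequences, k=3):
    """Indices of the k sequences nearest to `query` under DTW (brute force)."""
    order = sorted(range(len(sequences)),
                   key=lambda i: dtw_distance(query, sequences[i]))
    return order[:k]
```

The quadratic DP is slow on long series, which matches the jump in processing time noted above; in practice a windowed or lower-bounded DTW is often substituted.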

Week 10

As mentioned last week, I spent some time this week getting up to speed on GitHub and the git command-line tools. I wouldn't consider myself an expert, and the concept of branching is still quite foreign to me, but I'm now confident enough to work in the repository Alexa and I share without fear of deleting everything. We still have no patients enrolled in the study at St. Luke's. This upcoming week, I will continue my work on the all-subject statistic calculator and my reading on sleep classification.

Week 9

Wrapping up last week, I have my stat-calculating script working well and am ready to move forward. My partner, Alexa, and I have decided to begin working collaboratively on the next step of the project, which involves multiple data frames. To do so, we will be using a GitHub repository. I have limited experience with the git tools, so I took time this past week to gain a better understanding of the functionality that will be available. Alexa and I have also decided that instead of taking a break from the project itself to begin our literature review, we want to do some measure of both each week. We'll start by taking a more in-depth look at many of the sources cited in the proposal for our project, and at studies published by Doug Weeks and Gina Sprint that are related to our current work. We hope to follow the source material from those papers to find more relevant literature going forward. Dr. Weeks discharged the two patients that were enrolled in the

Week 8

My goals for this week are to:

1. Re-create the slicing functionality of the stats function that I lost last week
2. Begin working on a program that will allow me to analyze multiple data frames at once

The stat-calculator revamp is coming along smoothly, but the multi-frame analysis is still in the planning phase. I intend to rewrite my Automated Sleep/Wake analyzer to clean up any unnecessary or overcomplicated features of the original. In addition to working on the programs, we have two new patients enrolled in the light study, so I will be spending a few of my mornings at St. Luke's. Looking forward, once I have a working multi-frame analyzer, I'll move on to the literature review portion of the project.
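A minimal sketch of what the multi-frame analysis could look like, assuming each subject's data is a pandas DataFrame with a hypothetical "state" column (1 = asleep, 0 = awake, one row per minute); the function and column names are illustrative, not the actual program:

```python
import pandas as pd


def summarize_frames(frames):
    """Compute per-subject summary statistics for several DataFrames at once.

    `frames` maps a subject ID (e.g. "K002") to a DataFrame with a
    "state" column: 1 = asleep, 0 = awake, one row per minute.
    """
    rows = []
    for subject, df in frames.items():
        state = df["state"]
        rows.append({
            "subject": subject,
            "minutes_asleep": int(state.sum()),
            "minutes_awake": int((state == 0).sum()),
            # each change in state counts as one sleep/wake transition
            "transitions": int(state.diff().abs().sum()),
        })
    return pd.DataFrame(rows).set_index("subject")
```

Collecting one result row per subject and returning a single indexed DataFrame keeps the per-subject logic identical to the single-frame script while making cross-subject comparison trivial.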

Week 7

There are no patients currently enrolled in the study at St. Luke's, so this week's work was entirely focused on coding and reading about slicing. This was my second week working with the summary statistics of the practice data for subjects K002 and K027. My original code was functional and produced a result only slightly different from my mentor's. However, I'd chosen quite a roundabout way of doing it. While I did manage to create a date-time index for my DataFrame, I failed to take full advantage of that set-up. Instead of passing a slice of the DataFrame through a function for each period, I was moving through the data line by line to determine the number of transitions, minutes of sleep, and minutes of activity for the period. Clearly, the former is both more efficient and more modular than the latter. I planned to create a function slice_stats that would accept a slice of a DataFrame up to twenty-four hours in size, but sho
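A sketch of the slice-based approach, assuming a datetime-indexed DataFrame with a hypothetical "asleep" column (1 = asleep, 0 = active, one row per minute); this is my reconstruction of the idea, not the actual slice_stats code:

```python
import pandas as pd


def slice_stats(frame_slice):
    """Summary stats for one period of a datetime-indexed activity DataFrame.

    The caller is expected to pass at most twenty-four hours of data,
    with an "asleep" column: 1 = asleep, 0 = active, one row per minute.
    """
    asleep = frame_slice["asleep"]
    return {
        "transitions": int(asleep.diff().abs().sum()),
        "minutes_asleep": int(asleep.sum()),
        "minutes_active": int((asleep == 0).sum()),
    }


# With a DatetimeIndex, whole periods can be handed over as slices, e.g.:
#   slice_stats(df.loc["2018-07-01"])                            # one day
#   slice_stats(df.loc["2018-07-01 20:00":"2018-07-02 08:00"])   # one night
```

This is what makes the sliced version more modular than the line-by-line loop: the same function handles any period, and pandas' partial-string indexing does the period selection.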