Data Geek
A place for me to keep track of interesting articles, graphs, discussion or whatever related to education, testing and statistics.
Thursday, March 20, 2014
FiveThirtyEight Blog from Nate Silver
I just read an article on Nate Silver's new blog, FiveThirtyEight, about how the media distorts research findings. I am really looking forward to reading more from this blog. Based on that first article, I like how the contributing authors are teaching readers to interpret statistics carefully. We need more of this type of reporting and less hype in the media.
Thursday, September 19, 2013
Preschool lessons: New research shows that teaching kids more and more, at ever-younger ages, may backfire. - Slate Magazine
Great article on why preschools need to focus on play and exploration. I think that the same holds true for older children as well.
Monday, September 16, 2013
New IES report of the effectiveness of Teach for America
I have to be honest and say that I have not read this report in its entirety yet. I have read the methods and results sections, and that is what I will focus my critique on. I may be a little biased against Teach for America (TFA), as I work in a College of Education that trains teachers in a traditional model. That being said, I am very passionate about teaching. I think there are wonderful teachers out there doing a great job and a few who need to find another calling. I don't think that traditional teaching models have all the answers, and I believe strongly that there is always room for improvement. So, if TFA teachers are doing a better job than teachers who get their credentials through a traditional teaching college, I want to know that and I want to understand why. Unfortunately, I don't believe that this study offers any evidence that TFA teachers were in fact better as a group, and it offers no information to help improve teacher preparation programs.
Link to Article
Research Design
I was initially put off by the authors' repeated claim that using an "experimental design" and randomly assigning students to either a class taught by a TFA teacher (experimental group) or another teacher (control group) meant that there would be no initial differences between the two groups of students. They state, "Random assignment was the key to the causal validity of these estimates because it ensured that students assigned to the TFA or Teachings Fellows teachers were no different, on average, than students assigned to comparison teachers in the same match at the time of random assignment." (pg. 21). This is simply untrue. Random assignment only guarantees that any initial difference between the groups is due to random chance rather than selection criteria; it does not guarantee that no differences exist. As a post-test-only study, there are a number of threats to internal validity (see http://www.socialresearchmethods.net/kb/intsoc.php for a short list of some of the problems). Many things can happen within any single class that have nothing to do with the teacher. In addition, the authors lumped all other teacher preparation programs into a single comparison group. We have no idea where the teachers in the control group received their teaching credentials and no idea how those programs may have differed.
Analysis Methods and Results
I am not sure that I completely understand the regression analysis they did. Because they used a single regression analysis including all grade levels, they performed a lot of statistical manipulations to create standardized measures for all the variables, and all these adjustments make the final findings difficult to interpret. The researchers' final conclusion was that students in classes taught by TFA teachers scored, on average, 0.07 standard deviations (s.d.) higher than students in classes with non-TFA teachers, and that this difference was statistically significant. While they may have been able to find statistical significance after all that data manipulation, I am not convinced that this difference has any practical implications. They try to turn this small difference into 2.6 months of learning, stating that 0.07 standard deviations can be calculated as 26% of a year's growth (pg. 56). If I did the math right, that would indicate that students are only expected to grow roughly 0.27 standard deviations in a year. I really need to read the study they got that from.
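For anyone who wants to check my math, here is a quick back-of-the-envelope sketch in Python. The 0.07 s.d. effect and the 26% figure come from the report; the 10-month school year is my assumption, since the report's conversion only works out to 2.6 months if a "year" means about 10 school months.

```python
# Back-of-the-envelope check of the report's effect-size conversion.
# ASSUMPTION: a "year of learning" means roughly 10 school months;
# the report itself may use a different figure.

effect_size = 0.07      # TFA advantage, in student-level standard deviations
share_of_year = 0.26    # report's claim: 0.07 s.d. is about 26% of a year's growth
months_per_year = 10    # assumed length of a school year, in months

# Implied typical growth over a full year, in standard deviations
annual_growth_sd = effect_size / share_of_year
print(f"Implied annual growth: {annual_growth_sd:.2f} s.d.")        # ~0.27 s.d.

# The report's "months of learning" figure follows directly
months_of_learning = share_of_year * months_per_year
print(f"Equivalent months of learning: {months_of_learning:.1f}")   # ~2.6 months
```

So the 2.6-months claim is internally consistent with the 26% figure, but only because annual growth is assumed to be a modest 0.27 s.d. in the first place.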
Conclusions
What was not surprising was that there were some TFA teachers whose students did better than the students in the comparison class and some whose students did worse (pg. 57). It would have been more informative if they had looked at both TFA and non-TFA prepared teachers who did better than their matched peers and looked for patterns that could explain why they did better. But since this study was all about TFA teachers, they did not do this.
Overall, I was very disappointed in the study. I felt like the researchers were trying to sell TFA rather than trying to explore how to better prepare teachers. I don't believe that this article offers any information for teacher preparation programs trying to improve their practices.
Thursday, September 5, 2013
The STEM Crisis Is a Myth - IEEE Spectrum
As an educator, an evaluator and a parent of school-aged children, this article gives me a lot to think about. As an evaluator, I am struck by the different definitions of STEM, the different calculations used to decide whether there is a shortfall or a surplus of STEM workers, and how our biases are always present. Evaluators and researchers need to do their best not to let their biases influence their results, but that is not always possible. That is one of the reasons why replication of studies is so important: it gives us results from a variety of sources and biases and lets us view data from different viewpoints, which hopefully helps us draw better conclusions. Politicians and business leaders have their own agendas and will only bring to light the studies that support those agendas. People have to look at the studies and the data themselves before making judgments and cannot blindly trust these leaders.
As a parent and educator, I agree with the author's conclusion that all students should be STEM literate, just as they should be literate in art, history and literature. The recent push for more STEM and fewer liberal arts classes worries me. I think the most successful people have a wide range of skills and knowledge. Understanding art, literature and history helps us understand how people think and behave; math and science help us understand how the world works. Why wouldn't we want all people to have all kinds of knowledge?
Friday, November 16, 2012
Planning Trips to Disneyland
What does a data geek do before a trip to Disneyland with the kids? Why, survey the kids about how much they like each ride and then plot the results, of course! This was done about a year ago; the next time we go (which probably won't be for another year or so) I am going to do this again and add the date as a field so we can see what kind of changes there have been.
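For anyone who wants to try this at home, here is a minimal sketch of how such a survey could be plotted. The ride names and scores below are made-up placeholders for illustration, not our actual survey data.

```python
# Minimal sketch: plot kids' ride ratings from a small survey.
# NOTE: ride names and scores are placeholders, not our real data.
import matplotlib.pyplot as plt

ratings = {
    "Space Mountain": [5, 4],            # one score per kid, 1-5 scale
    "Pirates of the Caribbean": [4, 5],
    "It's a Small World": [2, 3],
    "Dumbo": [3, 5],
}

rides = list(ratings)
averages = [sum(scores) / len(scores) for scores in ratings.values()]

plt.barh(rides, averages)
plt.xlabel("Average rating (1-5)")
plt.title("How much the kids like each ride")
plt.tight_layout()
plt.show()
```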
Saturday, November 10, 2012
More Data on CA budget
I found another data source, the California Legislative Analyst's Office (LAO). They have a note on their site:
"Because of many changes over the years (including, but not limited to, changes in the sources of funding for certain state programs, deferrals of scheduled payments and tax collections, and other accounting changes), this data may not provide sufficient information to evaluate trends in general state spending, spending for particular programs, or state revenues. Consulting other sources, such as the LAO's annual Spending Plan publications, is advised when conducting such trend analyses."
I still thought it would be interesting to see what the data looks like graphed.
Monday, April 30, 2012
CA Budget Data View
I decided to create a data view of the history of the budget in California (CA) using the enacted CA budget figures available from the CA Department of Finance. While I think this is interesting, to get a better understanding of the budget I now need to see how each category is spent.
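As a rough sketch of the kind of view I mean, something like the following would plot spending by category over time. The CSV file name and column names here are hypothetical stand-ins; they would need to be adapted to however the Department of Finance data is actually laid out.

```python
# Rough sketch: plot enacted CA budget spending by category over time.
# NOTE: the CSV path and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

budget = pd.read_csv("ca_enacted_budget.csv")  # columns: year, category, amount

# One line per spending category across fiscal years
pivot = budget.pivot_table(index="year", columns="category",
                           values="amount", aggfunc="sum")
pivot.plot(figsize=(10, 6))
plt.ylabel("Enacted spending (dollars)")
plt.title("California enacted budget by category")
plt.tight_layout()
plt.show()
```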