I am teaching a hybrid AP Microeconomics course this spring. This is the first of three posts describing my experience teaching this course. The first is the basics about the course. Attached is a FAQ that I sent to students before the class began.
About the Course: AP Microeconomics
- This is a one-semester course.
- It is a hybrid course where students meet once every six school days and complete online readings, activities, and assessments.
- The goal was to give students the opportunity to fit the course into their schedules and to add flexibility to their workload.
- Students are expected to complete about an hour’s worth of work each day (the equivalent of a 40-minute class plus 20 minutes of homework).
- Students use the Learning Management System Canvas.
- Students read from Mankiw’s Principles of Economics textbook.
- Students complete workbook activities from Stone’s AP Microeconomics Resource Manual.
- Students meet in small groups in occasionally non-traditional locations (e.g., the library or a breakout room).
- Coordinating everyone’s availability for the schedule was challenging, given that we did not have a dedicated meeting time.
- There are five major resources that we use:
- Videos (short bursts of information)
- Textbook (longer and more theoretical)
- Workbook & Handouts (focusing on mechanics)
- Class (connecting dots and filling in the blanks)
- Discussion Board (with threaded answers to questions, both conceptual and vocabulary)
- Students take online quizzes to test for basic comprehension and completion.
- Students take in-person quizzes and tests by scheduling a convenient time.
The following is the abstract of a recent paper I wrote analyzing student performance data. Here is a link to the entire paper.
Measurement of student achievement is at the heart of educational policy, and standardized testing has been both supported and contested as a genuine representation of student achievement. This study shows that the interpretations of standardized testing may have starkly contrasting meanings for different cohorts of students. By using quantile regression to account for conditional PSAT scores, educational factors such as gender and tracking are shown to affect students in varying ways. For the students in this study, gender was a significant indicator of performance on the PSAT test, with being male accounting for more than a five-point “bump.” The conclusion from the quantile regressions is that students with extreme PSAT scores are outliers based on ability or inability, not because of their gender. Meanwhile, students who participated in the “honors” tracking system saw a larger increase in their predicted score the higher their conditional PSAT score.
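The paper’s model is richer than this, but the core intuition behind looking at quantiles rather than means can be sketched with made-up data. The cohort labels, means, and spreads below are invented for illustration and are not drawn from the study:

```python
import numpy as np

# Toy illustration: two cohorts with similar average scores but different
# spreads. A comparison of means hides what happens in the tails, which is
# exactly what quantile-based analysis reveals. All numbers are invented.
rng = np.random.default_rng(0)
cohort_a = rng.normal(loc=150, scale=20, size=500)  # narrower distribution
cohort_b = rng.normal(loc=145, scale=30, size=500)  # wider distribution

# The mean gap is a single number...
mean_gap = cohort_a.mean() - cohort_b.mean()
print(f"mean gap = {mean_gap:.1f}")

# ...but the gap at different quantiles tells a richer story:
for q in (0.1, 0.5, 0.9):
    gap = np.quantile(cohort_a, q) - np.quantile(cohort_b, q)
    print(f"quantile {q}: gap = {gap:.1f}")
```

Here the gap between cohorts shrinks (and can even reverse) toward the top of the distribution, which is the kind of pattern that makes “a five-point bump on average” mean very different things for different groups of students.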
It’s that time of year when almost 100 million Americans go back to school. Yes, approximately 25% of the US population is enrolled at least part-time as a student at some level, from kindergarten through graduate programs. My question is whether these students are learning skills applicable outside the classroom or just earning credentials for their résumés.
The easy answer is both, but I would argue that after initial universal skills are learned, education acts as a signal of ability to comprehend and analyze. It is important to distinguish between learning a skill, such as the ability to problem solve, versus signaling that you have the ability to problem solve. Before going too deep, an example should clarify the difference.
Consider high school math: in Algebra or Geometry, a student may learn the skills of deductive reasoning and problem solving. Then, a few years later in Calculus, students signal that they have the requisite skills needed to comprehend Calculus concepts. Note that this signal (the ability to understand Calculus) may not be an applicable “life skill,” but any college admissions officer would tell you it is quite valuable to signal to colleges that you are able to complete a Calculus course.
What skills will be acquired in schools this fall that will be applicable? Obviously basic reading, writing, and arithmetic are necessary skills learned in the lower grade levels. Once these skills are mastered, the second level of thinking comes into play: using factual knowledge to solve problems. Finally, third-level thinking involves synthesizing (or combining) knowledge from various sources to derive original thoughts and conclusions. The last two “levels of thinking” are not necessarily taught on their own, yet students acquire these skills through high school, and these attributes form the basis of the “liberal arts education” promoted by colleges.
So, when does education stop being skill-based and start becoming a signaling device? My conclusion is that this transfer occurs in the midst of the traditional high school years. The American high school curriculum (including its calendar and schedule) includes many requirements that act as hurdles. Also, many advanced graduate programs fail to provide students with necessary skills. These degrees simply act as acknowledgement that a student has read the core literature of a field; they do not send students out into the world ready to produce original ideas.
Below are two graphs that I believe represent traditional educational paths for American students and the subsequent returns of skill development from that education. The first graph represents a student whose returns to schooling are negative, that is, each year fewer skills are acquired. Unfortunately, I believe this is the path of the majority of American students. The second graph represents a student who takes a career-oriented path and ends their education with a trade school, college, or graduate program that gives them real skills to be applied outside the classroom.
If we want education to have meaning for our students, we should gear curriculum to achieve the second path shown above. Ideally, we could create a skill development graph that is a horizontal line where students are constantly acquiring applicable skills. This may seem far off, but if we want our future citizens to be equipped to face the challenges of an ever-changing economy and job market, we need to be sure that they are prepared with applicable skills, not a diploma that recognizes that they jumped through the appropriate hoops.
In my first post (found here) of a three-part series on statistical analysis, I discussed how data can inform decisions within countless industries. My research in human capital, and specifically education, suggests various uses of data in decision making. Specifically, available data can be used to predict when, where, and which students may have trouble; to study determinants of parental satisfaction; to inform admission decisions (at both the secondary and collegiate levels); and to study student achievement academically, on standardized tests, and in future wages.
This post is intended to shed light on the science of data analysis, specifically conditional expectations. The mathematical approach to conditional expectation is based on heavy statistical concepts, some of which are not appropriate for this venue. That being said, my intention is to provide an accessible explanation of the techniques used in statistical analysis. For a mathematical approach, I recommend the seminal text Econometric Analysis, by William Greene, 2002; or Econometric Analysis of Cross Section and Panel Data, by Jeffrey Wooldridge.
Econometrics, and essentially any data analysis, is based on determining a prediction. In statistical terms, this is called an expectation. A conditional expectation is a prediction based on available information.
For instance, consider guessing the height of a random human being. The average human height is 5 feet 6 inches, so this would be a logical starting point for our prediction. But if we know more information about this random person, we can improve our expectation. Specifically, if we knew the person was male, we would want to change our expectation conditional on that fact. The average height of an adult man is 5 feet 9.5 inches, so that would be our new prediction given some information.
If we also knew that the person weighed 240 pounds, we may want to increase our expected height. Note here that there is no causal assumption, just a change in our expectation given some piece of information. This is correlation: the taller a person is, the more, on average, we expect that person to weigh. We may predict 6 feet 1 inch for our random male weighing 240 pounds. Data analysis can help us with our predictions, given some imperfect information.
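The narrowing-down process above is easy to make concrete. Here is a minimal sketch with an invented eight-person sample: each extra piece of information shrinks the group we average over, and the average of that smaller group is our conditional expectation.

```python
# Illustrative sketch of conditional expectations, using a made-up sample.
# Each record is (height_inches, sex, weight_lbs); the numbers are invented.
sample = [
    (66, "F", 140), (64, "F", 130), (69, "M", 180),
    (71, "M", 210), (73, "M", 240), (68, "M", 190),
    (62, "F", 120), (74, "M", 250),
]

def mean(xs):
    return sum(xs) / len(xs)

# Unconditional expectation: the plain average height of everyone.
e_height = mean([h for h, s, w in sample])

# Conditional on sex: average height among males only.
e_height_male = mean([h for h, s, w in sample if s == "M"])

# Conditional on sex and weight: males weighing 220 pounds or more.
e_height_heavy_male = mean([h for h, s, w in sample if s == "M" and w >= 220])

print(e_height, e_height_male, e_height_heavy_male)
```

Each condition raises the prediction, mirroring the height example: knowing the person is male moves the guess up, and knowing he is heavy moves it up again.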
Next, consider that we also knew this random person’s SAT score was 1200. We probably would not consider changing our expectation of height. If we had data on a sample of people with the variables height, sex, weight, and SAT score, some variables may be good predictors of height and be statistically significant while others, SAT score in this example, would not be statistically significant and would not persuade us to change our expectation.
Multiple Regression Analysis essentially considers a sample of data and determines the predictive success of each variable. Using the information from a statistical data program (even Excel can do this reasonably well), we can arrive at a predictive equation for the most logical expectation conditional on the information we have at our disposal.
The data will determine the coefficients, the “B’s,” in a predictive equation of the form height = B0 + B1·(male) + B2·(weight) + B3·(SAT score), and will also determine the likelihood that each “B” is a significant predictor. In this case, I suspect B3 would not be statistically significant.
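A regression like the one described can be fit in a few lines with ordinary least squares. The sample below is entirely made up for illustration (ten hypothetical people with sex, weight, and SAT score as predictors of height):

```python
import numpy as np

# Hypothetical sample: predict height (inches) from sex (1 = male),
# weight (lbs), and SAT score. All numbers are invented for illustration.
male   = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 1], dtype=float)
weight = np.array([140, 130, 180, 210, 240, 120, 190, 250, 150, 205], dtype=float)
sat    = np.array([1200, 1000, 1100, 1300, 1200, 1400, 900, 1250, 1150, 1050], dtype=float)
height = np.array([66, 64, 69, 71, 73, 62, 68, 74, 65, 70], dtype=float)

# Design matrix with an intercept column:
# height ≈ B0 + B1*male + B2*weight + B3*sat
X = np.column_stack([np.ones_like(height), male, weight, sat])
coeffs, *_ = np.linalg.lstsq(X, height, rcond=None)
B0, B1, B2, B3 = coeffs

# Conditional expectation for a 240-pound male with a 1200 SAT score:
prediction = B0 + B1 * 1 + B2 * 240 + B3 * 1200
print(coeffs.round(4), round(prediction, 1))
```

In this toy sample the weight coefficient comes out positive (heavier people are taller, on average), while the SAT coefficient stays near zero, matching the intuition that SAT score should not move our height prediction.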
In part three, I will discuss an example of student achievement data from a nation-wide sample and how conditional expectations can be used to inform decisions in many fields.
A year ago, the LA Times published a ranking of over 5,000 third- through fifth-grade teachers based on their value-added scores. If you are not familiar with value-added ratings, they are a statistical technique that measures how much a teacher’s students improve on a standardized test over the year the teacher works with them. These ratings are very controversial, especially as a means of evaluating teachers. To a parent reading these rankings in the LA Times, they can seem like a be-all-and-end-all evaluation of their child’s teacher, prompting calls to change classes and fire teachers. In May, the LA Times released value-added ratings for over eleven thousand more teachers without addressing the intricacies of the data.
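The mechanics behind a value-added score can be sketched in a few lines. This is a deliberately simplified version of the idea, not the LA Times’ actual model, and the teacher names and test scores are invented: predict each student’s end-of-year score from their prior-year score, then average each teacher’s students’ residuals.

```python
import numpy as np

# Made-up data: (teacher, prior-year score, end-of-year score) per student.
records = [
    ("Smith", 50, 58), ("Smith", 60, 66), ("Smith", 70, 78),
    ("Jones", 55, 57), ("Jones", 65, 64), ("Jones", 75, 74),
]
prior = np.array([r[1] for r in records], dtype=float)
post  = np.array([r[2] for r in records], dtype=float)

# Expected end-of-year score given the prior score (one-variable regression).
X = np.column_stack([np.ones_like(prior), prior])
b, *_ = np.linalg.lstsq(X, post, rcond=None)
expected = X @ b

# A teacher's "value added" is the average of their students' residuals:
# how much better (or worse) the students did than predicted.
residual = post - expected
teachers = sorted({r[0] for r in records})
value_added = {t: residual[[i for i, r in enumerate(records) if r[0] == t]].mean()
               for t in teachers}
print(value_added)
```

Note what the number is: an average deviation from a statistical prediction, computed from one year of one test. Everything the test does not capture, and everything besides the teacher that moved the scores, ends up folded into that single figure.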
Last winter, I analyzed value-added ratings, specifically as a way of evaluating teachers. Here is a link to my paper (see pp. 3-6 for a literature review on the subject, pp. 6-8 for a look at the economic model, and pp. 9-17 for a critique of the method).
Here is a summary of that paper:
Economists and educational researchers studying student achievement return consistent results about teachers: they matter. While researchers agree that educational achievement depends on a quality teacher, disagreement among both economists and educational researchers occurs when considering what constitutes teacher quality. Clearly, one of a teacher’s specific duties is to improve student performance, and value-added statistical methods do measure student progress, but the issue arises when that progress, or lack of progress, is attributed completely to the teacher.
This paper concludes that if school administrators only evaluate teachers on student progress, then they may not be measuring the teacher’s total value to students. The review shows that many factors, both observed and unobserved, can affect student achievement and that teachers have the ability to impact their students in ways that standardized tests cannot measure.
In a 2008 New Yorker essay, Malcolm Gladwell compared hiring a new teacher to drafting an NFL quarterback: in both professions, observable characteristics do not translate to production, whether on the field or in the classroom. In the NFL, coaches can examine statistics that their quarterback directly controls and that are easily observed (completions, yards, and touchdowns), but observations from college games do not translate to NFL success. Meanwhile, school administrators evaluate teachers only on observable characteristics such as experience, highest degree, undergraduate university attended, and, in a recent number of cases, value-added ratings.
Just as a quarterback’s college statistics do not directly indicate NFL performance, easily measured or observable characteristics of teachers are not always correlated with student success. Value-added scores vary significantly from year to year and only measure student improvement on standardized tests, not necessarily learning. If, over a ten- or fifteen-year period, a teacher’s value-added score was consistently higher or lower than average, then the scores can tell us something about the teacher. But looking at one score from one year and publishing that as an all-encompassing value of a teacher is unfair.
Many factors, both observed and unobserved, can affect student achievement; teachers have the ability and responsibility to impact their students in ways that standardized tests cannot measure. If school administrators only evaluate teachers on student progress on a standardized test, then they are not necessarily measuring the teacher’s total value added to their students.
This weekend, the New York Times included a section called “Education Life,” which featured some discussion of the purpose, role, and value of grad school. One of their “Most Emailed” articles is titled “Master’s as the New Bachelor’s” by Laura Pappano. I wanted to share a few comments given some research I had done on the subject and I also thought this was appropriate given the article I shared at the right in the “Article I’m Reading” section of my blog: Louis Menand’s New Yorker Piece Debating the Value of College.
These articles consist almost entirely of anecdotal evidence. I am unaware of an empirical analysis, but last year I presented a paper by Dominic Brewer et al. titled “Does It Pay to Attend an Elite Private College?” The paper considered the different economic returns of different types of colleges (private, public, small, and big). The punch line: yes, an elite private college has significantly more economic return than other colleges. But the paper considered data from the seventies and eighties. This study needs to be reproduced given the vastly different college environment almost thirty years after the data was collected (the paper is from 1998).
Specifically, with the explosion of tuition, is an elite private undergraduate education worth the money? Also, with such a high proportion of high school graduates going on to college, is the elite undergraduate degree worth the same, or more, depending on graduate school experiences? Also, thirty years ago students chose their college, while over the past few decades colleges have had much more control over who is admitted. Finally, since the recession and (hopefully?) post-recession, are labor market returns any different given workers’ educational backgrounds?
So, is an undergraduate degree “worth” the skyrocketing cost? I would agree with the paper I cited earlier by Brewer et al. that only elite private colleges garner a return worth the significant tuition differences compared to large public universities. That being said, according to Ms. Pappano’s NYT article and Menand’s “First Theory,” maybe undergraduate degrees will not provide the signal that we ascribe to them. Is acquiring a Master’s or other graduate degree just the new sorting mechanism for potential employers?