A Person Among Machines

34justice’s first guest author is David Fischer, a student at Harvard Medical School and a Howard Hughes Medical Institute medical research fellow.  In this piece, David discusses how physicians navigate “the gray zone between life and death” when they interact with patients on life support.  David studies the effects of noninvasive brain stimulation on movement and cognition and has authored several articles pertaining to neuroscience research, philosophy, and medicine. He has a B.S. from Haverford College, where he studied psychology and philosophy.

The attending physician sat at the foot of the patient’s bed, while I stood watching. He was smiling, but the look in his eyes conveyed far more kindness than his mouth or words could. He reached for the patient’s hand, which was contorted into a strained position, and took it in his. “You’re a lovely gentleman,” the doctor said, his voice quiet but firm. “It’s my pleasure to meet you.” The patient turned his eyes to meet the doctor’s gaze, his neck twisted and cocked at a sharp angle. The patient said nothing, and could say nothing, but kept his eyes fixed on the doctor’s. Several moments passed in silence, punctuated only by the mechanical sighs of the patient’s ventilator and the rhythmic beeping of nearby monitors. The doctor gave the patient’s hand a final squeeze, smiled, and led me from the room.

There was something remarkable about this encounter. It was, in some sense, a mundane scenario: a physician evaluating a patient with spastic paralysis, altered level of consciousness and dependence upon a ventilator. So what made the doctor’s attitude towards the patient so striking?

Treating patients with diminished consciousness and dependence upon life-sustaining technology poses unique challenges to the cultivation of humanity in patient care. In many areas of medicine, the distinction between life and death is roughly dichotomous. When alive, patients can often interact, remember past experiences, and demonstrate their personality. Following a fatal event, the transition between life and death, from a person to a body, often occurs quickly, save for relatively brief alterations in mentation. However, the technology that has permitted modern life-sustaining treatment, such as mechanical ventilation, has complicated this distinction. Following a severe neurologic insult, patients such as the one we encountered can remain in this transition for prolonged periods of time. Patients with disorders of consciousness or severe dementia may appear to lack the memories and personality that made them who they were in life. Yet, by mechanically preserving basic physiologic functions, we can ward off death. In this way, these technologies, though undoubtedly important, can suspend patients in a gray zone between life and death.

For physicians who care for patients in this gray zone, the encounters can be uncomfortable. The ability to interact with people, a skillset developed through years of human experience, is difficult to apply in these circumstances. The moments alone with such patients can be haunting, as one greets the patient by name and then awaits a response. As the silence lengthens, the patient may seem neither alive nor dead, like a ghost of his or her former self. When the expectant silence is broken only by the mechanical sounds of equipment, technology can feel like the only presence.

Doctors who regularly encounter experiences such as these may come to treat these patients like bodies, like sets of physiologic processes as inanimate as the technology the patients rely upon. This is not to say that such patients are not treated with respect, but that the respect is similar to that paid to a body in a funeral home. This is an approach that protects the doctor’s psyche in several ways. For one, the doctor must often purposefully inflict pain on patients in order to gauge the extent of neurologic impairment. To summon the strength to deliberately injure a fellow, vulnerable person requires a forceful violation of empathy in what can be an emotionally harrowing task.  To do so to a human body – to transform the ‘experience of pain’ into a ‘noxious stimulus’ – is much more manageable. Moreover, the prognosis in disorders of consciousness can often be poor, and the range of therapeutic options is often limited, rendering the physician largely powerless. With patients viewed as bodies, however, the physician is afforded emotional distance from these tragedies, and the instances of clinical improvement are all the more gratifying. Ultimately, for many physicians, eliminating humanism from these interactions is emotionally protective in the care of these patients.

This context is what made the encounter between the doctor and his patient so powerful. It was not merely that the doctor sat at the patient’s side, was polite, or maintained eye contact. We have all learned to do these things. What was striking was the attitude that appeared to underlie these behaviors: despite the patient’s altered level of consciousness and dependence upon life-sustaining technology, the doctor treated the patient like a full person. The doctor, with no expectation of reciprocation or gratitude, was willing to take the time to speak to and hold hands with a person who may not have understood these gestures. The doctor’s time, however, was the least of his sacrifices; by approaching the patient as a person, he rendered himself vulnerable to the emotional hazards of care, from the discomfort of inflicting pain to the powerlessness associated with management.

In addition to emotional fortitude, the doctor's willingness to treat the patient as a person reflected a poignant wisdom. Much of the discomfort associated with treating patients in this state stems from confronting the gray zone between life and death. Our binary concepts of life and death provide us comfort, distancing us from the thought of mortality. However, life-sustaining technologies challenge this dichotomy, and threaten the view that the line between ourselves and death is a sharp one. In such cases, it can be easier to circumvent these existential discomforts by treating these patients as bodies, dedicating more attention to the monitors and ventilation settings than to the person before us. This doctor, however, was able and willing to appreciate the spectrum between life and death, and in doing so could comfortably recognize, within that spectrum, an ill person in need of compassion. He could recognize someone who was more than the mechanics upon which he relied. This wisdom ultimately empowered him to accept the emotional sacrifices of care and, as was clear to me in that room, allowed him to see a person when few others dared to see more than machines.

Filed under Health Care and Medicine, Philosophy

Cooks, Chefs, and Teachers: A Long-Form Debate on Evaluation (Part 3a)

StudentsFirst Vice President Eric Lerum and I have been debating teacher evaluation approaches since my blog post about why evaluating teachers based on student test scores is misguided and counterproductive.  Our conversation began to touch on the relationship between anti-poverty activism and education reform conversations, a topic we plan to continue discussing.  First, however, we wanted to focus back on our evaluation debate.  Eric originally compared teachers to cooks, and while I noted that cooks have considerably more control over the outcomes of their work than do teachers, we fleshed that analogy out and continue discussing its applicability to teaching below.

Click here to read Part 1 of the conversation.

Click here to read Part 2 of the conversation.

Lerum: I love the analogy you use for this simple reason – I don’t think we’re as interested in figuring out whether the cook is an “excellent recipe-follower” as we are about whether the cook makes food that tastes delicious. And since we’re talking about the evaluation systems themselves – and not the consequences attached (which by and large, most jurisdictions are not using) – then this really matters. The evaluation instrument may reveal that the cook is not an “excellent recipe follower,” which you gloss over. But that’s an important point. It could certainly identify those cooks that need to work on their recipe-following skills. That’s helpful in creating better cooks.

But taking your hypothetical that it identifies someone who can follow a recipe well and executes our strategies, but then the outcome is still bad – that is also important information. It could cause us to re-evaluate the recipe, the meal choice, certain techniques, even the assessment instrument itself (do the people tasting the food know what good food tastes like?). But all of those would be useful and significant pieces of information that we would not get if we weren’t starting with an evaluation framework that includes outcomes measures.

You clearly make the assumption that nobody would question the evaluation instrument or anything else – if we had this result for multiple cooks, we would just keep going with it and assume it’s the cooks and nothing else. But that’s an unreasonable assumption that I think is founded on a lack of trust and respect for the intentions underlying the evaluation. What we’re focused on is identifying, improving, rewarding, and making decisions based on performance. And we want accurate measures for doing so – nobody is interested in models that do not work. That’s why you constantly see the earliest adopters of such models making improvements as they go.

Also, to clarify, we do not advocate for the “use of standardized test scores as a defined percentage of teacher evaluations.” I assume you probably didn’t mean that literally, but I think it’s important for readers to understand the difference as it’s a common and oft-repeated misconception among critics of reform. We advocate for use of measures of student growth – big difference from just using the scores alone. It doesn’t make any sense to evaluate teachers based on the test scores themselves – there needs to be some measure (such as VAM) of how much students learn over time (their growth), but that is not a single snapshot based on any one test.

I appreciate your recommendation regarding the use of even growth data based on assessments, but again, your recommendation is based on your opinion and I respectfully disagree, as do many researchers and respected analysts (also see here and here – getting at some of the issues you raise as concerns, but proposing different solutions). To go back to your analogy, nobody is interested in going to a restaurant run by really good recipe-followers. They want to go where the food tastes good. Period. Likewise, no parent wants to send her child to a classroom taught by a teacher who creates and executes the best lesson-planning. They want to send their child to a classroom in which she will learn. Outcomes are always part of the equation. Figuring out the best way to measure them may always have some inherent issues with subjectivity or variability, but I believe removing outcomes from the overall evaluation itself betrays to some degree the initial purpose.

Spielberg: I think there’s some confusion here about what I’m advocating for and critiquing.  I’d like to reiterate what I have consistently argued in this exchange – that student outcomes should be a part of the teacher evaluation process in two ways:

1) We should evaluate how well teachers gather data on student achievement, analyze the data, and use the data to reflect on and improve their future instruction.

2) We should examine the correlation between the effective execution of teacher practices and student outcome results.  We should then use the results of this examination to revise our instructional practices as needed.

I have never critiqued the fact that you care about student outcomes and believe they should factor heavily into our thinking – on this point we agree (I’ve never met anyone who works in education who doesn’t).  We also agree that it is better to measure student growth on standardized test scores, as value added modeling (VAM) attempts to do, than to look at absolute scores on standardized tests (I apologize if my earlier wording about StudentsFirst’s position was unclear – I haven’t heard anyone speak in favor of the use of absolute scores in quite some time and assumed everyone reading this exchange would know what I meant).  Furthermore, the “useful and significant pieces of information” you talk about above are all captured in the evaluation framework I recommend.

My issue has always been with the specific way you want to factor student outcomes into evaluation systems.  StudentsFirst supports making teachers’ VAM results a defined percentage of a teacher’s “score” during the evaluation process, do you not?  You highlight places, like DC and Tennessee, that use VAM results in this fashion.  Whether or not this practice is likely to achieve its desired effect is not really a matter of opinion; it’s a matter of mathematical theory and empirical research.  I’ve laid out why StudentsFirst’s approach is inconsistent with the theory and research in earlier parts of our conversation and none of the work you link above refutes that argument.  As you mention, both Matt Di Carlo and Douglas Harris, the authors of the four pieces you linked, identify issues with the typical uses of VAM similar to the ones I discuss.  Their main defense of VAM is only to suggest that other methods of evaluation are similarly problematic; Harris discusses a “lack of reliability in essentially all measures” and Di Carlo notes that “alternative measures are also noisy.”  There is, however, more recent evidence from MET that multiple, full-period classroom observations by multiple evaluators are significantly more reliable than VAM results.  While Di Carlo and Harris do have slightly different opinions than me about the role of value added, Di Carlo’s writing and Harris’s suggestion for evaluation on the whole seem far closer to what I’m advocating than to StudentsFirst’s recommendations, and I’d be very interested to hear their thoughts on this conversation.

That said, I like your focus above on what parents want, and I think it’s a worthwhile exercise to look at the purposes of evaluation systems and how our respective proposals meet the desires and needs of different stakeholders.  I believe evaluation systems have three primary purposes: providing information, facilitating support, and creating incentives.

1) Providing Information – You wrote the following:

…nobody is interested in going to a restaurant run by really good recipe-followers. They want to go where the food tastes good. Period. Likewise, no parent wants to send her child to a classroom taught by a teacher who creates and executes the best lesson-planning. They want to send their child to a classroom in which she will learn.

The first thing I’d note is that this juxtaposition doesn’t make very much sense; students taught by teachers who create and execute the best lesson-planning will most likely learn quite a bit (assuming that the teachers who are great lesson planners are at least decent at other aspects of good teaching). In addition, restaurants run by really good recipe-followers, if the recipes are good, will probably produce good-tasting food.  Good outputs are expected when inputs are well-chosen and executed effectively.

The cooking analogy is a bit problematic here because, in the example you give, the taste of the food is both the ultimately desired outcome and the metric by which you propose to assess the cook’s output.  In the educational setting, the metric – VAM, in the case of our debate – is not the same as the desired output.  In fact, VAM results are a relatively weak proxy for only a subset of the outcomes we care about for kids (those related to academic growth).  To construct a more appropriate analogy for judging a teacher on VAM results, let’s consider a chef who works in a restaurant where we want to eat dinner.  We are interested, ultimately, in the overall dining experience we will have at the restaurant. A measurement tool parallel to VAM, one that gives us a potentially useful but very limited picture of only one aspect of the experience other diners had, could be other diners’ assessments of the smell of the chef’s previous meals.

This analogy is more appropriate because the degree to which different diners value different aspects of a dining experience is highly variable.  All diners likely care to some extent about a combination of the food selection, the sustainability of their meal, the food’s taste, the atmosphere, the service, and the price.  Some, however, might value a beautiful, romantic environment over the taste of their entrees, while others may care about service above all else.  Likewise, some parents may care most about a classroom that fosters kindness, some may prioritize the development of critical thinking skills, and others may hold content knowledge in the highest esteem.

Were I to eat at a restaurant, I’d certainly get some information from knowing other diners’ assessments of previous meals’ smells.  Smell and taste are definitely correlated and I tend to value taste above other considerations when I’m considering a restaurant.  Yet it’s possible that other diners like different kinds of food than me, or that their senses of smell were affected by the weather or allergies when they dined there.  Some food, even though it smells bad, tastes quite good (and vice versa).  If I didn’t look deeper and really analyze what caused the smell ratings, I could very easily choose a sub-optimal restaurant.

What I’d really want to know would be answers to the following questions: what kind of food does the chef plan to make?  Does he source it sustainably?  Is it prepared to order?  Is the wait-staff attentive?  What’s the decor like?  The lighting?  Does the chef accommodate special requests?  How does the chef solicit feedback from his guests, and does he, when necessary, modify his practices in response to the feedback?  If diners could get information on the execution in each of these areas, they would be much better positioned to figure out whether they would enjoy the dining experience than if they focused on other diners’ smell ratings.  A chef who did all of these things well and who used Bayesian analysis to add, drop, and refine menu items and restaurant practices over time would almost certainly maximize the likelihood that future guests would leave satisfied.  A chef with great smell ratings might maximize that probability, but he also might not.

The exact same reasoning applies to the classroom experience.  Good VAM results might indicate a classroom that would provide a learning experience appropriate for a given student, but they might not.  Though I will again note that you don’t advocate for judging teachers solely on VAM, VAM scores tend to be what people focus on when they’re a defined percentage of evaluations.  That focus, again, does not provide very good information.  Whether parents value character development, inspiration, skill building, content mastery, or any other aspect of their children’s educational experience, they would get the best information by concentrating on teacher actions. If a parent knows a teacher’s skill – at establishing a positive classroom environment, at lesson planning, at lesson delivery, at using formative assessment to monitor student progress and adapt instruction, at helping students outside of class, etc. – that parent will be much more informed about the likelihood that a child will learn in a teacher’s class than if that parent focuses attention on the teacher’s VAM results.

2) Facilitating support – A chef with bad smell ratings might not be a very good chef.  But if that’s the case, any system that addressed the questions above – that assessed the chef’s skill at choosing recipes, sourcing great ingredients, making food to order, training his wait-staff, decorating his restaurant, responding to guest feedback, etc. – should also give him poor marks.  Bad results that truly signify bad performance, as opposed to reflecting bad luck or circumstances outside of the chef’s control, are the result of a bad input.  The key idea here is that, if we judge chefs on input execution but monitor outputs to make sure the inputs are comprehensive and accurate, judging chefs on their smell ratings won’t give us any additional information about which chefs need support.

More importantly, making smell ratings a defined percentage of a chef’s evaluation would not help a struggling chef improve his performance.  No matter the other components of his evaluation, he is likely to concentrate primarily on the smell ratings, feel like a failure, and have difficulty focusing on areas in which he can improve.  If we instead show the chef that, despite training the waitstaff well, he is having trouble selecting the best ingredients, we give him an actionable item to consider.  “Try these approaches to selecting new ingredients” is much easier to follow and much less demoralizing a directive than “raise your smell ratings.”

I think the parallel here is pretty clear – if we define and measure appropriate teaching inputs and use outcomes in Bayesian analysis to constantly revise those inputs, making VAM a defined percentage of an evaluation provides no new information about which teachers need support.  Especially because VAM formulas are complex statistical models that aren’t easily understood, the defined-percentage approach also focuses the evaluation away from actionable improvement items and towards the assignment of credit and blame.

3) Creating Incentives – Finally, a third goal of evaluation systems is related to workforce incentives.  First, we often wish to reward and retain high-performers and, in the instances in which support fails, exit consistently low-performers.  For retention and dismissal to improve overall workforce quality, we must base these decisions on accurate performance measures.

I don’t think the incomplete information provided by VAM results and smell ratings needs rehashing here; the argument is the same as above.  We are going to retain a higher percentage of chefs and teachers who are actually excellent if our evaluation systems focus on what they control than if our incentives focus on outputs over which they have limited impact.

Of particular concern to me, however, are the incentives teachers have for working with the highest-need populations.  Even efforts that take great pains to “level the playing field” between teachers with different student populations result in significantly better VAM results for teachers and schools that work with more privileged students.  Research strongly suggests that teachers who work in low-income communities could substantially improve their VAM scores by moving to classrooms with more affluent populations (and keeping their teaching quality constant).  When we make VAM results a defined percentage of an evaluation, we provide incentives for teachers who work with the highest-need populations to leave.  The type of evaluation I’m proposing, if we execute it properly, would eliminate this perverse incentive.

Again, I want to reiterate that I support constantly monitoring student outcomes; we should evaluate teachers on their ability to modify instruction in response to student outcomes, and we should also use outcomes to continuously refine our list of great teaching inputs.  But we rely on evaluation systems to provide accurate and comprehensive information, to help struggling employees improve, and to provide appropriate incentives.  VAM can help us think about good teaching practices, but StudentsFirst’s proposed use of VAM does not help us accomplish the goals of teacher evaluation.

Part 3b – in which we return to our discussion about the relationship between anti-poverty work and education reform – will follow soon!

Update (8/21/14) – Matt Barnum alerted me to the fact that the article I linked above about efforts to “level the playing field” when looking at VAM results actually does provide evidence that “two-step VAM” can eliminate the bias against low-income schools.  That’s exciting because, assuming the results are replicable and accurate, this particular VAM method would eliminate one of the incentive concerns I discussed.  However, while Educators 4 Excellence (Barnum’s organization) advocates for the use of this method, I don’t believe states currently use it (if you know of a state that does, please feel free to let me know).  The significant other issues with VAM would also still exist even with the use of the two-step version.

Filed under Education

Eric Lerum and I Debate Teacher Evaluation and the Role of Anti-Poverty Work (Part 2)

StudentsFirst Vice President Eric Lerum and I recently began debating the use of standardized test scores in high stakes decision-making.  I argued in a recent blog post that we should instead evaluate teachers on what they directly control – their actions.  Our conversation, which began to touch on additional interesting topics, is continued below.

Click here to read Part 1 of the conversation.

Lerum: To finish the outcomes discussion – measuring teachers by the actions they take is itself measuring an input. What do we learn from evaluating how hard a teacher tries? And is that enough to evaluate teacher performance? Shouldn’t performance be at least somewhat related to the results the teacher gets, independent of how hard she tries? If I put in lots of hours learning how to cook, assembling the perfect recipes, buying the best ingredients, and then even more hours in the kitchen – but the meal I prepare doesn’t taste good and nobody likes it, am I a good cook?

Regarding your use of probability theory and VAM – the problem I have with your analysis there is that VAM is not used to raise student achievement. So using it – even improperly – should not have a direct effect on student achievement. What VAM is used for is determining a teacher’s impact on student achievement, and thereby identifying which teachers are more likely to raise student achievement based on their past ability to do so. So even if you want to apply probability theory and even if you’re right, at best what you’re saying is that we’re unlikely to be able to use it to identify those teachers accurately on an ongoing basis. The larger point that is made repeatedly is that because outside factors play a larger overall role in impacting student achievement, we should not focus on teacher effectiveness and instead solve for these other factors. This is a key disconnect in the education reform debate. Reformers believe that focusing on things like teacher quality and focusing on improving circumstances for children outside of school need not be mutually exclusive. Teacher quality is still very important, as Shankerblog notes. Improving teacher quality and then doing everything we can to ensure students have access to great teachers does not conflict at all with efforts to eliminate poverty. In fact, I would view them as complementary. But critics of these reforms use this argument to say that one should come before the other – that because these other things play larger roles, we should focus our efforts there. That is misguided, I think – we can do both simultaneously. And as importantly in terms of the debate, no reformer that I know suggests that we should only focus on teacher quality or choice or whatever at the expense or exclusion of something else, like poverty reduction or improving health care.

If you're interested in catching up on class size research, I highly recommend the paper published by Matt Chingos at Brookings, found here with follow-up here. To be clear about my position on class size, however: I'm not against smaller class sizes. If school leaders determine that is an effective way of improving instruction and student achievement in their school, they should utilize that approach. But it's not the best approach for every school, every class, every teacher, or every child. And thus, state policy should reflect that. Mandating class size limits or restrictions makes no sense. It ties the hands of administrators who may choose to staff their schools differently and use their resources differently. It hinders innovation for educators who may want to teach larger classes in order to configure their classrooms differently, leverage technology or team teaching, etc. Why not instead leave decisions about staffing to school leaders and their educators?

The performance framework for San Jose seems pretty straightforward. I’m curious how you measure #2 (whether teachers know the subjects) – are those through rigorous content exams or some other kind of check?

I think a solid evaluation system would include measures using indicators like these. But you would also need actual student learning/growth data to validate whether those things are working – as you say, “student outcome results should take care of themselves.” You need a measure to confirm that.

I honestly think my short response to all of this would be that there's nothing in the policies we advocate for that prevents what you're talking about. And we advocate for meaningful evaluations being used for feedback and professional development – those are critical elements of bills we try to move in states. But as a state-level policy advocacy organization, we don't advocate for specific models or types of evaluations. We believe certain elements need to be there, but we wouldn't be advocating for states to adopt the San Jose model or any other specifically – that's just not what policy advocacy is. So I think there's just general confusion about that – that simply because you don't hear us saying to build a model with the components you're looking for, that must mean we don't support it. In fact, we're focused on policy at a level higher than the district level, and design and implementation of programs isn't in our wheelhouse.

Spielberg: I believe you discuss three very important questions, each one of which deserves some attention:

1) Given that student outcomes are primarily determined by factors unrelated to teaching quality, can and should people still work on improving teacher effectiveness?

Yes!  While teaching quality accounts for, at most, a small percentage of the opportunity gap, teacher effectiveness is still very important.  Your characterization of reform critics is a common misconception; everyone I’ve ever spoken with believes we can work on addressing poverty and improving schools simultaneously.  Especially since we decided to have this conversation to talk about how to measure teacher performance, I’m not sure why you think I’d argue that “we should not focus on teacher effectiveness.”  I am critiquing the quality of some of StudentsFirst’s recommendations – they are unlikely to improve teacher effectiveness and have serious negative consequences – not the topic of reform itself.  I recommend we pursue policy solutions more likely to improve our schools.

Critics of reform do have a legitimate issue with the way education reformers discuss poverty, however.  Education research’s clearest conclusion is that poverty explains inequality significantly better than school-related factors.  Reformers often pay lip-service to the importance of poverty and then erroneously imply an equivalence between the impact of anti-poverty initiatives and education reforms.  They suggest that there’s far more class mobility in the United States than actually exists.  This suggestion harms low-income students.

As an example, consider the controversy that surrounded New York mayor Bill de Blasio several months ago.  De Blasio was a huge proponent of measures to reduce income inequality, helped reform stop-and-frisk laws that unfairly targeted minorities, had fought to institute universal pre-K, and had shown himself in nearly every other arena to fight for underprivileged populations.  While it would have been perfectly reasonable for StudentsFirst to disagree with him about the three charter co-locations (out of seventeen) that he rejected, StudentsFirst’s insinuation that de Blasio’s position was “down with good schools” was dishonest, especially since a comprehensive assessment of de Blasio’s policies would have indisputably given him high marks on helping low-income students.  At the same time, StudentsFirst aligns itself with corporate philanthropists and politicians, like the Waltons and Chris Christie, who actively exploit the poor and undermine anti-poverty efforts.  This alignment allows wealthy interests to masquerade as advocates for low-income students while they work behind the scenes to deprive poor students of basic services.  Critics argue that organizations like StudentsFirst have chosen the wrong allies and enemies.

I wholeheartedly agree that anti-poverty initiatives and smart education reforms are complementary.  I’d just like to see StudentsFirst speak honestly about the relative impact of both.  I’d also love to see you hold donors and politicians accountable for their overall impact on students in low-income communities.  Then reformers and critics of reform alike could stop accusing each other of pursuing “adult interests” and focus instead on the important work of improving our schools.

2) How can we use student outcome data to evaluate whether an input-based teacher evaluation system has identified the right teaching inputs?

This concept was the one we originally set out to discuss.  I’d love to focus on it in subsequent posts if that works for you (though I’d love to revisit the other topics in a different conversation if you’re interested).

I’m glad we agree that “a solid evaluation system would include [teacher input-based] measures…like [the ones used in San Jose Unified].”  I also completely agree with you that we need to use student outcome data “to validate whether those things are working.”  That’s exactly the use of student outcome data I recommend.  Though cooks probably have a lot more control over outcomes than teachers, we can use your cooking analogy to discuss how Bayesian analysis works.

We’d need to first estimate the probability that a given input – let’s say, following a specific recipe – is the best path to a desired outcome (a meal that tastes delicious).  This probability is called our “prior.”  Let’s then assume that the situation you describe occurs – a cook follows the recipe perfectly and the food turns out poorly.  We’d need to estimate two additional probabilities. First, we’d need to know the probability the food would have turned out badly if our original prediction was correct and the recipe was a good one.  Second, we’d need the probability that the food would have turned out poorly if our original prediction was incorrect and the recipe was actually a bad one.  Once we had those estimates, there’s a very simple formula we could use to give us an updated probability that the input – the recipe – is a good one.  Were this probability sufficiently low, we would throw out the recipe and pick a new one for the next meal.  We would, however, identify the cook as an excellent recipe-follower.
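To make the update concrete, here is a minimal sketch of the calculation described above, written in Python; the specific probabilities are hypothetical numbers chosen only for illustration, not estimates anyone in this exchange has offered.

```python
# Bayesian update: how confident should we remain in the recipe after one bad meal?
def prob_recipe_good_given_bad_meal(prior_good, p_bad_meal_if_good, p_bad_meal_if_bad):
    """Return the posterior probability that the recipe is good, given a bad meal."""
    numerator = p_bad_meal_if_good * prior_good
    denominator = numerator + p_bad_meal_if_bad * (1 - prior_good)
    return numerator / denominator

# Hypothetical estimates: we start 80% confident in the recipe, think a good recipe
# produces a bad meal 10% of the time, and a bad recipe produces one 70% of the time.
posterior = prob_recipe_good_given_bad_meal(0.80, 0.10, 0.70)
print(round(posterior, 2))  # 0.36 -- one bad meal cuts our confidence by more than half
```

If the posterior falls below whatever threshold we set, we swap out the recipe for the next meal while still crediting the cook for following it faithfully.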

This approach has several advantages over the alternative (evaluating the cook primarily on the taste of the food).  Most obviously, it accurately captures the cook’s performance.  The cook clearly did an excellent job doing what both you and he thought was a good idea – following this specific recipe – and can therefore be expected to do a good job following other recipes in the future.  If we punished him, we’d be sending the message that his actual performance matters less than having good luck, and if we fired him, we’d be depriving ourselves of a potentially great cook.  Additionally, it’s not the cook’s fault that we picked the wrong cooking strategy, so it’s unethical to punish him for doing everything we asked him to do.

Just as importantly, this approach would help us identify the strategies most likely to lead to better meals in the long run.  We might not catch the problem with the recipe if we incorrectly attribute the meal’s taste to the cook’s performance – we might end up continuously hiring and firing a bunch of great cooks before we realize that the recipe is bad.  If we instead focus on the cook’s locus of control – following the recipe – and use Bayesian analysis, we will more quickly discover the best recipes and retain more cooks with recipe-following skills.  Judging cooks on their ability to execute inputs and using outcomes to evaluate the validity of the inputs would, over time, increase the quality of our meals.

Let’s now imagine the analogous situation for teachers.  Suppose a school adopts blended learning as its instructional framework, and suppose a teacher executes the school’s blended learning model perfectly.  However, the teacher’s value added (VAM) results aren’t particularly high.  Should we punish the teacher?  The answer, quite clearly, is no; unless the teacher was bad at something we forgot to identify as an effective teaching practice, none of the explanations for the low scores have anything to do with the teacher’s performance.  Just as with cooking, we might not catch a real problem with a given teaching approach if we incorrectly attribute outcome data to a teacher’s performance – we might end up continuously hiring and firing a bunch of great teachers based on random error, a problem with an instructional framework, or a problem with VAM methodology.

The improper use of student outcome data in high-stakes decision-making has negative consequences for students precisely because of this incorrect attribution.  Making VAM a defined percentage of teacher evaluations leads to employment decisions based on inaccurate perceptions of teacher quality.  Typical VAM usage also makes it harder for us to identify successful teaching practices.  If we instead focus on teachers’ locus of control – effective execution of teacher practices – and use Bayesian analysis, we will more quickly discover the best teaching strategies and retain more teachers who can execute teaching strategies effectively.  Judging teachers on their ability to execute inputs and using outcomes to evaluate the validity of the inputs would, over time, increase the likelihood of student success.

3) As “a state-level policy advocacy organization,” what is the scope of StudentsFirst’s work?

You wrote that StudentsFirst “[doesn’t] advocate for specific models or types of evaluations” but believes “certain elements need to be there.”  One of the elements you recommend is “evaluating teachers based on evidence of student results.”  This recommendation has translated into your support for the use of standardized test scores as a defined percentage of teacher evaluations.  I was not recommending that you ask states to adopt San Jose Unified’s evaluation framework (as an aside, the component you ask about deals mostly with planning and, among other things, uses lesson plans, teacher-created materials, and assessments as evidence) or that you recommend across-the-board class size reduction (thanks for clarifying your position on that, by the way – I look forward to reading the pieces you linked).  Instead, since probability theory and research suggest it isn’t likely to improve teacher performance, I recommend that StudentsFirst discontinue its push to make standardized test scores a percentage of evaluations.  You could instead advocate for evaluation systems that clearly define good teacher practices, hold teachers accountable for implementing good practices, and use student outcomes in Bayesian analysis to evaluate the validity of the defined practices.  This approach would increase the likelihood of achieving your stated organizational goals.

Thanks again for engaging in such an in-depth conversation.  I think more superficial correspondence often misses the nuance in these issues, and I am excited that you and I are getting the opportunity to both identify common ground and discuss our concerns.

Click here to read Part 3a of the conversation, which focuses back on the evaluation debate.

Click here to read Part 3b of the conversation, which focuses on how reformers and other educators talk about poverty.

Filed under Education

What Did I Just Pay For?

One year down and the greater part of a decade to go. As a first-year medical student who finished classes a couple of months ago, I've had ample time to digest much of what happened to me over the last twelve months, and I can't help but ask: what did I just sign up to pay for?

Students aren't afforded the time to process the new information, surroundings, and lifestyle that come with being a med student—it just sort of happens to you whether you like it or not. Medical school confronts students with a unique problem from the very first day of class: too many teaching resources to learn from and not enough time to use them all. It is up to the student to determine the most efficient way to retain information and stick with it for the year. The problem is that different subjects require different types of learning—some require rote memorization, others more critical thinking and problem solving—so there isn't a magic bullet for getting by. Most students would agree that the material offered in medical school is not particularly difficult; there is just a lot of it. A policy at my school, as at many other medical schools, is to record all lectures and to ease restrictions on mandatory attendance. This decision has deep ramifications that may end up changing the face of not only medical school, but higher education in its entirety.

The motivation behind recording all lectures with the professor's corresponding notes is presumably to make life easier for the students, and in doing so, move medical education into the 21st century. The theory is that if all students can go back and listen to old lectures, surely test scores will rise, as will the scores for the all-important and ever-looming United States Medical Licensing Examination (USMLE) Step 1, a national standardized test given to all medical students following completion of their second year.

I'm not complaining. Streamlining content and making it accessible from anywhere on the planet is certainly more beneficial to students than having to attend each lecture and furiously scribble notes while simultaneously attempting to comprehend what is being dictated. I have it easier than the classes before me, and the classes after me will have it easier than I do. This is a good thing.

Not all courses involve professors standing in a lecture hall speaking to students. There are several courses in which students are taught how to interact with patients, colleagues, and peers, as well as how to use small groups and teams to discuss and work through cases. These require the students to be present because some things—like interviewing patients and teamwork—just don't translate to the digital world yet. When I watch lectures at a time and place of my choosing, I can pause, rewind, and increase the playback speed, going slowly over material I need more time with and skimming material I already know well.

Every now and then a lecturer will get called into an emergency and cannot attend class, so the lecture from last year on the same topic will be posted online. This is also good. No classes are ever really canceled or postponed due to unforeseen circumstances because there is always the previous year's lecture ready to be posted at a moment's notice. When a canceled lecture would have covered new findings in the field, an addendum with the additional slides or lecture notes is emailed out to reflect the changes.

During this year alone our class had over 20 lectures reused from last year (out of over 450), most of which came during the unusually snowy winter. I appreciate the option to learn medicine while in my pajamas and not having to go to campus each day, but what if every class simply used the previous year's recorded lectures, with addenda sent out addressing the newest research or pertinent clinical findings so that students stay current on a given topic? Since the vast majority of students don't attend lectures anyway, this would only affect two groups: the professors themselves and the students who do attend lectures in person. I am usually hesitant to call for automation at the expense of other people's labor, salaries, and livelihoods, but if the money spent on lecturers' salaries could be redirected to other important learning tools, then I believe it is an interesting proposition. The average medical school tuition is over $40,000 per year with an average class size of 135 students, meaning about 8 full-time professors/faculty making $85,000 a year would need to be laid off in order to reduce tuition just $5,000/year per student. Keep in mind that the cost of medical school is far greater than just tuition; it more accurately comes to $60,000 and upwards each year (with many students coming out owing well over $200,000), and that does not even include interest. All of this is to say that saving $5,000 or so on tuition each year is really only a drop in the bucket from a student's perspective, and money should be spent on technology and facilities that find innovative ways to improve learning. Additionally, most of the professors do not teach full time but perform research on campus and use teaching as supplemental income (or it's part of their contract), or hold other positions on the medical school staff, such as advisor or committee member. I'm sure many of the professors would prefer to spend more time in their laboratories and less time in front of students, but would they really wish to do so if it meant a decreased salary?
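To make the back-of-the-envelope arithmetic above explicit, here is a minimal sketch of the calculation; the figures are the rough averages quoted in the paragraph, not data from any particular school.

```python
# Rough tuition arithmetic: how many salaried lecturer positions would have to be cut
# to fund a $5,000 per-student tuition reduction for one class year?
class_size = 135                      # students per class year (rough average)
tuition_cut_per_student = 5_000       # desired reduction, dollars per year
faculty_salary = 85_000               # assumed full-time salary, dollars per year

savings_needed = class_size * tuition_cut_per_student   # 675,000 dollars per year
positions_cut = savings_needed / faculty_salary         # about 7.9 positions
print(savings_needed, round(positions_cut, 1))
```

Even under these generous assumptions, the savings amount to a small fraction of a student's total annual cost of attendance.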

However, the real question is: if the vast majority of lectures are posted online, how far away is medical school from becoming an online degree? Facilities such as the simulation laboratory (a robot patient that interacts with student doctors and responds to treatments given), and micro and gross anatomy laboratories have difficulty translating into the virtual world, but with new technology we are not far from having a fully interactive human body that looks and responds to our scalpels in the same way that our actual cadavers do. As technology streamlines education, how will this affect students’ abilities to learn the required material? Most schools have the same core curriculum that covers standard topics that are required for the USMLE. Doesn’t it make sense to have a centralized database in which there are only a handful of professors lecturing on topics to every med student in the U.S.? This somewhat exists already for students studying for the USMLE exams. The vast majority of students use only a handful of resources to prepare for the test. Couldn’t this be adopted for actual school material throughout the year rather than only for USMLE prep?

Curriculum for U.S. med schools is not completely uniform, however, as a school in a rural area will be more likely to have classes that are geared towards illnesses afflicting the surrounding population than a school in an urban environment. This variation can also be accounted for in recorded lectures and shouldn’t deter the schools from adopting more online-only content.

The reasons for having a physical medical school campus are to put in face time with peers, create a sense of community, and attend the occasional classes in which groups of students are required to debate and discuss case studies. Extracurricular activities and student groups also need places to meet. Students should meet with their advisors and professors for office hours, although I will admit that the increasing ease and frequency of video conferencing programs such as Skype makes this less pressing. Students need to be face to face with their "mock patients" when conducting interviews and physical exams, but even the traditional doctor-patient relationship is becoming a thing of the past. At this point, learning the hands-on aspects of becoming a physician cannot be replaced by an internet connection. In the same vein, gross anatomy needs to be attended in person because getting close to the cadavers is an important experience that means more than just learning to cut flesh and identify organs. It is important to strip away much of the excessive or redundant information coming at the student, yet keep intact the humanistic and emotional aspects of learning to become a more complete physician.

The physical med school will require adequate study space, but a library with books is certainly not as necessary as it once was. As a matter of fact, I recently received an email from my school notifying all students that librarian hours will be cut to 20 hours per week due to the lack of student demand. Of course the library will remain open 24/7, but faculty and staff will no longer be available for as many hours. With almost all textbooks having digital formats, less and less space will be needed on bookshelves, but students should have the opportunity to order physical books through their library, or a central library in a city or region. I began college in 2004, and all textbooks in biology were over 500 pages, weighed 10 lbs., and cost hundreds of dollars, with a new edition arriving every other year, making the books' resale value almost nil. My younger brother recently graduated from college studying biology, and all of his textbooks were digital, much cheaper, contained animations of biological pathways and reactions, and had the added benefit of downloadable updates so that the book always has the newest material. This is how the new generation of doctors will be studying. I still like the feel of paper between my fingers, but there's no reason to prefer it beyond familiarity and nostalgia. Digital formats are superior in every aspect except maybe they're a little harsher on the eyes (but that could also be because I didn't grow up staring at monitors).

The med school of the future still needs conference rooms and an auditorium for notable lecturers and guest speakers, so that more ears can be reached than by speaking to a mostly empty room with a digital camera pointed at the podium. There is something to be said for being in the presence of a great speaker advocating passionately for novel ideas, and the sound of applause that gives energy to a room can really make those ideas hit home.

Ultimately, if students are doing 80% of their learning in front of a computer screen, at what point do administrators have to worry that students will start to ask, "Am I getting my money's worth?"

If more schools develop online-only learning tools, how will teachers and professors be viewed by society? Will they be marginalized in their own classrooms, relegated to answering only the sparse questions of students who can't find their answers on Google? Will this shift free up more time for professors at higher institutions to pursue their own research or projects, regardless of the field? These are the questions that medical schools will begin to face as more universities begin to shift their content into online databases that can be accessed by enrolled students as well as the public.

As tuition skyrockets and students are saddled with hundreds of thousands of dollars of debt, many feel as though they need to make up for time not spent earning a paycheck in the workforce by becoming highly specialized physicians. Highly specialized physicians are great when there is a pressing need for them, but the Association of American Medical Colleges (AAMC) reports that there will be a shortfall of 45,000 primary care physicians by 2020, so more needs to be done to incentivize students to pursue broader (and often lower-paying) fields. There is also a projected shortfall of specialty physicians, but if primary care is emphasized in America, reliance on specialty physicians will wane as diseases and other illnesses are caught and treated earlier rather than progressing to more difficult-to-treat stages, which end up increasing health insurance premiums across the board.

Another effort to lower the cost of medical school is being explored by New York University: a three-year medical degree. Although this is a new frontier for U.S. schools, where is the incentive for a private university to forgo millions of dollars from its students by axing a year of payable tuition? This is another example where the profit motive and efficient, effective healthcare do not coincide. The medical school industry, much like healthcare in the U.S., needs to reduce costs while maintaining its efficiency in producing quality physicians. There is a difference between taking shortcuts and cutting corners, and right now medical schools in the U.S. aren't doing either, which hurts both medical students and the future delivery of healthcare in America. The shortsightedness of the medical education system is forcing students to rack up enormous amounts of debt, which will ultimately harm the population decades down the line, either because the debt will discourage enrollment or because students will feel compelled to pursue higher-paying specialties rather than serving in a more utilitarian role. Medical schools would be wise to implement cost-saving measures that may prove to enhance student training by embracing the latest technological advances. In many industries, bloated incumbents and less effective methods would be phased out by new and cheaper start-ups; in the highly regulated medical school field, this type of progress is impeded by old ways of thinking and layers upon layers of bureaucracy. The last thing anybody wants to think walking out of a supermarket, a car dealership, or a campus is, "What did I just pay for?"

Filed under Education, Health Care and Medicine

Paid Sick Leave and the Three Lenses of Policy Analysis

Some political debates have two equally valid sides.  More often than not, however, the evidence is significantly more one-sided than journalists and pundits suggest.  AB 1522, a bill that the California Senate’s Committee on Appropriations just shunted into its Suspense File for consideration on August 14, is an example of legislation for which there is no ethical, intellectually honest opposition.  Three related lenses of policy analysis demonstrate why AB 1522’s minimum requirement of three paid sick days for all California workers deserves our support.

The ethical lens: The debate about paid sick leave, at its core, is about values.  It is undisputed that high percentages of low-income workers, particularly women and Latinos, currently lack the access to paid sick leave enjoyed by more privileged populations.  Supporters of a guaranteed minimum number of days recognize that low-income workers must often decide between working through illness and leaving bills unpaid.  Nobody should have to make that choice.

Opposition to guaranteed paid sick days, on the other hand, elevates considerations of employer profit and flexibility above the job security and subsistence of sick low-income workers.  No matter its professed motivation, therefore, anti-paid sick day activism is immoral by most people’s standards.

The factual lens: Few opponents of AB 1522 explicitly state a disregard for the plight of the working poor.  Instead, they call the bill a “job killer,” enumerating a long list of reasons that guaranteed paid sick leave will allegedly harm working Americans.  Some of the listed reasons are obvious fabrications; for example, the idea that employers who already offer paid sick leave “will have to completely change their existing policies and accounting procedures” is directly contradicted by the law’s provision that “an employer is not required to provide additional paid sick days…if the employer…makes available an amount of leave that satisfies the accrual requirements.”

Other opposition arguments, though slightly more time-consuming to debunk, are no less untrue.  To contend that AB 1522 “will reduce jobs,” its detractors, like those who oppose paying employees a living wage, embrace an economic theory that’s inconsistent with the facts.  Even studies that rely exclusively on the unverified assertions of employers fail to suggest negative economic consequences of paid sick leave laws.  The first report opponents of AB 1522 attempt to marshal in support of their claims concludes only that it was “too early to make a definitive judgment about” the economic effects of Connecticut’s paid sick leave law in February 2013.  A more comprehensive study of the Connecticut law’s effects in March 2014 notes:

most employers reported a modest effect or no effect of the law on their costs or business operations; and they typically found that the administrative burden was minimal.  [Despite] strong business opposition to the law prior to its passage, a year and a half after its implementation, more than three-quarters of surveyed employers expressed support for the earned paid sick leave law.

The findings from the opponents’ second citation, a 2011 report from the Institute for Women’s Policy Research, similarly contradict their claims.  The study finds that “most San Francisco employers reported that implementing the [city’s Paid Sick Leave Ordinance] was not difficult and that it did not negatively affect their profitability.”  While “a relatively small share of employers and employees” reported negative effects, the study concludes that the law “is functioning as intended.”  Just about every study on the economic effects of paid sick leave legislation, in fact, refutes the myths propagated by opponents of the laws.  Research studies also clearly demonstrate “that gaps in paid sick leave result in severe impacts on public health.”  This clear consensus helps explain why “the rest of the world’s rich economies have taken a legislative approach to ensuring paid sick days.”

The political lens: Despite the clear ethical arguments, research consensus, and overwhelming public support in favor of guaranteed paid sick day laws, several states have passed bills that preempt cities’ attempts to enact such legislation.  In 2008, a more robust sick leave bill (AB 2716) died after ending up in the Suspense File of the California Senate’s Committee on Appropriations, the same place in which AB 1522 currently resides.  A coalition of corporate lobbyists, led by chambers of commerce and the American Legislative Exchange Council (ALEC), is responsible.  This coalition has, in the words of David Sirota, successfully recast their “desire to exploit workers as fight-for-the-little guy altruism” by confusing the public and politicians with a relentless stream of unfounded claims.

A simple analysis of the broader advocacy decisions and agendas of the parties to a debate can help us assess the likely veracity of each party’s claims.  For several years now, the corporate coalition that opposes AB 1522 has been systematically “reshaping the fundamental balance of power between workers and employers.”  They have misled the public about a wide variety of issues and maintain clear power and profit motives for misleading the public about sick leave.  People unfamiliar with the specifics of AB 1522 could compare the backgrounds of its opponents with its supporters (typically academics, labor organizations, and other groups that advocate for low-income people) and recognize that proponents of AB 1522 are significantly more likely to be telling the truth.  This political lens heuristic isn’t failsafe – first impressions can be wrong and even the worst organizations sometimes endorse correct policy decisions – but it always provides valuable perspective.  Funding sources and political allies are especially important indicators of truth when topics involve complex research findings and/or similar ethical arguments from each side of a debate.

On the issue of guaranteed paid sick leave, however, each of the three lenses – ethical, factual, and political – is extremely straightforward; if anything, the three days required by AB 1522 are too few.  California lawmakers should rectify their predecessors’ mistakes and move the bill forward on August 14.

Note: Versions of this post originally appeared on The Left Hook and The Huffington Post.

Update (8/30/14): An amended version of the bill that “would exempt in-home caregivers from the requirement” has “cleared the legislature.”  The SEIU and other unions pulled their support because of the unnecessary and unethical exemption, and I believe they were correct to do so.

Filed under Business, Labor

StudentsFirst Vice President Eric Lerum and I Debate Accountability Measures (Part 1)

After my blog post on the problem with outcome-oriented teacher evaluations and school accountability measures, StudentsFirst Vice President Eric Lerum and I exchanged a few tweets about student outcomes and school inputs and decided to debate teacher and school accountability more thoroughly.  We had a lengthy email conversation we agreed to share, the first part of which is below.

Spielberg: In my last post, I highlighted why both probability theory and empirical research suggest we should stop using student outcome data to evaluate teachers and schools.  Using value added modeling (VAM) as a percentage of an evaluation actually reduces the likelihood of better future student outcomes because VAM results have more to do with random error and outside-of-school factors than they have to do with teaching effectiveness.

I agree with some of your arguments about evaluation; for example, evaluations should definitely use multiple measures of performance.  I also appreciate your opposition to making student test score results the sole determinant of a teacher’s evaluation.  However, you insist that measures like VAM should constitute a fairly large percentage of teacher evaluations despite several clear drawbacks; not only do they fail to reliably capture a teacher’s contribution to student performance, but they also narrow our conception of what teachers and schools should do and distract policymakers and educators from conversations about specific practices they might adopt.  Why don’t you instead focus on defining and implementing best practices effectively?  Most educators have similar ideas about what good schools and effective teaching look like, and a focus on the successful implementation of appropriately-defined inputs is the most likely path to better student outcomes in the long run.

Lerum: There’s nothing in the research or the link you cite above that supports a conclusion that using VAM “actually reduces the likelihood of better future student outcomes” – that’s simply an incorrect conclusion to come to. Numerous researchers have concluded that using VAM is reasonable and a helpful component of better teacher evaluations (also see MET). Even Shankerblog doesn’t go so far as to suggest using VAM could reduce chances of greater student success.

Some of your concerns with VAM deal with the uncertainty built within it. But that’s true for any measure. Yet VAM is one of the few measures (if not the only one) that has actually been shown to allow one to control for many of the outside factors you suggest could unfairly prejudice a teacher’s rating.

What VAM does tell us – with greater reliability than other measures – is whether a teacher is likely to get higher student achievement with a particular group of students. I would argue that’s a valuable piece of information to have if the goal is to identify which teachers are getting results and which teachers need development.

To suggest that districts & schools that are focusing on implementing new evaluation systems like those we support are not focusing on “defining and implementing best practices effectively” misses a whole lot of evidence to the contrary. What we’re seeing in DC, Tennessee, Harrison County, CO, and countless other places is that these conversations are happening, and with a renewed vigor because educators are working with more data and a stronger framework than ever before.

Back to your original post and my issues with it, however – focusing on inputs is not a new approach. It’s the one we have tried for decades. More pay for earning a master’s degree. Class size restrictions and staffing ratios. Providing funding that can only be used for certain programs. The list goes on and on.

Spielberg: I don’t think anyone thinks we should evaluate teachers on the number and type of degrees they hold, or that we should evaluate schools on how much specialized funding they allocate – I can see why you were concerned if you thought that’s what I recommended.  My proposal is to evaluate teachers on the actions they take in pursuit of student outcomes and is something I’m excited to discuss with you.

However, I think it’s important first to discuss my statement about VAM usage more thoroughly because the sound bites and conclusions drawn in and from many of the pieces you link are inconsistent with the actual research findings.  For example, if you read the entirety of the report that spawned the first article you link, you’ll notice that there’s a very low correlation between teacher value added scores in consecutive years.  I’m passionate about accurate statistical analyses – my background is in mathematical and computational sciences – and I try to read the full text of education research instead of press releases because, as I’ve written before, “our students…depend on us to [ensure] that sound data and accurate statistical analyses drive decision-making. They rely on us to…continuously ask questions, keep an open mind about potential answers, and conduct thorough statistical analyses to better understand reality.  They rely on us to distinguish statistical significance from real-world relevance.”  When we implement evaluation systems based on misunderstandings of research, we not only alienate people who do their jobs well, but we also make bad employment decisions.

My original statement, which you only quoted part of in your response, was the following: “Using value added modeling (VAM) as a percentage of an evaluation actually reduces the likelihood of better future student outcomes because VAM results have more to do with random error and outside-of-school factors than they have to do with teaching effectiveness.”  This statement is, in fact, accurate.  The following are well-established facts in support of this claim:

– As I explained in my post, probability theory is extremely clear that decision-making based on results yields lower probabilities of future positive results when compared to decision-making based on factors people completely control.

– In-school factors have never been shown to explain more than about one-third of the opportunity gap.  As mentioned in the Shanker Blog post I linked above, estimates of teacher impact on the differences in student test scores are generally in the ballpark of 10% to 15% (the American Statistical Association says it ranges from 1% to 14%).  Teachers have an appreciable impact, but teachers do not have even majority control over VAM scores.
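To make those numbers concrete, here is a quick toy simulation (not a real value-added model – the 12% teacher share and everything else below are assumptions chosen purely for illustration, not figures from any study):

```python
# Toy simulation: if a teacher's own contribution explains only ~12% of the
# variance in a measured score, how stable are VAM-style ratings year to year?
# All numbers are illustrative assumptions, not estimates from real data.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 10_000
teacher_share = 0.12                       # assumed share of score variance

true_effect = rng.normal(0, np.sqrt(teacher_share), n_teachers)
noise_sd = np.sqrt(1 - teacher_share)      # students, circumstances, luck, error

# Two consecutive years of measured scores for the same teachers
year1 = true_effect + rng.normal(0, noise_sd, n_teachers)
year2 = true_effect + rng.normal(0, noise_sd, n_teachers)

print("year-to-year correlation:", round(np.corrcoef(year1, year2)[0, 1], 2))

# Of teachers rated in the bottom fifth in year 1, how many stay there in year 2?
bottom1 = year1 <= np.quantile(year1, 0.2)
bottom2 = year2 <= np.quantile(year2, 0.2)
print("still in bottom fifth:", round((bottom1 & bottom2).sum() / bottom1.sum(), 2))
```

Under those assumptions the year-to-year correlation of the ratings lands near 0.12, and roughly three-quarters of the teachers flagged in the bottom fifth one year are no longer there the next – the same instability reflected in the low consecutive-year correlations I mentioned above.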

Research on both student and teacher incentives is consistent with what we’d expect from the bullet points above – researchers agree that systems that judge performance based on factors over which people have only limited control (in nearly any field) fail to reliably improve performance and future outcomes.

Those two bullet points, the strong research that corroborates the theory, and the existence of an alternative evaluation framework that judges teachers on factors they completely control (which I will talk more about below) would essentially prove my statement even if recent studies hadn’t also indicated that VAM scores correlate poorly with other measures of teacher effectiveness.  In addition, principal Ted Appel astutely notes that, “even when school systems use test scores as ‘only a part’ of a holistic evaluation, it infects the entire process as it becomes the piece [that] is most easily and simplistically viewed by the public and media. The result is a perverse incentive to find the easiest route to better outcome scores, often at the expense of the students most in need of great teaching input.”

I also think it’s important to mention that the research on the efficacy of class size reduction, which you seem to oppose, is at worst comparable to the research on the accuracy of VAM results.  I haven’t read many of the class size studies conducted in the last few years yet (this one is on my reading list) and thus can’t speak at this time to whether the benefits they find are legitimate, but even Eric Hanushek acknowledges that “there are likely to be situations…where small classes could be very beneficial for student achievement” in his argument that class size reduction isn’t worth the cost.  It’s intellectually inconsistent to argue simultaneously that class size reduction doesn’t help students and that making VAM a percentage of evaluations does, especially when (as the writeup you linked on Tennessee reminds us) a large number of teachers in some systems that use VAM have been getting evaluated on the test scores of students they don’t even teach.

None of that is to say that the pieces you link are devoid of value.  There’s some research that indicates VAM could be a useful tool, and I’ve actually defended VAM when people confuse VAM as a concept with the specific usage of VAM you recommend.  Though student outcome data shouldn’t be used as a percentage of evaluations, there’s a strong theoretical and research basis for using student outcomes in two other ways in an input-based evaluation process.  The new teacher evaluation system that San Jose Unified School District (SJUSD) and the San Jose Teachers Association (SJTA) have begun to implement can illustrate what I mean by an input-based evaluation system that uses student outcome data differently and that is more likely to lead to improved student outcomes in the long run.

The Teacher Quality Panel in SJUSD has defined the following five standards of teacher practice:

1) Teachers create and maintain effective environments for student learning.

2) Teachers know the subjects they teach and how to organize the subject matter for student learning.

3) Teachers design high-quality learning experiences and present them effectively.

4) Teachers continually assess student progress, analyze the results, and adapt instruction to promote student achievement.

5) Teachers continuously improve and develop as professional educators.

Note that the fourth standard gives us one of the two important uses of student outcome data – it should drive reflection during a cycle of inquiry.  These standards are based on observable teacher inputs, and there’s plenty of direct evidence evaluators can gather about whether teachers are executing these tasks effectively.  The beautiful thing about a system like this is that, if we have defined the elements of each standard correctly, the student outcome results should take care of themselves in the long run.

However, there is still the possibility that we haven’t defined the elements of each standard correctly.  As a concrete example, SJTA and SJUSD believe Explicit Direct Instruction (EDI) has value as an instructional framework, and someone who executes EDI effectively would certainly do well on standard 3.  However, the idea that successful implementation of EDI will lead to better student outcomes in the long run is a prediction, not a fact.  That’s where the second usage of student outcome data comes in – as I mentioned in my previous post, we should use student outcome results to conduct Bayesian analysis and figure out if our inputs are actually the correct ones.  Let me know if you want me to go into detail about how that process works.  Bayesian analysis is really cool (probability is my favorite branch of mathematics, if you haven’t guessed), and it will help us decide, over time, which practices to continue and which ones to reconsider.
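To give a flavor of the mechanics without going into full detail here, the simplest version is a conjugate Beta-Binomial update; the prior and the classroom counts below are entirely hypothetical, chosen only to show how the update works:

```python
# Minimal sketch of a Bayesian update on whether a chosen input is the right one.
# Everything here is hypothetical: let p be the probability that a classroom
# implementing a chosen practice faithfully (say, well-executed EDI) shows
# improved outcomes, and update our belief about p as results come in.

# Prior: before seeing data, we think p is around 0.6 but are quite uncertain.
prior_alpha, prior_beta = 6, 4                      # Beta(6, 4) has mean 0.6

# Hypothetical data: 29 of 40 faithfully implementing classrooms improved.
improved, total = 29, 40

# Conjugate update: Beta prior + binomial data -> Beta posterior
post_alpha = prior_alpha + improved                 # 35
post_beta = prior_beta + (total - improved)         # 15

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior mean for p: {posterior_mean:.2f}")   # 0.70

# If round after round of results keeps the posterior comfortably high, the
# practice stays; if the data drag it down, we reconsider that input.
```

The details can get fancier, but the core move stays the same: treat “this practice leads to better outcomes” as a hypothesis and let student results update our confidence in the practice, rather than grading individual teachers on those results.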

I certainly want to acknowledge that many components of systems like IMPACT are excellent ones; increasing the frequency and validity of classroom observations is a really important step, for instance, in executing an input-based model effectively.  We definitely need well-trained evaluators and calibration on what great execution of given best practices looks like.  When I wrote that I’d like to see StudentsFirst “focus on defining and implementing best practices effectively,” I meant that I’d like to see you make these ideas your emphasis.  Conducting evaluations on these kinds of input-based criteria would make professional development and support significantly more relevant.  It would help reverse the teach-to-the-test phenomenon and focus on real learning.  It would make feedback more actionable.  It would also help make teachers and unions feel supported and respected instead of attacked, and it would enable us to collaboratively identify both great teaching and classrooms that need support.  Most importantly, using these kinds of input-based metrics is more likely than the current approach to achieve long-run positive outcomes for our students.

Part 2 of the conversation, posted on August 11, can be found here.

Filed under Education