Schedule and Events



March 26-29, 2012 - Software Test Professionals Conference, New Orleans
July 14-15, 2012 - Test Coach Camp, San Jose, California
July 16-18, 2012 - Conference of the Association for Software Testing (CAST 2012), San Jose, California
August 2012+ - At liberty; available. Contact me by email: Matt.Heusser@gmail.com

Thursday, April 07, 2011

Quality Metrics (...and other Fine Myths)

When I look at our North American (or at least developed nation) public systems, I am, well ... sad.

Education, health care, government -- pick any area of public life and you are likely to see waste, declining performance indicators, and increasing cost.

It seems strange. For twenty-five years we have had this resounding gong of public education reform, but despite our best efforts, things keep getting worse. It's almost as if we are applying Weinberg's First Law of Management: if something isn't working, do more of it.

Yet I have hope.

You see, while I believe things are getting worse in many public sectors, I believe that is accident, not essence. I believe it is possible that those things can be managed well. To borrow a line from Dr. Deming, the systems we have set up guarantee a certain outcome given human nature, but it may be possible to change the system.

If I had to give a label to this idea of improvement, one label that might fit is quality. I am a strong believer in the idea of quality, in everything from pride in work to effective work to continuous improvement; I am even a card-carrying member of the American Society for Quality.

Now, over the years, there have been a number of ways to look at quality. You can look at it as a toolbox of techniques, or a philosophy, or, as I prefer, a sort of systems-thinking approach to problem solving. In any event, the term can mean different things to different people, and I am okay with this; the root, qual, is a sort of intangible that defies measurement. (If you could objectively measure it, it wouldn't be qual anymore; it would be quant.)

Enter Quality

Now about this idea of quality improvement as systems thinking. That suggests that instead of numerical targets, we seek to understand what is actually going on in the process, optimizing for the things we value the most, using numbers as indicators but not for control.

I may have worded that poorly, but something like that is what a great number of systems thinkers advocate, from Deming to Weinberg to, most recently, the British consultant John Seddon.

Notice the inherent conflict between systems thinking and "management (control) by the numbers." With control by the numbers, you can look at numerical targets and, if the process is within specified control limits, well, you don't have to worry about it. The numbers are a sort of report card which allows you to have an understanding of the system without being directly involved in the work. Many control-oriented management thinkers believe that separating management from the work is essential for success.
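For the software folks, here is a minimal sketch of what "within control limits" means mechanically. The defect counts are made up, and real Shewhart charts estimate sigma more carefully than this; it is just the three-sigma idea in a dozen lines of Python:

```python
# Shewhart-style control limits: a metric is "in control" when it
# stays within three standard deviations of its historical mean.
defects_per_week = [12, 9, 14, 11, 10, 13, 8, 12, 15, 11]  # hypothetical data

mean = sum(defects_per_week) / len(defects_per_week)
variance = sum((x - mean) ** 2 for x in defects_per_week) / len(defects_per_week)
sigma = variance ** 0.5

upper_limit = mean + 3 * sigma
lower_limit = max(0.0, mean - 3 * sigma)  # a defect count cannot go negative

this_week = 16  # the new data point a manager would glance at
if lower_limit <= this_week <= upper_limit:
    print("In control -- the report card says 'nothing to see here'")
else:
    print("Out of control -- go look at the actual work")
```

As long as the script prints "in control," a manager never has to look at the work itself -- which is exactly the separation Seddon describes next.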

The problem comes when the report card itself is causing the dysfunction. John Seddon put it this way in a recent lecture:

Twenty-five years ago or more I was studying people's behavior in organizations. You know, it's kind of odd that you take honest, God-fearing people off the street, put them in a building, call it an organization, and they behave in very strange ways. It wasn't until I got to the work of Deming that I saw that it is actually the system that governs their behavior ... do you remember in the 1970s people went to Japan to find out what they did? They woke up to the Japanese Miracle, so they go over and 'let's have a look', and they came up with the idea that this was quality circles and suggestion schemes, and then it became kind of TQM, and it failed, didn't it? Because they couldn't see the thing you need to see, which is a different way of thinking about the design and management of work.


Now back to my words

If we take Mr. Seddon seriously (and I do), then we will want to scrutinize applications of 'quality tools' to make sure the measurements make sense and, even if they do make sense, to ask whether they will introduce dysfunction into the organization. (The most common form of dysfunction I have seen is exploiting the difference between what is measured and the actual desired outcome; another common form is when optimizing individual parts of a process hurts the whole.)

All this brings me to Mr. Paul Borawski's recent post on the ASQ national quality blog about the Pewaukee School District. In Paul's words:

Dr. Sternke is a passionate and committed leader. She has personally invested in understanding her school district as a system and applied the tools of improvement in the conduct of the district’s mission. It’s all there: A mission, objectives, strategy, and metrics. And over time, by concentrating on the vital few, Dr. Sternke, and the devoted staff of the school district have driven performance to ever higher levels. Success is measured in the classroom, but the business of the school system to support educational excellence is managed, too.


All this sounds pretty good, right? Here you have a professional looking at education as a system and overhauling it; that's good. The plan is public; that's good too. They have a defined strategy they can track against, which is good. So let's look at those goals.

What's your goal?

I originally had several thousand words here, listing some of the goals of both the local district and the school. The district requirements are exhaustive, and include everything from revising 100% of the curriculum to comply with "power standards," to getting a certain number of Twitter followers, to training a percentage of the teachers to use computers at a certain level of competence.

Instead of going through all of those, though, I decided to save your eyes and review just one document: the high-level review of the high school's performance in 2008-2009:

Academic Goal #1: Identify essential concepts and skills in all subject areas by June, 2009.

Results: Collaborative Department Teams developed Power Standards in each subject area in preparation for the 2008-09 school year. Power Standards were articulated to students via course syllabi and many staff members posted their standards on classroom walls for display and review. Essential concepts and skills were continuously reviewed and updated in all subject areas. Essential concepts/skills are clearly articulated in unit Curriculum Maps and they are the basis for the development of daily Learning Objectives.

Academic Goal #2: Evaluating the Pewaukee School District’s non-fiction writing program.

Results: Data collected during the 2008-09 school year focused on the frequency of non-fiction writing assigned. Individual teachers collected student samples and anchor papers for each formal writing exercise. Curriculum Maps and end-of-unit assessments were revised to reflect our focus on non-fiction writing and data clearly indicates that students wrote consistently and continuously throughout the year. PHS staff members played an integral role on the District Literacy Committee where the overall writing program was evaluated.

Academic Goal #3: 50% of all Common Unit Assessments will be comprised of mid to upper level critical thinking questions/activities as measured by Bloom’s Taxonomy.

Results: Data collected at the end of the 2008-09 school year indicates that nearly two thirds of all questions/activities found on end of unit assessments require students to think critically.


Is this kind of improvement really going to change a school system?

Forming a strategy

Notice that all of these goals are process goals -- none of them are actually tied to outcome. To get on my soapbox just once, this is a classic middle-management mistake. Measuring the process gives management a sense of control, but it does very little (or less!) to indicate that the outcome will be good.

When you think about it, measuring the process will tend to put our focus on the means (the education process) not the ends. (An educated citizenry? A productive citizenry?)

What is the real goal of the school system? With a strategy written in this way we don't know.

Now, if I were looking at forming an education strategy, I'd start by asking the teachers what they need. My guess is it might involve new textbooks, guest speakers, and time to develop materials. Then I would ask the teachers, parents, and students what success means to them.

Somehow I doubt it would mean that a percentage of questions hit a higher level in Bloom's Taxonomy, that the school had created across-the-board standards, or that a certain percentage of classes had complied with some special new way to teach.

Indeed, instead of "standards", I would likely let the teachers experiment with whatever they would like, then use whatever external tools I have (state standard tests, pass/fail rates, AP test scores, student and parent qualitative evaluation, peer review) to measure outcomes.

Then I would ask the high-scoring teachers what was working for them.

You see, first I would study the system to see what was working, then ask how we could improve.

Along the way, I might identify some obstacles in the nature of the work; for example, many schools are set up like a factory, with a "batch" size of thirty students or so, instead of using lean concepts like one-piece flow. Ideally, it would be nice to customize education to meet the child, so I would be talking about balancing that one-piece flow against other economic realities. I'd be talking about tough choices, about why we had to do one thing and not another. (I can't help but notice that the term "budget" does not appear in the goals; neither do "opportunities" or "obstacles.")
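To make the batch-size point concrete, here is a toy model in Python. The stage count, the times, and the classroom analogy are my own hypothetical numbers; the only point is the arithmetic of when the first finished "unit" appears:

```python
STAGES = 3        # hypothetical stages a "unit" passes through
ITEM_TIME = 1.0   # time for one stage to process one item
BATCH = 30        # the classroom-sized batch

# Batch-and-queue: each stage finishes the entire batch before
# handing it to the next stage, so nothing is "done" until the
# final stage releases the whole batch.
batch_first_done = STAGES * BATCH * ITEM_TIME   # 90.0

# One-piece flow: items move individually and the stages pipeline,
# so the first item clears all stages right away and the rest
# follow one step behind.
flow_first_done = STAGES * ITEM_TIME                # 3.0
flow_all_done = (STAGES + BATCH - 1) * ITEM_TIME    # 32.0

print(f"batch: first finished at t={batch_first_done}, last at t={batch_first_done}")
print(f"flow:  first finished at t={flow_first_done}, last at t={flow_all_done}")
```

One-piece flow gets the first unit through at t=3 instead of t=90; in classroom terms, you find out whether the approach works after the first student, not after the whole batch. That feedback speed, not raw throughput, is why batch size matters.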

I wanted a quality plan but, I am afraid to say, we got a report card.

C'mon, Matt, do you really believe this batch size vs. one-piece flow stuff?

Well, let me tell you this: I did not realize it at the time, but when my parents moved me from a school with a thirty-to-one student/teacher ratio to a private school at ten-to-one, it was to avoid the stigma of my failing out of 2nd grade. Within a few years of private school, I was dramatically outscoring my peers.

Being a believer in continuous improvement, we decided to homeschool our children, driving the student/teacher ratio even lower. So yes, it does matter.

Except, of course, you will see no mention of that in the strategy document for the district.

I can't really blame the administrators or leadership of the Pewaukee School district. Given what they were given, I don't even think they did a terrible job.

You see, the school district lives within a greater system, the American public education system. That education system has its regulations, mandatory tests, departments, laws, and state offices; it frames the problems of education and creates rewards and incentives that make this kind of strategy very sad ... but all-too predictable.

Over the past decade we have seen a huge increase in education delivered over the internet, in charter schools, in homeschooling, and in competition. It is competition in which I have hope. Competition is the opposite of standardization; it allows a thousand flowers to bloom, and then lets us pick the best flowers by outcome, not process.

Perhaps, over time, we can pick the methods that seem to produce good outcomes. But to do that we need to experiment.

When you think about it, without the Pewaukee School district trying something, and doing it in public, we never would have had this essay. They are experimenting. In many ways, they should be applauded.

But make no mistake; you can't just make up a bunch of new standards and give them numerical targets and call it "quality."

It's got to be, like, good and stuff, ya know?

UPDATE: I shared the first draft of this essay with a few of my fellow "Influential Voices Program" bloggers. One of them, Aimee Siegler, pointed out to me that her children in Wisconsin, in a district near Pewaukee, are involved in programs that try to provide individualized attention, adjusting the program based on the child's ability level and less on age ... decreasing the batch size.

So it is happening. For some reason, though, that isn't on the radar for "continuous improvement" in the school system.

I think it should be.

4 comments:

Bruce Waltuck said...

Thank you, Matt, for an insightful and useful post. I think you have begun to see the fundamental nature of the problem, in education and in any effort to improve the results of a human-designed process. What Deming saw, what Seddon saw, and what you see now, is the underlying true nature of our organizations and processes. We can only "manage" and "control" those systems and processes which are fundamentally linear and deterministic.

So, we can achieve statistical control and stability in the complicated technical systems/processes of building a Toyota. But as you noted in your post, there are complex system dynamics that cannot be managed or controlled in this way. These require shifts in structure, beliefs, and behaviors -- much harder to influence and control.

Also, if I may, a word about your opening lines regarding your perception of "waste" in various sectors of the economy. Having worked as a change leader for many years in predominantly public organizations, I learned the lessons that Deming, Shewhart, and others tried to teach us: knowing the mind of the customer is often hard, and often variable. Moreover, what we define as "wasteful" or non-value-added activity depends on what we define as the important objectives of the system. And THAT depends on one's values and beliefs. In education, as you reference, we all generally want "better learning" for our kids. But data (thank you again, Dr. Deming) make clear that student results on standardized tests are NOT an indicator of "teacher quality."

Thank you again, and I look forward to reading more of your posts on behalf of ASQ.

Bruce Waltuck,
M.A., Complexity, Chaos, and Creativity
President, Freethinc...For A Change
Past Chair, Government Division, ASQ
@complexified on twitter

Cindy Veenstra said...

Hi Matt, this entry is to express a concern and make a recommendation about blogging about “A View from the Q” blog entries.

I appreciate your positions about quality metrics, making sure metrics make sense, the use of numerical targets, having goals related to outcomes and the importance of connecting processes to outcomes in school systems. Here is my concern. In my reading of the information on the school district’s website, I found that the Pewaukee School District has both process and outcome goals. Furthermore, to win the state’s highest Baldrige-like award for performance excellence, i.e. the 2010 Governor’s Forward Award of Excellence, is a significant district achievement and indicates to me that this district has connected process goals to outcomes and has a continuous improvement culture that is student-focused.

Briefly, as an example of the outcomes that this school district is achieving, it has reported a 98% graduation rate; this compares to an 89% state graduation rate (Wisconsin Department of Public Instruction) and a 75% national rate (National Center for Education Statistics). A high school graduation rate of 98% only occurs when there is a student-focused, systems approach to education. I wish all school districts were doing this well!

I personally would like to see all of us encourage more school districts and colleges to consider using the Baldrige Education Criteria framework in their continuous improvement plans. http://www.nist.gov/baldrige/publications/education_criteria.cfm

As I indicated, I would like to make a recommendation. I know of two other blog entries by other ASQ "influential voices" bloggers who discussed the latest "A View from the Q" entry in their separate blogs. I would like to recommend that you comment inline on Paul Borawski's blog entries. Then a much more robust conversation of the issues related to each topic would develop. It is from conversations like this that more progress will be made.

One final thought: if you have not heard of ASQ's NQEC "quality in education" conference, check it out.

Cindy Veenstra
ASQ Education Division Chair
cpveenst@umich.edu

Matthew said...

Fascinating. I'm not sure where Cindy got those metrics, but I'm going to start an email dialogue with her to talk about them.

Eric said...

Matthew, I must agree with Cindy--but perhaps I can draw some useful analogies. My ulterior motive: How would we explain Baldrige in Education to Reed Hastings (Purify, Netflix, now Dept of Ed Equity Commission)?

Consider a hypothetical teacher with two dozen asynchronous, polymorphic objects. Her goal is not merely that they fulfill their "roles," but that the whole be greater than the sum of the (two dozen) parts. These objects need to "lase."

When we test the system, though, there's lots of dead code. It turns out methods aren't called reliably (brains aren't CPU chips). We can improve reliability by increasing the Bloom's taxonomy level of our polymorphic objects--and they will be more likely to lase.

We also need to think through call graphs and apply design patterns with insight. The curricular equivalent is identifying power standards.

So Pewaukee is addressing an unreliable system by identifying the source of dead code and refactoring the system. It's an impressive effort.

Do these analogies help?