Pitfalls of Slogans and Targets: Deming in Schools Case Study (Part 16) - a podcast by The W. Edwards Deming Institute

from 2023-11-28T20:00


Slogans and exhortations don't work to motivate people. Targets usually encourage manipulation or cheating. John Dues and Andrew Stotz discuss how these three strategies can hinder improvement, frustrate teachers and students, and even cause nationwide scandals.

TRANSCRIPT

0:00:02.4 Andrew Stotz: My name is Andrew Stotz. I'll be your host as we continue our journey into the teachings of Dr. W. Edwards Deming. Today I'm continuing my discussion with John Dues, who is part of the new generation of educators striving to apply Dr. Deming's principles to unleash student joy in learning. This is episode 16, and we're continuing our discussion about the shift from management myths to principles for the transformation of school systems. And today we're gonna be talking about Principle 10, "eliminate slogans, exhortations, and targets." John, take it away.

 

0:00:37.1 John Dues: Good to be back, Andrew. Yeah, we've been talking about these 14 principles for educational systems transformation for a number of episodes now. I think one important thing to point out, and I think we've mentioned this multiple times now, is that the aim, in terms of what we're hoping the listeners get out of hearing about all these principles, is really how they all work together as a system themselves. So we started with create constancy of purpose. We've talked about a number of other things, like work continually on the system, adopt and institute leadership, drive out fear. Last time we talked about break down barriers. We're gonna talk about eliminating slogans and targets this time, which is Principle 10. But really, as you start to listen to 1, 2, 3, 4, 5, 6, 7, 8, 9, and now 10, what should start to become clear is how all of these things work together.

 

0:01:34.5 JD: If you are operating as a leader, for example, within sort of the Deming philosophy, one of the things you are gonna do is eliminate these slogans. So all these principles shouldn't be studied in isolation. We study them together, see how they all work together. But let me just start by reading Principle 10 so you have the full picture. So Principle 10 is "eliminate slogans, exhortations, and targets for educators and students that ask for perfect performance and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system, and thus lie beyond the power of teachers and students." So really what we're talking about is what's wrong with slogans, exhortations, and targets for educators and students, because these things are pervasive, I think.

 

0:02:29.5 JD: We've seen them, we've seen the posters on the walls with the various slogans. And of course targets are everywhere in our educational systems. In my mind, the main problem is that they're directed at the wrong people. The basic premise is that teachers and students could simply put in more effort, and in doing so, they could improve quality, productivity, anything else that's desirable in our education systems. But the main thing is that that doesn't take into account that most of the trouble we see within our schools is actually coming from the system. And I think we've talked about this quote before, it's probably one of Deming's most well-known quotes, but he said, "Most troubles and most possibilities for improvement add up to proportions something like this: 94% belong to the system, which is the responsibility of management, 6% is special." And that 6% is more like what can be tagged or pinned to individual students or individual educators working within the system. So I think that's a really important thing to revisit, 'cause it is at the heart of all of these principles.

 

0:03:47.7 AS: It's interesting. Maybe you could give some examples of what type of slogans or targets or exhortations you've seen in your career, and what's going on in education these days.

 

0:04:06.5 JD: Yeah, I mean, I'm gonna give an example here, kind of walk through an example in a second. But they're really everywhere, I mean, to varying degrees probably in different places. But one that sticks out in terms of a target is when I first started my career in 2001, I was a teacher in Atlanta Public Schools. And No Child Left Behind had just come out. And basically, as the leadership at the school presented what was in this legislation, you know, they would always put up a chart that basically said a certain percentage of students are expected to be proficient across the country on state tests. And that percentage would increase over time, starting in 2001 when the legislation was rolled out. And by the 2013-14 school year, the way the tables were laid out, 100% of students would be proficient in reading and math across the country in third through eighth grade. And of course, that didn't come to fruition. There's no chance that that ever would be the case. And it was also the case that there were really no methods attached to that target. So that's a really good example of a target that was pulled out of the sky. And basically, over the course of a dozen years, it was supposed to somehow magically come to be.

 

0:05:38.2 AS: That's great. The idea of 100%. I mean, what fool would say that you would have 100% of anything? You just can't get anything to that point. But one question I have about that: I suspect that in those types of cases, it just gets swept under the rug and nobody's looking at that number the way they looked at it back then. But maybe, maybe they do look at it. My question would be, for No Child Left Behind, if we were able to objectively measure the improvement that it caused, or the decline, what was the starting point for No Child Left Behind?

 

0:06:26.0 JD: Well, that would vary by district. If I remember right, I think the target early in the 2000s was somewhere in the 50 or 60% range, something like that, right? And then it would...

 

0:06:40.3 AS: Right, so let's say 50 to 60%. And I wonder at the end of that period of 2013, if we could objectively compare and calculate that number, what would be your estimate of where it would be if it was 50 to 60 originally, where do you think it was at the end of 2013?

 

0:07:00.0 JD: 50% to 60%.

 

0:07:02.4 AS: So no improvement?

 

0:07:02.5 JD: No. I mean...

 

0:07:02.5 AS: Incredible.

 

0:07:04.2 JD: That could vary a little bit by time and place, but it's even a little bit hard to pin down, because the way the test was constructed in 2001 in Georgia, for example, would be different from the way the test was constructed by 2013-14. So even the test itself had changed, the standards had changed, a number of things had changed over time. Also, for folks that know much about what was going on in Atlanta, by 2013-14 the superintendent, who would've been the superintendent from about 2001 until, I don't know, 2010 or something, she was actually charged under the RICO statute. Yeah, I don't know if that was warranted or not. I think it was unprecedented, that's for sure. But there was a cheating scandal that was systematic, from the superintendent to principals down to even teachers. It was pretty pervasive, because in Atlanta at least, there were a lot of monetary incentives tied to the test score improvements. And so I know that it did result in a number of people being charged with various crimes, including the superintendent and a number of principals.

 

0:08:18.3 AS: That's incredible.

 

0:08:20.6 JD: Incredible. Yeah. Yeah.

 

0:08:22.8 AS: Yeah. And there was a trial, I'm looking here on the internet. The trial began in September of 2014 in Fulton County Superior Court.

 

0:08:32.3 JD: Yeah. Right around that time.

 

0:08:33.7 AS: Incredible.

 

0:08:33.8 JD: And so I was gone from Atlanta by that time, so I don't know all the details, but I have read a little bit about it. And I think, again, these targets are certainly not an excuse for systematically cheating on these tests, for sure. But a byproduct of some of these testing regimens and some of the monetary incentive systems that were put in place was that cheating did happen in a number of places in the United States, especially when the scrutiny on these test results was at its height. So again, that shouldn't be the expectation even in a system where there's a lot of focus, certainly, but it was a byproduct. So you would wanna ask the question, why did that happen?

 

0:09:20.5 AS: Yeah.

 

0:09:21.9 JD: I mean, I think, yeah, go ahead.

 

0:09:25.2 AS: I was just gonna say that I also wanted to talk about, we were talking before we went on about the word "exhortation," which is kind of an old word. And so I was looking it up in the dictionary. It says, "an address or communication emphatically urging someone to do something," and they use the example "no amount of exhortation had any effect." And then I thought about one of the questions I always ask students when I start my class, which is "who's responsible?" And I want the listeners and the viewers to think about the answer to this question. Who's responsible for students being on time to class, the student or the teacher? And of course, the majority of students are gonna say the student. And if I ask the teachers, of course they're gonna say the student, it's personal responsibility. And most of the listeners and viewers would probably say the same. And then I want to explain a situation, something I do every time I start my class. My class started at 1:00 PM in this particular semester. And as soon as 1:00 PM came, I just locked the door and I started teaching.

 

0:10:39.8 JD: And this is a university setting?

 

0:10:40.7 AS: This is at university.

 

0:10:44.0 JD: Yeah.

 

0:10:44.6 AS: And when I did that, some of the university students were outside, kind of knocking on the door, wondering if they could come in. And I didn't let them in until after five or 10 minutes of teaching. And then I went out and talked to them a little bit about being on time: please be on time to my class, or else I'm gonna lock the door and you're not gonna be able to come back in. And so I did that a couple times until all the students, I have 80 students in that class, were in. And the next time I had my class, 100% of the students were on time. They were in there and ready to go. In fact, I had a funny case, John. I was visiting a client of mine north of the city of Bangkok, and I told my client, I gotta get outta here now, because if I'm late to my class, my students are gonna lock me out.

 

0:11:32.7 JD: They're gonna lock you out. Yeah.

 

[laughter]

 

0:11:33.0 AS: But the point of the story, for the listeners and the viewers out there: if you said that the students are responsible for being on time, I've just presented a case where the teacher changed something about the way the class was done, and that changed the outcome for the students. Can you still say that it's the students? And in fact, if we went to a high school or university and sat down with all the teachers, wouldn't they be saying, "No amount of exhortation had any effect on the students being on time. These guys are just irresponsible"?

 

0:12:17.4 JD: Yeah. Yeah. It's interesting, 'cause I think David Langford, on one of the episodes he did, talked about the problem of kids, or students, being late to class. And that particular scenario was a high school, and when you asked the kids, why were you late, they said, "Well, the teacher doesn't start until five or seven minutes into the period anyway, so why do I need to come on time?" So there is some truth to thinking about who is creating the system, what is that system, what types of behaviors does that system encourage? That's certainly a good way to analyze each situation.

 

0:12:53.7 AS: Yeah. I mean, it makes you think. And I think what David highlights too is, what's the priority here? Is it so important that someone's gonna be there at exactly this moment, or does it matter if it's five minutes before, five minutes after? And I think there's an interesting discussion on that.

 

0:13:13.1 JD: Yeah.

 

0:13:13.8 AS: And for the listeners and the viewers out there, you're gonna make up your own mind. But I think the key thing is what you're saying when you talk about 94% of the output or the result of something being the result of the system. That helps us to focus beyond just putting the pressure on students or administrators or educators or employees.

 

0:13:36.1 JD: Yeah. Yeah. And I think one of the tools that I've talked about repeatedly, and I'm a very big fan of, is the process behavior chart, or what some people call a control chart. And the reason for that is because when you use that chart, you can then tell which problems are coming from the system itself, and that's the responsibility of management, and which problems are coming from other causes and may take some other types of approaches. I think just knowing that is a really important upfront step when you're considering that 94%/6% problem. You can actually tell what's coming from the system, and then there's one approach, and what's coming from special causes, and then there's another approach to improvement. And I suspect that, you know, when you chart data in this way over time, the vast majority of systems are stable but unsatisfactory.

 

0:14:38.5 JD: And I think that's probably where things like targets, exhortations, these slogans, when you have a stable but defective system, that's the point where these exhortations, et cetera, are particularly pernicious, you know? I think goal setting seems like a good idea, but it's really useless in that type of situation. It's really often an act of desperation, actually, when you set a goal in a stable but defective system. So I was gonna talk you through an example of how this, perhaps, could show up.

 

0:15:23.6 AS: Yep.

 

0:15:26.2 JD: We... This is going back a couple years, but as the pandemic rolled out, and I think we've talked about this data before, we were really closely charting and paying attention to: are kids engaged in remote learning? And again, this example's from the pandemic, but this can come from any data that's important to you. And almost all of this data unfolds over time. But we were looking at how engaged kids were in remote learning. And it was really important for us to first define engagement, because this question always comes up: what do you mean by engagement? For us, this meant kids did a remote lesson with the teacher and then they had a practice set in math. So what percent of the kids completed that full practice set?

 

0:16:17.6 JD: And basically, when we charted this, we did it for about five weeks, so we had about 24 days' worth of data. This was eighth grade math. And the first day, 62% of the kids were engaged; the second day, 67%; the third day, 75%; the fourth day, 84%; and then down to 77%. And then the next day, 71%; the next day, 58%; the next day, 74%. So you can kind of get the picture here that this data was bouncing around. And when we took that out to 24 days, the 24th day was 68%. And then we looked at the average over those 24 days: it was about 67%, with a high of 84% and a low of 49%. But when you put this on a process behavior chart, what you see is that it's a stable system.
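[The chart John describes is an individuals (XmR) process behavior chart. As a rough sketch of how natural process limits are derived, here is the standard XmR calculation applied to just the eight daily percentages quoted above; the function name is our own, and with only eight of the 24 days the limits come out different from the 42% and 91% John reports for the full chart.]

```python
def xmr_limits(values):
    """Centerline and natural process limits for an individuals (XmR)
    process behavior chart: mean +/- 2.66 * average moving range
    (2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size 2)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# The eight daily engagement percentages quoted in the episode.
engagement = [62, 67, 75, 84, 77, 71, 58, 74]
center, lower, upper = xmr_limits(engagement)

# Points outside the limits would signal a special cause; points
# inside are just the routine variation of a stable system.
special = [v for v in engagement if not lower <= v <= upper]
print(f"average {center:.0f}%, limits {lower:.0f}%-{upper:.0f}%, special causes: {special}")
```

[The 42%-91% limits John cites come from the full 24 days of data; the point either way is that a 100% target sits above the upper limit, asking for what the system, as currently designed, cannot deliver.]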

 

0:17:17.3 JD: Meaning there are these ups and downs, some above that 67% average, some below it. When we look at the natural process limits, those are the boundaries of the system based on the magnitude of the variability over time, it was suggesting that with this system we could expect a low of 42% engagement and a high of 91% engagement, but mostly it's bouncing around this average. Now imagine that you're this eighth grade math teacher and the principal comes and says, this engagement data is not high enough, we're gonna create these posters across the school, we're gonna start this campaign. You can almost picture this in different places, right? And these posters say 100%...

 

0:18:06.3 AS: Graphic design.

 

0:18:07.0 JD: Yeah, that design. You have this poster and it says "100% engaged. We can achieve it if you believe it." Right? And you can almost imagine these posters going up in a school, and it's just this proclamation. But when you look at the data, it's just a stable system, and what we can expect is these data points bouncing around the 67% average. The school leadership wants higher engagement rates. They want fewer days with the low rates. But the problem with a poster or a target or exhortation is that you're basically asking the teacher to do what they're unable to do. And we do this in all types of settings, all types of work settings, not just in education. If you look at this particular system, the upper limit's at 91%. So basically the...

 

0:19:10.0 JD: The system's not capable of achieving 100% remote learning engagement, and so basically the effect is then fear and mistrust towards leadership. And I think, you know, when you look at this remote learning engagement data, that's probably what happened to a lot of people. But if we go back to that No Child Left Behind example, the federal government, 'cause that's who was setting the proficiency targets for No Child Left Behind, it's federal legislation, teachers knew, principals knew, that in many places the system that was in place for education was not capable of hitting those targets, it just...

 

0:19:50.1 JD: It wasn't in the capability of the system. And so if you are an individual operating within that system, you're trying to navigate that, you're gonna try to hit that target no matter what, and then in some places, they chose to do things that went as far as cheating, because they were trying to hit that target. Now, I'm not absolving those individual educators of responsibility, but it was that system they were operating in that sort of caused that behavior to happen. You know, the worst case scenario is people did, the adults did cheat. And I'm sure there were other things happening in other places that didn't rise to the level of cheating. But I think we've talked about it before, there's really only three options in response to data that's not satisfactory. You can improve the system. That's the ideal. That's what we're talking about here. That's what we're going for here. You can sort of... What do you wanna call it? It's not as far as cheating, but you can sort of...

 

0:21:02.6 AS: Manipulate or...

 

0:21:04.4 JD: Manipulate the data in some way, or you can manipulate the system in some way, and that's, I think, what we were seeing. So the worst case scenario, in Atlanta, they manipulated the data. But in many places, this idea of manipulating the system is less clear. What happened in many places, and I think we've actually talked about this, is that there was this over-emphasis on reading and math at the expense of other types of academics, and that's a manipulation of the system. That's not cheating necessarily, but it is, in my mind, sort of cheating kids out of a well-rounded education. And that was a product of so much emphasis on just reading and math test scores. And again, a lot of this was well-intentioned, because people were...

 

0:21:53.5 AS: It's all well-intentioned. What are you talking about, "a lot of it"?

 

0:21:56.9 JD: It's all well-intentioned but what actually happens as a result of putting these systems and these testing systems in place, and especially the sanctions or even the incentives on the positive side, the money. What actually happened...

 

[overlapping conversation]

 

0:22:10.6 AS: Holding back funding or providing additional funding, if you can hit these targets or that type of thing.

 

0:22:15.4 JD: Right, right, yep. And so you get all these unintended consequences that are produced as a result of the system. And we talk about these things as side effects, just like with drugs. There are these side effects, but they're not really side effects, they're things that commonly happen, things you would expect to happen as a result of doing these things. But we've given them this language as if they're these small things that happen over here, when really they're the typical unintended consequences you could expect when you design a system in that way, whether it's the side effects of a drug or the cheating that comes out of a very strict, regimented testing system, an accountability system in a school district.

 

0:23:03.6 AS: I couldn't help but laugh, 'cause I thought about Robin Williams. He had this skit he used to do when he was alive, where he talked about the drugs that the companies are marketing. And he said he was going through the side effects, reading this list of horrific things, and he's like, "I'd call that an effect."

 

0:23:21.4 JD: [chuckle] Right, right, yeah. Yeah.

 

0:23:24.0 AS: Let me ask you about this slogan, "We can achieve it if you believe it." Now, some students may respond to that, John. What do you say about the fact that... You know, because every time you talk about getting rid of targets and getting rid of slogans and stuff, people say, sometimes it works, and it works for some people, and some people are driven that way, and when they hear that, they respond to it. What do you say to that?

 

0:23:57.5 JD: Well, I would say prove it. If you're telling me that was actually successful, I wanna see it, because sometimes people will sort of dress up an anecdote. So, one, I'd wanna see the evidence that it did have the intended...

 

0:24:13.2 AS: Okay great answer and that's a lesson for everybody listening and viewing is always go back and say, prove it, 'cause I'm making an assertion.

 

0:24:21.3 JD: Yep. Yeah.

 

0:24:22.1 AS: And my assertion is that it helps certain people. Actually, the burden of proof, of course, is on me as I make that assertion, and you're asking me to prove it, which is a very, very logical and sensible thing to do. What else would you say?

 

0:24:38.0 JD: Well, well, I would say that, you know, Dr. Deming often talked about this idea, I think he got it from Taiichi Ohno, this idea of the loss function, which is basically like...

 

0:24:51.9 AS: Taguchi.

 

0:24:52.0 JD: Taguchi loss function, sorry.

 

0:24:54.4 AS: Yeah.

 

0:24:54.9 JD: And basically, think of an inverted parabola inverted U basically... And here is an optimum.

 

0:25:03.3 AS: Or think of a U. Think of a U.

 

0:25:03.4 JD: Now either side of it... A U, yep, and the optimum is at the bottom of the U, but there's loss as soon as you start to move away from the optimum, and that loss comes on both sides. So, you know, the people that are anti-testing versus the people that wanna put strict sanctions and rewards in place, probably the answer is somewhere in between, because we have to know how our students are doing, so we do need some data. So I would probably be a proponent of kids being given some type of standardized test, and we can know the scores at the aggregate level, perhaps at the school level, by subject and grade level, but there are no sanctions and rewards tied to that in any way, it's just information. So that's one thing that'd be a big difference between what we could be doing with this data and what's actually being done.
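[The Taguchi loss function John is gesturing at can be sketched in a few lines. This is the standard textbook quadratic form; the constant k and the numbers below are purely illustrative.]

```python
def taguchi_loss(x, target, k=1.0):
    """Taguchi's quadratic loss: zero at the target value and growing
    symmetrically as performance moves away from it in either
    direction (e.g. too little testing or too much)."""
    return k * (x - target) ** 2

# Illustrative only: the loss is the same whether we land two units
# below the optimum or two units above it.
print(taguchi_loss(3, 5))  # same as taguchi_loss(7, 5)
print(taguchi_loss(5, 5))  # zero at the optimum
```

[The symmetry is the point John is making: loss accrues on both sides of the optimum, so both the anti-testing extreme and the heavy-sanctions extreme carry a cost.]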

 

0:26:06.1 JD: So take the eighth grade math engagement data, for example. In terms of what you would do: I mean, if I was gonna put a poster up with an explanation of how we're gonna approach remote learning, maybe the first poster that I'd want staff to see is a list of what we're gonna be doing month by month to deal with the reality of remote learning. Maybe that first month, the strategy is just to make sure every kid has a device and access to reliable internet connectivity, right? That's very different than this proclamation that 100% of kids are gonna be engaged, because as soon as I see that as a teacher, I know that's not gonna happen, especially if there are no other methods tied to it. Maybe in month two, after I get all the kids devices and reliable connectivity, we can do some training on, well, how do you even teach? What are the methods that a teacher can employ in a remote learning environment? And maybe all along I am tracking the data. There's nothing wrong with tracking the data, but I'm putting it on that chart, I'm tracking it over time, and as we implement these various approaches to remote learning, I can see how that's impacting things. But I'm doing that with students and teachers, and I'm not just plotting the data without giving a set of methods to accompany the march towards continual improvement.

 

0:27:47.2 JD: And the same approach could be used with that test data from Atlanta. You know, if the idea was, I'm gonna start charting this data and seeing how we're doing over time, and I'm working with teachers and students to come up with ideas for how to improve, to march closer to that 100% proficiency goal, I mean, that's a noble goal, assuming the test is well-constructed, and obviously we want more and more kids to be marching towards proficiency, for sure. But we don't want all these other side games going on that come about when you simply have targets without methods, and I think that's the point. And if you take that approach, I think teachers then understand that the leaders, the school leaders or the district leaders, are taking some of that responsibility for a lack of engagement or low test scores or whatever it is, and they're trying to remove those obstacles systematically. That's a very different approach. 'Cause I'm not suggesting that people shouldn't have goals. That's not what I'm suggesting. I set goals for myself all the time, and I think they're actually helpful and necessary tools for individuals. But when you set numerical goals for other people without a set of methods to accomplish those goals, you get the opposite effect of what was intended, and you know, that's what I see happen over and over and over again in the education sector.

 

0:29:28.2 AS: And what I like to say is two things about that. One is that if you're setting a goal, just don't tie compensation or other benefits or punishment to the goal. Set the goal and then use it as a tool: track the information and discuss it. It's the same thing with compensation; once you start to tie compensation to specific goals, you start to mess around with the incentive structure. And that's the first thing. The other thing I'd like to say is that if the object you are measuring through your goal or target or whatever knows that it is being measured, look out. Now, I have a ruler right here, and if I measure the height of this glass, the glass doesn't know I'm measuring it, and so there's no change in anything in the glass. But when a human being knows that they're being measured, it causes a change. Just the knowing of that.

 

0:30:46.9 JD: Okay.

 

0:30:49.6 AS: So. Okay. So that helps us to understand about slogans, and what you're talking about is the idea of maybe replacing slogans with "How are we improving the system?" And, you know, I've started doing that in my Valuation Masterclass Boot Camp. At the end of each six-week period, I have a survey that I give to students, and I ask them for feedback: how can we improve this? And then what I do is I take all those and give them to my team, and we have a discussion and we kind of rank them. And then we go back on the final day and say, by the way, these are the improvements we're making, and these are the improvements we made this current time that you guys didn't realize. And that way, the students are also kind of involved and interested in what we're doing, that we're asking for their feedback on how to improve the system, and we're telling them.

 

0:31:44.9 AS: I don't generally announce it beforehand, like put up something about "Here's all the changes that we're making in this boot camp," 'cause I just want them to have a natural experience. I don't necessarily need them to be thinking, "Okay, so this is new." And also, some of the things we're trying, we're testing and observing whether they work, and so we may abandon them, so it may not make sense to advertise them. But when we have some big things, like this time, we got some excellent feedback in our last one, and now I've decided that when we do the boot camp, we're gonna have, let's say, 30 or 40 people, and we're gonna cover one industry. We're gonna value companies in one industry, so we're gonna do the automotive industry, and that allows everybody to work together in the first week, to say, "Let's analyze this industry before I tell you which companies each of you are valuing." So that's a new innovation we're trying this time, and there's a lot of work on our side to get it prepared.

 

0:32:46.7 JD: Yeah. And it sounds like there's methods, there's methods attached to the goal of improvement. That's the most important thing, I think.

 

0:32:58.1 AS: Yeah, I mean, I feel like... One of the things I feel like, and maybe some of the listeners or viewers feel like this too, is that sometimes I don't measure it the way I maybe should. What I do is I get feedback from the customer, from the student in this case, and then I bring that feedback to my team and ask them to rank what they think of those ideas, and then we identify, let's say, three of those recommendations that we think, okay, this is good, let's implement it. And then we test it. We don't have an exact measurement to say, "Did that work at the end of a six-week period?" We just kind of know whether it worked or not, how much trouble it was, how much benefit we thought it got, and then we get some feedback at the end. And maybe the feedback from students at the end is part of the data. But I'm just curious, what are your thoughts about people who may be doing the right things but not necessarily measuring them the way they could or should, including myself? What are your thoughts on that?

 

0:34:08.7 JD: Yeah. I mean... Well, I think there's quantitative data and qualitative data, and it sounds like what you're doing is relying more on qualitative data, including the experience of the students. I think generally, some things lend themselves to more quantitative data, some things lend themselves to more qualitative data. I think the key here is to set up a system for improvement, and identify what's most important to you, because you can't focus on everything at once. What are you gonna focus on? Get other people involved, so it's not just coming from you, and it sounds like there's a team here working together. You're also doing it repeatedly over time. I don't think there's necessarily a right or wrong answer on this. The most important thing, for me, is putting the data on a chart over time, again, that can be quantitative or qualitative data, determining what the capability of the system is, getting some baseline data. I think that's really, really important. And then understanding: is what you're seeing typical, is it bouncing around an average within some limits, or do you see special causes in your data? I think those are the most important things.


0:35:57.2 JD: And then the other thing, I think if we're talking about a school and if we really wanna make breakthrough improvements, then I do think at the end of the day, that continual improvement sort of approach has to involve students and teachers, I think it has to. And so I think there's different ways to go about doing that, but I think if you do those things, then you're well on your way to improved outcomes.


0:36:25.1 AS: I do have one question I ask them at the end, and that is, I give them a range of value, and I say, now that you've experienced the Valuation Masterclass Boot Camp, what would you say is the value that you received? And definitely in the beginning that kept going up, because we kept improving and they could feel that value. I didn't give them any guidance, and the scale never changed, but the ratings kept moving up. Now it's kind of flattened off, and so I think we've got a challenge if we wanna bring that to another level, but that's one of them. Well, John, without any exhortations to the listeners, I would love it if you could just wrap up the main takeaways that you want us to get from this discussion.


0:37:15.7 JD: Yeah, I think, you know, maybe putting a fine point on those things, what I've come to appreciate is that continual improvement is really the combination of plotting data over time and combining it with that Plan-Do-Study-Act cycle, which we've talked about multiple times. So the first recommendation is, whatever metrics are most important to you, plot them on a chart in time order. It can be intimidating at first, but the calculations on the process behavior chart, to add in the upper and lower natural process limits, or control limits, are really, really valuable, because then you can start to understand the capability of the system. And then you start to understand what it would really take, what we would really have to do, to actually shift those limits and produce a pattern in the data that indicates we've brought about improvement. The other reason those limits are really important is because they help you understand: do you just have a common cause system, where there are lots of different cause-and-effect relationships but not really a single one you can home in on? Then you know you're not trying to improve one component, but the entire system systematically. So for those reasons, it gets a little technical with the process behavior chart, or the control chart, but they are...


0:38:46.7 JD: I think it's the most powerful tool that we have in the continual improvement toolbox. So I would highly suggest at least a couple of people on your school district team have that sort of skill set, because then you don't waste your time on improvement efforts, and you can also tell when something you tried has actually resulted in improved outcomes for kids or for teachers or for schools.
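One concrete way a chart can "tell you" that something you tried actually worked is a run rule. The sketch below is illustrative only; run-rule conventions vary between references, and eight consecutive points on one side of the centerline is one widely used signal of a sustained shift. Such a shift is the cue to recompute the limits for the new system. The reading scores are invented for the example.

```python
# Run-rule sketch: flag a sustained shift relative to the baseline centerline.
# Eight-in-a-row on one side of the centerline is a common (but not universal) convention.

def shift_detected(values, centerline, run_length=8):
    """True if `run_length` consecutive points fall on the same side of the centerline."""
    run = 0
    last_side = 0
    for v in values:
        side = 1 if v > centerline else (-1 if v < centerline else 0)
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0  # a point on the centerline resets the run
        last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical reading scores: baseline around 70, then a sustained rise after a change.
scores = [68, 71, 69, 72, 70, 74, 75, 76, 74, 77, 75, 78, 76, 75]
print(shift_detected(scores, centerline=70.5))  # True — time to recompute the limits
```

The point of the rule is exactly what John describes: it separates a real change in the system from routine bouncing around the average, so you only declare improvement when the data pattern supports it.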


0:39:13.1 AS: John, on behalf of everyone at the Deming Institute and the listeners and viewers, I wanna thank you again for this discussion. For listeners, remember to go to deming.org to continue your journey. You can find John's book, "Win-Win: Dr. W. Edwards Deming, the System of Profound Knowledge, and the Science of Improving Schools" on amazon.com. This is your host, Andrew Stotz, and I'll leave you with one of my favorite quotes from Dr. Deming: "People are entitled to joy in work."
