In science, the term “Observer Effect” refers to the fact that the act of observing will influence the phenomenon being observed. Said differently, observing a system actually impacts and changes that system. In a business context, a similar phenomenon occurs, which we might call the “Measurement Effect”. In short, what an organization decides to measure for initiatives, teams, and individuals will directly shape behaviors and results. Essentially, we are what we measure.
When done intentionally, this is a strategic and effective way to drive the behaviors and results we want and need for our business. The problem arises when our attempts to measure various aspects of our business and our people have unintended and undesirable consequences. Some extreme examples:
1) If we measure initiatives primarily on their speed to market, then we will likely end up with smaller, simpler, less unique technical solutions.
2) If we reward teams for a “lack of mistakes”, then teams may be less likely to take on additional work, attempt anything risky, or show agility or speed.
3) If we simply measure and track too many things in a given process, teams may avoid the process altogether, focus on the “wrong” things, or find ways to cut corners.
4) If an individual’s performance and compensation are largely derived from the sheer number of initiatives that he successfully completes, he will likely take on more and safer programs and avoid the bigger, riskier ones.
The list can go on and on… essentially, we must be strategic, cognizant, and deliberate about what we measure and track within our organizations. These choices will largely shape the culture, behaviors, and results of our initiatives, our teams, and our individual performers.
As I close out this topic on quality, I am going to focus on some measurement ideas specifically around delivering more “Amazing” versus “Perfect” innovations. This is not intended to be a comprehensive list, but rather a set of examples and case studies for driving greater quality in our innovation efforts.
Do you want to BREAK LESS RULES? MAKE LESS RULES.
It has become so easy to generate, track, and report data on virtually every detail of every process that we can become inundated with information. As technology expands and computing power increases exponentially, our ability to measure more and more details grows every year. In an effort to manage the uncertainty and complexity, organizations will often enact more and more “quality measures” to control the outcome and minimize risk. The intent, of course, is sound and often involves logical responses to the growing amount of information. The challenge… the increase in “rules” can not only bog down the process but also cause individuals to place less emphasis and attention on the truly important “rules” in order to satisfy the sometimes overwhelming quantity of less important ones. The solution… we must remember that just because we can track something doesn’t mean that we should. We need to get clear on the handful of “rules” that are truly important and track those, emphasizing the ones that are the most critical drivers of the process. This will not only drive efficiency in the system but will also ultimately increase accuracy and quality, as individuals will focus on following the important rules rather than wasting time worrying about breaking some unimportant ones.
Celebrate “WEB GEMS” more than a “LACK OF ERRORS”
I am a lifelong baseball fan and have always been a “nerd” for the statistics of the game. As such, I am a huge fan of Moneyball, the true story of how the evaluation and prediction of player and team performance is being revolutionized by new statistics and measures. There are many direct reapplications to the business world, and one that is relevant here is the assessment of defensive players.

Historically, the top measure for evaluating a defensive player was “Fielding Percentage”: the percentage of balls hit to a player (called “chances”) that he actually converts into outs. While this statistic can be informative, it is highly incomplete and potentially misleading. For example, it does not account for a player’s range (i.e. his ability to increase his number of chances). The most talented shortstop’s range may produce significantly more chances per season than an average shortstop’s… and because these extra chances are often riskier, they carry a greater likelihood of errors. So this more talented shortstop may have a lower fielding percentage than the average shortstop while actually making more plays and preventing more runs because of his increased range. Taken to an extreme… if a shortstop truly wanted to maximize his fielding percentage, he would avoid difficult plays altogether and focus on executing the simple plays with excellence. His fielding percentage would improve, but the team would suffer as more runs scored for the opposing team. So instead of evaluating defensive players based on a “lack of errors”, teams are now assessing performance based on the number of runs prevented from scoring (which factors in the shortstop’s ability to successfully execute plays that the average shortstop cannot).

ESPN has a segment known as “Web Gems” in which it recognizes the top plays on a given night and tracks them throughout the season. These are the “Amazing” plays that, over the course of a season, highlight who the truly outstanding defensive players are. As we track individual performance in our organizations, we should focus less on tracking the minimization of errors and more on tracking “Web Gems”.
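To make the contrast concrete, here is a minimal sketch with invented numbers for two hypothetical shortstops. The `runs_prevented` function and the 0.75 runs-per-extra-out credit are assumptions for illustration, not an official sabermetric formula:

```python
# Illustrative sketch with made-up numbers for two hypothetical shortstops.
# "chances" = balls the fielder reaches; "errors" = chances not converted to outs.

def fielding_percentage(chances, errors):
    """Classic measure: the share of chances converted into outs."""
    return (chances - errors) / chances

def runs_prevented(chances, errors, baseline_outs, runs_per_out=0.75):
    """Assumed range-aware measure: outs recorded beyond a baseline fielder,
    credited at an assumed value of ~0.75 runs per extra out."""
    outs = chances - errors
    return (outs - baseline_outs) * runs_per_out

# The "safe" shortstop reaches fewer balls but rarely errs;
# the "rangy" shortstop reaches 50 more balls and makes a few more errors.
safe_ss  = {"chances": 400, "errors": 8}    # 392 outs
rangy_ss = {"chances": 450, "errors": 14}   # 436 outs
baseline = 392                              # outs an average shortstop records

for name, ss in [("Safe shortstop", safe_ss), ("Rangy shortstop", rangy_ss)]:
    fp = fielding_percentage(ss["chances"], ss["errors"])
    rp = runs_prevented(ss["chances"], ss["errors"], baseline)
    print(f"{name}: fielding pct {fp:.3f}, runs prevented vs. baseline {rp:+.1f}")

# The rangy shortstop shows the LOWER fielding percentage (.969 vs .980)
# yet is credited with roughly 33 more runs prevented, because the extra
# plays he makes never show up in an error-based statistic.
```

Under these made-up numbers, the error-based measure ranks the cautious fielder higher, while the range-aware measure rewards the player who actually takes more runs off the board.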
Factor in “DEGREE OF DIFFICULTY” when evaluating performance.
To keep with the sports analogies, one concept from gymnastics (among other sports) that can be reapplied to organizations is the assessment of “Degree of Difficulty”. While a gymnast is ultimately evaluated on her execution, she can earn a higher total score based upon the level of challenge in her particular routine: the riskier and more complicated the routine, the higher the potential score. As we evaluate the performance of individuals in our organizations, are we assessing them merely on how often they “nail the landing”, or are we also considering the degree of difficulty of their “routine”? During annual performance reviews, are individuals rewarded for the quantity of results or for the quality? It is critical that degree of difficulty be considered in assignment planning and performance reviews, not only to reward individuals for taking on a greater challenge, but also to encourage the entire organization to work on the biggest initiatives rather than the most initiatives. Essentially, we should reward those who enable “Bigger, Better, and Fewer”, and not those who just deliver “More, More, More”.
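As a rough illustration, here is a minimal sketch of a simplified, gymnastics-style scoring model (a difficulty score added to an execution score out of 10). The specific difficulty values and deductions are invented for the example:

```python
# Simplified, gymnastics-style scoring model for illustration:
# total = difficulty score + execution score, where execution starts at 10.0
# and loses points for mistakes. All numbers are invented.

def total_score(difficulty, execution_deductions, max_execution=10.0):
    execution = max_execution - execution_deductions
    return difficulty + execution

# A safe, simple routine executed flawlessly...
simple_routine = total_score(difficulty=4.5, execution_deductions=0.0)  # 14.5

# ...versus a harder routine with a small wobble on the landing.
hard_routine = total_score(difficulty=6.0, execution_deductions=0.8)    # 15.2

print(simple_routine, hard_routine)
# The harder routine wins despite the imperfect landing, because the scoring
# system explicitly rewards the degree of difficulty attempted.
```

The point of the sketch is simply that when difficulty is part of the score, “nailing the landing” on an easy routine is no longer the winning strategy.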
Be Clear on Points of SUPERIORITY, Points of PARITY, and Points of INFERIORITY
Finally, I want to share an example of how being very clear on our initiative success measures can have a dramatic impact on the final result. Developing superior products is a goal for most innovative organizations and a key challenge that many of us face. Even within this goal, we must be crystal clear on our definition of “superiority” so that we ultimately guide the right behaviors and results.

Do you remember when the first Blackberry smartphones started to make their way into the business world? For me, this is an extremely memorable event, as it revolutionized how I did my work. Suddenly, my email, calendar, and phone were all literally in the palm of my hand, and I had more flexibility in how and when I worked than ever before. At the time, not everyone in the organization had one, so there was a “status” element to these as well… so much so that some individuals (who will remain nameless) were actually going so far as to “accidentally drop” their current phones in order to get upgraded to a new Blackberry. It really was a remarkable device at the time, enabling more freedom and flexibility than ever before.

That being said… if you break down the actual performance elements of the device, it left a lot to be desired. The phone quality was terrible; it was very difficult to have a consistent conversation without poor sound quality and dropped calls. The calendar was highly clunky and created a ton of errors in syncing across devices. And then there was the email… the typing was so difficult and inconsistent that you wanted to ensure that “Sent from my Blackberry Device” appeared at the end of the email, so that others knew the gibberish you sent was due to the phone and not the product of your own idiocy! Yes, overall this product offered superior benefits, not only through its unique combination of features but, more importantly, by changing the way that business communication took place. However, for the main functions of the device (phone, email, calendar), it was actually INFERIOR to existing, competitive options.

So the question is… would your organization have launched this highly successful product with these points of inferiority, or would it have waited until each vector was at parity or better versus the competition? The point is, even for something as conceptually simple in the “big picture” as developing superior products, we must be careful that our “smaller picture” measures are specifically tailored to enable the overall success. This may include defining not only points of superiority, but also points of parity, and even acceptable points of inferiority, in the quest for developing superior products.
(Of note… we now all have iPhones at work and not Blackberrys (yes, there were a lot of dropped and broken Blackberrys during that transition as well!), highlighting the importance of evolving our success measures with market demands and competitive activity.)
To conclude, I wrote these four posts largely as a challenge to think more broadly about our definitions of “Quality”. This is by no means to say that we should no longer work to deliver “Perfect” executions. Clearly, there is nothing more important than the consumer getting a safe, consistent product that is made as intended and is cost-effective for the business. That being said, my challenge is to ensure that as we define “Quality”, it is not merely about the absence of mistakes but also about the delivery of “Amazing”. The same rigor that is placed upon perfection at execution should be carried into the definition and invention phases as well… however, rather than focusing on minimizing mistakes, defects, and risks, we should focus on maximizing benefits, experiences, and possibilities. Quality is, and should be, the job of each and every one of us, and it is more than merely delivering quantity without errors. We should broaden this definition so as to drive overall value, both to the consumer and to the business, by focusing on “First Amazing… Then Actionable” with the right measures to support it.
Author’s Note… I had trouble picking an image to represent this post, and ultimately went with this shot from Dead Poets Society, as the movie highlights many of the key themes from this broad discussion on Quality, including “Perfect is the Enemy of Amazing”, “Break Less Rules… Make Less Rules”, and “Focus First on Delivering Delight”. I highly recommend the clip (and the movie in general) to provoke thought and inspiration. Have a great weekend!
http://www.youtube.com/watch?v=tpeLSMKNFO4
—————–
Check out my book, Agents of Change, available in paperback and eBook editions on Amazon.com.