This posting is in response to Steven Willoughby's experience with using customer development retrospectively with his company, and his request for similar examples.
I had a similarly painful experience with a client that brought me in to discuss why their program was not succeeding. Using the Customer Development Model (CDM) as a baseline process, we were able to gain some significant insights into what they had and hadn't done.
The client was a body that had a certification program as one of its major business lines. They had been asked to develop a new program to train and certify practitioners. After two years of development they launched it across all their clients. In the following three years they had certified fewer than 100 practitioners, with about 100 more in the pipeline, out of a potential population of more than 5,000. Needless to say, they were concerned and baffled. I felt the CDM was a good tool for examining what had happened.
We began with Customer Discovery. How had they gone about understanding and testing the problem-solution fit and understanding their clients' needs? They had spent a lot of time and effort talking to organizations about certification and the design of programs. Problem 1 surfaced. It was apparent they hadn't really understood the difference between partners and clients. In determining the solution, they had given far more attention to partner organizations than to the people who actually needed the certification to do their jobs. We talked about business models and the roles of clients and partners within them, as well as understanding the clients' jobs-to-be-done and what they needed to do those jobs.
Having spent almost two years developing the certification process, they were very proud of the depth and quality of the training and certification program. They won accolades and compliments from the bosses of the client organizations. However, the practitioners and the line managers were not so impressed and were either not signing up or dropping out before completing the program. There seemed to be issues with the product-market fit. That meant they had to go and interview clients - not the bosses, but the individuals who would use the program and their direct operational managers (who sent them on the training and paid for it).
Problems 2, 3, and 4 quickly surfaced. The program was too academic in nature and not practical enough to be of ready use to the practitioners. It was too administratively burdensome and costly for both the attendees and the managers who sent them on training. Most importantly, the program took too long to complete. Attendees either lost interest or moved on to other jobs before finishing.
So what had happened? It seems the organization skipped the validation stage and went straight to customer creation, scaling the program across all their clients. The certification organization was pleased and satisfied that they had developed a very high-quality training program that met their clients' needs. Unfortunately, the real market clients, the only ones that matter, didn't see it that way. In launching the program, the organization had not run pilots or interviews to test the product with the appropriate client segments. They had skipped right over validation of the solution, testing neither its effectiveness nor its cost.
Even though they had spent a lot of time in discovery, some of it misdirected, skipping over the validation of their product's fit to the primary market resulted in a disconnect. The unfortunate outcome of this epiphany (to borrow from Steve Blank's original title) was that the organization would have to reconsider and redesign its certification program if they wanted to achieve scale and widespread adoption.