The Lean Startup "Most startups are built to fail. But those failures, according to entrepreneur Eric Ries, are preventable. Startups don't fail because of bad execution, or missed deadlines, or blown budgets.

They fail because they are building something nobody wants. Whether they arise from someone's garage or are created within a mature Fortune 500 organization, new ventures, by definition, are designed to create new products or services under conditions of extreme uncertainty." It was all improvised. I was trying to move forward and put together a product in a reasonable way.

Even so, these early results were extremely significant for predicting IMVU's future trajectory. A flawed, incomplete product seems unacceptable.

When in doubt, simplify. For example, consider a service sold with a free one-month trial. Before customers can use the service, they have to sign up for the trial. If you think about it, that is a leap-of-faith question. Even if they sign up, there are many opportunities for waste. Each of these implementations happens at a deep level of the system and requires specialized know-how to make the user experience exceptional.

In fact, one of Dropbox's biggest competitive advantages is that the product works so seamlessly that the competition struggles to emulate it. But Dropbox did something different. The answer was almost always no. The challenge was that it was impossible to demonstrate the working software with a prototype. Our waiting list was 5. At this point, the website looks for recipes that fit your needs, calculates the price, and lets you print the shopping list. These foods have to be combined with the appropriate recipes, adapted to the customer's tastes, labeled, and sorted.

But finally, someone did it. The CEO and the vice president of product, instead of building their business, took on the painstaking work of solving a single customer's problem. After a few weeks they were ready for another customer. This can happen even if the MVP is profitable for the company.

Think about it. To solve this problem, Max and Damon created a product called Aardvark. The first prototypes failed to satisfy consumers. A service for gathering opinions from around the web and delivering the best advice.

The Webb. Web Macros. Internet Button Company. A way to package steps taken on a website and intelligently fill in forms.

As Max describes it: "We funded the company with our own money and built very cheap prototypes to test. We used human beings to replicate the essence of the program as much as possible. We hired eight people to handle queries, classify conversations, and so on.

In reality, we grew on our seed money and carried out a set of series A rounds[19] until the system was automated." Each time, they declined to solve those problems right away. Instead, they used "Wizard of Oz" testing to make it appear that the product worked.

They operate using the famous dictum of W. Edwards Deming. Allowing sloppy work into our process inevitably leads to excessive variability.

Sometimes, however, consumers react a bit differently. Many famous products were launched in a "low-quality" state, and consumers loved them.

I have been through many similar experiences. But before taking that on, we decided to test another MVP. You can imagine our surprise when we began to receive positive feedback from consumers.

In hindsight, it makes sense. An MVP requires the courage to put one's own assumptions to the test. But again, this does not mean working in a sloppy or undisciplined way. This caveat is important.

Both can derail a startup's efforts if they are not discovered in time. In many industries, patents are used primarily for defensive purposes, as a deterrent to keep competitors at bay.

Part of the special challenge of being a startup is that it is nearly impossible to get anyone to notice your idea, your company, or your product, much less a competitor. A head start is rarely large enough to matter, and time spent in stealth mode, away from customers, is unlikely to provide one. Many startups plan to invest in building a great brand, and an MVP can look like a risk to that brand.

Similarly, entrepreneurs inside existing organizations are often constrained by the fear of damaging the company's established brand.

This attitude is precisely what you see when companies launch fully finished products without having tested them first. It is the essence of the waterfall or stage-gate development model. If an MVP fails, teams are likely to lose hope and abandon the product. But this is a problem that can be remedied. Both are inexcusable.

It is an ideal that is usually far out of reach for a startup at this early stage. Employees and entrepreneurs tend to be optimists by nature. We want to keep believing in our ideas even when it is evident that they will not work. Startups are too unpredictable for forecasts and milestones to be accurate.

"We must be on the right track." As profits from sales are reinvested in marketing and promotions, the company gains new customers. This framework lets us evaluate the business even when the model changes. Second, startups must try to tune the engine to move from the baseline toward the ideal. This may require many attempts. If not, the management team must conclude that its product strategy is flawed and needs a major change.

By itself, it is not enough to validate the entire growth model. These MVPs provide the first example of a learning milestone. Compare two startups. It is a sign that the time to pivot has arrived. From a marketing point of view it is not very significant, but the learning we gained is priceless.

Although it sounds complex, it is based on a very simple premise. Each group is a cohort. Some product improvements help a little.

Armed with our failure to tune our engine of growth, I was ready to start asking the right questions. Instead, with the data in hand, my interactions with customers changed. Suddenly, our fears about productivity vanished.

As long as the plan is executed correctly, hard work brings results. However, these product-improvement tools do not work the same way in startups. I often see this in my work as a consultant.

Cycle after cycle, the team works hard, but the company sees no results. They were worried that the engineers were not working hard.

Despite constant adjustments and corrections, the business results were mediocre. The initial product, although flawed, was popular with early users.

None of its current initiatives was having any impact. The company's engine of growth works. The alternative is the kind of metrics we use to judge our business and our learning milestones, what I call actionable metrics.

Actionable metrics vs. vanity metrics. "There are all these social networks running on the web." Grockit offers these three study formats. They were process-oriented and very disciplined. Their first product was hailed by the press as a breakthrough. This story helped keep the engineers focused on the customer's perspective throughout the development process.

Their answer: "That is not our department's responsibility. Farbood makes the decisions; we execute them." Farbood himself was not sure that his team had adopted a true culture of learning. They ran relatively short iterations, each one judged by its ability to improve customer metrics.

However, because Grockit was using the wrong kind of metrics, the startup was not improving. A split-test experiment is one in which different versions of a product are offered to customers at the same time. By observing the changes in behavior between the two groups, you can make inferences about the impact of the different variations.
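
The mechanics of a split test can be sketched in a few lines. The group sizes and conversion counts below are invented for illustration, and the two-proportion z-test shown is just one common way to check whether the difference between variants is more than noise; the book itself does not prescribe a particular statistical test.

```python
# Minimal split-test (A/B) analysis sketch; all numbers are hypothetical.
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference between variants real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: current product; variant B: the change being tested
z, p = z_test_two_proportions(conv_a=120, n_a=2400, conv_b=168, n_b=2400)
print(f"A: {120/2400:.1%}  B: {168/2400:.1%}  z={z:.2f}  p={p:.4f}")
```

A low p-value here suggests the behavior change between the two groups is unlikely to be chance, which is what makes the resulting metric actionable rather than a vanity number.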

Split testing often reveals surprising things. They thought up new ideas for product experiments that could have a greater impact. In fact, many of these ideas were not new. As stories move from one stage to the next, the buckets are filled in. The book describes different strategies for starting a business, and it also covers some techniques for start-up companies. The author describes some major and common mistakes people make while starting any business.

They do not think about the target customer. They pour time and energy into polishing the initially launched product to make it better, which is not the right approach. The author also asks entrepreneurs to test the riskiest assumptions first. When we think of the quality and design of our minimum viable product —. Innovation accounting — this means accounting for the innovation that is happening in the company.

The idea is to understand where the innovation is coming from and what good learning is happening to be able to promote that and to take away the other parts. We use our MVP to get started on the process of measurement in order to figure out where we are as a company, and then to figure out some measurement statistics and continue on with constant measurement. In a traditional business, we measure in chunks, specifically in terms of overall customer intake versus revenue and profits and so on.

But in Lean Startup, just like in every experiment we do, we must have a cohort. We must have a new group of data for that. Therefore: if we start with Hypothesis A and do an experiment, Hypothesis B and do another experiment, and Hypothesis C and do another experiment, we need to have results of them separately and be able to understand what happened, what the outcome was for each of them, and hence learn from every experiment done.
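
As a sketch of this cohort idea: keep each experiment's signups in their own bucket and compute the metric per bucket, never as one cumulative total. The sample data and the 30-day retention metric below are hypothetical, chosen only to show the mechanics.

```python
# Cohort-based measurement sketch: one bucket per experiment/hypothesis.
from collections import defaultdict

signups = [
    # (user_id, cohort, still_active_after_30_days) -- invented data
    (1, "hypothesis_A", False),
    (2, "hypothesis_A", False),
    (3, "hypothesis_A", True),
    (4, "hypothesis_B", True),
    (5, "hypothesis_B", True),
    (6, "hypothesis_B", False),
    (7, "hypothesis_C", True),
    (8, "hypothesis_C", True),
]

def retention_by_cohort(rows):
    """30-day retention rate, computed separately for each cohort."""
    totals, retained = defaultdict(int), defaultdict(int)
    for _, cohort, active in rows:
        totals[cohort] += 1
        retained[cohort] += int(active)
    return {c: retained[c] / totals[c] for c in totals}

for cohort, rate in sorted(retention_by_cohort(signups).items()):
    print(f"{cohort}: {rate:.0%} retained")
```

Because each hypothesis gets its own rate, a change that helps one cohort cannot hide behind the aggregate numbers of the others.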

The key is to come up with actionable metrics as a result of these measurements. Pivoting means keeping one foot planted firmly in everything you have learned, while moving the other foot toward your new learning, so you know where you were and what you have learned so far. Again, it's not about the big idea. Do we pivot and change course, or do we persevere in the direction we have? But even while we're answering this fundamental question, we need to keep in mind that the faster we can do it, the more time we have to reach takeoff.

So, starting from where you are, to get to the end of the runway you have to be able to pivot really fast so you can reach validated learning as quickly as possible. The more pivots you can afford, the faster you can learn, and the better your chances of success. The book covers many different kinds of pivots, but the essential point is this: a pivot is simply another hypothesis. As soon as you pivot, think of it as a new hypothesis that will require a new MVP to test.

So you will go through the build-measure-learn feedback loop again. The key for a startup is to be able to execute on those as fast as possible to learn as fast as possible and, as a result, to be able to survive. Every time a startup starts to execute, we have to identify which hypothesis we're going to test. How much of our effort contributed to the essential lessons we needed to learn? This question is at the heart of the lean manufacturing revolution; it is the first question any lean manufacturing adherent is trained to ask.

Learning to see waste and then systematically eliminate it has allowed lean companies such as Toyota to dominate entire industries. They were designed to eliminate waste too. The answer came to me slowly over the subsequent years.

Lean thinking defines value as providing benefit to the customer; anything else is waste. I realized that as a startup, we needed a new definition of value.

The real progress we had made at IMVU was what we had learned over those first months about what creates value for customers. Anything we had done during those months that did not contribute to our learning was a form of waste. Would it have been possible to learn the same things with less effort? Clearly, the answer is yes. If we had shipped sooner, we could have avoided that waste.

Also consider all the waste caused by our incorrect strategic assumptions. I had built interoperability for more than a dozen different IM clients and networks.

Was this really necessary to test our assumptions? Could we have gotten the same feedback from our customers with half as many networks? With only three? With only one? Is it possible that we could have discovered how flawed our assumptions were without building anything?

For example, what if we simply had offered customers the opportunity to download the product from us solely on the basis of its proposed features before building anything? Note that this is different from asking customers what they want.

We could have conducted an experiment, offering customers the chance to try something and then measuring their behavior. Such thought experiments were extremely disturbing to me because they undermined my job description. But if many of those features were a waste of time, what should I be doing instead? How could we avoid this waste? The effort that is not absolutely necessary for learning what customers want can be eliminated.

Thus, validated learning is backed up by empirical data collected from real customers. As I can attest, anybody who fails in a startup can claim that he or she has learned a lot from the experience. They can tell a compelling story. In fact, in the story of IMVU so far, you might have noticed something missing.

What evidence did we have? Certainly our stories of failure were entertaining, and we had fascinating theories about what we had done wrong and what we needed to do to create a more successful product. The next few months are where the true story of IMVU begins, not with our brilliant assumptions and strategies and whiteboard gamesmanship but with the hard work of discovering what customers really wanted and adjusting our product and strategy to meet those desires.

As we came to understand our customers better, we were able to improve our products. As we did that, the fundamental metrics of our business changed. In the early days, despite our efforts to improve the product, our metrics were stubbornly flat. Each day, roughly the same number of customers would buy the product, and that number was pretty close to zero despite the many improvements.

However, once we pivoted away from the original strategy, things started to change. This was critically important because we could show our stakeholders—employees, investors, and ourselves—that we were making genuine progress, not deluding ourselves. We were able to measure the difference in behavior between the two groups. Not only were the people in the experimental group more likely to sign up for the product, they were more likely to become long-term paying customers.

We had plenty of failed experiments too. Unfortunately, customers who got that VIP treatment were no more likely to become active or paying customers. After our pivot and many failed experiments, we finally figured out this insight: customers wanted to use IMVU to make new friends online. Once we formed this hypothesis, our experiments became much more likely to produce positive results.

Whenever we would change the product to make it easier for people to find and keep new friends, we discovered that customers were more likely to engage. These were just a few experiments among hundreds that we ran week in and week out as we started to learn which customers would use the product and why. Each bit of knowledge we gathered suggested new experiments to run, which moved our metrics closer and closer to our goal.

Unfortunately, because of the traditional way businesses are evaluated, this is a dangerous situation. The irony is that it is often easier to raise money or acquire other resources when you have zero revenue, zero customers, and zero traction than when you have a small amount. Everyone knows or thinks he or she knows stories of products that achieved breakthrough success overnight.

As long as nothing has been released and no data have been collected, it is still possible to imagine overnight success in the future. Small numbers pour cold water on that hope. This phenomenon creates a brutal incentive: postpone getting any data until you are certain of success. However, releasing a product and hoping for the best is not a good plan either, because this incentive is real.

When we launched IMVU, we were ignorant of this problem. Fortunately, as we pivoted and experimented, incorporating what we learned into our product development and marketing efforts, our numbers started to improve.

But not by much! On the one hand, we were lucky to see a growth pattern that started to look like the famous hockey stick graph. On the other hand, the graph went up only to a few thousand dollars per month.

We were quite fortunate that some of our early investors understood its importance and were willing to look beyond our small gross numbers to see the real progress we were making. Thus, we can mitigate the waste that happens because of the audacity of zero with validated learning.

We could have tried marketing gimmicks, bought a Super Bowl ad, or tried flamboyant public relations (PR) as a way of juicing our gross numbers. That would have given investors the illusion of traction, but only for a short time. Because we would have squandered precious resources on theatrics instead of progress, we would have been in real trouble.

Sixty million avatars later, IMVU is still going strong. Its legacy is not just a great product, an amazing team, and promising financial results but a whole new way of measuring the progress of startups.

Every time I teach the IMVU story, students have an overwhelming temptation to focus on the tactics it illustrates: launching a low-quality early prototype, charging customers from day one, and using low-volume revenue targets as a way to drive accountability. These are useful techniques, but they are not the moral of the story. There are too many exceptions. Not every kind of customer will accept a low-quality prototype, for example.

None of these takeaways is especially useful. The Lean Startup is not a collection of individual tactics. It is a principled approach to new product development. The only way to make sense of its recommendations is to understand the underlying principles that make them work. The tactics from the IMVU story may or may not make sense in your particular business. Instead, the way forward is to learn to see every startup in any industry as a grand experiment.

In other words, we need the scientific method. In the Lean Startup model, every product, every feature, every marketing campaign—everything a startup does—is understood to be an experiment designed to achieve validated learning. How should we prioritize across the many features we could build? What can be changed safely, and what might anger customers? What should we work on next? This is one of the most important lessons of the scientific method: if you cannot fail, you cannot learn.

A true experiment follows the scientific method. It begins with a clear hypothesis that makes predictions about what is supposed to happen. It then tests those predictions empirically. The goal of every startup experiment is to discover how to build a sustainable business around that vision.

It is known as one of the most successful, customer-friendly e-commerce businesses in the world, but it did not start that way. He envisioned a new and superior retail experience. Swinmurn could have waited a long time, insisting on testing his complete vision complete with warehouses, distribution partners, and the promise of significant sales.

Many early e-commerce pioneers did just that, including infamous dot-com failures such as Webvan and Pets.com. Instead, he started by running an experiment. To test it, he began by asking local shoe stores if he could take pictures of their inventory. In exchange for permission to take the pictures, he would post the pictures online and come back to buy the shoes at full price if a customer bought them online.

Zappos began with a tiny, simple product. It was designed to answer one question above all: is there already sufficient demand for a superior online shopping experience for shoes? In the course of testing this first assumption, many other assumptions were tested as well. To sell the shoes, Zappos had to interact with customers: taking payment, handling returns, and dealing with customer support. This is decidedly different from market research.

If Zappos had relied on existing market research or conducted a survey, it could have asked what customers thought they wanted. It had more accurate data about customer demand because it was observing real customer behavior, not asking hypothetical questions. It put itself in a position to interact with real customers and learn about their needs.

For example, the business plan might call for discounted pricing, but how are customer perceptions of the product affected by the discounting strategy? For example, what if customers returned the shoes? It also put the company in a position to observe, interact with, and learn from real customers and partners.

Although the early efforts were decidedly small-scale, that did not prevent the huge Zappos vision from being realized. In fact, Zappos was eventually acquired by the e-commerce giant Amazon.

A designer could help a nonprofit with a new website design. A team of engineers could wire a school for Internet access. Most of the volunteering has been of the low-impact variety, involving manual labor, even when the volunteers were highly trained experts. This is the kind of corporate initiative undertaken every day at companies around the world. On the surface it seems to be suited to traditional management and planning. However, I hope the discussion in Chapter 2 has prompted you to be a little suspicious.

Looked at that way, her plan seems full of untested assumptions—and a lot of vision. In accordance with traditional management practices, Barlerin is spending time planning, getting buy-in from various departments and other managers, and preparing a road map of initiatives for the first eighteen months of her project.

Like many entrepreneurs, she has a business plan that lays out her intentions nicely. Yet despite all that work, she is—so far—creating one-off wins and is no closer to knowing whether her vision will be able to scale. A second assumption could be that they would find it more satisfying, and therefore more sustainable, to use their actual workplace skills in a volunteer capacity, which would have a greater impact on behalf of the organizations to which they donated their time.

The Lean Startup model offers a way to test these hypotheses rigorously, immediately, and thoroughly. Strategic planning takes months to complete; these experiments could begin immediately.

By starting small, Caroline could prevent a tremendous amount of waste down the road without compromising her overall vision.

Break It Down

The first step would be to break down the grand vision into its component parts. The two most important assumptions entrepreneurs make are what I call the value hypothesis and the growth hypothesis. The value hypothesis tests whether a product or service really delivers value to customers once they are using it.

Experiments provide a more accurate gauge. What could we see in real time that would serve as a proxy for the value participants were gaining from volunteering? We could find opportunities for a small number of employees to volunteer and then look at the retention rate of those employees. How many of them sign up to volunteer again?

For the growth hypothesis, which tests how new customers will discover a product or service, we can do a similar analysis. Once the program is up and running, how will it spread among the employees, from initial early adopters to mass adoption throughout the company? A likely way this program could expand is through viral growth. If that is true, the most important thing to measure is behavior: would the early participants actively spread the word to other employees?
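
One way to make that viral-growth question measurable is the viral coefficient: the number of new participants each existing participant brings in. The invite counts, conversion rate, and cycle numbers below are hypothetical assumptions, not data from this example; they only illustrate why a coefficient above 1 matters.

```python
# Rough model of a viral growth hypothesis; all figures are hypothetical.

def viral_coefficient(invites_per_user, invite_conversion_rate):
    """New participants generated by each existing participant."""
    return invites_per_user * invite_conversion_rate

def project_participants(initial, k, cycles):
    """Total participants after n viral cycles; k > 1 means compounding growth."""
    total, current = initial, initial
    for _ in range(cycles):
        current = current * k   # each cycle's newcomers recruit the next wave
        total += current
    return total

# Suppose each volunteer invites 3 colleagues and 40% of invitees join.
k = viral_coefficient(invites_per_user=3, invite_conversion_rate=0.4)
print(f"viral coefficient k = {k:.2f}")
print(f"participants after 5 cycles: {project_participants(10, k, 5):.0f}")
```

With k below 1 each wave of recruits is smaller than the last and the program stalls, which is exactly the behavioral signal the experiment is designed to surface early.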

In this case, a simple experiment would involve taking a very small number—a dozen, perhaps—of existing long-term employees and providing an exceptional volunteer opportunity for them. Those customers tend to be more forgiving of mistakes and are especially eager to give feedback. Next, using a technique I call the concierge minimum viable product (described in detail in Chapter 6), Caroline could make sure the first few participants had an experience that was as good as she could make it, completely aligned with her vision.

Unlike in a focus group, her goal would be to measure what the customers actually did. How many volunteer a second time? How many are willing to recruit a colleague to participate in a subsequent volunteer activity?

Additional experiments can expand on this early feedback and learning. For example, if the growth model requires that a certain percentage of participants share their experiences with colleagues and encourage their participation, the degree to which that takes place can be tested even with a very small sample of people. If they are asked to recruit a colleague, how many do we expect will do so? Remember that these are supposed to be the kinds of early adopters with the most to gain from the program.

Put another way, what if all ten early adopters decline to volunteer again? That would be a highly significant—and very negative— result. We already have a cohort of people to talk to as well as knowledge about their actual behavior: the participants in the initial experiment. This entire experiment could be conducted in a matter of weeks, less than one-tenth the time of the traditional strategic planning process.

Also, it can happen in parallel with strategic planning while the plan is still being formulated. Even when experiments produce a negative result, those failures prove instructive and can influence the strategy. For example, what if no volunteers can be found who are experiencing the conflict of values within the organization that was such an important assumption in the business plan? If this or any other experiment is successful, it allows the manager to get started with his or her campaign: enlisting early adopters, adding employees to each further experiment or iteration, and eventually starting to build a product.

It will have solved real problems and will offer detailed specifications for what needs to be built. Unlike a traditional strategic planning or market research process, this specification will be rooted in feedback on what is working today rather than in anticipation of what might work tomorrow.

To see this in action, consider an example from Kodak. Do consumers recognize that they have the problem you are trying to solve? If there was a solution, would they buy it? Would they buy it from us? Can we build a solution for that problem? For example, Kodak Gallery offered wedding cards with gilded text and graphics on its site.

The market research and design process indicated that customers would like the new cards, and that finding justified the significant effort that went into creating them. They were also hard to produce. Cook realized that they had done the work backward.

In a break with the past, Cook led the group through a process of identifying risks and assumptions before building anything and then testing those assumptions experimentally. There were two main hypotheses underlying the proposed event album:

1. The team assumed that customers would want to create the albums in the first place.

2. It assumed that event participants would upload photos to event albums created by friends or colleagues.

The Kodak Gallery team built a simple prototype of the event album. It lacked many features—so many, in fact, that the team was reluctant to show it to customers. However, even at that early stage, allowing customers to use the prototype helped the team refute their hypotheses.

Further, customers complained that the early product version lacked essential features. Those negative results demoralized the team. The usability problems frustrated them, as did customer complaints about missing features, many of which matched the original road map. Cook explained that even though the product was missing features, the project was not a failure. Where customers complained about missing features, this suggested that the team was on the right track. The team now had early evidence that those features were in fact important.

Through a beta launch the team continued to learn and iterate. Through the use of online surveying tool KISSinsights, the team learned that many customers wanted to be able to arrange the order of pictures before they would invite others to contribute. This process represented a dramatic change for Kodak Gallery; employees were used to being measured on their progress at completing tasks.

Most people either hand wash their clothing at home or pay a Dhobi to do it for them. Dhobis take the clothes to the nearest river, wash them in the river water, bang them against rocks to get them clean, and hang them to dry, which takes two to seven days. The result? As the brand manager of the Tide and Pantene brands for India and ASEAN countries, he thought he could make laundry services available to people who previously could not afford them.

VLS began a series of experiments to test its business assumptions. The entrepreneurs did not clean the laundry on the truck, which was more for marketing and show, but took it off-site to be cleaned and brought it back to their customers by the end of the day. They wanted to know how they could encourage people to come to the truck. Did cleaning speed matter? Was cleanliness a concern? What were people asking for when they left their laundry with them?

They discovered that customers were happy to give them their laundry to clean. However, those customers were suspicious of the washing machine mounted on the back of the truck, concerned that VLS would take their laundry and run. VLS also experimented with parking the carts in front of a local minimarket chain. Further iterations helped VLS figure out which services people were most interested in and what price they were willing to pay. They discovered that customers often wanted their clothes ironed and were willing to pay double the price to get their laundry back in four hours rather than twenty-four hours.

The kiosk used Western detergents and was supplied daily with fresh clean water delivered by VLS. Since then, the Village Laundry Service has grown substantially, with fourteen locations operational in Bangalore, Mysore, and Mumbai. We have serviced more than 10, customers in the past year alone across all the outlets. The plan calls for it to accomplish this by setting up a call center where trained case workers will field calls directly from the public.

Left to its own devices, a new government agency would probably hire a large staff with a large budget to develop a plan that is expensive and time-consuming. However, the CFPB is considering doing things differently. In particular, his focus was on leveraging technology and innovation to make the agency more efficient, cost-effective, and thorough.

Using these insights, we could build a minimum viable product and have the agency up and running—on a micro scale—long before the official plan was set in motion.


