Driven to Distraction – the future of car safety

If you haven’t gotten a new car in a while you may not have noticed that the future of the dashboard looks like this:


That’s it. A single screen replacing all the dashboard gauges, knobs and switches. But behind that screen is an increasing level of automation that hides a ton of complexity.

At times everything you need is available on the screen at a glance. At other times you have to page through menus and poke at the screen while driving. And while driving at 70 mph, you have to work out whether you or your automated driving system is in control of the car. All while figuring out how to use any of the new features, menus or rearranged user interface that might have been updated overnight.

At the beginning of any technology revolution the technology gets ahead of the institutions designed to measure and regulate safety and standards. Both vehicle designers and regulators will eventually catch up, but in the meantime we’re on the steep part of a learning curve – part of a million-person beta test – to figure out the right driver-to-vehicle interface.

We went through this with airplanes. And we’re reliving that transition in cars. Things will break, but in a few decades we’ll come out the other side, look back and wonder how people ever drove any other way.

Here’s how we got here, what it’s going to cost us, and where we’ll end up.


Cars, Computers and Safety
Two massive changes are occurring in automobiles: 1) the transition from internal combustion engines to electric, and 2) the introduction of automated driving.

But a third, equally important change is also underway: the (r)evolution of car dashboards from dials and buttons to computer screens. For their first 100 years cars were essentially a mechanical platform – an internal combustion engine and transmission with seats – controlled by mechanical steering, accelerator and brakes. Instrumentation to monitor the car was made up of dials and gauges: a speedometer, tachometer, and fuel, water and battery gauges.
By the 1970’s driving became easier as automatic transmissions replaced manual gear shifting and hydraulically assisted steering and brakes became standard. Comfort features evolved as well: climate control – first heat, later air-conditioning; and entertainment – AM radio, FM radio, 8-track tape, CD’s, and today streaming media. In the last decade GPS-driven navigation systems began to appear.

Safety
At the same time cars were improving, automobile companies fought safety improvements tooth and nail. By the 1970’s auto deaths in the U.S. averaged 50,000 a year. Over 3.7 million people have died in cars in the U.S. since they appeared – more than all U.S. war deaths combined. (This puts auto companies in the rarefied class of companies – along with tobacco companies – that have killed millions of their own customers.) Car companies argued that talking about safety would scare off customers, or that the added cost of safety features would put them at a competitive price disadvantage. But in reality, style was valued over safety.

Safety systems in automobiles have gone through three generations – passive systems and two generations of active systems. Today we’re about to enter a fourth generation – autonomous systems.

Passive safety systems are features that protect the occupants after a crash has occurred. They started appearing in cars in the 1930’s, when safety glass in windshields was introduced in response to horrific disfiguring crashes. Padded dashboards were added in the 1950’s, but it took Ralph Nader’s book, Unsafe at Any Speed, to spur federally mandated passive safety features in the U.S. beginning in the 1960’s: seat belts, crumple zones, collapsible steering wheels, four-way flashers and even better windshields. The Department of Transportation was created in 1966, but it wasn’t until 1979 that the National Highway Traffic Safety Administration (NHTSA) started crash-testing cars (the Insurance Institute for Highway Safety started its testing in 1995). In 1984 New York State mandated seat belt use (now required in 49 of the 50 states).

These passive safety features started to pay off in the mid-1970’s as overall auto deaths in the U.S. began to decline.

Active safety systems try to prevent crashes before they happen. These depended on the invention of low-cost, automotive-grade computers and sensors. For example, accelerometers-on-a-chip made airbags possible as they were able to detect a crash in progress. These began to appear in cars in the late 1980’s/1990’s and were required in 1998. In the 1990’s computers capable of real-time analysis of wheel sensors (position and slip) made ABS (anti-lock braking systems) possible. This feature was finally required in 2013.
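To make the wheel-slip idea concrete, here’s a minimal sketch – illustrative Python, not automotive-grade code; the 20% slip threshold and the sensor readings are assumptions – of the calculation an ABS controller runs on each wheel many times a second:

```python
# Toy sketch of the wheel-slip logic behind ABS - illustrative only.
# Real ABS controllers are hard real-time embedded systems; the 20%
# slip threshold and the sensor values below are assumptions.

def slip_ratio(vehicle_speed, wheel_speed):
    """Slip = (vehicle speed - wheel speed) / vehicle speed."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_commands(vehicle_speed, wheel_speeds, max_slip=0.2):
    """For each wheel: 'release' brake pressure if it is locking up, else 'hold'."""
    return ["release" if slip_ratio(vehicle_speed, w) > max_slip else "hold"
            for w in wheel_speeds]

# At ~30 m/s (about 67 mph), the third wheel is turning far slower than
# the car is moving - it's starting to lock, so its brake is released.
print(abs_commands(30.0, [29.5, 29.8, 21.0, 29.6]))
# -> ['hold', 'hold', 'release', 'hold']
```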

Since 2005 a second generation of active safety features have appeared. They run in the background and constantly monitor the vehicle and space around it for potential hazards. They include: Electronic Stability Control, Blind Spot Detection, Forward Collision Warning, Lane Departure Warning, Rearview Video Systems, Automatic Emergency Braking, Pedestrian Automatic Emergency Braking, Rear Automatic Emergency Braking, Rear Cross Traffic Alert and Lane Centering Assist.

Autonomous Cars
Today, a fourth wave of safety features is appearing as Autonomous/Self-Driving features. These include lane centering/auto steer, adaptive cruise control, traffic jam assist, self-parking and full self-driving. The National Highway Traffic Safety Administration (NHTSA) has adopted the six-level SAE standard to describe these vehicle automation features:
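  • Level 0 – no automation: the driver does all the driving
  • Level 1 – driver assistance: the vehicle helps with steering or speed, one at a time
  • Level 2 – partial automation: the vehicle controls steering and speed together, while the driver monitors constantly
  • Level 3 – conditional automation: the vehicle drives and monitors the environment, but the driver must take over when asked
  • Level 4 – high automation: the vehicle drives itself within a defined domain, with no driver takeover needed there
  • Level 5 – full automation: the vehicle drives itself everywhere, under all conditions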

Getting above Level 2 is a really hard technical problem and has been discussed ad infinitum elsewhere. But what hasn’t gotten much attention is how drivers interact with these systems as the level of automation increases, and as the driving role shifts from the driver to the vehicle. Today, we don’t know whether there are times when these features make cars less safe rather than safer.

For example, Tesla and other cars have Level 2 and some Level 3 auto-driving features. Under Level 2 automation, drivers are supposed to monitor the automated driving because the system can hand control of the car back to them with little or no warning. Under Level 3 automation drivers are not expected to monitor the environment, but they are still expected to be prepared to take control of the vehicle at all times – this time with notice.

Research suggests that drivers, when they aren’t actively controlling the vehicle, may be reading their phones, eating, looking at the scenery, etc. We really don’t know how drivers will perform under Level 2 and 3 automation. Drivers can lose situational awareness when they’re surprised by the behavior of the automation – asking: What is it doing now? Why did it do that? Or, what is it going to do next? There are open questions as to whether drivers can attain/sustain sufficient attention to take control before they hit something. (Trust me, at highway speeds having a “take over immediately” symbol pop up while you are gazing at the scenery raises your blood pressure, and hopefully your reaction time.) If these technical challenges weren’t enough for drivers to manage, these autonomous driving features are appearing at the same time that car dashboards are becoming computer displays.

We never had cars that worked like this. Not only will users have to get used to dashboards that are now computer displays, they are going to have to understand the subtle differences between automated and semi-automated features – and do so as auto makers develop and constantly update them. They may not have much help mastering the changes. Most users don’t read the manual, and, in some cars, the manuals aren’t even keeping up with the new features.

But while we never had cars that worked like this, we already have planes that do.
Let’s see what we’ve learned in 100 years of designing controls and automation for aircraft cockpits and pilots, and what it might mean for cars.

Aircraft Cockpits
Airplanes have gone through multiple generations of aircraft and cockpit automation. But unlike cars, which are only now getting automated systems, airplanes saw their first automation in the 1920’s and 1930’s.

For their first 35 years airplane cockpits, much like early car dashboards, were simple – a few mechanical instruments for speed, altitude, relative heading and fuel. By the late 1930’s the British Royal Air Force (RAF) standardized on a set of flight instruments. Over the next decade this evolved into the “Basic T” instrument layout – the de facto standard of how aircraft flight instruments were laid out.

Engine instruments were added to measure the health of the aircraft engines – fuel and oil quantity, pressure and temperature, and engine speed.

Next, as airplanes became bigger and the aerodynamic forces increased, it became difficult to move the control surfaces manually, so pneumatic or hydraulic motors were added to amplify the pilots’ physical force. Mechanical devices like yaw dampers and Mach trim compensators corrected the behavior of the plane.

Over time, navigation instruments were added to cockpits. At first these were simple autopilots that just kept the plane straight and level and on a compass course. The next addition was a radio receiver to pick up signals from navigation stations: pilots could set the desired bearing to the ground station into a course deviation display, and the autopilot would fly the displayed course.

In the 1960s, electrical systems began to replace the mechanical systems:

  • electric gyroscopes (INS) and autopilots using VOR (Very High Frequency Omni-directional Range) radio beacons to follow a track
  • auto-throttle – to manage engine power in order to maintain a selected speed
  • flight director displays – to show pilots how to fly the aircraft to achieve a preselected speed and flight path
  • weather radars – to see and avoid storms
  • Instrument Landing Systems – to help automate landings by giving the aircraft horizontal and vertical guidance.

By 1960 a modern jet cockpit (the Boeing 707) looked like this:

While it might look complicated, each of the aircraft instruments displayed a single piece of data. Switches and knobs were all electromechanical.

Enter the Glass Cockpit and Autonomous Flying
Fast forward to today and the third generation of aircraft automation. Today’s aircraft might look similar from the outside but on the inside four things are radically different:

  1. The clutter of instruments in the cockpit has been replaced with color displays creating a “glass cockpit”
  2. The airplane’s engines got their own dedicated computer systems – FADEC (Full Authority Digital Engine Control) – to autonomously control the engines
  3. The engines themselves are an order of magnitude more reliable
  4. Navigation systems have turned into full-blown autonomous flight management systems

So today a modern airplane cockpit (an Airbus A320) looks like this:

Today, airplane navigation is a real-world example of autonomous driving – in the sky. Two additional systems, the Terrain Awareness and Warning System (TAWS) and the Traffic Collision Avoidance System (TCAS), gave pilots a view of what’s underneath and around them, dramatically increasing pilots’ situational awareness and flight safety. (Autonomy in the air is technically a much simpler problem, because in the cruise portion of flight there are far fewer things to worry about than there are around a car.)

Navigation in planes has turned into autonomous “flight management.” Instead of a course deviation dial, navigation information is now presented as a “moving map” on a display showing the position of navigation waypoints, defined by latitude and longitude. The airplane’s position is no longer fixed by ground radio stations; it’s determined by Global Positioning System (GPS) satellites or autonomous inertial reference units. The route of flight is pre-programmed by the pilot (or uploaded automatically), and the pilot can engage the autopilot to fly the displayed route autonomously. Pilots enter navigation data into the Flight Management System with a keyboard. The flight management system also automates vertical and lateral navigation, fuel and balance optimization, throttle settings, critical speed calculations and the execution of take-offs and landings.
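As a rough illustration of the navigation math underneath that moving map – a toy sketch using the standard great-circle bearing formula, with made-up coordinates, not any real FMS code – here’s how a flight management system might compute the course to steer toward the next waypoint:

```python
import math

# Toy sketch: the great-circle math behind "fly to the next waypoint."
# Given the aircraft's GPS position and a waypoint (latitude/longitude),
# compute the initial course to steer. Illustrative only - a real FMS
# does far more (winds, route legs, vertical profiles, etc.).

def initial_course_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees true, from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# Example with made-up waypoints: from over San Francisco (37.6N, 122.4W)
# to a fix near Los Angeles (33.9N, 118.4W), steer roughly 138 degrees true.
print(round(initial_course_deg(37.6, -122.4, 33.9, -118.4)))  # -> 138
```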

Automating the airplane cockpit relieved pilots of repetitive tasks and allowed less-skilled pilots to fly safely. Commercial airline safety increased dramatically as the commercial jet airline fleet quadrupled in size, from ~5,000 aircraft in 1980 to over 20,000 today. (Most passengers today would be surprised to find out how much of their flight was flown by the autopilot versus the pilot.)

Why Cars Are Like Airplanes
And here lies the connection between what’s happened to airplanes with what is about to happen to cars.

The downside of glass cockpits and cockpit automation is that pilots no longer actively operate the aircraft but instead monitor it. And humans are particularly poor at monitoring for long periods. Pilots have lost basic manual and cognitive flying skills through lack of practice and feel for the aircraft. In addition, the need to “manage” the automation, particularly when it involves data entry or retrieval through a keypad, increased rather than decreased the pilot workload. And when systems fail, poorly designed user interfaces reduce a pilot’s situational awareness and can create cognitive overload.

Today, pilot error – not mechanical failure – causes at least 70-80% of commercial airplane accidents. The FAA and NTSB have been analyzing crashes and writing extensively on how flight deck automation affects pilots. (Crashes like Asiana 214 happened when pilots selected the wrong mode on a computer screen.) The FAA has written the definitive document on how people and automated systems ought to interact.

In the meantime, the National Highway Traffic Safety Administration (NHTSA) has found that 94% of car crashes are due to human error – bad choices drivers make, such as inattention, distraction, driving too fast, poor judgment/performance, drunk driving and lack of sleep.

NHTSA has begun to investigate how people will interact with both displays and automation in cars. They’re beginning to figure out:

  • What’s the right way to design a driver-to-vehicle interface on a screen to show:
    • Vehicle status gauges and knobs (speedometer, fuel/range, time, climate control)
    • Navigation maps and controls
    • Media/entertainment systems
  • How do you design for situational awareness?
    • What’s the best driver-to-vehicle interface to display the state of vehicle automation and Autonomous/Self-Driving features?
    • How do you manage the information available to understand what’s currently happening and project what will happen next?
  • What’s the right level of cognitive load when designing interfaces for decisions that have to be made in milliseconds?
    • What’s the distraction level from mobile devices? For example, how does your car handle your phone? Is it integrated into the system or do you have to fumble to use it?
  • How do you design a user interface for millions of users whose ages span 16 to 90, with different eyesight, reaction times, and abilities to learn new screen layouts and features?

Some of their findings are in the document Human-centric design guidance for driver-vehicle interfaces. But what’s striking is how little of the NHTSA documents reference the decades of expensive lessons the aircraft industry has learned. Glass cockpits and aircraft autonomy have traveled this road before. Even though aviation safety lessons have to be tuned to the different reaction times needed in cars (airplanes fly 10 times faster, yet pilots often have seconds or minutes to respond to problems, while in a car decisions often have to be made in milliseconds), there’s a lot the two industries can learn from each other. Aviation has gone nine years in the U.S. with just one fatality, yet in 2017 alone 37,000 people died in car crashes in the U.S.

There Are No Safety Ratings for Your Car As You Drive
In the U.S., aircraft safety has been proactive. Since 1927, new types of aircraft (and each sub-assembly) have been required to get type approval from the FAA before they can be sold and be issued an Airworthiness Certificate.

Unlike aircraft, car safety in the U.S. has been reactive. New models don’t require a type approval, instead each car company self-certifies that their car meets federal safety standards. NHTSA waits until a defect has emerged and then can issue a recall.

If you want to know how safe your model of car will be in a crash, you can look at the National Highway Traffic Safety Administration (NHTSA) New Car Assessment Program (NCAP) crash tests, or the Insurance Institute for Highway Safety (IIHS) safety ratings. Both summarize how well the active and passive safety systems will perform in frontal, side, and rollover crashes. But today there are no equivalent ratings for how safe cars are while you’re driving them. What counts as a good versus a bad user interface, and do they have different crash rates? Do the transitions among Level 1, 2 and 3 autonomy confuse drivers to the point of causing crashes? How do you measure and test these systems? What’s the role of regulators in doing so?

Given that the NHTSA and the FAA are both in the Department of Transportation (DoT), it makes you wonder whether these government agencies actively talk to and collaborate with each other, have integrated programs and common best practices, and have extracted best practices from the NTSB. And judging from the early efforts of Tesla, Audi, Volvo, BMW, etc., it’s not clear the car companies have looked at the airplane lessons either.

It seems like the logical thing for NHTSA to do during this autonomous transition is to 1) start defining “best practices” for U/I and automation safety interfaces and 2) test Level 2-4 cars for safety while you drive (like the crash tests, but for situational awareness, cognitive load, etc., across a set of driving scenarios). There are great university programs already doing that research.

However, the DoT’s Automated Vehicles 3.0 plan moves the agency further from owning the role of defining “best practices” for U/I and automation safety interfaces. It assumes that car companies will do a good job self-certifying these new technologies. And it has no plans for safety testing and rating these new Level 2-4 autonomous features.

(Keep in mind that publishing best practices and testing for autonomous safety features is not the same as imposing regulations to slow down innovation.)

It looks like it might take an independent organization like the SAE to propose some best practices and ratings. (Or, a slimmer possibility, the auto industry comes together and sets de facto standards.)

The Chaotic Transition
It took 30 years, from 1900 to 1930, to transition from horses and buggies in city streets to automobiles dominating traffic. During that time former buggy drivers had to learn a completely new set of rules to control their cars. And the roads in those 30 years were a mix of traffic – it was chaotic.
In New York City the tipping point was 1908 when the number of cars passed the number of horses. The last horse-drawn trolley left the streets of New York in 1917. (It took another decade or two to displace the horse from farms, public transport and wagon delivery systems.) Today, we’re about to undergo the same transition.

Cars are on the path for full autonomy, but we’re seeing two different approaches on how to achieve Level 4 and 5 “hands off” driverless cars. Existing car manufacturers, locked into the existing car designs, are approaching this step-wise – adding additional levels of autonomy over time – with new models or updates; while new car startups (Waymo, Zoox, Cruise, etc.) are attempting to go right to Level 4 and 5.

We’re going to have 20 or so years with the roads full of a mix of millions of cars – some being manually driven, some with Level 2 and 3 driver assistance features, and others autonomous vehicles with “hands-off” Level 4 and 5 autonomy. It may take at least 20 years before autonomous vehicles become the dominant platforms. In the meantime, this mix of traffic is going to be chaotic. (Some suggest that during this transition we require autonomous vehicles to have signs in their rear window, like student drivers, but this time saying, “Caution AI on board.”)

As there will be no government best practices for U/I or scores for autonomy safety, learning and discovery will be happening on the road. That makes the ability for car companies to have over-the-air updates for both the dashboard user interface and the automated driving features essential. Incremental and iterative updates will add new features, while fixing bad ones. Engaging customers to make them realize they’re part of the journey will ultimately make this a successful experiment.

My bet is much like when airplanes went to glass cockpits with increasingly automated systems, we’ll create new ways drivers crash their cars, while ultimately increasing overall vehicle safety.

But in the next decade or two, with the government telling car companies “roll your own”, it’s going to be one heck of a ride.

Lessons Learned

  • There’s a (r)evolution underway as car dashboards move from dials and buttons to computer screens, arriving at the same time as automated driving
    • Computer screens and autonomy will both create new problems for drivers
    • There are no standards to measure the safety of these systems
    • There are no standards for how information is presented
  • Aircraft cockpit designers are 10 to 20 years ahead of car companies in studying and solving this problem
    • Car and aircraft regulators need to share their learnings
    • Car companies can reduce crashes and deaths if they look to aircraft cockpit design for car user interface lessons
  • The Department of Transportation has removed barriers to the rapid adoption of autonomous vehicles
    • Car companies “self-certify” whether their U/I and autonomy are safe
    • There are no equivalents of crash safety scores for driving safety with autonomous features
  • Over-the-air updates for car software will become essential
    • But the downside is they could dramatically change the U/I without warning
  • On the path for full autonomy we’ll have three generations of cars on the road
  • The transition will be chaotic, so hang on, it’s going to be a bumpy ride, but the destination – safety for everyone on the road – will be worth it

The Red Queen Problem – Innovation in the DoD and Intelligence Community

“…it takes all the running you can do, to keep in the same place.”
The Red Queen, Through the Looking-Glass

Innovation, disruption and accelerators have all become urgent buzzwords in the Department of Defense and Intelligence Community. They are a reaction to the “Red Queen problem” but aren’t actually solving the problem. Here’s why.


In the 20th century our nation faced a single adversary – the Soviet Union. During the Cold War the threat from the Soviets was quantifiable and often predictable. We could specify requirements, budget and acquire weapons based on a known foe. We could design warfighting tactics based on knowing the tactics of our opponent. Our defense department and intelligence community owned proprietary advanced tools and technology. We and our contractors had the best technology domain experts. We could design and manufacture the best systems. We used these tools to keep pace with the Soviet threats and eventually used silicon, semiconductors and stealth to create an offset strategy to leapfrog their military.

That approach doesn’t work anymore. In the 21st century you need a scorecard to keep track of the threats: Russia, China, North Korea, Iran, ISIS in Yemen/Libya/Philippines, Taliban, Al-Qaeda, hackers for hire, etc. Some are strategic peers, some are near peers in specific areas, some are threats as non-state disrupters operating with no rules.

In addition to the proliferation of threats, most of the tools and technologies that were uniquely held by the DoD/IC or only within the reach of large nation states are now commercially available (Cyber, GPS, semiconductors, analytics, centrifuges, drones, genetic engineering, agile and lean methodologies, ubiquitous Internet, crypto and smartphones, etc.). In most industries, manufacturing is no longer a core competence of the U.S.

U.S. agencies that historically owned technology superiority and fielded cutting-edge technologies now find that off-the-shelf solutions may be more advanced than the solutions they are working on, or that adversaries can rapidly create asymmetric responses using these readily available technologies.

The result is that our systems, organizations, headcount and budget – designed for 20th century weapons procurements and warfighting tactics on a predictable basis – can’t scale to meet all these simultaneous and unpredictable challenges. Today, our DoD and national security agencies are running as hard as they can just to stay in place, but our adversaries are continually innovating faster than our traditional systems can respond. They have gotten inside our OODA loop (Observe, Orient, Decide and Act).

We believe that continuous disruption can only be met with a commitment to continuous innovation.

Pete Newell and I have spent a lot of time bringing continuous innovation to government organizations. Newell ran the U.S. Army’s Rapid Equipping Force on the battlefields of Iraq and Afghanistan, finding and deploying technology solutions against agile insurgents. He’s spent the last four years in Silicon Valley out of uniform continuing that work. I’ve spent the last six years teaching our country’s scientists how to rapidly turn scientific breakthroughs into deliverable products by creating the curriculum for the National Science Foundation Innovation Corps – now taught in 53 universities. Together Pete, Joe Felter and I created Hacking for Defense, a nationwide program to teach university students how to use Lean methodologies to solve defense and national security problems.

The solution to continuous disruption requires new ways to think about, organize, and build and deploy national security people, organizations and solutions.

Here are our thoughts about how to confront the Red Queen trap and adapt a government agency to infuse continuous innovation in its culture and practices.

Problem 1: Regardless of a high-level understanding that business as usual can’t go on, all agencies are given guidance and metrics – what they are supposed to do (their “mission”) and how they are supposed to measure success. To no one’s surprise the guidance is “business as usual but more of it.” And to fulfill that guidance, agencies create structure (divisions, directorates, etc.) designed to execute repeatable processes and procedures to deliver solutions that meet the requirements of the overall guidance.

Inevitably, while all of our defense and national security agencies will tell you that innovation is one of their pillars, innovation is actually an ill-defined and amorphous aspirational goal, while the people, budget and organization continue to flow to execution of the mission (as per the guidance).

There is no guidance or acknowledgement that in our national security agencies, even as we execute our current mission, our capabilities decline every year due to security breaches, technology timing out, tradecraft obsolescence, etc. And there is no explicit requirement for creation of new capabilities that give us the advantage.

Solution 1: Extend agency guidance to include the requirement to create a continuous innovation process that a) resupplies the continual attrition of capabilities and b) creates new capabilities that give us a mission advantage. The result will be agency leadership creating new organizational structures that make innovation a continual process rather than an ad hoc series of heroic efforts.

Problem 2: The word “Innovation” actually describes three very different types of activities.

Solution 2: Use the McKinsey Three Horizons Model to differentiate among the three types. Horizon 1 ideas provide continuous innovation to a company’s existing mission model and core capabilities. Horizon 2 ideas extend a company’s existing mission model and core capabilities to new stakeholders, customers, or targets. Horizon 3 is the creation of new capabilities to take advantage of or respond to disruptive technologies/opportunities or to counter disruption.

We’d add a new category, Horizon 0, which kills ideas that are not viable or feasible (something that Silicon Valley is tremendously efficient at doing).

These Horizons also apply to government agencies and other large organizations. Agencies and commands need to support all three horizons.

Problem 3: Risk equals failure and failure is to be avoided as it indicates a lack of competence.

Solution 3: The three-horizon model allows everyone to understand that failure in a Horizon 1/existing mission activity is different than failure in a Horizon 3 “never been done before” activity. We want to take risks in Horizon 3. If we aren’t failing with some efforts, we aren’t trying hard enough. An innovation process embraces and understands the different types of failure and risk.

Problem 4: Innovators tend to create activities rather than deployable solutions that can be used on the battlefield or by the mission. Accelerators, hubs, cafes, open-sourcing, crowd-sourcing, maker spaces, Chief Innovation Officers, etc. are all great, but they tend to create innovation theater – lots of motion but no action. Great demos are shown and there are lots of coffee cups and posters, but if you look at the deliverables for the mission over a period of years, the results are disappointing. Most of the executors and operators have seen little or no value from any of these activities. While the activities individually may produce things of value, they aren’t valued within the communities they serve because they aren’t connected to a complete pipeline that harnesses that value and turns it into a deliverable on the battlefield where it matters.

Solution 4: What we have been missing is an innovation pipeline focused on deployment not demos.

The Lean Innovation process is a self-regulating, evidence-based innovation pipeline. It is a process that operates with speed and urgency, where innovators and stakeholders curate and prioritize their own problems/Challenges/ideas/technology. It is evidence based, data driven, accountable, disciplined, rapid and mission- and deployment-focused.

The process recognizes that innovation isn’t a single activity (an incubator, a class, etc.); it is a process that runs from start to deployment.
The canonical innovation pipeline:

As you can see in the diagram, there are six steps in the innovation pipeline: sourcing, challenge/curation, prioritization, solution exploration and hypothesis testing, incubation, and integration.

Innovation sourcing: a list of problems/challenges, ideas, and technologies that might be worth investing in. These can come from hackathons, research groups, needs from operators in the field, etc.

Challenge/Curation: innovators get out of their own offices and talk to colleagues and customers with the goal of finding other places in the DoD where a problem or challenge might exist in a slightly different form, to identify related internal projects already in existence, and to find commercially available solutions to problems. It also seeks to identify legal issues, security issues, and support issues.

This process also helps identify who the customers for possible solutions would be, who the internal stakeholders would be, and even what initial minimum viable products might look like.

This phase also includes building initial minimum viable products (MVPs). Some ideas drop out when the team recognizes that they may be technically, financially, or legally unfeasible, or when they discover that other groups have already built a similar product.

Prioritization: Once a list of innovation ideas has been refined by curation, it needs to be prioritized using the McKinsey Three Horizons Model.

Once projects have been classified, the team prioritizes them, starting by asking: is this project worth pursuing for another few months full time? This prioritization is not done by a committee of executives but by the innovation teams themselves.

Solution exploration and hypothesis testing: The ideas that pass through the prioritization filter enter an incubation process like Hacking for Defense/I-Corps, the system adopted by all U.S. federal research agencies to turn ideas into products.

This six- to ten-week process delivers evidence for defensible, data-based decisions. For each idea, the innovation team fills out a mission model canvas. Everything on that canvas is a hypothesis. This not only includes the obvious – is there solution/mission fit? – but also the other “gotchas” that innovators always seem to forget. The framework has the team talking not just to potential customers but also with people responsible for legal, support, contracting, policy, and finance. It also requires that they think through compatibility, scalability and deployment long before this gets presented to engineering. There is now another major milestone for the team: to show compelling evidence that this project deserves to be a new mainstream capability. Alternatively, the team might decide that it should be spun into its own organization or that it should be killed.

Incubation: Once hypothesis testing is complete, many projects will still need a period of incubation as the teams championing the projects gather additional data about the application, further build the minimum viable product (MVP), and get used to working together. Incubation requires dedicated leadership oversight from the Horizon 1 organization to ensure the fledgling project does not die of malnutrition (a lack of access to resources) or become an orphan (continuing to work with no parent to guide it).

Integration and refactoring: At this point, if the innovation is Horizon 1 or 2, it’s time to integrate it into the existing organization. (Horizon 3 innovations are more likely to be set up as their own entities, or at least their own divisions.) Trying to integrate new, unbudgeted, and unscheduled innovation projects into an engineering organization that has line-item budgets for people and resources results in chaos and frustration. In addition, innovation projects carry both technical and organizational debt. This creates an impedance mismatch between the organizations, one that can easily be resolved with a small dedicated refactoring team. Innovation then becomes a continuous cycle rather than a bottleneck.

Problem 5: The question being asked across the Department of Defense and national security community is, “Can we innovate like startups in Silicon Valley and insert speed, urgency and agility into our work?”

Solution 5: The reality is that the DoD/IC is not Silicon Valley. In fact, it’s much more like a large company with existing customers, existing products and the organizations built to support and service them. And much like large companies they are being disrupted by forces outside their control.

But what’s unique is that, unlike a large company that doesn’t know how to move rapidly, on the battlefields of Iraq and Afghanistan our combatant commands and national security community were more agile, creative and Lean than any startup. They wrote the book on how to collaborate (read Team of Teams) and how to adopt new technologies (see the Rapid Equipping Force). The problem isn’t that these agencies and commands don’t know how to be innovative. The problem is that they don’t know how to be innovative in peacetime, when innovation succumbs to the daily demands of execution. Part of the reason is that large agencies are run by leaders who tend to be excellent Horizon 1 managers of existing people, processes and resources but have no experience building and leading Horizon 3 organizations.

The solution is to understand that an innovation pipeline requires different people, processes, procedures, and metrics than execution does.

Problem 6: How to get started? How to get leadership behind continuous innovation?

Solution 6: To leadership, incubators, cafes, accelerators and hackathons appear to be just background noise, unrelated to their guidance and mission. Part of the problem lies with the innovators themselves. Lots of innovation activities celebrate the creation of demos, funding, and new makerspaces, but there is little accountability for the actual rapid deployment of useful tools. Once we can demonstrate to leadership that continuous innovation can solve the Red Queen problem, we’ll have their attention and support.

We know how to do this. Our country requires it.
Let’s get started.

Lessons Learned

  • Organizations must constantly adapt and evolve to survive when pitted against ever-evolving opposition in an ever-changing environment
  • Government agencies need to both innovate and execute
  • In peacetime innovation succumbs to the demands of execution
  • We need explicit innovation guidance to agencies and their leadership, requiring an innovation organization and process that operates in parallel with the execution of the current mission
  • We need an innovation pipeline that delivers rapid results, not separate, disconnected innovation activities

National Security Innovation just got a major boost in Washington

Two good things just happened in Washington – these days that should be enough of a headline.

First, someone ideal was just appointed to be Deputy Assistant Secretary of Defense.

Second, funding to teach our Hacking for Defense class across the country was just added to the National Defense Authorization Act.

Interestingly enough, both events are about how the best and brightest can serve their country – and are testament to the work of two dedicated men.

Soldier, Scholar, Entrepreneur
Joe Felter was just appointed Deputy Assistant Secretary of Defense for South and Southeast Asia. As a result, our country just became a bit safer and smarter. That’s because Joe brings a wealth of real-world experience and leadership to the role.

I got lucky to know and teach with Joe at Stanford. When we met, my first impression was that of a very smart and pragmatic academic. And I also noticed that there was always a cloud of talented grad students who wanted to follow him. (I learned later I was watching one of the qualities of a great leader.) Joe had appointments at Stanford’s Center for International Security and Cooperation (CISAC), where he was the co-director of the Empirical Studies of Conflict Project, and at the Hoover Institution, where he was a research fellow. I learned he’d gone to Harvard to get his MPA at the Kennedy School of Government in conflict resolution. But the thing that really caught my attention: his Stanford Ph.D. thesis in Political Science had the world’s best title: “Taking Guns to a Knife Fight: A Case for Empirical Study of Counterinsurgency.” I wondered how this academic knew anything about counterinsurgency.

This was another reminder that when you reach a certain age, people you encounter may have lived multiple lives, had multiple careers, and had multiple acts. It took me a while to realize that Joe had one heck of a first act before coming to Stanford in 2011.

As I later discovered, Joe’s first act was 24 years in the Army Special Operations Forces (SOF), retiring as a Colonel.
His Special Forces time was with the 1st Special Forces Group as a team leader and later as a company commander. He did a tour with the 75th Ranger Regiment as a platoon leader. In 2005, he returned to West Point (where he earned his undergrad degree) and ran the Combating Terrorism Center. Putting theory into practice, he went to Iraq in 2008 as part of the 75th Ranger Regiment, in support of a Joint Special Operations Task Force. In 2010 Joe was in Afghanistan as the Commander of the Counterinsurgency Advisory and Assistance Team. At various points his Special Forces career took him to countries in Southeast Asia where counterinsurgency was not just academics.

Ironically, I was first introduced to Joe not at Stanford but through one of his other lives – that of an entrepreneur and businessman – at the company he founded, BMNT Partners. It was there that Joe and I, along with another retired Army Colonel, Pete Newell, came up with the idea of creating the Hacking for Defense class. We combined the Lean Startup methodology – used by the National Science Foundation to commercialize science – with the rapid problem sourcing and solution methodology Pete developed on the battlefields of Afghanistan and Iraq when he ran the U.S. Army’s Rapid Equipping Force.

My interest was to get Stanford students engaged in national service and exposed to parts of the U.S. government where their traditional academic path and business career would never take them. (I have a strong belief that we’ve run a 44-year experiment with what happens when you disconnect the majority of Americans from any form of national service. And the result hasn’t been good for our country. Today if college students want to give back to their country, they think of Teach for America, the Peace Corps, or AmeriCorps – or perhaps the U.S. Digital Service or the GSA’s 18F. Few consider opportunities to make the world safer with the Department of Defense, State Department, Intelligence Community or other government agencies.)

Joe, Pete and I would end up building a curriculum that would turn into a series of classes — first, Hacking for Defense, then Hacking for Diplomacy (with the State Department and Professor Jeremy Weinstein), Hacking for Energy, Hacking for Impact, etc.

Hacking For Defense
Our first Hacking for Defense class in 2016 blew past our expectations – and we had set a pretty high bar. (See the final class presentations here and here).

Our primary goal was to teach students entrepreneurship while they engaged in national public service.

Our second goal was to introduce our sponsors – the innovators inside the Department of Defense and Intelligence Community –  to a methodology that can help them understand and better respond to rapidly evolving asymmetric threats. We believed if we could get teams to rapidly discover the real problems in the field using Lean methods, and only then articulate the requirements to solve them, then defense acquisition programs could operate at speed and urgency and deliver timely and needed solutions.

Finally, we also wanted to show our sponsors in the Department of Defense that students can make meaningful contributions to understanding problems and rapid prototyping of solutions to real-world national security problems.

The Innovation Insurgency Spreads
Fast forward a year. Hacking for Defense is now offered at eight universities in addition to Stanford – Georgetown, University of Pittsburgh, Boise State, UC San Diego, James Madison University, University of Southern Mississippi, and later this year University of Southern California and Columbia University. We established Hacking for Defense.org, a non-profit, to train educators and provide a single point of contact for connecting DOD/IC sponsor problems to these universities.

By the middle of this year Hacking For Defense started to feel like it had the same momentum as when my Lean LaunchPad class at Stanford got adopted by the National Science Foundation and became the Innovation Corps (I-Corps). I-Corps uses Lean Startup methods to teach scientists how to turn their discoveries into entrepreneurial, job-producing businesses. Over 1,000 teams of our nation’s best scientists have been through the program. It has changed how federally funded research is commercialized.

Recognizing that it’s a model for a government program that’s gotten the balance of public/private partnership just right, last fall Congress passed the American Innovation and Competitiveness Act, making the National Science Foundation Innovation Corps a permanent part of the nation’s science ecosystem.

It dawned on Pete, Joe and me that perhaps we could get Congress to fund the national expansion of Hacking for Defense the same way. But serendipitously, the best person we were going to ask for help had already been thinking about this.

The Congressman From Science and Innovation
Before everyone else thought that teaching scientists how to build companies using Lean Methods might be good for the country, there was one congressman who got it first.

In 2012, Rep. Dan Lipinski (D-IL), ranking member on the House Research and Technology Subcommittee, got on an airplane and flew to Stanford to see first-hand the class that would become I-Corps. For the first few years Lipinski was a lonely voice in Congress saying that we’ve found a better way to train our scientists to create companies and jobs. But over time, his colleagues became convinced that it was a non-partisan good idea. Rep. Lipinski was responsible for helping I-Corps proliferate through the federal government.

While Joe Felter and Pete Newell were thinking about approaching Congressman Lipinski about funding for Hacking for Defense, Lipinski had already been planning to do so. As he recalled, “I was listening to your podcast as I was working in my backyard cutting, digging, chopping, etc. (yes, I do really work in my backyard) when it dawned on me that funding Hacking for Defense as a national program – just like I did for the Innovation Corps – would be great for our nation’s defense when we are facing new unique threats. I tasked my staff to draft an amendment to the National Defense Authorization Act and I sponsored the amendment.”

(The successful outcome of I-Corps has given the Congressman credibility on entrepreneurship education among his peers. And it doesn’t hurt that he has a Ph.D. and was a university professor before he ended up in Congress.)

Joe Felter and Pete Newell mobilized a network of Hacking for Defense supporters. Joe and Pete’s reputations preceded them on Capitol Hill, but, in part as a testament to the strength of Hacking for Defense, there’s now a large network of people who have experienced and believe in the program and were willing to help out by writing letters of support, reaching out to other members of Congress to ask for support, and providing Congressman Lipinski’s office with information and background.

Congressman Lipinski led the amendment. He brought on co-sponsors from both sides of the aisle: Representatives Steve Knight (R-CA 25), Ro Khanna (D-CA 17), Anna Eshoo (D-CA 18), Seth Moulton (D-MA 6) and Carol Shea-Porter (D-NH 1).

On the floor of the House, Lipinski said, “Rapid, low-cost technological innovation is what makes Silicon Valley revolutionary, but the DOD hasn’t historically had the mechanisms in place to harness this American advantage. Hacking for Defense creates ways for talented scientists and engineers to work alongside veterans, military leaders, and business mentors to innovate solutions that make America safer.”

Last Friday the House unanimously approved an amendment to the National Defense Authorization Act authorizing the Hacking for Defense (H4D) program and enabling the Secretary of Defense to expend up to $15 million to support development of curriculum, best practices, and recruitment materials for the program.

This week the H4D amendment moves on to the Senate and Joe Felter moves on to the Pentagon. Both of those events have the potential to make our world a much safer place – today and tomorrow.

Innovation, Change and the Rest of Your Life

I gave the Alumni Day talk at U.C. Santa Cruz and had a few things to say about innovation.

—-

Even though I live just up the coast, I’ve never had the opportunity to start a talk by saying “Go Banana Slugs.”

I’m honored for the opportunity to speak here today.

We’re standing 15 air miles away from the epicenter of technology innovation. The home of some of the most valuable and fastest growing companies in the world.

I’ve spent my life in innovation, eight startups in 21 years, and the last 15 years in academia teaching it.

I lived through the time when, working at my first job in Ann Arbor, Michigan, we had to get out a map to discover that San Jose was not only in Puerto Rico – there was also a city with that same name in California. And that’s where my plane ticket was to take me, to install some computer equipment.

39 years ago I got on that plane and never went back.

I’ve seen the Valley grow from Sunnyvale to Santa Clara to today where it stretches from San Jose to South of Market in San Francisco.  I’ve watched the Valley go from Microwave Valley – to Defense Valley – to Silicon Valley to Internet Valley. And to today, when its major product is simply innovation.  And I’ve been lucky enough to watch innovation happen not only in hardware and software but in Life Sciences – in Therapeutics, Medical Devices, Diagnostics and now Digital Health.

I’ve been asked to talk today about the future of innovation – typically that involves giving you a list of hot technologies to pay attention to, technologies like machine learning. The applications that will pour out of just this one technology will transform every industry – from autonomous vehicles to automated radiology/oncology diagnostics.

Equally transformative on the life science side, CRISPR/Cas enables rapid editing of the genome, and that will change life sciences as radically as machine intelligence.

But today’s talk about the future of innovation is not about these technologies, or the applications or the new industries they will spawn.

In fact, it’s not about any specific new technologies.

The future of innovation is really about seven changes that have made innovation itself possible in a way that never existed before.

We’ve created a world where innovation is not just each hot new technology, but a perpetual motion machine.

So how did this happen?  Where is it going?

Silicon Valley emerged by the serendipitous intersection of:

  • Cold War research in microwaves and electronics at Stanford University,
  • a Stanford Dean of Engineering who encouraged startup culture over pure academic research,
  • Cold War military and intelligence funding driving microwave and military products for the defense industry in the 1950’s,
  • a single Bell Labs researcher deciding to start his semiconductor company next to Stanford in the 1950’s which led to
  • the wave of semiconductor startups in the 1960’s/70’s,
  • the emergence of Venture Capital as a professional industry,
  • the personal computer revolution in 1980’s,
  • the rise of the Internet in the 1990’s,
  • the wave of internet commerce applications in the first decade of the 21st century, and finally
  • the flood of risk capital into startups at a size and scale that was not only unimaginable at its start, but would have seemed laughable in the middle of the 20th century.

Up until the beginning of this century, the pattern for the Valley seemed to be clear. Each new wave of innovation – microwaves, defense, silicon, disk drives, PCs, Internet, therapeutics, – was like punctuated equilibrium – just when you thought the wave had run its course into stasis, there emerged a sudden shift and radical change into a new family of technology. 

But in the 20th Century there were barriers to Entrepreneurship
In the last century, while startups continued to innovate in each new wave of technology, the rate of innovation was constrained by limitations we only now can understand. Startups in the past were constrained by:

  1. customers were initially the government and large companies and they adopted technology slowly,
  2. long technology development cycles (how long it takes to get from idea to product),
  3. disposable founders,
  4. the high cost of getting to first customers (how many dollars to build the product),
  5. the structure of the Venture Capital industry (there were a limited number of VC firms, each needing to invest millions per startup),
  6. the failure rate of new ventures (startups had no formal rules and acted like smaller versions of large companies),
  7. the scarcity of information and expertise about how to build startups (it was clustered in specific regions like Silicon Valley, Boston, New York, etc.), and there were no books, blogs or YouTube videos about entrepreneurship.

What we’re now seeing is The Democratization of Entrepreneurship
What’s happening today is something more profound than a change in technology. What’s happening is that these seven limits to startups and innovation have been removed.

The first thing that’s changed is that Consumer Internet and Genomics are Driving Innovation at scale
In the 1950’s and ‘60’s U.S. Defense and Intelligence organizations drove the pace of innovation in Silicon Valley by providing research and development dollars to universities, and defense companies built weapons systems that used the Valley’s first microwave devices and semiconductor components.

In the 1970’s, 80’s and 90’s, momentum shifted to the enterprise as large businesses supported innovation in PCs, communications hardware and enterprise software. Government and the enterprise are now followers rather than leaders.

Today, for hardware and software it’s consumers – specifically consumer Internet companies – that are the drivers of innovation. When the product and channel are bits, adoption by 10’s and 100’s of millions and even billions of users can happen in years versus decades.

For life sciences it was the Genentech IPO in 1980 that proved to investors that life science startups could make them a ton of money.

The second thing that’s changed is that we’re now Compressing the Product Development Cycle
In the 20th century startups I was part of, the time to build a first product release was measured in years as we turned out the founder’s vision of what customers wanted. This meant building every possible feature the founding team envisioned into a monolithic “release” of the product.

Yet time after time, after the product shipped, startups would find that customers didn’t use or want most of the features. The founders were simply wrong about their assumptions about customer needs. It turns out the term “visionary founder” was usually a synonym for someone who was hallucinating. The effort that went into making all those unused features was wasted.

Today startups build products differently. Instead of building the maximum number of features, founders treat their vision as a series of untested hypotheses, then get out of the building and test a minimum feature set in the shortest period of time.  This lets them deliver a series of minimal viable products to customers in a fraction of the time.

For products that are simply “bits” delivered over the web, a first product can be shipped in weeks rather than years.

The third thing is that Founders Need to Run the Company Longer
Today, we take for granted new mobile apps and consumer devices appearing seemingly overnight, reaching tens of millions of users – and just as quickly falling out of favor. But in the 20th century, dominated by hardware, software, and life sciences, technology swings inside an existing market happened slowly — taking years, not months. And while new markets were created (i.e. the desktop PC market), they were relatively infrequent.

This meant that disposing of the founder, and the startup culture responsible for the initial innovation, didn’t hurt a company’s short-term or even mid-term prospects.  So, almost like clockwork 20th century startups fired the innovators/founders when they scaled. A company could go public on its initial wave of innovation, then coast on its current technology for years. In this business environment, hiring a new CEO who had experience growing a company around a single technical innovation was a rational decision for venture investors.

That’s no longer the case.

The pace of technology change in the second decade of the 21st century is relentless. It’s hard to think of a hardware/software or life science technology that dominates its space for years. That means new companies face continuous disruption before their investors can cash out.

To stay in business in the 21st century, startups must do three things their 20th century counterparts didn’t:

  • A company is no longer built on a single innovation. It needs to be continuously innovating – and who best to do that? The founders.
  • To continually innovate, companies need to operate at startup speed and cycle time much longer than their 20th century counterparts did. This requires retaining a startup culture for years – and who best to do that? The founders.
  • Continuous innovation requires the imagination and courage to challenge the initial hypotheses of your current business model (channel, cost, customers, products, supply chain, etc.) This might mean competing with and if necessary killing your own products. (Think of the relentless cycle of iPod then iPhone innovation.) Professional CEOs who excel at growing existing businesses find this extremely hard.  Who best to do that? The founders.

The fourth thing that’s changed is that you can start a company on your laptop For Thousands Rather than Millions of Dollars
Startups traditionally required millions of dollars of funding just to get their first product to customers. A company developing software would have to buy computers and license software from other companies and hire the staff to run and maintain it. A hardware startup had to spend money building prototypes and equipping a factory to manufacture the product.

Today open source software has slashed the cost of software development from millions of dollars to thousands. My students think of computing power as a utility, the way I think of electricity. From their laptops they can access more computing power through Amazon Web Services than existed in the entire world when I started in Silicon Valley.
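To make “computing power as a utility” concrete, here’s a minimal sketch of renting a server on demand. It assumes the boto3 Python SDK, AWS credentials already configured, and a hypothetical machine image ID – substitute a real one for your region:

```python
# A minimal sketch of renting compute as a utility instead of buying
# hardware up front. Assumptions: boto3 is installed (pip install boto3),
# AWS credentials are configured, and the AMI ID below is hypothetical.
import boto3

ec2 = boto3.resource("ec2", region_name="us-west-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",          # a small, inexpensive instance
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()  # block until the virtual machine is up
print(f"Server {instance.id} is running -- no data center required.")
```

Twenty years ago this step meant buying and racking physical servers; today it’s a dozen lines of Python and a few cents an hour.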

And for consumer hardware, no startup has to build its own factory, as the costs are absorbed by offshore manufacturers. China has simply become the factory.

The cost of getting the first product out the door for an Internet commerce startup has dropped by a factor of 100 or more in the last decade. Ironically, while the cost of getting the first product out the door has plummeted, it can now take tens or hundreds of millions of dollars to scale.

The fifth change is the New Structure of how startups get funded
The plummeting cost of getting a first product to market (particularly for Internet startups) has shaken up the Venture Capital industry.

Venture Capital used to be a tight club clustered around formal firms located in Silicon Valley, Boston, and New York. While those firms are still there (and getting larger), the pool of money that invests risk capital in startups has expanded, and a new class of investors has emerged.

First, Venture Capital and angel investing is no longer a U.S.- or Euro-centric phenomenon. Risk capital has emerged in China, India and other countries where risk-taking, innovation and liquidity are encouraged, on a scale previously seen only in the U.S.

Second, new groups of VCs and super angels, smaller than the traditional multi-hundred-million-dollar VC funds, can make the small investments necessary to get a consumer Internet startup launched. These angels make lots of early bets and double down when early results appear. (And the results do appear years earlier than in a traditional startup.)

Third, venture capital has now become Founder-friendly.

A 20th century VC was likely to have an MBA or finance background. A few, like John Doerr at Kleiner Perkins and Don Valentine at Sequoia, had operating experience in a large tech company. But out of the dot-com rubble at the turn of the 21st century, new VCs entered the game – this time with startup experience. The watershed moment was in 2009, when Netscape co-founder Marc Andreessen formed a venture firm and started to invest in founders with the goal of teaching them how to be CEOs for the long term. Andreessen realized that the game had changed. Continuous innovation was here to stay, and only founders – not hired execs – could play and win. Founder-friendly became a competitive advantage for his firm, Andreessen Horowitz. In a seller’s market, other VCs adopted this “invest in the founder” strategy.

Fourth, in the last decade, corporate investors and hedge funds have jumped into later-stage investing with a passion. Their need to get into high-profile deals has driven late-stage valuations into unicorn territory. A unicorn is a startup with a valuation north of a billion dollars.

What this means is that the emergence of incubators and super angels has dramatically expanded the sources of seed capital. VCs have now ceded more control to founders. Corporate investors and hedge funds have dramatically expanded the amount of money available. And the globalization of entrepreneurship means the worldwide pool of potential startups has increased at least 100-fold since the turn of this century. Today there are over 200 startups worth over a billion dollars.

Change Number 6 is that Starting a Company means you no longer Act Like A Big Company
Since the turn of the century, there’s been a radical shift in how startups think of themselves. Until then, investors and entrepreneurs acted as if startups were simply smaller versions of large companies. Everything a large company did, a startup should do: write a business plan; hire sales, marketing and engineering; spec all the product features on day one; and build everything for a big first customer ship.

We now understand that’s wrong.  Not kind of wrong but going out of business wrong.

What used to happen is you’d build the product, have a great launch event, and everyone would high-five the VP of Marketing for the great press. Then at the first board meeting you’d ask the VP of Sales how he was doing versus the sales plan. The response was inevitably “great pipeline.” (Great pipeline means no real sales.)

This would continue for months, as customers weren’t behaving the way the business plan said they would. Meanwhile every other department in the company would be making its plan – meaning the company was burning cash without bringing in revenue. Finally the board would fire the VP of Sales. The cycle would repeat: next you’d fire the VP of Marketing, then the CEO.

What we’ve learned is that while companies execute business models, startups search for one. Unlike big companies, startups are guessing about who their customers are, what features they want, where and how they want to buy the product, and how much they’re willing to pay. We now understand that startups are just temporary organizations designed to search for a scalable and repeatable business model.

We now have specific management tools to grow startups. Entrepreneurs first map their assumptions and then test these hypotheses with customers out in the field (customer development) and use an iterative and incremental development methodology (agile development) to build the product. When founders discover their assumptions are wrong, as they inevitably will, the result isn’t a crisis, it’s a learning event called a pivot — and an opportunity to change the business model.

The result: startups now have tools that speed up the search for customers, reduce time to market and slash the cost of development. I’m glad to have been part of the team that invented the Lean Startup methodology.

Change number 7 – the last one – is perhaps the most profound and one students graduating today don’t even recognize. And it’s that Information is everywhere

In the 20th century, learning the best practices of a startup CEO was limited by your coffee bandwidth. That is, you learned best practices from your board and by having coffee with other, more experienced CEOs. Today, every founder can read all there is to know about running a startup online. Incubators and accelerators like Y Combinator have institutionalized experiential training in best practices (product/market fit, pivots, agile development, etc.); they provide experienced, hands-on mentorship; and they offer a growing network of founding CEOs.

The result is that today’s CEOs have exponentially more information than their predecessors. This is ironically part of the problem. Reading about, hearing about and learning about how to build a successful company is not the same as having done it. As we’ll see, information does not mean experience, maturity or wisdom. 

The Entrepreneurial Singularity
The barriers to entrepreneurship are not just being removed. In each case, they’re being replaced by innovations that are speeding up each step, some by a factor of ten.

And while innovation is moving at Internet speed, it’s not limited to just Internet commerce startups. It has spread to the enterprise and ultimately every other business segment. We’re seeing the effect of Amazon on retailers. Malls are shutting down. Most students graduating today have no idea what a Blockbuster video store was. Many have never gotten their news from a physical newspaper.

If we are at the cusp of a revolution as important as the scientific and industrial revolutions, what does it mean? Revolutions are not obvious when they happen. When James Watt launched the industrial revolution with his improved steam engine in 1775, no one said, “This is the day everything changes.” When Karl Benz drove around Mannheim in 1885, no one said, “There will be 500 million of these driving around in a century.” And certainly in 1958 and 1959, when Kilby and Noyce invented the integrated circuit, the idea of a quintillion (10 to the 18th) transistors being produced each year seemed ludicrous.

It’s possible that we’ll look back to this decade as the beginning of our own revolution. We may remember this as the time when scientific discoveries and technological breakthroughs were integrated into the fabric of society faster than they had ever been before. When the speed of how businesses operated changed forever.

As the time when we reinvented the American economy, when our Gross Domestic Product began to take off, and when the U.S. and the world reached a level of wealth never seen before. It may be the dawn of a new era for a new American economy built on entrepreneurship and innovation.

Innovation – something both parties can agree on

On the last day Congress was in session in 2016, Democrats and Republicans agreed on a bill that increased innovation and research for the country.

For me, seeing Congress pass this bill, the American Innovation and Competitiveness Act, was personally satisfying. It made the program I helped start, the National Science Foundation Innovation Corps (I-Corps), a permanent part of the nation’s science ecosystem. I-Corps uses Lean Startup methods to teach scientists how to turn their discoveries into entrepreneurial, job-producing businesses. I-Corps bridges the gap between public support of basic science and private capital funding of new commercial ventures. It’s a model for a government program that’s gotten the balance between public/private partnerships just right. Over 1,000 teams of our nation’s best scientists have been through the program.

The bill directs the expansion of I-Corps to additional federal agencies and academic institutions, as well as to state and local governments. The new I-Corps authority also supports prototype and proof-of-concept development activities, which will better enable researchers to commercialize their innovations. The bill also explicitly says that turning federal research into companies is a national goal to promote economic growth and benefit society. For the first time, Congress has recognized the importance of government-funded entrepreneurship and commercialization education, training, and mentoring programs, specifically saying that they will improve the nation’s competitiveness. Finally, the bill acknowledges that networks of entrepreneurs and mentors are critical in getting technologies translated from the lab to the marketplace.

This bipartisan legislation was crafted by Senators Cory Gardner (R–CO) and Gary Peters (D–MI). Senator John Thune (R–SD) chairs the Senate commerce and science committee that crafted S. 3084. After years of contention over reauthorizing the National Science Foundation, House Science Committee Chairman Lamar Smith and Ranking Member Eddie Bernice Johnson worked to negotiate the agreement that enabled both the House and the Senate to pass the bill.

While I was developing the class at Stanford, it was my counterparts at the NSF who had the vision to make the class a national program. Thanks to Errol Arkilic, Don Millard, Babu Dasgupta and Anita LaSalle (as well as current program leaders Lydia McClure and Steven Konsek) and the over 100 instructors at the 53 universities who teach the program across the U.S.

But I haven’t forgotten that before everyone else thought teaching scientists how to build companies using Lean Methods might be good for the country, there was one congressman who got it first. In 2012, Representative Dan Lipinski (D–IL), co-chair of the House STEM Education Caucus, got on an airplane and flew to Stanford to see the class first-hand.

For the first few years Lipinski was a lonely voice in Congress saying that we’ve found a better way to train our scientists to create companies and jobs.

This bill is a reauthorization of the 2010 America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act, which set out policies that govern the NSF, the National Institute of Standards and Technology (NIST), and federal programs on innovation, manufacturing, and science and math education. Reauthorization bills don’t fund an agency, but they provide policy guidance. This bill also resolved partisan differences over how NSF should conduct peer review and manage research.

I-Corps is the accelerator that helps scientists bridge the commercialization gap between research in their labs and wide-scale commercial adoption and use.

Why This Matters
While a few of the I-Corps teams are in web/mobile/cloud, most are working on advanced technology projects that don’t make TechCrunch. You’re more likely to see their papers (in material science, robotics, diagnostics, medical devices, computer hardware, etc.) in Science or Nature.

I-Corps uses everything we know about building Lean Startups and Evidence-based Entrepreneurship to connect innovation to entrepreneurship. Its curriculum is built on a framework of business model design, customer development and agile engineering, and its emphasis on evidence and Lessons Learned versus demos makes it the world’s most advanced accelerator. Its success is measured not only by the technologies that leave the labs, but also by how many U.S. scientists and engineers we train as entrepreneurs and how many of them pass on their knowledge to students. I-Corps is our secret weapon for integrating American innovation and entrepreneurship into every U.S. university lab.

Every time I go to Washington and spend time at the National Science Foundation or the National Institutes of Health, I’m reminded why the U.S. leads the world in support of basic and applied science. It’s not just the money we pour into these programs (~$125 billion/year), but the people who have dedicated themselves to making the world a better place by advancing science and technology for the common good.

Congratulations to everyone who helped make the Innovation Corps a national standard.

So Here’s What I’ve Been Thinking…

I was interviewed at the Stanford Business School, and in listening to the podcast I realized I repeated some of my usual soundbites – but embedded in the conversation were a few things I’ve never shared before about service.

Listen here:

Steve Blank on Silicon Valley, AI and the Future of Innovation


The Innovation Insurgency Scales – Hacking For Defense (H4D)

Hacking for Defense is a battle-tested problem-solving methodology that runs at Silicon Valley speed. We just held our first Hacking for Defense Educators Class with 75 attendees.

The results: 13 universities will offer the course in the next year, government sponsors committed to keep sending hard problems to the course, the Department of Defense is expanding its use of H4D to include a classified version, and corporate partners are expanding their efforts to support the course and to create their own internal H4D courses.

It was a good three days.

————-

Another Tool for Defense Innovation
Last week we held our first 3-day Hacking for Defense Educator and Sponsor Class. Our goals in this class were to:

  1. Train other educators on how to teach the class at their schools.
  2. Teach Department of Defense /Intelligence Community sponsors how to deliver problems to these schools and how to get the most out of student teams.
  3. Create a national network of colleges and universities that use the Hacking for Defense Course to provide hundreds of solutions to critical national security problems every year.

What our sponsors have recognized is that Hacking for Defense is a new tool in the country’s Defense Innovation toolkit. In 1957, after the Soviet Union launched the Sputnik satellite, the U.S. felt that it was the victim of a strategic technological surprise. DARPA was founded in 1958 to ensure that from then on the United States would be the initiator of technological surprises. It does so by funding research that promises the Department of Defense transformational change instead of incremental advances.

By the end of the 20th century the Central Intelligence Agency (CIA) realized that it was no longer the technology leader it had been when it developed the U-2, SR-71, and CORONA reconnaissance programs in the 1950’s and 1960’s. Its systems were struggling to manage the rapidly increasing torrent of information being collected. The agency realized that commercial applications of technology were often more advanced than those used internally. The CIA set up In-Q-Tel to be the venture capital arm of the intelligence community and to speed the insertion of new technologies. In-Q-Tel invests in startups developing technologies that provide ready-soon innovation (within 36 months) vital to the IC mission. More than 70 percent of In-Q-Tel’s portfolio companies have never before done business with the government.

In the 21st century the DOD/IC community has realized that adversaries are moving at a speed our traditional acquisition systems cannot keep up with. Hacking for Defense combines the rapid problem sourcing and curation methodology developed on the battlefields of Afghanistan and Iraq by Colonel Pete Newell and the US Army’s Rapid Equipping Force with the Lean Startup practices that I pioneered in Silicon Valley and that are now the mainstay of the National Science Foundation’s I-Corps program. Hacking for Defense is a problem-solving methodology that offers the DOD/IC community a collaborative approach to innovation that provides ready-now innovation (within 12-36 months).

Train the Trainers
Pete Newell, Joe Felter and I learned a lot developing the Hacking for Defense class, more as we taught it, and even more as we worked with the problem sponsors in the DOD/Intel community. Since one of our goals is to make this class available nationally, it was time to pass on what we had learned: to train other educators how to teach the class, and sponsors how to craft problems that student teams could work on.

(If you want a great overview of the Hacking for Defense class, stop and read this article from War on the Rocks. Seriously.)

When we developed our Hacking for Defense class, we created a ton of course materials (syllabus, slides, videos). In addition, for the Educator Class we captured all we knew about setting up and teaching the class and wrote a 290-page educator’s guide with suggested best practices, sample lesson plans, and detailed lecture scripts and slides for each class session. We developed a separate sponsor guide with ideas about how to get the most out of the student teams and the university.

The Educator Class: What We Learned
One of the surprises for me was seeing the value of having the Department of Defense and other government agency sponsors working together with the university educators. (One bit of learning was that the sponsors’ portion of the workshop could have been a day shorter.)

Two other things we learned have us modifying the pedagogy of the class.

First, our mantra to the students has been to learn about “Deployment not Demos.” That meant we were asking the students to understand all parts of the mission model canvas, not just the beneficiaries and the value proposition. We wanted them to learn what it takes to get their product/service deployed to the field, not just have another demo for a general. This meant that the minimum viable products the students built were focused on maximizing their learning of what to build, not just building prototypes. While that worked great for the students, we learned from our sponsors that for some of them, getting to deployment actually required demos as part of the means to reach that end. They wanted the students to start delivering MVPs early and often and to use the sponsor feedback to accelerate their learning.

This conversation made us realize that we had skewed the class to maximize student learning without really appreciating what specific deliverables would make the sponsors feel that the time they’ve invested in the class was worthwhile. So for our next round of classes we will:

  • require sponsors to specifically define what success from their student team would look like
  • have students in the first week of class present what sponsors say success looks like
  • still encourage MVPs that maximize student learning, but also recognize that for some sponsors, learning could be accelerated with earlier functional MVPs

Our second insight that has changed the pedagogy also came from our sponsors. As most of our students have no military experience, we teach a 3-hour introductory workshop on the DOD and Intel Community. While that provides a 30,000-foot overview, it doesn’t describe any detail about a team’s specific sponsoring organization (NSA, ARCYBER, 7th Fleet, etc.). (By the end of the quarter every team figures out how their sponsor’s ecosystem works.) The sponsors suggested that they offer a workshop early in the class to brief their student team on their organization, budget, issues, etc. We thought this was a great idea, as it will greatly accelerate how teams target their customer discovery. When we update the sponsor guide, we will suggest this to all sponsors.

Another surprise was how applicable the “Hacking for…” methodology is to other problems. Working with the State Department, we are offering a Hacking for Diplomacy class at Stanford starting later this month. And we now have lots of interest from organizations that have realized that this problem-solving methodology is equally applicable to solving public safety, policy, community and social issues internationally and within our own communities. We’ll soon launch a series of new modules to address these deserving communities.

Lessons Learned

  • Hacking for Defense = problem-solving methodology for innovation insurgents inside the DOD/Intel Community
  • The program will scale to 13+ universities in 2017
  • There is demand to apply the problem-solving methodology to a range of public sector organizations where success is measured by impact and mission achievement versus revenue and profit.